jQuery Conference and The Ajax Experience

September 12, 2009 12:04 pm | Comments Off on jQuery Conference and The Ajax Experience

I’m heading out on the red-eye tonight for Boston. I’ll be there for three action-packed days!

Sunday morning I’m speaking at jQuery Conference ’09. Monday afternoon I’m doing a talk at The Ajax Experience. Both conferences look great. I’m excited to share what I know and talk with other web developers to find out their latest discoveries and also pain points, especially with regard to web performance. I’ll be giving away free copies of Even Faster Web Sites and will be announcing two new open source projects.

If you’re at either conference, please say “hi”.


Even Faster Web Sites in Skiathos

September 9, 2009 10:10 pm | 4 Comments

Ioannis Cherouvim, a software engineer from Greece, sent me this photo:

He took Even Faster Web Sites with him on his vacation to Skiathos, a Greek island in the Aegean Sea. In his email, Ioannis mentioned that applying the tips from my previous book to the newspaper web site Eleftherotypia (“free press” in Greek) saved “10GB traffic per day which really made a difference in our datacenter bill.” In addition, the users of the site got a better user experience.

Improved web performance and views of the Aegean Sea from the shores of a Greek island – now that’s heaven. Thanks, Ioannis – you made my day!


Doloto: JavaScript download optimizer

September 8, 2009 10:02 pm | 11 Comments

One of the speakers at Velocity 2008 was Ben Livshits from Microsoft Research. He spoke about Doloto, a system for splitting up huge JavaScript payloads for better performance. I talk about Doloto in Even Faster Web Sites. When I wrote the book, Doloto was an internal project, but that all changed last week when Microsoft Research released Doloto to the public.

The project web site describes Doloto as:

…an AJAX application optimization tool, especially useful for large and complex Web 2.0 applications that contain a lot of code, such as Bing Maps, Hotmail, etc. Doloto analyzes AJAX application workloads and automatically performs code splitting of existing large Web 2.0 applications. After being processed by Doloto, an application will initially transfer only the portion of code necessary for application initialization.

Anyone who has tried to do code analysis on JavaScript (a la Caja) knows this is a complex problem. But it’s worth the effort:

In our experiments across a number of AJAX applications and network conditions, Doloto reduced the amount of initial downloaded JavaScript code by over 40%, or hundreds of kilobytes resulting in startup often faster by 30-40%, depending on network conditions.
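Doloto performs this rewriting automatically, but the underlying stub idea is easy to sketch by hand. Here's a minimal, hypothetical illustration (the function names and URLs are mine, not Doloto's actual output): each deferred function ships as a tiny stub that downloads the real implementation on first call, then replays the call.

```javascript
// Hypothetical sketch of stub-based code splitting (not Doloto's real output).
function makeStub(name, url) {
  window[name] = function () {
    var args = arguments, self = this;
    var script = document.createElement('script');
    script.src = url; // cluster of deferred code that redefines window[name]
    script.onload = function () { // IE would need onreadystatechange instead
      window[name].apply(self, args); // replay the original call
    };
    document.getElementsByTagName('head')[0].appendChild(script);
  };
}

// Only the stubs are included in the initial payload.
makeStub('renderMap', '/js/cluster-map.js');
makeStub('openComposer', '/js/cluster-composer.js');
```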

Hats off to Ben and the rest of the Doloto team at Microsoft Research on the release of Doloto. This and other tools are sorely needed to help with some of the heavy lifting that now sits solely on the shoulders of web developers.


The skinny on cookies

August 21, 2009 1:56 pm | 4 Comments

I just finished Eric Lawrence’s post on Internet Explorer Cookie Internals. Eric works on the IE team and is also the author of Fiddler. Everything he writes is worth reading. In this article he answers FAQs about how IE handles cookies, for example:

  • If I don’t specify a leading dot when setting the DOMAIN attribute, IE doesn’t care?
  • If I don’t specify a DOMAIN attribute when [setting] a cookie, IE sends it to all nested subdomains anyway?
  • How many cookies will Internet Explorer maintain for each site?

Another cookie issue is the effect extremely large cookies have on your web server. For example, Apache will reject any request with a Cookie header that exceeds 8190 bytes (the default value of the LimitRequestFieldSize directive, which caps the size of each request header field). 8K seems huge! But remember, all the cookies for a particular web page are sent in one Cookie: header, so 8K is a hard limit on the total size of a page’s cookies. I wrote a test page that demonstrates the problem.
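Here's a minimal sketch of that kind of test (the sizes and cookie names are illustrative): write roughly 9 KB of cookies, then reload. With Apache's default limit, the next request should come back as a "400 Bad Request" error.

```javascript
// Blow past the ~8K request header limit by setting ~9 KB of cookies.
var chunk = new Array(1025).join('x'); // 1024 'x' characters
for (var i = 0; i < 9; i++) {
  document.cookie = 'big' + i + '=' + chunk + '; path=/';
}
location.reload(); // the Cookie: header now exceeds 8190 bytes
```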

Keep your cookies small – it’s good for performance as well as uptime.


OSCON and Page Responsiveness videos

August 15, 2009 5:01 pm | 1 Comment

I had a great time at OSCON a few weeks back. It was in San Jose this year. (Pro: I don’t have to travel and my wife can go to the parties. Con: I miss Portland.) Just as I wrote about last year, Gregg Pollack was there asking speakers to summarize their talks in 30 seconds. He published the results in the video series 5 Days of OSCON. I’m in the video for Day 3.

Gregg also pointed me to his Page Responsiveness webcast, where he talks about YSlow and the Google Ajax Libraries API. I really like this video: it’s informative, engaging, and short, and it reminds me of Aza Raskin’s webcasts on Ubiquity and Jetpack. Both of these guys are very talented at conveying complex information in a hands-on way. I encourage you to take a look.


F5 and XHR deep dive

August 11, 2009 1:31 pm | 13 Comments

In Ajax Caching: Two Important Facts from the HttpWatch blog, the author points out that:

…any Ajax derived content in IE is never updated before its expiration date – even if you use a forced refresh (Ctrl+F5). The only way to ensure you get an update is to manually remove the content from the cache.

I found this hard to believe, but it’s true. If you hit Reload (F5), IE will re-request all the unexpired resources in the page, except for XHRs. This can certainly cause confusion for developers during testing, but I wondered if there were other issues. What was the behavior in other major browsers? What if the expiration date was in the past, or there was no Expires header? Did adding Cache-Control max-age (which overrides Expires) have any effect?

So I created my own Ajax Caching test page.

My test page contains an image, an external script, and an XMLHttpRequest. The expiration time used depends on which link is selected (a server-side sketch of the corresponding headers follows the list below).

  • Expires in the Past adds an Expires response header with a date 30 days in the past, and a Cache-Control header with a max-age value of 0.
  • no Expires returns neither an Expires nor a Cache-Control header.
  • Expires in the Future adds an Expires response header with a date 30 days in the future, and a Cache-Control header with a max-age value of 2592000 seconds.
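For reference, here's roughly how a server could emit those three header combinations. This is a hypothetical Node.js sketch, not the test page's actual implementation; the query parameter names are placeholders.

```javascript
var http = require('http');
var DAY = 24 * 60 * 60 * 1000;

http.createServer(function (req, res) {
  var headers = { 'Content-Type': 'application/json' };
  if (req.url.indexOf('case=past') !== -1) {
    headers['Expires'] = new Date(Date.now() - 30 * DAY).toUTCString();
    headers['Cache-Control'] = 'max-age=0';
  } else if (req.url.indexOf('case=future') !== -1) {
    headers['Expires'] = new Date(Date.now() + 30 * DAY).toUTCString();
    headers['Cache-Control'] = 'max-age=2592000'; // 30 days; overrides Expires
  }
  // otherwise ('no Expires'): send neither Expires nor Cache-Control
  res.writeHead(200, headers);
  res.end('{"time": "' + new Date().toUTCString() + '"}');
}).listen(8080);
```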

The test is simple: click on a link (e.g., Expires in the Past), wait for it to load, and then hit F5. Table 1 shows the results of testing this page on major browsers. The result recorded in the table is whether the XHR was re-requested or read from cache and, if it was re-requested, the HTTP status code of the response.

Table 1. When XHR is cached, what happens when you hit F5?
Browser       Past Expires   No Expires   Future Expires
Chrome 2      304            304          304
Firefox 3.5   304            304          304
IE 7          304            cache        cache
IE 8          304            cache        cache
Opera 10      304            cache        304
Safari 4      200            200          200

Here’s my summary of what happens when F5 is hit:

  • All browsers re-request the image and external script. (This makes sense.)
  • All browsers re-request the XHR if the expiration date is in the past. (This makes sense – the browser knows the cached XHR is expired.)
  • The only variant behavior involves the XHR when there is no Expires or a future Expires. IE 7&8 do not re-request the XHR in either of those cases, even if Ctrl+F5 is hit. Opera 10 does not re-request the XHR when there is no Expires. (I couldn’t find an equivalent to Ctrl+F5 in Opera.)
  • Both Opera 10 and Safari 4 re-request the favicon.ico in all situations. (This seems wasteful.)
  • Safari 4 never sends an If-Modified-Since request header. As a result, the response is a 200 status code and includes the entire contents of the original response. This is true for the XHR as well as the image and external script. (This seems wasteful and deviates from the other browsers.)

Takeaways

Here are my recommendations on what web developers and browser vendors should take away from these results:

  1. Developers should set either a past or a future expiration date on their XHRs, avoiding the ambiguity and variant behavior that occur when no expiration is specified.
  2. If XHR responses should not be cached, developers should assign them an expiration date in the past (or bust the cache client-side, as sketched after this list).
  3. If XHR responses should be cached, developers should assign them an expiration date in the future. When testing in IE 7&8, developers have to remember to clear their cache when testing the behavior of Reload (F5).
  4. IE should re-request the XHR when F5 is hit.
  5. Opera and Safari should stop re-requesting favicon.ico when F5 is hit.
  6. Safari should send If-Modified-Since when F5 is hit.
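For developers who can't change the server's headers, one client-side workaround (not covered in the tests above) is to bust the cache with a unique query string, so every request maps to a new URL. A sketch, with a placeholder URL:

```javascript
function fetchFresh(url, callback) {
  // Append a timestamp so the browser never treats this as a cached URL.
  var bust = (url.indexOf('?') === -1 ? '?' : '&') + '_=' + new Date().getTime();
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url + bust, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) callback(xhr.responseText);
  };
  xhr.send(null);
}

// Always fetches a fresh copy, even in IE 7&8 after F5.
fetchFresh('/data.json', function (text) { /* use the latest data */ });
```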


OmniTI and performance koolaid

July 28, 2009 11:17 pm | Comments Off on OmniTI and performance koolaid

In YSlow! to YFast! in 45 minutes, Theo Schlossnagle (CEO of OmniTI) delivers a play-by-play about how he made his corporate web site 35% faster. The amazing revelation in his commentary is that he was able to complete all of these improvements while sitting in my workshop at Velocity (ahem).

OmniTI is a full-service web house specializing in web performance and scalability. The irony of their corporate web site receiving a YSlow “F” wasn’t lost on Theo. The cobbler’s children syndrome. (The same is true of my web site – I’ve got to optimize WordPress!)

Theo walks through his changes one by one: adding a far-future Expires header, removing ETags, compressing text responses (especially scripts and stylesheets), and moving resources to a cookie-free CDN. With less than 45 minutes of work, his site’s load time went from 486 milliseconds down to 315 milliseconds.
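For anyone wanting to follow along, the first three changes map to a few lines of Apache configuration. Here's a hypothetical sketch: the directives are standard mod_expires, mod_headers, and mod_deflate, but the values are illustrative, not OmniTI's actual config.

```apache
<IfModule mod_expires.c>
  ExpiresActive On
  # Far-future Expires header on static resources
  ExpiresByType image/png                "access plus 1 year"
  ExpiresByType text/css                 "access plus 1 year"
  ExpiresByType application/x-javascript "access plus 1 year"
</IfModule>

# Remove ETags
FileETag None
Header unset ETag

# Compress text responses, especially scripts and stylesheets
AddOutputFilterByType DEFLATE text/html text/css application/x-javascript
```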

There’s more low-hanging fruit – consolidating scripts, consolidating stylesheets, and CSS sprites. But it’s great to get this early case study on specific improvements and their corresponding impact on performance. I hope he’ll share the results from the next wave of optimizations.


Wikia: fast pages retain users

July 27, 2009 10:52 pm | 13 Comments

At OSCON last week, I attended Artur Bergman’s session about Varnish – A State of the Art High-Performance Reverse Proxy. Artur is the VP of Engineering and Operations at Wikia. He has been singing the praises of Varnish for a while, so it was great to see his experiences and resulting advice in one presentation. But what really caught my eye was his last slide:

Wikia measures exit rate – the percentage of users who leave the site from a given page. The slide shows exit rate dropping as pages get faster, from ~15% for a 2-second page to ~10% for a 1-second page. This is another data point to add to the stats from Velocity showing that faster pages are not only better for users, they’re better for business.


back in the saddle: EFWS! Velocity!

July 20, 2009 11:32 am | Comments Off on back in the saddle: EFWS! Velocity!

The last few months are a blur for me. I get through stages of life like this and look back and wonder how I ever made it through alive (and why I ever set myself up for such stress). The big activities that dominated my time were Even Faster Web Sites and Velocity.

Even Faster Web Sites

Even Faster Web Sites is my second book of web performance best practices. This is a follow-on to my first book, High Performance Web Sites. EFWS isn’t a second edition; it’s more like “Volume 2”. Both books contain 14 chapters, each chapter devoted to a separate performance topic. The best practices described in EFWS are completely new:

  1. Understanding Ajax Performance
  2. Creating Responsive Web Applications
  3. Splitting the Initial Payload
  4. Loading Scripts Without Blocking
  5. Coupling Asynchronous Scripts
  6. Positioning Inline Scripts
  7. Writing Efficient JavaScript
  8. Scaling with Comet
  9. Going Beyond Gzipping
  10. Optimizing Images
  11. Sharding Dominant Domains
  12. Flushing the Document Early
  13. Using Iframes Sparingly
  14. Simplifying CSS Selectors

An exciting addition to EFWS is that six of the chapters were contributed by guest authors: Doug Crockford (Chap 1), Ben Galbraith and Dion Almaer (Chap 2), Nicholas Zakas (Chap 7), Dylan Schiemann (Chap 8), Tony Gentilcore (Chap 9), and Stoyan Stefanov and Nicole Sullivan (Chap 10). Web developers working on today’s content-rich, dynamic web sites will benefit from the advice contained in Even Faster Web Sites.

Velocity

Velocity is the web performance and operations conference that I co-chair with Jesse Robbins. Jesse, former “Master of Disaster” at Amazon and current CEO of Opscode, runs the operations track. I ride herd on the performance side of the conference. This was the second year for Velocity. The first year was a home run, drawing 600 attendees (far more than expected – we only made 400 swag bags) and containing a ton of great talks. Velocity 2009 (held in San Jose June 22-24) was an even bigger success: more attendees (700), more sponsors, more talks, and an additional day for workshops.

The bright spot for me at Velocity was the fact that so many speakers offered up stats on how performance is critical to a company’s business. I wrote a blog post on O’Reilly Radar about this: Velocity and the Bottom Line. Here are some of the excerpted stats:

  • Bing found that a 2-second slowdown caused a 4.3% reduction in revenue/user
  • Google Search found that a 400-millisecond delay resulted in 0.59% fewer searches/user
  • AOL revealed that users who experience the fastest page load times view 50% more pages/visit than users experiencing the slowest page load times
  • Shopzilla undertook a massive performance redesign, reducing page load times from ~7 seconds to ~2 seconds, with a corresponding 7-12% increase in revenue and a 50% reduction in hardware costs

I love optimizing web performance because it raises the quality of engineering, reduces inefficiencies, and is better for the planet. But to get widespread adoption we need to motivate the non-engineering parts of the organization. That’s why these case studies on web performance improving the user experience as well as the company’s bottom line are important. I applaud these companies for not only tracking these results, but being willing to share them publicly. You can get more details from the Velocity videos and slides.

Back in the Saddle

Over the next six months, I’ll be focusing on open sourcing many of the tools I’ve soft launched, including UA Profiler, Cuzillion, Hammerhead, and Episodes. These are already “open source” per se, but they’re not active projects with a code repository, bug database, roadmap, and active contributors. I plan on fixing that and will discuss this more during my presentation at OSCON this week. If you’re going to OSCON, I hope you’ll attend my session. If not, I’ll also be signing books at 1pm and providing performance consulting (for free!) at the Google booth at 3:30pm, both on Wednesday, July 22.

As you can see, even though Velocity and EFWS are behind me, there’s still a ton of work left to do. We’ll never be “done” fixing web performance. It’s like cleaning out your closets – they always fill up again. As we make our pages faster, some new challenge arises (mobile, rich media ads, emerging markets with poor connectivity) that requires more investigation and new solutions. Some people might find this depressing or daunting. Me? I’m psyched! ‘Scuse me while I roll up my sleeves.


Firefox 3.5 at the top

June 30, 2009 8:23 am | 21 Comments

The web world is abuzz today with the release of Firefox 3.5. On the launch page, Mozilla touts the results of running SunSpider. Over on UA Profiler, I’ve developed a different set of tests that count the number of critical performance features browsers do, or don’t, have. Currently, there are 11 traits that are measured. Firefox 3.5 scores higher than any other browser with 10 out of 11 of the performance features browsers need to create a fast user experience.

Firefox 3.5 is a significant improvement over Firefox 3.0, climbing from 7/11 to 10/11 of these performance traits. Among the major browsers, Firefox 3.5 is followed by Chrome 2 (9/11), Safari 4 (8/11), IE 8 (7/11), and Opera 10 (6/11). Unfortunately, IE 6 and 7 have only 4 out of these 11 performance features, a sad state of affairs for today’s web developers and users.

The performance traits measured by UA Profiler include the number of connections per hostname, the maximum number of connections, parallel loading of scripts and stylesheets, proper caching of resources (including redirects), the LINK PREFETCH attribute, and support for data: URLs. When I started UA Profiler, none of the browsers were scoring very high, but there has been great progress in the last year. It’s time to raise the bar! I plan on adding more tests to UA Profiler this summer, and hope the browser development teams will continue to rise to the challenge in an effort to make the Web a faster place for all of us.
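As a taste of what two of those traits look like in practice, here's a small hand-written illustration (UA Profiler's actual tests are more involved; the URL is a placeholder):

```javascript
// LINK PREFETCH: hint the browser to fetch a resource for the next page
// during idle time.
var link = document.createElement('link');
link.rel = 'prefetch';
link.href = '/js/next-page.js'; // placeholder URL
document.getElementsByTagName('head')[0].appendChild(link);

// data: URLs: a 1x1 transparent GIF embedded inline, avoiding an HTTP request.
var img = new Image();
img.src = 'data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7';
document.body.appendChild(img);
```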
