Add your site & custom fonts
The Nov 15 2011 crawls for the HTTP Archive and HTTP Archive Mobile are done. Two new things were added.
Add your site
Our goal is to crawl the world’s top 1,000,000 URLs. This month we doubled the number of URLs from 17K to 35K. We’re still a ways away, but we’re making progress. What if you’d like your website to be in the HTTP Archive but it isn’t in the top 1M?
Now you can add your site to the HTTP Archive. If it’s already in the list we’ll tell you and point you to any data that’s been gathered so far. If it’s not in the list we’ll queue it up for the next crawl. We moderate all additions to make sure the URL is valid. We also have a limit of 1 URL per website. We strive to crawl a site’s main URL (e.g., https://stevesouders.com/) but not all the subpages within a site (https://stevesouders.com/about.php, http://www.example.com/videos.php, etc.).
Custom Fonts
I’ve been thinking more about custom fonts after Typekit’s acquisition by Adobe and seeing Jeff Veen at Velocity Europe. (Make sure to watch the video of Jeff’s talk – it’s an amazing presentation with a humorous start.) So this week I added a chart to track the adoption of custom fonts:
Typekit is clearly on to something – the use of custom fonts has tripled in one year. I warn against using @font-face for performance reasons, but performance isn’t all that matters. (Gasp!) Custom fonts obviously have aesthetic benefits that are attractive to website owners.
Fortunately, Typekit has several performance optimizations in how they load fonts. They combine all the fonts in a single stylesheet for browsers that support data: URIs. The fonts are served over a CDN. The fonts are only cacheable for 5 minutes, which hurts repeat visits, but I believe they’re working on longer cache times.
For truly fast and robust font loading we need to lean on browser developers to implement better caching for fonts and better timeout choices during rendering. I’ll be talking about this during my High Performance HTML5 session at QCon on Friday.
HTTP Archive growing
Today the number of URLs analyzed was doubled in both the HTTP Archive (from 17K to 34K URLs) and in the HTTP Archive Mobile (from 1K to 2K URLs).


This is a small step toward our goal of 1 million URLs, but it validates numerous code changes that landed recently:
- #22: update URL lists – Previously the list of URLs to crawl was manually created (by me) from multiple other lists (Alexa, Quantcast, Fortune 500, etc.). Because it was manually created it wasn’t updated frequently. Now the list is based on the Alexa Top 1,000,000 Sites and is updated every crawl.
- #243: handle non-ranked URLs – Some of the URLs crawled up until now are NOT in the Alexa Top 1M. In order to support looking at long term trends (by selecting “intersection”) I wanted to continue crawling these outliers. So the list of URLs that is crawled supports crawling non-ranked websites. This will allow many other nice features that you’ll hear about next week.
- #242: rewrite batch_process.php – There’s a bunch of code for doing the crawl that needed to be made more efficient as we scale up two orders of magnitude.
- #68: cache aggregate stats for trends.php – Again, in order to deal with a larger number of URLs and still generate charts quickly, I introduced a caching layer for the aggregate stats.
- #196: Publish a mysql schema dump – Exploring the data is now easier. Instead of having to set up an entire instance of the code, you simply create the tables based on the schema dump and download the data that is of interest.
With these and other changes behind us, we’ll continue to increase the number of URLs to reach our goal. There are still some big tasks to tackle including changing the DB schema, increasing the capacity on mobile with more devices or switching to an emulator, and combining these two sites into a single site for easier comparison of desktop & mobile data.
No blog post about HTTP Archive would be complete without some observations. As mentioned earlier, whenever looking at long term trends I choose the intersection – which means the exact same URLs are included in every data point.
The main trend I’ve been noticing is how the size of resources is growing much faster than the number of resources. This growth is most evident in scripts and images. It’s no surprise – the Web is getting bigger. But now we can see where that’s happening and explore solutions.


I also wanted to shout out to Pat Meenan and Guy (“Guypo”) Podjarny. Pat works at Google and is the creator of WebPagetest, which is the foundation for the HTTP Archive (Mobile). Guypo works at Blaze and provides additional infrastructure and devices for all the mobile testing. In addition, there are a growing number of contributors to the open source project. And none of this would be happening without support from our sponsors: Google, Mozilla, New Relic, O’Reilly Media, Etsy, Strangeloop, and dynaTrace Software.
Watch for a fun announcement next week.
Velocity Europe – High Performance Berlin!
Velocity Europe is less than two weeks away. It’s happening November 8-9 in Berlin at the Hotel Maritim ProArte. I’ve heard good things about the venue and am excited to get there and check it out.
This event has been a long time coming. A handful of web performance and operations savants (including members of the Program Committee) have been encouraging us for years to bring Velocity to Europe, and now it’s actually happening. And (drum roll please) the price is only EUR 600 (excl. VAT) if you use the 20% discount code veu11sts. (And don’t forget about the free Velocity Online Conference this week – see more below.)
The Velocity Europe speaker line-up is exceptional. Some highlights include:
- Jon Jenkins from Amazon.com is talking about their approach to the challenges of mobile browsing. Jon is the Director of Software Development for Amazon Silk. I’m looking forward to more details about Silk’s split architecture.
- Tim Morrow delivers the background for Betfair’s promise to deliver a fast experience to their customers, and their progress on that promise.
- Theo Schlossnagle is a recognized leader at Velocity. He’s giving two talks on web operations careers and monitoring.
- Estelle Weyl joins Velocity for the first time talking about the nuances of mobile rendering performance. I learn something new every time I hear Estelle speak, so am excited to welcome her to Velocity.
- Ivo Teel discusses the balance we all face between features and performance and how they’re handling that at Spil Games.
- Jeff Veen knows the importance of 3rd party performance and availability as the CEO of Typekit. Jeff’s an amazing, engaging speaker. Reading his session description gave me goosebumps with anticipation: Jeff sat on a couch in the Typekit offices, staring out the window, and wondering if everything their company had been working towards was about to slip through their fingers…
There’s much much more – lightning demos, browser vendor talks, Allspaw on anticipating failure, Mandelin on JavaScript performance – I’ve got to stop here but please check out the entire schedule.
I want to give a shout out to the Velocity Europe Program Committee: Patrick Debois, Aaron Peters, Schlomo Schapiro, Jeroen Tjepkema, and Sean Treadway. They’ve participated in numerous video concalls (yay Google Hangouts!) to review proposals, build the program, and shape Velocity to be a European conference. And they might have one more card up their sleeve – more on that later.
You can get a free warm-up for Velocity Europe at the Velocity Online Conference this week. It’s Wednesday October 26 9-11:30am PDT. John Allspaw, Velocity co-chair, has rounded up four speakers to cover several hot topics including monitoring, global DNS, and making yourself even more awesome(!). It’s free, but you have to register for Velocity OLC if you want to get in on the conversation.
If you’re heading to Berlin you should also check out CouchConf Berlin on Nov 7. NoSQL has great performance benefits and Couchbase is a good choice for many mobile apps. Use the discount code couchconf_discount for 10% off registration.
The last time I was in Berlin was for JSConf.eu 2009. The city had a high tech vibe and the crowd was extremely knowledgeable and enthusiastic. I’m excited to get back to Berlin for Velocity Europe and do the web performance and operations deep dives that are the core of Velocity. If you want to have a website that’s always fast and always up, Velocity Europe is the place to be. I hope to see you there.
HTTP Archive: new code, new charts
The HTTP Archive is a permanent record of web performance information started in October 2010. The world’s top 17,000 web pages are analyzed twice each month to collect information such as the number and size of HTTP requests, whether responses are cacheable, the percent of pages with errors, and the average Page Speed score. The code is open source and all the data is downloadable.
The next big step is to increase the number of URLs to 1 million. The biggest task to get to this point is improving the database schema and caching. This past week I made some significant code contributions around caching aggregate stats across all the web sites. Even with only 17K URLs the speed improvement for generating charts is noticeable.
The new stats cache allows me to aggregate more data than before, so I was able to add several trending charts. (The increases/decreases are Nov 15 2010 to Oct 15 2011.)
- percent of sites using Google Libraries API – up 6%
- percent of sites using Flash – down 2%
- percent of responses with caching headers – up 4%
- percent of requests made using HTTPS – up 1%
- percent of pages with one or more errors – down 2%
- percent of pages with one or more redirects – up 7%
Most of the news is good from a performance perspective, except for the increase in redirects. Here’s the caching headers chart as an example:
I dropped the following charts:
- popular JavaScript libraries – I created this chart using handcrafted regular expressions that attempted to find requests for popular frameworks such as jQuery and YUI. Those regexes are not always accurate and are hard to maintain. I recommend people use the JavaScript Usage Statistics from BuiltWith for this information.
- popular web servers – Again, BuiltWith’s Web Server Usage Statistics is a better reference for this information.
- sites with the most (JavaScript | CSS | Images | Flash) – These charts were interesting, but not that useful.
- popular scripts – This was a list of the top 5 most referenced scripts based on a specific URL. The problem is that the same script can have a URL that varies based on hostnames, querystring parameters, etc.
The new stats cache is a great step forward. I have a few more big coding sessions to finish but I hope to get enough done that we can start increasing the number of URLs in the next run or two. I’ll keep you posted.
frontend SPOF survey
Pat Meenan had a great blog post yesterday, Testing for Frontend SPOF. “SPOF” means single point of failure. I coined the term frontend SPOF to describe the all-too-likely situation where the HTML document returns successfully, but some other resource (a stylesheet, script, or font file) blocks the entire website from loading. This typically manifests itself as a blank white screen that the user stares at for 20 seconds or longer.
Frontend SPOF happens most frequently with third party content. If the HTML document returns successfully, then all the resources from the main website are likely to return successfully, as well. Third party content, however, isn’t controlled by the main website and thus could be suffering an outage or overload while the main website is working fine. As a result, the uptime of a website is no greater than the availability of the third party resources it uses that are in a position to cause frontend SPOF.
In my blog post of the same name I describe how frontend SPOF happens and ways to avoid it, but I don’t provide a way for website owners to determine which third party resources may cause frontend SPOF. This is where Pat comes in. He’s created a public blackhole server, blackhole.webpagetest.org, with the static IP address 72.66.115.13. Pointing your third party resources to this blackhole and reloading the page tells you if those resources cause frontend SPOF. Since Pat is the creator of WebPagetest.org, he has integrated this into the scripting capabilities of that tool so website owners can load their website and determine if any third party resources cause frontend SPOF.
/etc/hosts
I took a different approach outlined by Pat: I added the following lines to my /etc/hosts file (your location may vary), mapping these third party hostnames to the blackhole server:
72.66.115.13 apis.google.com
72.66.115.13 www.google-analytics.com
72.66.115.13 connect.facebook.net
72.66.115.13 platform.twitter.com
72.66.115.13 s7.addthis.com
72.66.115.13 l.addthiscdn.com
72.66.115.13 cf.addthis.com
72.66.115.13 api-public.addthis.com
72.66.115.13 widget.quantcast.com
72.66.115.13 ak.quantcast.com
72.66.115.13 assets.omniture.com
72.66.115.13 www.omniture.com
72.66.115.13 scripts.omniture.com
72.66.115.13 b.voicefive.com
72.66.115.13 ar.voicefive.com
72.66.115.13 c.statcounter.com
72.66.115.13 www.statcounter.com
72.66.115.13 www-beta.statcounter.com
72.66.115.13 js.revsci.net
After restarting my browser, all requests to these hostnames time out. Pat’s blog post mentions 20 seconds for a timeout. He was running on Windows. I’m running on my MacBook, where the timeout is 75 seconds! Any website that references third party content on these hostnames in a way that produces frontend SPOF will be blank for 75 seconds – an easy failure to spot.
survey says
THE GOOD: At this point I started loading the top 100 US websites. I was pleasantly surprised. None of the top 20 websites suffered from frontend SPOF. There were several that loaded third party content from these hostnames, but they had safeguarded themselves:
- MSN makes a request to ar.voicefive.com, but does it asynchronously using a document.write technique.
- AOL references platform.twitter.com, but puts the SCRIPT tag at the very bottom of the BODY so page rendering isn’t blocked.
- IMDB uses the async version of Google Analytics, and puts the platform.twitter.com widget in an iframe.
- LiveJournal goes above and beyond by wrapping the Google +1 and Facebook widgets in a homegrown async script loader. (See the sketch below.)
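For context, here’s a rough sketch of the kind of homegrown async loading pattern these sites rely on. The widget URL is only an example, and this mirrors the generic dynamic-SCRIPT technique (similar in spirit to the async Google Analytics snippet) rather than any one site’s actual code.

```javascript
// Async loader sketch: create a SCRIPT element and insert it into the DOM so
// the download doesn't block HTML parsing or rendering. If the third party is
// down, the page still renders; only the widget is missing.
(function () {
  var s = document.createElement('script');
  s.src = 'http://platform.twitter.com/widgets.js'; // example third party widget
  s.async = true;
  var first = document.getElementsByTagName('script')[0];
  first.parentNode.insertBefore(s, first);
}());
```

The key point is that the SCRIPT element is added via DOM methods rather than written as a blocking tag in the HEAD, so a hung third party server can no longer blank the page.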
THE BAD: Going through the top 100 I found five websites that had frontend SPOF:
- CNET loads http://platform.twitter.com/widgets.js in the HEAD as a blocking script.
- StumbleUpon loads http://connect.facebook.net/en_US/all.js at the top of BODY as a blocking script.
- NFL loads http://connect.facebook.net/en_US/all.js in the HEAD as a blocking script.
- Hulu, incredibly, loads Google Analytics in the HEAD as a blocking script. Please use the async snippet!
- Expedia loads http://connect.facebook.net/en_US/all.js as a blocking script in the middle of the page, so the right half of the page is blocked from rendering.
These results, although better than I expected, are still alarming. Although I only found five websites with frontend SPOF, that’s 5% of the overall sample. The percentage will likely grow as the sample size grows because best practices are more widely adopted by the top sites. Also, my list of third party hostnames is a small subset of all widgets and analytics available on the Web. And remember, I didn’t even look at ads.
Is it really worth blocking your site’s entire page for a widget button or analytics beacon – especially when workarounds exist? If you’re one of the five sites that faltered above, do yourself and your users a favor and find a way to avoid frontend SPOF. And if you’re outside the top 100, test your site using Pat’s blackhole server by editing /etc/hosts or following Pat’s instructions for testing frontend SPOF on WebPagetest.org.
Improving app cache
I recently found out about the W3C Workshop on The Future of Off-line Web Applications on November 5 in Redwood City. I won’t be able to attend (I’ll be heading to Velocity Europe), but I feel like app cache needs improving, so I summarized my thoughts and sent them to the workshop organizers. I also pinged some mobile gurus and got their thoughts on app cache.
My Thoughts
SUMMARY: App cache is complicated and frequently produces an unexpected user experience. It’s also being (ab)used as a workaround for the fact that the browser’s cache does not cache in an effective way – this is just an arms race for finite resources.
DETAILS: I’ve spoken at many mobile-specific conferences and meetups in the last few months. When I explain the way app cache actually works, developers come up afterward and say “now I finally understand what was happening with my offline app.” These are the leading mobile developers in the world.
John Allsopp does a great job of outlining the gotchas, and I’ve added some (slides 50&51):
- HTML responses with the MANIFEST attribute are stored in app cache by default, even if they’re not in the CACHE: section of the manifest file.
- If a CACHE: resource 404s then none of the resources are cached.
- The manifest file must be changed in order for changed CACHE: resources to be updated.
- Modified CACHE: resources aren’t seen by the user until the second time they load the app – even if they’re online.
It’s easy to point out problems – you folks have the more difficult job of finding solutions. But I’ll make a few suggestions:
- Use updated resources on first load – The developer needs a way to say “if the user is online, then fetch (some/all) of the CACHE: resources that have changed before rendering the app”. I would vote to make this the default behavior, and provide a way to toggle it (in the manifest file or HTML attribute). Perhaps this should also be done at the individual resource level – “I want updated scripts to block the initial rendering, but nothing else”. The manifest file could have an indicator of which resources to check & download before doing the initial rendering.
- 404s – I haven’t tested this myself, but it seems like overkill. Every response in the CACHE: section should be cached, independent of the other responses. Perhaps this is browser-specific?
- updateReady flag – It’s great that developers can use the updateReady event to prompt the user to reload the app if any CACHE: resources have changed underneath them, but the bar is too high. In addition, there should be a flag that tells the browser to prompt the user automatically if any CACHE: resources were updated. (A sketch of today’s manual workflow follows this list.)
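For reference, here’s a rough sketch of the manual updateReady workflow developers have to hand-roll today – the kind of code the flag suggested above would make unnecessary. The confirm() prompt is just a stand-in for whatever UI an app would actually use.

```javascript
// Today's manual update flow: wait for the browser to finish downloading a new
// version of the CACHE: resources, then ask the user to reload so they see it.
var appCache = window.applicationCache;
appCache.addEventListener('updateready', function () {
  if (appCache.status === appCache.UPDATEREADY) {
    appCache.swapCache(); // switch to the newly downloaded cache
    if (confirm('A new version of this app is available. Reload now?')) {
      window.location.reload();
    }
  }
}, false);
```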
Finally, on the topic of arms race, I know many websites that are using app cache as a way to store images, scripts, and stylesheets. Why? It’s because the browser’s disk cache is poorly implemented. App cache provides a dedicated amount of space for a specific website (as opposed to a common shared space). App cache allows for prioritization – if I have 10M of resources I can put the scripts in the CACHE: section so they don’t get purged, since losing images is less painful.
Certainly a better solution would be for the browsers to have improved the behavior of disk cache 5 years ago. But given where we are, an increasing number of websites are consuming the user’s disk space. In most cases the user doesn’t have a way or doesn’t know how to clear app cache. Better user control over app cache is needed. I suggest that clearing “data” clears both the disk cache as well as app cache. Alternatively, we extend the browser UI to have an obvious “clear app cache” entry. Currently in Firefox and Chrome you can only clear app cache on a site-by-site basis, and the UI isn’t obvious. In Firefox it’s under Tools | Options | Advanced | Network | Remove. In Chrome it’s under chrome://appcache-internals/.
The most important near term fix is better patterns and examples.
- My first offline app had a login form on the index.html – how should I handle that?
- What if the JSON data in app cache requires authentication and the user is offline – use it or not?
- I’ve never seen an example that uses the FALLBACK: section.
Adoption of current app cache would go much more smoothly with patterns and examples that address these gaps, and perhaps a JS helper lib to wrap updateReady and other standard dev tasks.
Mobile Gurus
A great email thread resulted when I asked a bunch of mobile gurus for their thoughts about app cache. Here’s a summary of the comments that resulted:
Scott Jehl: Agreed on app cache’s clumsiness. It’s so close, though! The cache clearing is terrible for both users and developers.

Nicholas Zakas: +1 for AppCache clumsiness. My big complaint is requiring a special MIME type for the manifest file. This effectively limits its use to people who have access to their server configuration.

Yehuda Katz: My biggest concern is the lack of a feature that would make it possible to load the main index.html from cache, but only if the user agent is offline. Currently, if the user agent is online, the entire cache manifest, including the main index.html, is used. As a result, developers are required to come up with some non-standard UI to let the application user know that they should refresh the page in order to get more updated information. This is definitely the way to get the most performance, even when the user agent is online, but it creates an extremely clumsy workflow which significantly impedes adoption. I have given a number of talks on the cache manifest, and this caveat is the one that changes the audience reaction from nodding heads to “oh no, another thing I have to spend time working out how to rebuild my application in order to use”. Again, I understand the rationale for the design, but I think a way to say “if the user agent is online, block until the cache manifest is downloaded” would significantly improve adoption and widen the appropriate use cases for the technology.

Scott Jehl: I agree – the necessary refresh is the biggest downfall for me, too. It’s really prohibitive for using appcache in progressive enhancement approaches (where there’s actually HTML content in the page that may update regularly). It’d be great if you could set up appcache to kick in when the user is actually offline, but otherwise stay out of the way and let the browser defer to normal requests and caching.

Yehuda Katz: I actually think we can get away with a more aggressive approach. When the device is online, first request the application manifest. If the manifest is identical, continue using the app cache. This means a short blocking request for the app manifest, but the (good) atomic cache behavior. If the manifest is not identical, fall back to normal HTTP caching semantics. It needs to be a single flag in the manifest, I think.

Dion Almaer: Totally agree. In a recent mobile project we ended up writing our own caching system that had us use HTTP caching… It was very much a pain to have to do this work.
I like Yehuda’s suggestion about a blocking manifest check when the user is online, controlled by a flag in the manifest file. We need more thinking around how to improve app cache. Please check out the W3C Workshop on The Future of Off-line Web Applications website and send them your thoughts.
UA switching: be careful
At least once a day I’m in a conversation, email thread, or twitter exchange about monitoring websites. Lately this has focused on mobile. Tools like WebPagetest make it easier to monitor websites from the perspective of a desktop browser, but doing this from the perspective of a mobile device is still a significant challenge.
This issue is a current topic of discussion around HTTP Archive Mobile. Blaze.io supports the project through its Mobitest framework: every two weeks I submit 1000 URLs to their framework which downloads each URL 3 times on a real iPhone. I love that the resultant waterfall chart and screenshots are gathered from a real phone. But our next step is to scale this up to 100K and then 1M URLs. It’s going to be hard to scale up to this using real phones due to cost and maintenance issues.
Another alternative is to use an emulator. The timings won’t be identical to the actual mobile device, but HTTP Archive Mobile is more focused on HTTP headers, size and number of responses, etc. These structural aspects of the page should be identical between the actual phone and its emulator. We’ll soon do side-by-side tests to confirm this.
But this post is about a third alternative: UA switching. Changing the User-Agent string of a desktop browser to mimic a mobile browser and using that to gather the data could be accomplished this afternoon. The issue is that the results might differ from what is seen on an actual phone. Websites that don’t do anything special for mobile would probably be similar enough. And websites that look at the UA string on the serverside to determine how to alter the page would also be okay. But websites that adapt the page based on browser feature detection on the clientside, e.g. responsive web design, would differ dramatically.
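To make the distinction concrete, here’s a small illustrative sketch. The breakpoint and the UA substring check are invented for the example, and window.matchMedia isn’t available in older browsers.

```javascript
// Serverside-style detection mimicked on the client: keys off the UA string,
// so spoofing the User-Agent changes the result.
var claimsToBeIphone = /iPhone/.test(navigator.userAgent);

// Clientside adaptation – the JavaScript analog of the CSS media queries that
// responsive designs use: keys off actual browser conditions such as viewport
// width, so a desktop browser with a spoofed iPhone UA still gets the desktop
// layout.
var isNarrowViewport = window.matchMedia &&
    window.matchMedia('(max-width: 480px)').matches;
```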
When asked for an example of such a site I recalled seeing Scott Jehl at Breaking Development Conference talking about the recent launch of Boston Globe using responsive web design. It’s an amazing feat of engineering. Its adaptability from a single code base across so many devices is beautiful to watch in this promo video.
Because the Boston Globe uses responsive web design, the UI varies depending on the browser – not the UA string. Here’s a screenshot from my iPhone. The content has been modified to fit on the iPhone’s smaller screen.
Figure 1. iPhone browser
Figure 2 shows the Boston Globe as rendered inside Chrome on my laptop. Since the screen is much bigger the content is laid out quite differently compared to the iPhone. We see three columns instead of one, a wider horizontal navigation bar, etc.
Figure 3 is also from Chrome, but in this case I changed the User-Agent string to mimic an iPhone:
Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_1 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5
Even though the UA string says “iPhone”, the page is laid out exactly as it is for normal Chrome. (I confirmed the UA string by inspecting HTTP headers.)
Figure 3. Chrome with iPhone User-Agent string
Responsive web design is fairly new. There are still a number of websites that modify the HTML layout based on serverside UA detection. In fact, while generating the screenshot for Figure 3 I tried several other websites and most of them returned a layout customized for iPhone.
If you’re working on a framework to monitor mobile websites, be careful about taking the UA switching approach. If the websites you’re monitoring do serverside UA detection, you’ll probably be okay. But if the web app is based on clientside feature detection, the results you receive from UA switching won’t match what is seen on real mobile devices. As the adoption of responsive web design increases so will the number of websites that fall into this gap of mismeasurement. Real devices or emulators are a safer bet.
App cache & localStorage survey
In preparation for my talk at HTML5 Dev Conf I surveyed the Alexa US Top 10 websites to see which ones use app cache and localStorage. I mostly focus on mobile these days so it’s natural to think I ran these tests from a mobile browser, which I did. But I also tested with a desktop browser.
Some people might ask, Why a desktop browser?
To which I would reply, Why not a desktop browser?
I blogged previously about how Google and Bing use localStorage on mobile devices. It’s a powerful performance technique. They break up their JavaScript and CSS into smaller blocks and save them in localStorage. Simultaneously they set a cookie so that the server knows not to send that payload on subsequent searches, a savings of 150-170 kB before gzipping.
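Here’s a rough sketch of that pattern for the JavaScript side (the CSS side is analogous). The key and cookie names are made up, and the real Google and Bing implementations are more involved (chunking, versioning, and invalidation).

```javascript
// Save a block of JavaScript in localStorage and set a cookie so the server
// knows it can omit that block from future responses.
function saveModule(key, jsText) {
  try {
    localStorage.setItem(key, jsText);
    document.cookie = 'jscache=' + key + '; path=/'; // server checks this cookie
  } catch (e) {
    // localStorage full or unavailable - fall back to normal delivery
  }
}

// On later page views, execute the cached block instead of downloading it.
function loadModule(key) {
  var jsText = localStorage.getItem(key);
  if (!jsText) return false;
  var script = document.createElement('script');
  script.text = jsText; // runs the cached code
  document.getElementsByTagName('head')[0].appendChild(script);
  return true;
}
```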
In the comments on that post Simon asked:
I’m curious why the techniques with LocalStorage are used for the mobile versions of the search sites but not for the standard desktop versions – I would think that this technique would work well [on] all clients, no?
I agree – this and other HTML5 web storage techniques make sense for the desktop, too. There are some reasons why we’re seeing these used first on mobile:
- Mobile latencies are higher and connection speeds are lower, so clientside caching is more important on mobile.
- Mobile disk cache sizes are smaller than desktop sizes, so a better alternative is needed for mobile.
- There are still desktop browsers with significant market share that are missing many HTML5 capabilities, whereas mobile browsers have more support for HTML5.
Even though the motivations for app cache and localStorage are stronger on mobile, they should also be used when the website is accessed from a desktop browser. I did a quick survey to see which of the top 10 websites were using app cache or localStorage on either mobile or desktop. Here are the results:
| Website | Mobile app cache | Mobile localStorage | Desktop app cache | Desktop localStorage |
|---|---|---|---|---|
| Google Search | no | yes | no | yes [1] |
| Bing | no | yes | no | no |
| Facebook | no | yes | no | no |
| Yahoo! Front Page | no | yes [2] | no | no |
| YouTube | no | yes | no | no |
| Amazon | no | no | no | yes [3] |
| Twitter | yes | yes | no | yes |
| LinkedIn | no | no | no | no |
| eBay | no | no | no | no |
| MSN.com | no | no | no | no |
[2] Yahoo! Front Page only stores two numbers.
[3] Amazon on the desktop only stores a ~30 character string.
Ignoring the small uses of localStorage, 5 of these top 10 websites use localStorage on mobile, but only 2 out of 10 use localStorage (or sessionStorage) on desktop. None of them use app cache on either mobile or desktop, except for Twitter on mobile. I’m surprised no one is using app cache. It’s not appropriate for all applications, such as search, but I would enjoy catching up on Twitter, Facebook, and LinkedIn on the plane – potentially from my laptop in addition to my mobile device. App cache not only brings this offline capability, but provides better caching than the browser’s disk cache.
I’ll repeat this survey in a few months to track the progress. I expect we’ll see the use of localStorage and app cache increase, and for desktop to catch up to mobile.
Making a mobile connection
I just returned from Breaking Development Conference, an amazing gathering of many of the brightest minds in mobile web development. On the flight home I watched the video ($$) and slides from Rajiv Vijayakumar’s talk on Understanding Mobile Web Browser Performance at Velocity 2011. Rajiv works at Qualcomm where his team has done extensive performance analysis of the Android browser. Some of their findings include:
- Android 2.2 has a max of only 4 HTTP connections which limits parallel downloads. (This was increased to 8 in Android 2.3 and 35 in Android 3.1 according to Browserscope.)
- It supports pipelining for reduced HTTP overhead.
- Android’s cache eviction is based on expiration date. This is a motivation for setting expiration dates 10+ years in the future.
- Android closes TCP sockets after 6 seconds of inactivity.
This last bullet leads to an interesting discussion about the tradeoffs between power consumption and web performance.
Radio link power consumption
3G devices surfing the Web (do people still say “surfing”?) establish a radio link to the carrier’s cell tower. Establishing and maintaining the radio link consumes battery power. The following graph from Rajiv’s slides shows power consumption for an Android phone while loading a web page. It rises from a baseline of 200 mA to ~400 mA as the radio link is initialized. After the page is loaded the phone drops to 300 mA while the network is inactive. After 10 seconds of inactivity, the radio link reaches an idle state and power consumption returns to the 200 mA baseline level.
The takeaway from this graph is that closing the radio link sooner consumes less battery power. This graph shows that the radio link continues to consume battery power until 10 seconds of inactivity have passed. The 10 second radio link timer begins once the web page has loaded. But there’s also a 6 second countdown after which Android closes the TCP connection by sending a FIN packet. When Android sends the FIN packet the radio link timer resets and continues to consume battery power for another 10 seconds, resulting in a total of 16 seconds of higher battery consumption.
One of the optimizations Rajiv’s team made for the Android browser running on Qualcomm chipsets is to close the TCP connections after the page is done loading. By sending the FIN packet immediately, the radio link is closed after 10 seconds (instead of 16 seconds) resulting in longer battery life. Yay for battery life! But how does this affect the speed of web pages?
Radio link promotion & demotion
The problem with aggressively closing the phone’s radio link is that it takes 1-2 seconds to reconnect to the cell tower. The way the radio link ramps up and then drops back down is shown in the following figure from an AT&T Labs Research paper. When a web page is actively loading, the radio link is at max power consumption and bandwidth. After the radio link is idle for 5 seconds, it drops to a state of half power consumption and significantly lower bandwidth. After another 12 seconds of inactivity it drops to the idle state. From the idle state it takes ~2 seconds to reach full power and bandwidth.
These inactivity timer values (5 seconds & 12 seconds in this example) are sent to the device by the cell tower and thus vary from carrier to carrier. The “state machine” for promoting and demoting the radio link, however, is defined by the Radio Resource Control protocol with the timer values left to the carrier to determine. (The protocol dubs these timer values “T1”, “T2”, and “T3”. I just find that funny.) If the radio link is idle when you request a web page, you have to wait ~2 seconds before that HTTP request can be sent. Clearly, the inactivity timer values chosen by the carrier can have a dramatic impact on mobile web performance.
What’s your carrier’s state machine?
There’s an obvious balance, sort of a yin and yang, between power consumption and web performance for 3G mobile devices. If a carrier’s inactivity timer values are set too short, users have better battery life but are more likely to encounter a ~2 second delay when requesting a web page. If the carrier’s inactivity timer values are set too long, users might have a faster web experience but shorter battery life.
This made me wonder what inactivity timer values popular carriers used. To measure this I created the Mobile State Machine Test Page. It loads a 1 kB image repeatedly with increasing intervals between requests: 2, 4, 6, 11, 13, 16, and 20 seconds. The image’s onload event is used to measure the load time of the image. For each interval the image is requested three times, and the median load time is the one chosen. The flow is as follows:
1. choose the next interval i (e.g., 2 seconds)
2. wait i seconds
3. measure t_start
4. request the image
5. measure t_end using the image’s onload
6. record t_end - t_start as the image load time
7. repeat steps 2-6 two more times and choose the median as the image load time for interval i
8. goto step 1 until all intervals have been tested
The image should take about the same time to load on every request for a given phone and carrier. Increasing the interval between requests is intended to see if the inactivity timer changes the state of the radio link. By watching for a 1-2 second increase in image load time we can reverse engineer the inactivity timer values for a given carrier.
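The measurement itself is simple. Here’s a simplified sketch of one iteration; the real test page repeats each interval three times and takes the median, and the image URL is illustrative.

```javascript
// Wait `interval` seconds of network inactivity, then time how long a small
// uncacheable image takes to load. A 1-2 second jump in load time suggests the
// radio link was demoted during the idle period.
function timeImageLoad(interval, callback) {
  setTimeout(function () {
    var start = new Date().getTime();
    var img = new Image();
    img.onload = function () {
      callback(new Date().getTime() - start); // image load time in ms
    };
    // cache-busting query string so the request always hits the network
    img.src = 'http://example.com/1kb.gif?t=' + start;
  }, interval * 1000);
}

// usage: idle for 6 seconds, then measure
timeImageLoad(6, function (ms) { console.log('load time: ' + ms + ' ms'); });
```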
I tweeted the test URL about 10 days ago. Since then people have run the test 460+ times across 71 carriers. I wrote some code that maps IP addresses to known carrier hostnames so am confident about 26 of the carriers; the others are self-reported. (Max Firtman recommended werwar for better IP-to-carrier mapping.) I’d love to keep gathering data so:
I encourage you to run the test!
The tabular results show that there is a step in image load times as the interval increases. (The load time value shown in the table is the median collected across all tests for that carrier. The number of data points is shown in the rightmost column.) I generated the chart below from a snapshot of the data from Sept 12.
The arrows indicate a stepped increase in image load time that could be associated with the inactivity timer for that carrier. The most pronounced one is for AT&T (blue) and it occurs at the 5 second mark. T-Mobile (yellow) appears to have an inactivity timer around 3 seconds. Vodafone is much larger at 15 seconds. Sprint and Verizon have similar profiles, but the step is less pronounced.
There are many caveats about this study:
- This is a small sample size.
- The inactivity timer could be affected by other apps on the phone doing network activity in the background. I asked people to close all apps, but there’s no way to verify they did that.
- A given carrier might have different kinds of networks (3G, 4G, etc.). Similarly, they might have different inactivity timer values in different regions. All of those different conditions would be lumped together under the single carrier name.
What’s the point?
Hats off to Rajiv’s team at Qualcomm for digging into Android browser performance. They don’t even own the browser but have invested heavily in improving the browser user experience. In addition to closing TCP connections once the page is loaded, they increased the maximum number of HTTP connections, improved browser caching, and more.
I want to encourage this holistic approach to mobile performance and will write about that in more depth soon. This post is pretty technical, but it’s important that mobile web developers have greater insight into the parts of the mobile experience that go beyond HTML and JavaScript – namely the device, carrier network, and mobile browser.
For example, in light of this information about inactivity timers, mobile web developers might choose to do a 1 pixel image request at a set interval that keeps the radio link at full bandwidth. This would shorten battery life, so an optimization would be to only do a few pings after which it’s assumed the user is no longer surfing. Another downside is that doing this would use more dedicated channels at the cell tower, worsening everyone’s experience.
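A hedged sketch of that idea is below; the ping URL, interval, and ping count are invented for illustration, and the right numbers would depend on the carrier’s (unknown) demotion timers.

```javascript
// Ping a tiny image every few seconds to keep the radio link promoted while
// the user is likely still reading, then stop to limit the battery cost.
var pingsLeft = 5;
var pinger = setInterval(function () {
  if (pingsLeft-- <= 0) {
    clearInterval(pinger); // assume the user is done; let the radio idle
    return;
  }
  new Image().src = '/ping.gif?t=' + new Date().getTime(); // cache-busted ping
}, 4000); // shorter than the carrier's assumed demotion timer
```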
The right answer is to determine what the tradeoffs are. What is the optimal value for these inactivity timers? Is there a sweet spot that improves web performance with little or no impact on battery life? How did the carriers determine the current inactivity timer values? Was it based on improving the user’s web experience? I would bet not, but am hopeful that a more holistic view to mobile performance is coming soon.
Waterfall UI Conventions
I spend a lot of time looking at HTTP waterfall charts. Each browser has its own waterfall chart tool which makes sense since the way to track network events differs across browsers. There are some tools that span multiple browsers, like HttpWatch. Since I work across all browsers I’m impacted by the lack of UI consistency across these tools. This blog post summarizes the current state and proposes the adoption of waterfall UI conventions.
It’s easy to see the inconsistencies looking at these waterfall charts from the tools I use most frequently.* Each screenshot shows the waterfall for Wikipedia. The main areas I’m going to discuss are the colors and names for connection states, Content-Types, and loading events. There are many other areas where consistency would be nice – such as overall layout and default columns – but those are more subjective and the tool owner might feel their choices make their tool preferable. The consistency changes I’m suggesting don’t affect the information shown, just how it looks.
One big difference is the information shown in the horizontal bar. Chrome Developer Tools uses the horizontal bar to reflect the Content-Type: blue for HTML, green for CSS, purple for images, etc. All the other tools use the bar to show the length of each connection state: DNS lookup, sending the request, downloading the response, etc. I find the Content-Type more useful, but rather than debate one over the other I like HttpWatch’s approach best: they show both, with the connection states in the bar and the Content-Type in a tiny icon (see the “Type” column). Even if the other tools didn’t want to show icons, they could use font colors to reflect Content-Type. Let’s explore the connection states and Content-Type choices across the tools.
Connection States
The names and colors used for the different network connection states vary across the tools. In some cases, the granularity of connection states varies as well. The colors and names used by each tool are shown in this table:
| Tool | Connection state names |
|---|---|
| Chrome Dev Tools | Blocking, DNS Lookup, Connecting, Sending, Waiting, Receiving |
| Firebug | Blocking, DNS Lookup, Connecting, Sending, Waiting, Receiving |
| HttpWatch | Blocked, DNS Lookup, Connect, Send, Wait, Receive |
| IE9 Dev Tools | Wait, Start, Request, Response |
| WebPagetest | DNS Lookup, Initial Connection, Time to First Byte, Content Download |
Let’s look at the names first. Chrome Dev Tools and Firebug use the same names for every connection state: Blocking, DNS Lookup, Connecting, Sending, Waiting, and Receiving. All of these names are progressive verb forms except “DNS Lookup” – saying “looking up DNS” would be painful. I’d prefer simple verb forms which would also give us consistent tense across all names: Block, Lookup DNS, Connect, Send, Wait, and Receive. It’s also important to get similar connection states across all tools: IE9 Dev Tools and WebPagetest don’t show blocking and combine send & wait into a single state.
The colors are much more inconsistent. Chrome Dev Tools uses the same color for all states. The rest of the tools have almost no overlap. Here’s my proposal:
- Block (gray) – because nothing is really happening
- Lookup DNS (yellow) – like the Yellow Pages
- Connect (red) – because this is the tricky negotiation part (red is caution)
- Send (blue) – it’s a good color that contrasts well with red
- Wait (light purple) – a mellow color while we wait
- Receive (green) – because this is the payment that’s been received (green like money – sorry for the U.S. bias)
These are subjective choices and I’m open to other proposals. I most care about gray for Block and yellow for Lookup DNS. I also defer to someone who understands the color wheel. (I painted for years but never learned.)
Content-Type
Chrome Dev Tools is the only tool that reflects the Content-Type in the waterfall’s horizontal bars. The choice of whether to use the horizontal bar to show Content-Type or connection states is up to the tool developer. My preference would be to follow Chrome Dev Tools and use the bar to show Content-Type. A pop-up or detail view could be used to see the connection state information. Chrome Dev Tools, Firebug, HttpWatch, and IE9 Dev Tools already display a detailed view of connection states when you select a bar.
Regardless of the information shown in the horizontal bars, users would benefit in other ways from consistent colors mapped to Content-Type. This color mapping could be used as the text color in the waterfall chart and in charts of total requests and download size broken out by Content-Type.
The color map from Chrome Dev Tools is:
- HTML (blue)
- JavaScript (orange)
- CSS (green)
- images (purple)
- text/plain (yellow)
- redirect (gray)
I’m fine with these colors. If it was up to me I’d make JavaScript red because I have so many errors in my JavaScript. I’d make CSS purple because that’s “fancier” (CSS is used to make pages look more stylish). I’d make images blue because they’re the most common resource type and my world is mostly blue (it’s a denim thing, not emotions). That leaves green for HTML. But again, purely subjective.
Load Events
Many of the tools draw a vertical line to mark the DOMContentLoaded and window load events. Again, the names and colors vary across the tools:
| Tool | First vertical line | Second vertical line |
|---|---|---|
| Chrome Dev Tools | DOMContent | Load |
| Firebug | DOMContentLoaded | load |
| HttpWatch | Render Start | Page Load |
| IE9 Dev Tools | DOMContentLoaded | Load |
| WebPagetest | Start Render | Document Complete |
I like DOMContentLoaded and Load because I understand exactly what’s being measured. I’m less concerned about the colors; I’d pick blue and green if it was up to me.
Now what?
I’m working with Brian Pane and Jan Odvarko on some UI changes to Jan’s (Honza’s) HAR Viewer. I hope we’ll add Content-Type icons, in which case other tools could adopt those. If you’d be willing to create those icons please contact me.
As for the names and colors, I’m not sure how to proceed. Mark Nottingham suggested starting an “ether pad or wiki page”. I’d appreciate comments on these ideas and ways to move forward. Greater consistency across these tools will make it easier for developers to get on the web performance bandwagon, which is something I hope we all want.