Tuesday, December 19th, 2006

Using CNAMES to get around browser connection limits

Category: Ajax, Articles

Ryan Breen has written up a detailed post on Circumventing browser connection limits for fun and profit, in which he discusses the long-standing HTTP/1.1 limit of 2 connections per host and the benefit you can get from a simple CNAME hack.

The average load time when using 2 connections is 7.919 seconds. The average load time when using 6 connections is 4.629 seconds. That’s a greater than 40% drop in page load time. This technique will work anywhere that you have a large block of object requests currently served by one host.

There is plenty of precedent for this approach in real world Ajax apps. To exploit connection parallelism, the image tiles at Google Maps are served from mt0.google.com through mt3.google.com. Virtual Earth also uses this technique.
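For a feel of how this plays out in page code, here is a minimal sketch of the client side of the technique. The hostnames (static0.example.com through static3.example.com) and the hashing scheme are placeholders, not anything from Ryan's post; the point is that each path deterministically maps to one alias of the same server, so caching stays consistent while the browser sees several hosts to download from in parallel.

```typescript
// Minimal sketch: shard static asset URLs across several CNAME aliases
// of the same origin server to raise the number of parallel downloads.
// Hostnames and hash function are illustrative assumptions.

const ALIASES = [
  "static0.example.com",
  "static1.example.com",
  "static2.example.com",
  "static3.example.com",
];

// Simple deterministic hash so the same path always hits the same alias,
// keeping browser and proxy caching consistent across page views.
function hashPath(path: string): number {
  let h = 0;
  for (let i = 0; i < path.length; i++) {
    h = (h * 31 + path.charCodeAt(i)) >>> 0;
  }
  return h;
}

function assetUrl(path: string): string {
  const host = ALIASES[hashPath(path) % ALIASES.length];
  return `http://${host}/${path}`;
}

// Usage: the browser sees four distinct hostnames, so with a default of
// 2 persistent connections per host it can open up to 8 in parallel.
const tiles = ["tiles/0-0.png", "tiles/0-1.png", "tiles/1-0.png", "tiles/1-1.png"];
tiles.forEach((t) => {
  const img = new Image();
  img.src = assetUrl(t);
  document.body.appendChild(img);
});
```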

You can also use this connection management approach to sandbox the performance of different parts of your application. If you have page elements that require database access and may be more latent than static objects, keep them from clogging up the 2 connections for image content by putting them on a subdomain. This trick won’t cause a huge improvement in the total load time of your page, but it can significantly improve the perceived performance by allowing static content to load unfettered.
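Here is a minimal sketch of that split, again with placeholder hostnames. The post suggests moving the latent elements to a subdomain; to keep the XHR same-origin this sketch shows the equivalent arrangement, with static images served from their own alias while the slow, database-backed call stays on the page's host. Either way, the two classes of request no longer compete for the same pair of connections.

```typescript
// Sketch (hypothetical hostnames and endpoint): static images come from
// their own alias, so a slow, database-backed XHR on the page's host never
// competes with them for the same 2 persistent connections.

const STATIC_HOST = "static.example.com"; // alias for static content

// Static images load from the static alias.
function loadImage(path: string): void {
  const img = new Image();
  img.src = `http://${STATIC_HOST}/${path}`;
  document.body.appendChild(img);
}

// The latent, database-backed request stays on the page's own host
// (same-origin for XHR), using a separate connection pool from the images.
function loadRecentActivity(onDone: (body: string) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/recent-activity", true);
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(xhr.responseText);
    }
  };
  xhr.send();
}

loadImage("header.png");
loadRecentActivity((html) => {
  const panel = document.getElementById("activity");
  if (panel) {
    panel.innerHTML = html;
  }
});
```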

Posted by Dion Almaer at 10:11 am

14 Comments »


This is an excellent way to kill your webserver. It may work with 3 or 4 concurrent users, but large sites won’t work that way.

Comment by Anonymous Coward — December 19, 2006

Of course large sites will work that way – you just have to understand how to load balance the requests. Networking and system architecture 101.

Comment by Alex — December 19, 2006

Google Maps works this way, so I think large sites can make judicious use of this technique without fear.

A poster on the blog pointed out that you can get the same effect with a wildcard DNS entry rather than explicit CNAMEs. That’s certainly true — we just couldn’t figure out how to enable wildcard DNS through our provider for this specific example. :-)

Comment by Ryan Breen — December 19, 2006

Why is it always the anonymous ones who bash and run?!
Thanks for the info

Comment by Mark Holton — December 19, 2006

Great article! I love these performance ones! Thanks!

Comment by Oliver Tse — December 19, 2006

Yes, this is an important performance tip. However, I was under the impression that Firefox had a (single host) connection limit of 4 and that only IE was limited to 2. Is it true that both IE and Firefox limit it to 2?

Comment by Kris Zyp — December 19, 2006

Kris: They aren’t limited to 2 connections but they default to 2. This value can be increased in both browsers but the average user isn’t going to know how so you need to assume they only have 2.

Comment by Andy Kant — December 19, 2006

Interesting note…Firefox may default to its max of 8 connections now. I haven’t modified the value in Firefox 2 and I don’t have Fasterfox installed, but my setting is at 8 anyway.

Comment by Andy Kant — December 19, 2006

If your server gets too busy, you can point some of the CNAME DNS entries to another server, which serves just static content. Or if you don’t have another web server, then to Amazon S3 for static content or even to an Amazon EC2 instance.

It would be even cooler if it could be done based on server load.

Comment by Barry — December 19, 2006

http://www.die.net/musings/page_load_time/

Comment by See Also — December 19, 2006

Andy, looking at my Firefox settings, the 8 connections is for maximum connections to a host. There is a separate line for maximum persistent connections (the HTTP 1.1 connections I’m discussing here), and that is set to 2 in my Firefox.

Comment by Ryan Breen — December 19, 2006

I did this years ago already, and not only because of the default limit of 2 concurrent connections. My problem was that the extra DNS lookups were sometimes slow in some countries or with badly configured DNS servers.

Comment by Michael Schwarz — December 20, 2006

Using CNAMEs is a way to increase download parallelization without users having to change their configuration settings. The issue of increased DNS lookups has already been mentioned. In my research another issue is the amount of thrashing on the client. I tried using different numbers of CNAMEs: 1, 2, 4, and 10. Using 2 was the best. Using 4 and 10 were worse than 1.

As far as the browser config settings go, IE and Firefox are very different. In Firefox, the number of downloads that can happen for a single server depends on what version of HTTP is used. For HTTP/1.0 it is network.http.max-connections-per-server (defaults to 8). For HTTP/1.1 it’s network.http.max-persistent-connections-per-server (defaults to 2). This example is using HTTP/1.1, therefore max-persistent-connections-per-server is a gating factor in the number of simultaneous downloads that is achieved. Before thinking of increasing this configuration setting, or using more than 2 CNAMEs, keep in mind another configuration setting that comes into play: network.http.max-connections (defaults to 24). You can never achieve more than 24 simultaneous downloads. For example, if you had 40 images across 20 CNAMEs, you would hope to get 40 parallel downloads. In actuality, only 24 images would be downloaded in parallel.

IE, on the other hand, does not have a limit on the maximum number of parallel downloads. At least, not that I’ve found. I’ve been able to perform over 100 downloads in parallel (limited to two per hostname for HTTP/1.1, four per hostname for HTTP/1.0).

Comment by Steve Souders — December 22, 2006
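A minimal sketch of the gating arithmetic Steve describes above, assuming the Firefox defaults he cites; the function and its name are illustrative only, not part of his comment:

```typescript
// Sketch of the gating arithmetic from the comment above, using the Firefox
// defaults cited there: 2 persistent connections per server (HTTP/1.1) and
// a global cap of 24 connections (network.http.max-connections).
function firefoxParallelDownloads(
  hostnames: number,
  requests: number,
  perHostLimit: number = 2,  // network.http.max-persistent-connections-per-server
  globalLimit: number = 24   // network.http.max-connections
): number {
  return Math.min(hostnames * perHostLimit, globalLimit, requests);
}

console.log(firefoxParallelDownloads(20, 40)); // 24 -- 40 images over 20 CNAMEs still cap at 24
console.log(firefoxParallelDownloads(1, 40));  // 2  -- everything on a single host
console.log(firefoxParallelDownloads(3, 40));  // 6  -- three aliases of the same server
```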

Question: What happens when you use this strategy in a HTTPS site? Will the browser prompt with a warning saying that this page contains content from other sites?

Comment by Annonymous — November 5, 2007
