Tuesday, March 11th, 2008

IE 8 and Performance

Category: Browsers, IE, Performance

Steve Souders has posted on IE 8 and performance improvements.

One new nugget of information that I haven’t seen anywhere else: scripts are now downloaded in parallel (execution, of course, is still serial):

Increasing parallel downloads makes pages load faster. (For users with slower CPUs or Internet connections it could possibly be worse, but for most users it’s faster.) The HTTP 1.1 spec recommends that browsers only download two items in parallel per hostname, but the spec was written in 1999. Today’s clients and servers can support more parallel downloads, so IE8 has increased the number of downloads per hostname from 2 to 6.

Increasing parallel downloads makes pages load faster, which is why downloading external scripts (.js files) is so painful. Firefox and IE7 and earlier won’t start any parallel downloads while downloading an external script. These days, with the greater adoption of Web 2.0 and DHTML, many sites contain multiple scripts which means those pages will have long periods where all other downloads are blocked. It’s understandable that these scripts need to be executed sequentially (code dependencies) but there’s no reason they couldn’t be downloaded in parallel. And that’s exactly what IE8 has done. It’s the first browser I’ve seen that has implemented this critical improvement for load times. Facebook has got to be thankful for this. They have 17 external scripts on their page. In most browsers this causes the page to load slowly for users coming in with an empty cache. But for users coming in using IE8 the scripts load ~80% faster because they’re loaded in parallel. In this screenshot showing HTTP requests for Facebook we see parallel script loading, and we also see them loading 6 at-a-time. Both of these IE8 enhancements dramatically speed up pages.
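
For browsers that don’t yet download scripts in parallel, the usual workaround is to inject script elements dynamically instead of using static script tags. A minimal sketch (the URLs are hypothetical, and note that dynamically inserted scripts are not guaranteed to execute in order, so this only suits scripts that don’t depend on each other):

    // Dynamically inserted scripts download in parallel in most browsers,
    // instead of blocking other requests the way static script tags do.
    function loadScript(url) {
      var script = document.createElement('script');
      script.src = url;
      document.getElementsByTagName('head')[0].appendChild(script);
    }
    // Hypothetical, independent scripts -- execution order is not guaranteed.
    loadScript('/js/widgets.js');
    loadScript('/js/tracking.js');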

This dovetails nicely with the other items that we have already heard about:

  • 6 downloads per host
  • data: URIs, which mean you can embed your rounded corners as inline images (see the sketch below)
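
To make the data: URI point concrete, here is a minimal sketch. The base64 payload is the well-known 1×1 transparent GIF spacer; a real rounded-corner image would come out of a build step and be inlined the same way:

    // Inlining a small image as a data: URI saves a separate HTTP request.
    // The payload here is a 1x1 transparent GIF; swap in your own image.
    var corner = document.createElement('img');
    corner.src = 'data:image/gif;base64,' +
      'R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7';
    document.body.appendChild(corner);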

I wonder if IE 8 also has a total max-connections limit across all hostnames?
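
One way to find out empirically: fire a batch of requests at an endpoint that deliberately delays its response and watch how the completions cluster. A rough sketch — ‘/slow.gif’ is a hypothetical server endpoint that sleeps a couple of seconds before responding:

    // Completions should arrive in bursts the size of the per-host
    // connection limit (2 in IE7, 6 in IE8).
    var start = new Date().getTime();
    var log = document.createElement('pre');
    document.body.appendChild(log);
    for (var i = 0; i < 12; i++) {
      var img = new Image();
      img.onload = (function (n) {
        return function () {
          var elapsed = new Date().getTime() - start;
          log.appendChild(document.createTextNode(
            'request ' + n + ' finished at ' + elapsed + 'ms\n'));
        };
      })(i);
      // A unique query string defeats the cache on each request.
      img.src = '/slow.gif?n=' + i;
    }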

Posted by Dion Almaer at 12:01 am

21 Comments »


Very nice changes we are seeing from Internet Explorer 8, but I’m still not convinced. I’ll admit that this is a good feature, and it’s starting to change my opinion of the browser, but I’m not expecting anything big until I see them achieve standards compliance.

Comment by musicfreak — March 11, 2008

How difficult would it be for Firefox and WebKit/Safari to do the same?

Comment by dkubb — March 11, 2008

Still lots to be done on IE performance; it’s just too early to draw general conclusions. We’ll have to wait for at least Release Candidate 1.
Cheers

Comment by ajaxus — March 11, 2008

Well, for the clients it’s a good thing, but I don’t know if it’s good for the server: if each client requests 6 files at the same time, then the web server is going to create 6 processes, one for each request. If it works that way, there are going to be more requests per second… maybe someone can explain that to us in detail?

Comment by sebasgt — March 11, 2008

sebasgt may have a point: are IE8 users expected to drop 6x the load of Firefox and WebKit users? If I have multiple hosts in my legacy app (www1, www2, etc.), IE8 may open as many as (6 × number of hosts) connections, while the rest of the browsers will open (2 × number of hosts), scripts excluded.

Is that a good thing?

Comment by icoloma — March 11, 2008

If you’re seeing enough users, then this shouldn’t matter: although users may request more files at once, in the end they’re still transferring just as much data, and therefore consuming just as much server capacity in the larger scheme of things.

Comment by Joeri — March 11, 2008

I am using IE 7 at work and FF2 at home. Seriously, parallel download doesn’t impress me. If they could do parallel rendering/processing between tabs, I would be truly impressed. If you go to Facebook or iGoogle with 12 RSS portlets, your browser hangs there and you cannot switch to the others. One of my reasons for having many tabs is to have one loading while I am reading the others, so I can save some time waiting for a page to load with 10000000000000000 ads.

I hope FF3 has this feature.

Comment by rebecca20 — March 11, 2008

Isn’t that what you are doing when you change Firefox’s about:config
and edit:
network.http.pipelining, network.http.proxy.pipelining, network.http.pipelining.maxrequests

Google “speed up firefox”. Seems to me it’s the exact same thing… but in Firefox you get to change the setting and make it… 300 simultaneous connections.
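
For reference, those are the prefs in user.js form (a file Firefox reads from your profile directory at startup; the values here are just for illustration):

    // user.js -- the about:config prefs named above.
    user_pref("network.http.pipelining", true);
    user_pref("network.http.proxy.pipelining", true);
    user_pref("network.http.pipelining.maxrequests", 8);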

Comment by JeromeLapointe — March 11, 2008

@ Jerome: Yep, you could… but from a responsibility point of view, this is not cool. It potentially puts way too much load on servers. Maybe it isn’t such a big deal if less than 1 percent of web users are implementing this hack, but if everyone started doing it, there would be some seriously major load problems for servers… which would cause a lot more downtime than there is right now. There is a good reason why it was set at 2 back in ’99. 6, 8, or even 10 seems fairly reasonable now, but if you’re doing more than that, it’s selfish.

Comment by Carbon43 — March 11, 2008

The point I’m making is that it’s nothing new. It’s just a setting. 300 is an arbitrary number… if your website makes over 60 HTTP requests, you need to look into bundling things. 6 connections is probably way too conservative… presumably so as not to “break the web”.

A real innovation would have been to let their browser accept “archives”… compressed archives… and allow us to bundle content for delivery (in ways that make sense… cacheable together… compressible together).

Comment by JeromeLapointe — March 11, 2008

I mean… it’s still nice that we’re moving forward on the number of connections… maybe they could have put in a specific header that lets the server raise or lower the number of connections based on its own capabilities…

Comment by JeromeLapointe — March 11, 2008

I support parallel downloading.
The responsibility for keeping the web fast should rest with server/network administrators, not users’ browsers.

Comment by johnnymm — March 11, 2008

@Jerome: You are talking apples and oranges. Pipelining is something completely different, though related. Firefox will block all downloads while fetching a JavaScript file OR a CSS file; only one will download at a time. It was an easier way to ensure that the JS or CSS files were interpreted in the correct order. I mentioned this to the devs years ago. This also disables pipelining while a JS or CSS file is being downloaded.

Personally, I can’t believe this hasn’t been addressed before. And without doing the same for CSS, it is only somewhat useful (some initial scan of the content would be needed for references to @import — but they should be at the top by spec, so not a big deal).

It is kinda like HttpOnly cookies in Firefox. It takes them like six years to do something because it would change the format of the cookie file. Those SixApart guys have been asking for that one for ages.

But I understand: all browser developers are interested only in what interests them (cool to work on, demands of the boss), since web developers are not directly their clients.

Comment by Steve Roussey — March 11, 2008

For those saying that it will put a greater load on the server: it won’t. The server still has to deal with the same number of requests in the end, it just has to do more at the same time, which is not a bad thing at all. In the end it doesn’t matter. The only time it would is if you clicked Stop while loading the page, in which case probably 5/6 of those requests would be wasted, and then yes, it would be unneeded load on the server.

Comment by musicfreak — March 11, 2008

Well, it’s not a good thing for the server to serve all the requests at the same time. E.g., if there are 50 IE 8 users requesting the same page at the same time, the server is going to need 300 processes to serve that! Apache is configured for 256 simultaneous requests by default, so 44 requests get appended to the wait queue. This doesn’t happen with the other browsers; maybe 6 is too much?

Comment by sebasgt — March 12, 2008

If you can’t configure your server to put a cap on the maximum number of requests handled in parallel, you’re not much of a server admin. I agree that dealing with this stuff is the server’s responsibility, not the browser’s. After all, if your server can’t deal with it gracefully, it is wide open to DoS.

Comment by Joeri — March 12, 2008

@musicfreak et al…

The point isn’t that the server will somehow have to serve out more information… we can all agree (it’s obvious) that it will serve out the same total information.

The issue is peak load: it’s not having to serve X number of requests that brings servers to their knees, it’s having to serve X number of requests in a very short period of time. (The Digg effect.)

If it’s just one or two greedy people trying the maxrequests bit, no biggie. But if everyone starts doing it… you’ll see a lot more downtime. I guarantee it.

Math: 50,000 users hitting a server in a span of, say, 30 seconds with requests per hostname at 2 equals 100,000 requests in 30 seconds (obviously a client won’t only be making two requests). That’s going to be a balanced load… and once a download has finished, the client begins the next download. It may take a tiny bit longer client-side, but the server won’t be hit as hard.

If, on the other hand, everyone has their requests-per-hostname set to… let’s say 20, the number of requests hitting the server in the same period is 1,000,000.

It’s ignorant and irresponsible to suggest that server admins should just be able to handle it… heard of budgets? Although I do agree with Joeri that being able to handle DoS is a must these days in an enterprise environment.

Comment by Carbon43 — March 12, 2008

What a disaster! They have just tripled the connection load on the server. Where once you could handle 20k users with 40k connections, you will now need 120k connections! But wait! There are only 64k ports in TCP/IPv4! So a server that could handle 20k users can now only handle 10k.

They are so stupid. The increasing-connections trick only works if you are the only bunny with lots of connections and everybody else has only 2. If everybody else has 6, then you’ll need 12 to get a better share of the server’s network capacity… but then IE9 will trump that and go to 24 connections.

I guess it’s not a problem for IIS, which chokes after 2k users.

Comment by Greg Wilkins — March 12, 2008

@Steve Roussey
Thank you for the clarifications.

Comment by JeromeLapointe — March 13, 2008

I don’t know if there are specific technical reasons why servers don’t limit these connections themselves, but it just seems wrong that this limit was never imposed by the server rather than relying on the clients…

Comment by JeromeLapointe — March 13, 2008
