Thursday, September 13th, 2007

replaceHTML for when innerHTML dogs you down

Category: JavaScript, Library

Steven Levithan, of RegexPal fame, ran into performance issues with innerHTML because “every keydown event potentially triggers the destruction and creation of thousands of elements,” so he started digging into it.

He has a test page that demonstrates the issue. Here are some sample results:

1000 elements…
innerHTML (destroy only): 156ms
innerHTML (create only): 15ms
innerHTML (destroy & create): 172ms
replaceHtml (destroy only): 0ms (faster)
replaceHtml (create only): 15ms (~ same speed)
replaceHtml (destroy & create): 15ms (11.5x faster)

15000 elements…
innerHTML (destroy only): 14703ms
innerHTML (create only): 250ms
innerHTML (destroy & create): 14922ms
replaceHtml (destroy only): 31ms (474.3x faster)
replaceHtml (create only): 250ms (~ same speed)
replaceHtml (destroy & create): 297ms (50.2x faster)

The code for his replaceHtml is:

/* This is much faster than using (el.innerHTML = str) when there are many
   existing descendants, because in some browsers, innerHTML spends much longer
   removing existing elements than it does creating new ones. */
function replaceHtml(el, html) {
    var oldEl = (typeof el === "string" ? document.getElementById(el) : el);
    var newEl = document.createElement(oldEl.nodeName);
    // Preserve the element's id and class (other properties are lost)
    newEl.id = oldEl.id;
    newEl.className = oldEl.className;
    // Replace the old with the new
    newEl.innerHTML = html;
    oldEl.parentNode.replaceChild(newEl, oldEl);
    /* Since we just removed the old element from the DOM, return a reference
       to the new element, which can be used to restore variable references. */
    return newEl;
}
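
The return value is easy to overlook: the old element is gone from the DOM after the call, so any variable pointing at it must be re-assigned. A minimal usage sketch (the "results" id and the helper function here are made up for illustration):

// Hypothetical usage; "results" is an element id assumed to exist in the page.
var results = document.getElementById("results");

// After the call the old node is detached, so re-assign the variable
// from the return value instead of keeping the stale reference.
results = replaceHtml(results, buildBigHtmlString());

// Made-up helper that generates a large HTML string, purely for illustration.
function buildBigHtmlString() {
    var parts = [];
    for (var i = 0; i < 15000; i++) {
        parts.push("<span>" + i + "</span>");
    }
    return parts.join("");
}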

Posted by Dion Almaer at 7:04 am

28 Comments »

Thanks for the mention, Dion! Note that the above code is already outdated…you can see the new version at http://blog.stevenlevithan.com/archives/faster-than-innerhtml .

Two (fairly obvious) issues were pointed out on my blog, the main one being that I should’ve been using cloneNode() instead of createElement(), so that attributes are preserved.

Comment by Steven Levithan — September 13, 2007
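
For readers who don't click through, a rough sketch of the cloneNode-based idea described above (an illustration of the fix, not necessarily the exact code at that URL):

function replaceHtml(el, html) {
    var oldEl = (typeof el === "string" ? document.getElementById(el) : el);
    // A shallow clone copies the element's attributes but none of its children,
    // so id, class, inline style, and other attributes are preserved.
    var newEl = oldEl.cloneNode(false);
    newEl.innerHTML = html;
    oldEl.parentNode.replaceChild(newEl, oldEl);
    return newEl;
}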

Never thought I would say this, but it seems that IE is much faster than Firefox in all the tests I did, and even that in IE there is no difference in performance between innerHTML and replaceHtml. Kind of strange, but try it yourself.

Comment by Matt — September 13, 2007

It’s slower in Opera 9.23:

15000 elements…
innerHTML (destroy only): 31ms
innerHTML (create only): 140ms
innerHTML (destroy & create): 172ms
replaceHtml (destroy only): 31ms (~ same speed)
replaceHtml (create only): 156ms (1.1x slower)
replaceHtml (destroy & create): 1328ms (7.7x slower)
Done.

Comment by Luca — September 13, 2007

@Luca, that depends on your system and background interference. On my work system (Xeon 2.8GHz, 1GB RAM, WinXP), Opera 9.23 consistently runs the 15,000 element “destroy & create” test between 1.5 and 3.5 times faster with replaceHtml than with innerHTML. I believe it was also faster on my home system (Core 2 Duo 1.8GHz?, 2GB RAM, WinXP), but by a different margin. Note that, if you run the test several times, generally only the fastest times are relevant, since averaging the cost of background interference is not very enlightening.

Comment by Steven Levithan — September 13, 2007

Indeed, in Opera 9.23 it seems to be faster for more elements (5000, 10000 and 15000), but slower for the lower numbers. What amazes me though is how much faster IE (both 6 & 7) are than FF2 (and I guess that’s the problem that this solves: FF2 is dog slow at this).

On my system:
FF2
15000 elements…
innerHTML (destroy only): 26687ms
innerHTML (create only): 656ms
innerHTML (destroy & create): 27969ms
replaceHtml (destroy only): 110ms (242.6x faster)
replaceHtml (create only): 594ms (1.1x faster)
replaceHtml (destroy & create): 703ms (39.8x faster)
Done.

IE7
15000 elements…
innerHTML (destroy only): 94ms
innerHTML (create only): 391ms
innerHTML (destroy & create): 406ms
replaceHtml (destroy only): 78ms (1.2x faster)
replaceHtml (create only): 359ms (1.1x faster)
replaceHtml (destroy & create): 422ms (1.0x slower)

Opera 9
15000 elements…
innerHTML (destroy only): 47ms
innerHTML (create only): 250ms
innerHTML (destroy & create): 578ms
replaceHtml (destroy only): 94ms (2.0x slower)
replaceHtml (create only): 219ms (1.1x faster)
replaceHtml (destroy & create): 328ms (1.8x faster)
Done.

Comment by Marc — September 13, 2007

Very interesting – I thought innerHTML was already faster than DOM operations, and replaceHtml is even faster. Thanks for this tip and also for the benchmarks!

Comment by Harald — September 13, 2007

This seems like a workaround for a bug in Firefox. Why not report/fix the bug instead of doing weird stuff that is actually slower in other browsers?

Comment by Gorm — September 13, 2007

I believe you would probably want to destroy/remove any event listeners from the old node before replacing it to avoid potential memory leaks, unless replacing the old node with the new node does this implicitly (I don’t think it does).

Comment by Brad Harris — September 13, 2007

@Gorm, the only browser it is always slower in is IE (and only by a narrow margin, which is easily avoidable using conditional compilation). Performance improvements comparable to those in Firefox can also be seen in Safari 3.0.3 beta. Luca’s numbers for Opera are clearly a case of background interference, since the length of the “destroy & create” test should not be much different than the length of the “destroy only” and “create only” tests added together.

If and when Firefox and Safari improve/fix the related issues, obviously this would have little benefit. Until then, it can make the difference between a “broken” application and blazing performance (or perhaps a less dramatic but still positive change in other apps).

Comment by Steven Levithan — September 13, 2007
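
On the conditional compilation point: a sketch of how an IE-only fallback to plain innerHTML could look (an illustration of the idea, not necessarily the exact code from the updated post):

function replaceHtml(el, html) {
    var oldEl = (typeof el === "string" ? document.getElementById(el) : el);
    /*@cc_on // Only IE executes this block; plain innerHTML is already fast there.
        oldEl.innerHTML = html;
        return oldEl;
    @*/
    // Other browsers treat the block above as a comment and take the swap path.
    var newEl = oldEl.cloneNode(false);
    newEl.innerHTML = html;
    oldEl.parentNode.replaceChild(newEl, oldEl);
    return newEl;
}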

It’s faster because updating a node that isn’t in the DOM is a lot faster than doing the same on a node that is in the page. Creating a new node isn’t needed – take the node out, update it, and put the original node back in; it will give you the same results without invalidating existing references to your node. I wrote it up on my blog: http://www.bigdumbdev.com/2007/09/replacehtml-remove-insert-put-back-is.html

Comment by Steve Brewer — September 13, 2007
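
For illustration, a minimal sketch of the remove/update/re-insert idea Brewer describes (the function name is made up; see his post for the real code):

function updateHtml(el, html) {
    var node = (typeof el === "string" ? document.getElementById(el) : el);
    var parent = node.parentNode;
    var next = node.nextSibling;       // remember the node's position
    parent.removeChild(node);          // do the expensive work while the node is out of the DOM
    node.innerHTML = html;
    parent.insertBefore(node, next);   // put the very same node back where it was
    return node;                       // existing references to the node stay valid
}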

Nice code. I thought I could speed up our virtual table with this hack, but it seems it only gives this speed improvement if the parent element really has many child nodes. If you put a single div around the test string, the speed improvement drops to just 1.3x no matter how many elements are inside this div.

In our case the parent element would only have one single child, the table, which is regenerated using innerHTML and this hack does not help us much :-(

Comment by Fabian Jakobs — September 13, 2007

Can we have “Email Article” on this site? I wanna email this article to my friend since he’s an Ajax developer, but I have to manually copy and paste the URL. I thought “Email Article” was already a standard feature nowadays.

Just my 2 cents.

Comment by Andy — September 13, 2007

@Brad Harris: This function is not appropriate if you need to worry about cross-browser memory leak issues resulting from event listeners on child nodes, etc. However, the overhead of dealing with such issues when you don’t need to (which many frameworks do for you… e.g., with jQuery.html(value), etc.) can be just as much if not more of a problem.

@Steve Brewer:
I hope you realize that your currently posted test (which builds a single textNode consisting of the string “CONTENT” repeated 15,000 times) is not remotely comparable. The approach you describe is certainly faster than plain innerHTML, at least in Firefox, but it is not faster than replaceHtml (it is easily demonstrable that the “destroy” step is slower).

@Fabian Jakobs:
That is very interesting! In fact, upon initial tests, it appears that simply wrapping the new html string in a <div> element before using innerHTML is a very effective solution against the performance issues in both Firefox and Safari!

Comment by Steven Levithan — September 13, 2007
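
A quick sketch of the wrapping trick mentioned in the reply to Fabian Jakobs (a hedged illustration; it assumes an extra wrapper <div> is acceptable in your markup):

function setHtmlWrapped(el, html) {
    var target = (typeof el === "string" ? document.getElementById(el) : el);
    // Wrapping the new markup in a single div before assignment appeared to
    // sidestep the slow removal path in Firefox and Safari at the time.
    target.innerHTML = "<div>" + html + "</div>";
    return target;
}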

You should probably run a “for (attr in node)” to preserve other attributes, not just the id.

Comment by Japayth — September 13, 2007
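
One way to do what Japayth suggests is to walk the old element’s attributes collection; a hedged sketch (browser attribute quirks of the era are glossed over):

function copyAttributes(fromEl, toEl) {
    var attrs = fromEl.attributes;
    for (var i = 0; i < attrs.length; i++) {
        // Old IE lists unspecified default attributes too; copy only real ones.
        if (attrs[i].specified) {
            toEl.setAttribute(attrs[i].nodeName, attrs[i].nodeValue);
        }
    }
}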

Hey Steve,
My test used 15,000 spans, not one text node (didn’t escape my HTML in blogger – I’ve updated).

The perf gain is about manipulating a node that’s in the DOM vs. one that isn’t. This is a common performance trick (see Y!’s performance blog).

You’ll have to demonstrate “the destroy step is slower”. If you’re replacing a big chunk of HTML with another big chunk of HTML, your function and remove-update-replace are pretty much equal speed-wise – except replaceHtml has gone and blown away the original node.

Comment by Steve Brewer — September 13, 2007

Dion, et al,

You guys should either a) Have a “performance” category that you tag articles with, and/or b) enable user tagging

Either way, it’d be great to come back to Ajaxian.com when I need to do some performance improvements, and find articles like this easily.

Thanks!!

– Matt

Comment by Matt — September 13, 2007

“didn’t escape my HTML in blogger – I’ve updated” –Steve Brewer

Gotcha. As for, “You’ll have to demonstrate…”, see here: http://stevenlevithan.com/demo/replaceHtml2.html

At least on my system, withRemove() is consistently slower in all of the four big browsers.

Comment by Steven Levithan — September 13, 2007

the question is: who in real life needs 1000+ innerHTMLs in a row?

Comment by anonymous — September 13, 2007

It should be pretty clear which approach to use. If you possibly have references to the element in question, then use Brewer’s; if not, you can use Levithan’s.

Comment by Kris Zyp — September 13, 2007

Yes, and I’m sorry if I contributed towards turning this into a somewhat heated discussion. As noted in the parallel discussion with Brewer on my blog, his approach has less potential for unexpected side effects, so it is preferable for things like JavaScript frameworks.

Comment by Steven Levithan — September 13, 2007

OT: I second Matt’s motion for tagging, I swear I just had a similar thought 30 seconds ago.

Comment by Charles — September 13, 2007

No matter how hard I try, the results are unreliable or unsatisfying. Opera on my home Vista machine fluctuates weirdly, and IE7 always seems to be slower.

Comment by deadcabbit — September 14, 2007

@deadcabbit: Alas, it’s often the case that such is life in the world of cross-browser JavaScript performance testing, especially if you or your OS have a lot going on in the background. Also note that many people give too much weight to (often meaningless) averages of repeated tests. The numbers I posted are based on best times for each discrete test on my personal system over numerous run-throughs. IE performance times are typically so fast that minor differences can seem significant if you only look at the comparison calculations. If you look at the code, you will see that it cannot possibly be faster than simple use of innerHTML in IE, since that’s exactly what’s used in that browser.

Comment by Steven Levithan — September 14, 2007

I don’t know what kind of computer you have, but for me replaceHtml is ALWAYS slower than innerHTML (Opera 9.23, FF 2.0.0.6, IE 6, Safari 2.0.4).

Comment by Vital — September 14, 2007

@Steven Levithan
I get slower results with the new Opera alpha from time to time. Actually it’s faster sometimes, but slower other times (not by any noticeable margin, though). My point is that you don’t know if the browser will change its implementation in the future. I would rather see vendors fix/optimize than rely on unpredictable hacks.

Comment by Gorm — September 14, 2007

Anon: the question is: who in real life needs 1000+ innerHTMLs in a row?

I do. Next question.

Comment by Marty — September 14, 2007

About benchmarking:
Maybe someone could take the trouble of looking up the DOMs in popular apps like Gmail, Google Docs, and Windows Live, sites built with YUI, ZK, and GWT, and script.aculo.us-based sites, and come up with a sort of report on the number of elements of each type you would handle in a “typical Web 2.0” site. Obviously one number, or a few, is useless.
So we could say that something like this is useful:
No. of divs, tables, td’s, tr’s,
level 1, level 2, level 3 span’s and div’s,
elements with heavy CSS styling: 4-5 rules per element,
elements with minimal styling: 1 or 2 CSS rules,
inherited rules, inline styles

Kinda like saying you *become* the browser and its DOM and then analyse everything in it from a benchmarking perspective.
Much like any other decently acceptable benchmarking system.

Anyone game?
PS: If you’re that confident about the quality of such a test that you perform, and document, you could sell the whitepaper ;-)

Comment by Tom — September 14, 2007

“Performance” would be a nice tag – so we could easily find this post later ;-)

Comment by Robert — September 14, 2007
