Tuesday, April 7th, 2009

Qooxdoo Jumps into Taskspeed FTW (on IE)

Category: Performance, Qooxdoo

The Qooxdoo gang have created tests for Taskspeed with some surprising results:

On IE qooxdoo is by far the fastest framework.

Across browsers and frameworks, qooxdoo gained the highest ranks on all versions of IE (i.e. 6, 7 and 8), and made its lowest mark coming out third on Firefox 3.0. This exceptional IE performance also leads to the best overall score. The IE results are a big surprise and we’ll try to investigate what we do differently (better) than all the other JavaScript libraries.

As always, performance tests should be taken with a grain of salt. It’s hard to judge whether all implementations are really equivalent. For example, in the jQuery tests John Resig implemented all tests in a pure jQuery way. There are obvious optimizations he consciously omitted, but it apparently reflects the genuine jQuery coding style. There is no official qooxdoo way to work with the DOM yet, so we modeled our tests closely after the Dojo and jQuery tests.

Fabian Jakobs analyzes why they’ve performed so well, speculating that because they built a GUI toolkit they’ve been optimizing DOM operations since the beginning to keep it fast, and that because they use Sizzle, their lack of attention to CSS optimizations didn’t kill them.

Fabian also mentions that these results encourage their intention to make qooxdoo’s DOM API available stand-alone:

These results show that we have a good base and encourage us to move forward in this direction.

You can check out the tests for yourself, though as Fabian mentions in his post, they require a trunk build of qooxdoo.

Posted by Ben Galbraith at 9:19 am




I’ve been working with phiggins on Taskspeed for a while (on and off). These results highlight a core flaw in how the current Taskspeed tests work.

qooxdoo isn’t actually this fast.

They simply fire their actual DOM manipulation code asynchronously. This causes the original functions to return immediately, before the actual change has happened.
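To make the flaw concrete, here is a minimal sketch (illustrative only, not qooxdoo's actual code): if a library defers its real work with a timer, a benchmark that only times the synchronous call measures almost nothing.

```javascript
// Hypothetical sketch of the timing flaw, not any library's real code.
// The "work" is deferred with setTimeout, so the function returns
// immediately and a naive timer stops before anything has happened.
function deferredAppend(list, item) {
  setTimeout(function () {
    list.push(item); // the real change happens later
  }, 0);
}

var list = [];
var start = Date.now();
deferredAppend(list, "li");
var elapsed = Date.now() - start;

console.log(elapsed < 50);      // true: almost no time was measured
console.log(list.length === 0); // true: the change hasn't happened yet
```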

Don’t get me wrong, I’m not saying anything even remotely negative about qooxdoo at all. This is simply a flaw in taskspeed itself.

I am working on a new benchmarking tool on github. I’ll have to think of a way to test asynchronous functionality.

I also need to look into qooxdoo some more and see how it all works.

Isn’t JavaScript totally fun! ^_^

Comment by SubtleGradient — April 7, 2009

Interesting. So is it possible to get yourself in trouble with qooxdoo if you’re not expecting it to be async?

Comment by Nosredna — April 7, 2009

The results seem pretty fair; even though there might be flaws in Taskspeed, it’s not just qooxdoo that got benchmarked with these tests. Nice work guys.

Comment by Jadet — April 7, 2009

If there is no standard way to access the DOM in qooxdoo, what do these results reflect? I thought Taskspeed is about profiling common DOM operations? Classical RIA vs. low-level JavaScript Framework … apples vs oranges.

Comment by digitarald — April 7, 2009

All the qooxdoo tests in Taskspeed use synchronous DOM operations. We do have an asynchronous DOM layer, which creates DOM nodes on demand, but this is not used in Taskspeed.
The chart generated by Taskspeed (only IE8 and Firefox 3), however, is a bit unfair because the x-axis starts at one second. Since qooxdoo requires a little over a second to run, it appears that it takes almost no time. For this reason I’ve used the IE7 chart in my blog post, which starts at 0.

There is no standard way to access the DOM in the sense that until recently qooxdoo was not supposed to be used for DOM operations only. The DOM API was simply not promoted as public API. The real standard way was by using the widgets, which are implemented on top of our DOM API. Like ExtJS with ExtCore, we are now opening this DOM API, and I think it’s completely reasonable to compare this part of qooxdoo with low-level libraries.

I would appreciate the addition of your pure DOM tests. This could define the baseline, and we as framework developers could better identify performance issues. I would suggest you fork Taskspeed on github and commit your changes. I’m sure Peter Higgins will integrate your patches. This is how I did it.

Comment by fjakobs — April 7, 2009

Would you mind changing the image in the blog post to e.g. the IE7 charts? I think it’s too irritating.

Comment by fjakobs — April 7, 2009

What is this code garbage??? From the Mootools test (the jQuery test runs are lame-tastic as well, though):

new Element('ul', { id:'setid'+i, 'class':'fromcode'})
new Element('li', { html:'one' }),
new Element('li', { html:'two' }),
new Element('li', { html:'three' })

That is just one example of many where the author is using slow-ass stuff instead of faster methods; maybe the guy who wrote this should try using the methods that someone using each framework correctly would use, instead of writing everything to take as long as possible.

What it should look like:

new Element('ul', { id:'setid'+i, 'class':'fromcode',

C’mon TestGyver, you’re better than that.

Comment by csuwldcat — April 7, 2009

Awe, it butchered my ‘correct version’ above:

new Element('ul', { id:'setid'+i, 'class':'fromcode',

Have to use () for < I guess; use your imagination.

Comment by csuwldcat — April 7, 2009

new Element('table',{ 'class':'madetable' })
new Element('tr')
.grab( new Element('td',{ html:'first' }) )
.grab( new Element('td'), 'top')

Nice Test!!! …and the short way:

new Element('table',{ 'class':'madetable',

Comment by csuwldcat — April 7, 2009

That is just one example of many where the author is using slow-ass stuff instead of faster methods; maybe the guy who wrote this should try using the methods that someone using each framework correctly would use, instead of writing everything to take as long as possible.

That’s exactly the point of having the library authors write the test to their API. The API provided is expected to be an abstraction that developers will use. The purpose of the benchmark is to test real-world use of those abstractions—essentially, to test the performance of the API.

Perhaps you wouldn’t use the Element() DOM builder abstraction, but many of us do because it provides a lot of utility in a generally concise API. Replacing it with innerHTML is not only not a 1:1 replacement, but also more and more problematic as you go to incorporate user data into the DOM nodes you’re creating.
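A minimal sketch of the user-data point (illustrative names only, not any library's API): with innerHTML the string is parsed as markup, so user input needs an escaping step that a builder API setting text nodes makes unnecessary.

```javascript
// Illustrative only: why swapping a DOM builder for innerHTML is
// not a 1:1 replacement once user data is involved. The raw string
// would be parsed as markup, so it must be escaped first.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

var userData = "<img src=x onerror=alert(1)>";
var markup = "<li>" + escapeHtml(userData) + "</li>";

console.log(markup); // "<li>&lt;img src=x onerror=alert(1)&gt;</li>"
```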

There’s a very valid reason for testing the various libraries’ DOM APIs as such: people are using them, and it’s good to know what kind of performance we’re getting.

Comment by eyelidlessness — April 7, 2009

csuwldcat, do you have any problem with lt and gt chars? Why do you want to re-invent HTML via round brackets? I guess an XPath interaction is via div[first] rather than normal specs, right?

Comment by WebReflection — April 7, 2009

div[first<>] (stripped chars!)

Comment by WebReflection — April 7, 2009

@csuwldcat … gotcha, well I guess Ajaxian guys could think about a web 2.0 blog where people cannot write < and > chars :P

Comment by WebReflection — April 7, 2009

@fjakobs: wow!
Please forgive me for jumping to that conclusion. Shows how much I know ;)

I really have to check out the qooxdoo codez nao!

Comment by SubtleGradient — April 7, 2009

@fjakobs: Replaced the IE8 chart with IE7 and changed title of story from “IE8” to “IE7”.

Comment by Ben Galbraith — April 7, 2009

The reason I made that initial comment was due to the differences between certain library test methods. If one is using the library’s Element creation for each node, shouldn’t they all?

Dojo’s Test:
var n = dojo.doc.createElement('ul');
dojo.addClass(n, "fromcode");
n.id = "setid" + i;
n.innerHTML = "<li>one</li><li>two</li><li>three</li>";

jQuery’s Test:

Moo’s Test:
new Element('ul', { id:'setid'+i, 'class':'fromcode'})
new Element('li', { html:'one' }),
new Element('li', { html:'two' }),
new Element('li', { html:'three' })

See eyelidlessness, it looks to me like Dojo and jQuery are using a string insertion via innerHTML to add their li’s in this test round. I mean, for apples-to-apples’ sake, why would you use one library’s full Element creation method to create and inject sub-nodes for a test, while skipping that additional logic processing on the same test for a different library? I have to think that skipping the steps to process the function of a library that creates an element and does its insertion, and instead using innerHTML, has to be a performance difference. If I am way off here just let me know…
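A rough sketch of the contrast (no real library code here; the DOM is simulated with strings): per-node creation routes every child through a function call, while the innerHTML route hands one string to the parser. The end result is the same; the work per element is not.

```javascript
// Hypothetical comparison of the two strategies in the tests above.
// Per-node creation calls a builder function for each child; string
// insertion produces the same markup in one go.
function elementToHtml(tag, text) {
  return "<" + tag + ">" + text + "</" + tag + ">";
}

// Per-node route, as in the MooTools test:
var perNode = ["one", "two", "three"]
  .map(function (t) { return elementToHtml("li", t); })
  .join("");

// Single-string route, as in the Dojo test's innerHTML line:
var viaString = "<li>one</li><li>two</li><li>three</li>";

console.log(perNode === viaString); // true: same result, different cost
```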

Comment by csuwldcat — April 7, 2009

I have to agree with csuwldcat. Mostly because I don’t think there is a “preferred” way to do these things amongst, say, jQuery users. If there is, it’s just due to examples getting copied.

There are many ways to do the same thing in any of these languages, and which one is chosen is due to the situation and the programmer. So what are we testing the speed of? One programmer’s favorite solution, which may be based on clarity rather than speed.

Comment by Nosredna — April 7, 2009

Well it seems they are testing for speed, so if a lib wants a higher result it should provide better tests based on speed; only then can you truly see which one is faster on the DOM.

Tests that just set .innerHTML on something aren’t even using the library; code like that shouldn’t be in there. Those tests should use update(), html() or whatever comes with the library.

Comment by Jadet — April 7, 2009

I agree. Tests should definitely be rated/marked somehow according to their level of abstraction. It’s a wonder Mootools gets the score it gets considering ;)

Comment by rasmusfl0e — April 7, 2009

Yeah, I tested some of the Mootools test snippets using standard methods of the mootools library, done the way the examples right in the docs specify, and the ms totals were notably less (better score) on many tests. My buddy redid one of the jQuery tests, the one that does the (div[rel^=foo]) selector or something like that, forget the specific one, and it went from over 300ms in Google Chrome to 60ms. These tests are flawed severely enough to warrant rescinding the conclusions the author is coming to. Some libs get to use native methods mixed with the lib methods, some don’t; some tests grab waaay more elements in their tag name selector queries, some don’t; just poor baselines for comparisons. A more relevant title for TaskSpeed in its current state would be RandomTaskSpeed… lol, about as useful as an Austin Powers henchman!

Comment by csuwldcat — April 7, 2009

Hmm, I was under the impression that the tests were written/reviewed by people from the respective communities. Weren’t they? If they were, I assume they reflect the culture and the spirit of those communities. If they weren’t, they should be reviewed; as far as I know all the code is publicly available.

Am I too far off base here?

Comment by elazutkin — April 8, 2009

I think for the sake of comparability the tests should weed out innerHTML altogether (unless, maybe, a test cannot be implemented otherwise). It’s about library performance, not about bypassing the API.

Comment by qooxdoomonster — April 8, 2009

My pure DOM implementation does not use innerHTML, except for one test where it is clear enough that innerHTML is allowed.

In jQuery, for example, if you use .html() rather than .append(), performance is extremely different.

Comment by WebReflection — April 8, 2009

@csuwldcat – I would love to see the JQ test that went faster. Provided it follows the test’s “English description” in procedure order, number of iterations and return values, there is no reason a better test should not be used. John Resig reviewed/rewrote the JQ tests, and expressed the same concerns about the iterations and whatnot, but the fact of the matter is everyone _should_ be doing identical “tasks”, under identical constraints. I would hardly call it random. I thought it interesting to see the code required to accomplish identical tasks in the different libraries.

As stated previously, TaskSpeed was not ‘ready’ to be released. I’d mentioned the use of .innerHTML in places in the initial post regarding this suite, and either have addressed or plan to address a lot of the other statements above with the “real initial announcement”, which will hopefully include YUI, ExtJS, and @WebReflection’s “PureDom” library.

There are a _number_ of issues I want to address regarding the suite, the charts, the tests … Please hold off on bashing until I’ve had a chance to ‘justify my actions’.

Peter Higgins

Comment by phiggins — April 8, 2009

@WebReflection’s “PureDom” has been implemented, thanks to @phiggins ;)

Comment by WebReflection — April 8, 2009

Hey no prob man, I just want to see a true test of the best that the libs can do without commingling regular DOM methods, as well as tuned to use the library methods that are best suited for each task. In that spirit of constructive effort, I would be willing to redo some of the Mootools tests that were not using the most efficient Mootools methods available, and my co-worker, the one who got the huge performance boosts by tweaking the jQuery tests, would be willing to do that lib’s methods as well. We would use strictly library methods and try for the most efficient possible. Just say the word P-Higs!

Comment by csuwldcat — April 8, 2009
