Friday, August 22nd, 2008

JavaScript JIT: The Dream Gets Closer (in Firefox)

Category: Firefox, JavaScript

For years, many of us have been salivating over the idea of JIT'ed JavaScript in the browser. Adobe's JIT'ing Flash VM showed a preview of the tremendous speed gains to be had, but we've had to wait until SquirrelFish from WebKit to see anything dramatic happen in the browser.

Until now.

Mozilla just let the cat out of the bag on their new TraceMonkey project. Brendan Eich, Mozilla’s CTO, describes it thusly:

I’m extremely pleased to announce the launch of TraceMonkey, an evolution of Firefox’s SpiderMonkey JavaScript engine for Firefox 3.1 that uses a new kind of Just-In-Time (JIT) compiler to boost JS performance by an order of magnitude or more. [Emphasis ours.]

There are charts and graphs all over the Web; there's a good one on Brendan's blog.

As Brendan points out, the benchmarks can be quite misleading; the most convincing results are the demos. Mike "schrep" Schroepfer's blog entry shows one off in a screencast.

Brendan goes into significant detail on how all of this came about, and notes some key points:

* We have, right now, x86, x86-64, and ARM support in TraceMonkey. This means we are ready for mobile and desktop target platforms out of the box.
* As the performance keeps going up, people will write and transport code that was "too slow" to run in the browser as JS. This means the web can accommodate workloads that right now require a proprietary plugin.
* As we trace more of the DOM and our other native code, we increase the memory-safe codebase that must be trusted not to have an exploitable bug.
* Tracing follows only the hot paths, and builds a trace-tree cache. Cold code never gets traced or JITted, avoiding the memory bloat that whole-method JITs incur. Tracing is mobile-friendly.
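
To make that last point concrete, here's a minimal sketch of ours (our illustration, not Mozilla's code) of the kind of type-stable hot loop a tracing JIT compiles to native code:

// Every iteration takes the same path with the same types,
// so one recorded trace covers the whole loop.
function sumOfSquares(n) {
  var total = 0;
  for (var i = 0; i < n; i++) {
    total += i * i; // numbers only; a type change here would abort the trace
  }
  return total;
}

var t0 = new Date();
sumOfSquares(1000000);
var elapsed = new Date() - t0; // milliseconds; compare with jit on vs. off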

For even more details, check out Andreas Gal’s detailed blog entry on trace trees.

The first phase of Ajax has been all about leveraging the existing platforms as much as we can. This announcement is a major signpost towards the second phase: improving the existing platforms. We couldn’t be more excited.

Brendan puts it in his own understated way:

JS-driven <canvas> rendering, with toolkits, scene graphs, game logic, etc. all in JS, are one wave of the future that is about to crest.

John Resig voices similar sentiments in his excellent blog post on the subject:

[This] means that JavaScript is no longer confined by the previously-challenging resource of processing power… I fully expect to see more massive projects being written in JavaScript…

The primary thing holding back most extensive Canvas development hasn’t been rendering – but the processor limitations of the language (performing the challenging mathematical operations related to vectors, matrices, or collision detection). I expect this area to absolutely explode after the release of Firefox 3.1 as we start to see this work take hold.
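
The kind of math Resig is alluding to is tight, numeric, DOM-free JavaScript. A contrived sketch of ours:

// Circle-circle collision test: pure arithmetic, no DOM.
// Compare squared distances to avoid Math.sqrt in the hot path.
function circlesCollide(ax, ay, ar, bx, by, br) {
  var dx = bx - ax, dy = by - ay, r = ar + br;
  return dx * dx + dy * dy <= r * r;
}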

Speaking of Canvas and JS, we’ve got our own little project we’ve been hacking this year that we can’t wait to try this on… way to go, Mozilla!

Comments (16)

For all who want to try it without reading through all those posts: the code is already included in current nightlies; just go to about:config and search for "jit". There are two options, one for chrome and one for content. Unless you are an extension developer, you probably shouldn't touch chrome JIT acceleration until it's thoroughly tested.
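
For the curious, the two prefs look roughly like this in a user.js file (pref names as they appear in current nightlies; verify them in about:config before relying on this sketch):

user_pref("javascript.options.jit.content", true);  // JIT for web-page JS
user_pref("javascript.options.jit.chrome", false);  // leave off unless testing extensions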

Comment by Hans Schmucker — August 22, 2008

Actually, that last comment about the canvas is not quite true. First, there still appears to be significant function-call overhead with canvas, so invoking moveTo/lineTo/etc. 10,000 times is not likely to get a huge performance boost until a) they can inline/dispatch to native C++ code faster and b) whatever roadblocks are in the canvas renderer are reduced.

Let me give you an example from the iPhone. The following code:
var a = 0;
for (var i = 0; i < 50; i++) {
  ctx.strokeStyle = "red";
  ctx.save();
  ctx.translate(i * 10, i * 10);
  ctx.rotate(Math.PI * 2 * ((a + i) / 10)); // Math.PI rather than a hand-typed constant
  draw(ctx);
  ctx.restore();
}

function draw(ctx) {
  ctx.beginPath();
  ctx.moveTo(0, 0);
  ctx.lineTo(10, 0);
  ctx.lineTo(10, 10);
  ctx.lineTo(0, 10);
  ctx.closePath();
  ctx.stroke();
}

This code, which draws 50 rectangles, will run far, far faster with a canvas width/height of 100×100 or 200×200 vs. 400×400. The slowdown does not occur during the actual canvas 2D calls; if you time the calls, they run very fast. The slowdown comes after the JS interpreter yields the CPU (e.g. via setTimeout). Whatever the browser does when your drawing thread yields (copying the offscreen buffer onto the browser surface, presumably) is enormously slow and scales with the number of pixels.

That is, the JavaScript code does the same amount of work (touches an identical number of screen pixels), but using a larger canvas (most of it whitespace that is never rendered into) can slow down rendering by 4x. Clearly, something is wrong with WebKit canvas on the iPhone.
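
A rough sketch of the measurement I'm describing (illustration only; drawFiftyRects is a hypothetical wrapper around the loop above):

var t0 = new Date();
drawFiftyRects(ctx);            // the 50-rectangle loop shown earlier
var callTime = new Date() - t0; // fast: the 2D calls themselves

setTimeout(function () {
  // by now the browser has done its post-yield work (e.g. the buffer copy)
  var frameTime = new Date() - t0;
  document.title = "calls: " + callTime + "ms, frame: " + frameTime + "ms";
}, 0);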

I would also like to add that while the speed boost will encourage JavaScript applications to grow in scale, the JavaScript language itself makes optimizing for download size difficult, and the JITs can't optimize away code before it is downloaded. So there's a downside: the JIT performance improvements, in combination with the torpedoing of JavaScript 2, effectively leave nothing but laborious, non-standard hacks to glue together large JS applications (unless you use GWT, that is).

Comment by cromwellian — August 22, 2008

BTW, the snippet above lets some rectangles get clipped, but if you change it to render all of the rectangles at (0,0) you get similar results, showing that clipping is not the predominant explanation for the performance difference.

Comment by cromwellian — August 22, 2008

Also, for those curious, approximately 50% of the performance slowdown is related to scaling on the iPhone. If you set the viewport to a scale of 1.0, you gain back 50% (in my case, 90-100ms to render 50 rectangles unscaled, versus 150-200ms with a non-1.0 viewport scale). That still doesn't explain why I sometimes see times upwards of 300ms.

Comment by cromwellian — August 23, 2008

How does this compare with SquirrelFish?

Comment by ibolmo — August 23, 2008

This is great news; a JIT for JS will definitely make JS orders of magnitude faster :)
Though I am interested in how this affects the "dynamic parts" like closures, eval and so on...?
Obviously you cannot JIT the contents of an eval statement unless you're positively sure that the evaluated code won't change or hasn't been dynamically created, etc...
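
A contrived sketch of the problem: the source text is built at runtime, so there is nothing stable to compile ahead of time.

var op = (new Date().getSeconds() % 2) ? "+" : "*";
// the string doesn't exist until runtime and may differ on every call
var result = eval("2 " + op + " 3"); // 5 or 6, depending on when you run it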

Comment by ThomasHansen — August 23, 2008

>>unless you use GWT that is

Not just GWT.

Also Script#, which gives you C++ in the browsers. It's very nice for .NET-heads.

And Mascara, which is immature but gives you what was envisioned for ECMAScript 4.

Haxe, too, which gives you an ECMAScript-like language.

And the various Rubys and Laszlo. Plus all the new compilers people will write when JS is faster.

But, you know, I just wrote a giant JavaScript app, and once I got the hang of the language it was quite easy to manage. I just had to get used to variables and functions being part of an object.

Huge, huge programs were written in procedural C. JavaScript seems sane and simple compared to that. When you're passing and returning objects as parameters, it's pretty easy to keep things solid.

Comment by Nosredna — August 23, 2008

“the JITs can’t optimize away code before it is downloaded”

But with Google Gears' LocalServer we can make the download cost less painful. I see the download problem as easily solved.
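
A rough sketch of the LocalServer approach (API names per the Gears beta docs; gears_init.js is assumed to be loaded, and the names are worth verifying):

var localServer = google.gears.factory.create("beta.localserver");
var store = localServer.createManagedStore("my-app");
store.manifestUrl = "manifest.json"; // manifest listing the JS files to cache
store.checkForUpdate();              // downloads once, serves locally thereafter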

It’s becoming more and more obvious that the way to build web apps is going to be using a javascript framework to build all-javascript apps that connect to web services. It marries the benefits of server-hosted application code with the benefits of client-side code execution.

Comment by Joeri — August 23, 2008

For those interested: I tested this on a real-world application that deals with table processing and has an Excel-like appearance. Obviously tons of loops; so far execution speed was "acceptable for the patient ones" ;-)

It now runs real time :-)

Big thanks to the TraceMonkey team for making this happen!

Comment by FrankThuerigen — August 23, 2008

“Also Script#, which gives you C++ in the browsers.”

Hmm, not strictly true – it lets you write code in C#, which is then used to generate JavaScript, to be deployed as usual.

Comment by jeromew — August 23, 2008

>>Hmm, not strictly true – it lets you write code in C# which is then used to generate Javascript, to be deployed as usual.

Which is pretty much the way all these work. They compile to JavaScript.

Comment by Nosredna — August 23, 2008

“Right now there isn’t any tracing being done into DOM methods (only across pure-JavaScript objects) – but that is something that will be rectified. Being able to trace through a DOM method would successfully speed up, not only, math and object-intensive applications (as it does now) but also regular DOM manipulation and property access.”
– John Resig
My apps don't use a lot of intensive calculations, so I'm particularly looking forward to the DOM method tracing. Along with the new implementations of the Selectors API, this is pretty exciting.

What is unclear to me is how the benchmarks of the different aspects of generating content compare to each other. For instance, if my app takes 0.1 seconds to calculate, 0.5 seconds to manipulate the DOM, and 10 seconds to render to HTML, making the calculation part 1000 times faster would still not really change the speed of my application. It would still be about 10 and a half seconds. Making the DOM part 1000 times faster would not increase my speed much either. It would be down to about 10 seconds.

It would be nice to see a breakdown for a canvas animation, for instance, and perhaps a couple of other use cases where one might emphasize calculation, another DOM manipulation, and another rendering. For each use case, how much time is spent calculating, how much interacting with the DOM, and how much actually rendering the result to HTML? I don't really have any sense of how these different benchmarks compare with each other.
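
A sketch of how one might start on that breakdown (our illustration; paint time itself isn't directly observable from JS):

function timePhase(fn) {
  var t0 = new Date();
  fn();
  return new Date() - t0; // milliseconds
}

var calcMs = timePhase(function () { /* pure-JS calculation here */ });
var domMs  = timePhase(function () { /* DOM manipulation here */ });
// for render time, yield and re-measure, as cromwellian does above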

Comment by GregHouston — August 24, 2008

Great!

As for Canvas, it very much depends on what you’re doing.

For JS-heavy things that require little to no rendering, such as per-pixel manipulations (convolutions, mathematical-morphology operations, and other color manipulations), the JIT can provide a significant improvement. However, for rendering-heavy things (think polygon rendering, whole-canvas post-processing, ...), the JIT won't do much, as in such cases the bottleneck is clearly, even today, the graphics engine of the browser.
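
A sketch of the JS-heavy, render-light case: a per-pixel invert via getImageData ('canvas' is assumed to be an existing canvas element). The loop is pure arithmetic, exactly what a tracing JIT helps with, and the rendering is a single cheap blit at the end.

var ctx = canvas.getContext("2d");
var img = ctx.getImageData(0, 0, canvas.width, canvas.height);
var d = img.data;
for (var i = 0; i < d.length; i += 4) {
  d[i]     = 255 - d[i];     // red
  d[i + 1] = 255 - d[i + 1]; // green
  d[i + 2] = 255 - d[i + 2]; // blue
  // d[i + 3] is alpha; left alone
}
ctx.putImageData(img, 0, 0); // one blit after all the JS work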

Comment by p01 — August 25, 2008

Pixel-manipulation operations would be better done via a higher-level Canvas API that provided colorspace transforms, convolutions, lookup ops, etc., not just because it would be far faster than raw JS manipulation, but because the underlying implementation could use optimized memory representations (e.g. image tiling) so that reasonably large images can be worked on without necessarily wasting tons of heap keeping N copies of the entire image in the JS heap.
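
Something like this purely hypothetical API (no such methods exist in the Canvas spec; this just illustrates the idea):

ctx.convolve([[0, -1, 0],
              [-1, 5, -1],
              [0, -1, 0]]);          // sharpen kernel, run natively
ctx.colorTransform("sRGB", "gray");  // colorspace conversion in C++, not JS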

I was always hoping Gears would take up the mantle on this, as well as extend Canvas support to IE6. :)

Comment by cromwellian — August 25, 2008

The availability of JIT compilers raises three questions.

(1) Why not package and release a separate JDK and JRE (here, J is for JavaScript), that is, outside Mozilla releases?

While more speed gives JS greater interest as a language, it deserves a separate JDK and JRE. Then it might be a candidate to replace other scripting languages (Perl? maybe Tcl/Tk?) for scripting outside browsers.

(2) The first point raises the question of the JavaScript standard library.

I suggested, when commenting on a previous post (Comparing the evolution of Java and JavaScript), that JS needs a more standard library, like Java's (a CPAN-style web site?). If one wants to push JS higher, and, as one example, outside browsers for more scripting, then JS needs an API giving more access to system resources, like files.

(3) A JIT compiler is more efficient when it has more type information. JS may need optional (!) type information to improve execution speed.

Currently, one can define a counter like "var i = 1;", while an "int i = 1;" definition could be more interesting for a compiler:
(a) the first form just says "i" is a variable, currently initialized with an integer, but with no constant type attached;
(b) the latter says "i" is always an integer, and if no conversion can be done when assigning this variable later, an error should be raised.
As a scripting language, the constraint is that both forms should be accepted by JS. Questions remain about introducing more (optional!) typed styles for defining variables into JS as a language.
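
A sketch of that optional annotation, in the type syntax that was proposed for ECMAScript 4 (not valid in today's JS engines):

var i: int = 1;  // 'i' is pinned to int; a JIT can emit integer code for it
var j = 1;       // untyped, as today: 'j' may later hold any value
// i = "hello";  // under ES4 this assignment would be a type error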

Comment by dmdevito — August 26, 2008

That was a very interesting read, the comments as much as the blog.
I will look into the app.

Comment by Remedies — December 8, 2008
