Monday, July 20th, 2009

A detailed look at how tracing, and TraceMonkey in particular, works

Category: JavaScript, Performance

David Mandelin has generously written a detailed overview of tracing, and of TraceMonkey in particular.

He starts out by explaining the problem at hand: making a dynamic language such as JavaScript fast is hard.

How do you get type info in dynamic type land?

Our goal in TraceMonkey is to compile type-specialized code. To do that, TraceMonkey needs to know the types of variables. But JavaScript doesn’t have type declarations, and we also said that it’s practically impossible for a JS engine to figure out the types ahead of time. So if we want to just compile everything ahead of time, we’re stuck.
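
To make that concrete, here is a tiny made-up example (the function and its call sites are purely illustrative, not from the article). Looking at the function alone, an ahead-of-time compiler cannot know what `+` should do:

    // Looking at `add` alone, the engine cannot tell whether `+` means
    // numeric addition or string concatenation; that depends entirely on
    // what the callers pass in at run time.
    function add(a, b) {
      return a + b;
    }

    add(1, 2);       // numbers: `+` is floating-point addition
    add("1", "2");   // strings: `+` is concatenation
    add(1, "2");     // mixed: the number is converted to a string first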

So let’s turn the problem around. If we let the program run for a bit in an interpreter, the engine can directly observe the types of values. Then, the engine can use those types to compile fast type-specialized code. Finally, the engine can start running the type-specialized code, and it will run much faster.

There are a few key details about this idea. First, when the program runs, even if there are many if statements and other branches, the program always goes only one way. So the engine doesn’t get to observe types for a whole method — the engine observes types through the paths, which we call traces, that the program actually takes. Thus, while standard compilers compile methods, TraceMonkey compiles traces. One side benefit of trace-at-a-time compilation is that function calls that happen on a trace are inlined, making traced function calls very fast.
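
As a rough illustration of what a “path” means here (the function and data below are invented for this example, not taken from the article): if every value seen while recording is a number, only the numeric branch ends up in the compiled trace, and any calls made on that path are inlined into it.

    function sumNumbers(items) {
      var sum = 0;
      for (var i = 0; i < items.length; i++) {
        var item = items[i];
        if (typeof item === "number") {
          sum += item;           // the branch the recorder actually observes
        } else {
          sum += Number(item);   // never taken while recording, so never compiled
        }
      }
      return sum;
    }

    sumNumbers([1, 2, 3, 4]);    // only the number branch is ever executed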

Second, compiling type-specialized code takes time. If a piece of code is going to run only one or a few times, which is common with web code, it can easily take more time to compile and run the code than it would take to simply run the code in an interpreter. So it only pays to compile hot code (code that is executed many times). In TraceMonkey, we arrange this by tracing only loops. TraceMonkey initially runs everything in the interpreter, and starts recording traces through a loop once it gets hot (runs more than a few times).
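
A small sketch of that trade-off (the exact iteration threshold is an engine tuning detail, so the comments stay deliberately vague):

    // Cold code: runs three times, so it stays in the interpreter;
    // recording and compiling a trace would likely cost more than it saves.
    var labels = [];
    for (var i = 0; i < 3; i++) {
      labels.push("item " + i);
    }

    // Hot loop: after a few interpreted iterations the loop is considered
    // hot, a trace is recorded, and the remaining iterations run as
    // type-specialized native code.
    var total = 0;
    for (var j = 0; j < 100000; j++) {
      total += j * j;
    }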

Tracing only hot loops has an important consequence: code that runs only a few times won’t speed up in TraceMonkey. Note that this usually doesn’t matter in practice, because code that runs only a few times usually runs too fast to be noticeable. Another consequence is that paths through a loop that are not taken at all never need to be compiled, saving compile time.

Finally, above we said that TraceMonkey figures out the types of values by observing execution, but as we all know, past performance does not guarantee future results: the types might be different the next time the code is run, or the 500th next time. And if we try to run code that was compiled for numbers when the values are actually strings, very bad things will happen. So TraceMonkey must insert type checks into the compiled code. If a check doesn’t pass, TraceMonkey must leave the current trace and compile a new trace for the new types. This means that code with many branches or type changes tends to run a little slower in TraceMonkey, because it takes time to compile the extra traces and jump from one to another.
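
A small invented example of a guard failing (the array contents are made up for illustration): while every element is a number, the numeric trace keeps running; the first string fails the type check, execution side-exits back to the interpreter, and a second trace can be compiled for the string case.

    var values = [1, 2, 3, "four", "five"];
    var doubled = [];
    for (var i = 0; i < values.length; i++) {
      // number + number on the early iterations, string + string later;
      // the compiled numeric trace guards on the type and exits when the
      // guard sees "four".
      doubled.push(values[i] + values[i]);
    }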

Then it gets practical, with examples of tracing in action, and even a discussion of which items are not traced yet:

  • Recursion. TraceMonkey doesn’t see repetition that occurs through recursion as a loop, so it doesn’t try to trace it. Refactoring to use explicit for or while loops will generally give better performance (see the sketch after this list).
  • Getting or setting a DOM property. (DOM method calls are fine.) Avoiding these constructs is generally impossible, but refactoring the code to move DOM property access out of hot loops and performance-critical segments should help.
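
For the recursion point, here is a hypothetical refactoring in the spirit of that advice (both functions are made up, not from the article):

    // Recursive version: the repetition is not recognized as a loop, so it
    // stays in the interpreter.
    function sumRec(arr, i) {
      if (i >= arr.length) return 0;
      return arr[i] + sumRec(arr, i + 1);
    }

    // Iterative version: the for loop is something the tracer can record
    // once it becomes hot.
    function sumLoop(arr) {
      var total = 0;
      for (var i = 0; i < arr.length; i++) {
        total += arr[i];
      }
      return total;
    }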

Before you optimize for that… there is already work under way to trace these cases too. Finally, the article wraps up with a comparison of various JIT approaches, for example how Nitro and V8 do their thang.

A really nice article.

Posted by Dion Almaer at 1:06 pm

7 Comments »

JS interpreters are finally catching up with Lisp technology that is more than 20 years old.

Ban hashes, use symbols and proper environments. Ban weak typing, prefer strong. Ban “scope activation objects”, use explicit static or dynamic typing (yes, both). Now get rid of annoying C-like syntax and you got Lisp, tada! :) If only JS interpreters were as advanced as Lisp ones…

Dynamic specialization via “tracing” is a cool hack nevertheless.

Comment by chiaroscuro — July 20, 2009

It’s strange how poorly the JS engines are understood. Thanks for the article; it makes the TraceMonkey approach clear! Even though we all use JavaScript with its full advantages (I think this language is still misunderstood and the majority of the ‘coders’ don’t like it), the JS engines remain shrouded in mystery.

Comment by stoimen — July 21, 2009

Well that’s an interesting read and it is amazing the efforts everyone is making to improve performance. The pessimist in me says that these vast improvements in speed have got to hit a brick wall at some point.
But keep up the good work!

Comment by McDaid — July 21, 2009

As cool as this is, it’s probably best to remain engine-agnostic in your optimization strategy, since this certainly isn’t going to be the last good idea in implementing JS engines, and in any case TraceMonkey is only one engine among many.

Comment by greim — July 21, 2009

So… how does Rhino fit into this? While Rhino is written in Java, that does not mean the JavaScript it runs gets a JIT the way Java bytecode does.

How much faster is the TraceMonkey interpreter relative to Rhino?

What could that mean for server-side JS apps?

Comment by khakman2 — July 21, 2009

@khakman2 – As I understand it, JIT closes the gap between Java and true compiled languages like C++. So if TraceMonkey is written in C++ to begin with (in an idealized, simplified world) JIT would only level the playing field. Rhino would still have to implement an optimization strategy to approach the speed of TraceMonkey.

Comment by greim — July 22, 2009
