Thursday, November 6th, 2008

Razor Optimizer: Runtime analysis of your Ajax code for optimization

Category: Performance

Coach Wei has updated Razor Optimizer, “a JavaScript optimization tool for reducing code footprint and increasing runtime performance. As a cross-browser web application itself, Razor Optimizer can be accessed online as a service, or downloaded to run locally.

Razor Optimizer is based on a new approach for JavaScript optimization called “razor”. While other optimization techniques such as JS minification and concatenation are based on static lexical analysis, Razor uses dynamic runtime profile information to achieve breakthrough results of 60% to 90% savings.”

How it works

Razor Optimizer itself is a web-based JavaScript application that runs in any browser. It contains a server component and a client component. The Razor Optimizer client is an Ajax application based on Dojo 1.1; the Razor Optimizer server is a Java web application that runs inside any Java Servlet container. The following figure shows the architecture of Razor Optimizer.

The Idea Behind Razor Optimizer

Razor is based on the following observations:

  1. JavaScript functions are the basic low-level building blocks of JavaScript code. Though typical JavaScript applications are made up of JavaScript files, functions are at a lower level than files because each JavaScript file is composed of JavaScript functions. While current JavaScript optimization techniques operate at the “file” level, performing optimization at the function level can yield much better results.
  2. At any given moment, the browser needs only one function, because only one JavaScript function executes at a time.
  3. Theoretically, the application would work fine if we downloaded only one function at a time, right before it is called. The other functions are not needed yet; they can stay on the server without being downloaded until they are about to be called. There is no need to download all the code up front, and no need to download it all at once.
  4. If only one function needs to be downloaded and kept on the client, we can achieve breakthrough savings in both download size and client memory/CPU footprint, resulting in significant performance improvements beyond other techniques.

The basic idea of Razor is to “trim” the “not needed” functions and download only those functions that are necessary for a specific usage scenario. This “trimming” process is called “raze”. After the initial download, if a “razed” function is needed, Razor downloads it on demand in the background.
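
The “raze” idea can be sketched in a few lines of JavaScript. This is a minimal illustration with hypothetical names (`serverCode`, `raze`, `app`), not Razor Optimizer’s actual implementation: a razed function is replaced by a stub that fetches and installs the real body on first call. Here the “server” is simulated with an in-memory map; a real deployment would fetch the code over HTTP in the background.

```javascript
// Hypothetical "server side": source text of functions that were razed
// out of the initial download.
const serverCode = {
  formatDate: "function formatDate(d) { return d.toISOString().slice(0, 10); }"
};

const app = {};

// Replace a function with a stub that downloads and swaps in the real
// implementation the first time it is called.
function raze(name) {
  app[name] = function (...args) {
    const realFn = new Function("return " + serverCode[name])();
    app[name] = realFn;              // swap the stub for the real function
    return realFn.apply(this, args); // complete the original call
  };
}

raze("formatDate");
console.log(app.formatDate(new Date("2008-11-06"))); // "2008-11-06"
```

After the first call the stub is gone, so subsequent calls pay no extra cost.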

Wouldn’t downloading one function at a time be very slow? Indeed. However, if you package related functions together, and that one “package” is enough to fulfill one or more usage scenarios, the user won’t notice any negative performance impact from incremental downloading.

So the key to this approach is understanding which function is called, and when, during different runtime scenarios. For example, if we know exactly which functions are called, and when, during the initial application load, we can trim all other code from the initial download without breaking the application. This significantly reduces the initial download size and improves page-loading performance.
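
Packaging by scenario can be sketched as follows. The scenario names and profile data here are made up for illustration; the point is simply that each scenario’s package contains exactly the (deduplicated) set of functions its profile recorded, so the first download for that scenario carries no extra fat.

```javascript
// Hypothetical profile data: which functions each runtime scenario called.
const profiles = {
  pageLoad: ["init", "render", "init", "formatDate"],
  search:   ["render", "query", "highlight"]
};

// Build one package per scenario: the deduplicated list of functions
// that scenario actually needs.
function buildPackages(profiles) {
  const packages = {};
  for (const [scenario, fns] of Object.entries(profiles)) {
    packages[scenario] = [...new Set(fns)]; // dedupe, keep first-call order
  }
  return packages;
}

const packages = buildPackages(profiles);
console.log(packages.pageLoad); // [ 'init', 'render', 'formatDate' ]
```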

The knowledge of “when/which function is executed” comes from profiling the application. By recording the profile data, we gain accurate knowledge of the application’s dynamic runtime behavior, beyond what static lexical analysis can provide, and can deliver breakthrough optimization results.
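
One common way to collect such profile data (an assumed approach for illustration, not necessarily how Razor instruments code) is to wrap every function on an object so each call records the function’s name. Functions that never appear in the log are candidates for razing:

```javascript
// Wrap each function on `obj` so calls are recorded in `log`.
function profile(obj, log) {
  for (const name of Object.keys(obj)) {
    if (typeof obj[name] !== "function") continue;
    const original = obj[name];
    obj[name] = function (...args) {
      log.push(name);                     // record that this function ran
      return original.apply(this, args);  // then run the real function
    };
  }
}

// A toy "library": only one of its two functions is actually exercised.
const lib = {
  used: (x) => x * 2,
  unused: (x) => x + 1
};

const calls = [];
profile(lib, calls);
lib.used(21);

console.log(calls); // [ 'used' ]  -> 'unused' never ran, so it can be razed
```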

What do you think of this approach?

Posted by Dion Almaer at 9:43 am

2.8 rating from 29 votes



Theoretically it makes sense… only loading each function as needed. But once you end up packaging functions together (as suggested), it’s just another JS file, at which point I don’t see any difference between this and lazy loading (besides fancy marketing copy).

Comment by RobRobRob — November 6, 2008

Hum… I’m working on a site using a 420KB (minified using YUICompressor but without gzip activated) JS file (this single file is served all over the site) and doing some pretty low-level performance benchmarks at the moment. What I’ve noticed is that parsing 26,000 lines of JavaScript (which represents a few YUI features) takes less than 20 ms.

So, I’m not really sure that JavaScript-on-demand is such a good thing. It will cost many more HTTP requests than having a single big file and will probably not improve front-end performance.

Comment by Remi — November 6, 2008


No, it is different from lazy loading. The difference is that only what is needed is loaded – there is no extra fat. In a typical lazy-loading case, if you lazy load file a.js, which may contain 10 JS functions of which you need only one, the remaining 9 are just “fat”.

In the case of Razor Optimizer, it strips out the 9 unnecessary functions. It rebuilds the JS download packaging so that only what is needed is downloaded.

The idea looks simple but the impact is fairly significant. If you look at your typical JS libraries, what is the actual utilization rate? It is only about 10% to 40%. So this approach very effectively lowers the footprint by 60% or even more without damaging functionality, and developers don’t need to worry about code bloat during development.

Anyway…that’s the thought.

Comment by cwei — November 6, 2008

True that Javascript-on-demand may not be good, but this is not “JavaScript on demand”. I guess the description itself may lead people to think so. The core idea of Razor is to “trim” the extra fat in the code – which can be as high as 90% of the entire code base.

If you look at your 420KB, here are the questions to think about:
1. Do you really use every single function in your 420KB? I bet not. My guess is that only 20% to 60% of the code is actually needed for your app. This is very typical for applications that use an Ajax library (in your case YUI) – of course, I don’t know anything about your app so this is just my guess.

2. Do you really need the 420KB loaded all up front? For the initial usage scenarios, you need only a portion of it.

If a significant portion of the code is not needed for your app, why have it in the app? Razor Optimizer will remove such code from the app and thus optimize the footprint.

Comment by cwei — November 6, 2008

You can actually do both, and it would be the best of both worlds.
Lazy Loading + Primed Cache + Razor Optimizer + Jquery = Web 2.1 :)

Comment by Ramon — November 6, 2008

This is awesome. It’s similar to the Doloto project from Microsoft Research. Two downsides: 1) A site might get better file-caching results if the perspective of where to split code spanned multiple pages. By only looking at one page in the site, the user might be forced to download a different script file on the next page, even though there’s a large overlap in functionality. 2) The page is slower if the user has to wait for a file to be downloaded because they triggered a new function. I wish there were a system like this that would take the “related functions” and do a code analysis to pull in other functions that could be called but weren’t in this particular page load (e.g., error-handling functions, browser-specific functions).

Comment by souders — November 6, 2008


Thanks for the comments. For the two downsides:
1. Razor Optimizer does take a site approach – you can profile multiple pages and optimize them together. The resulting JS packages can be shared across multiple pages so that caching still works well.
2. For point 2, yes, it is true; however, if you profile your application well (give it enough coverage), you can cover all usage scenarios so that no new download is required.

Razor Optimizer tries to deal with both issues – it took a lot of work, though. There are still bugs related to some of these.

Comment by cwei — November 6, 2008

So does Razor, in fact, get down to the function level in, say, jquery? Does it work with jQuery minified?

Comment by Nosredna — November 6, 2008

@Nosredna: yes, it works with jQuery minified.

Comment by cwei — November 6, 2008

That’s great, but isn’t it easier to just use the 304 http header?

Also, it’d be nice if the server side component was available for other platforms (.NET, Rails, Django, etc)

Comment by LeoHorie — November 6, 2008

Oops, status code, not header

Comment by LeoHorie — November 6, 2008

1. What do you mean by using a “304 status code”? For what?
2. Server-side component – given that it is just a tool, why would it matter whether the tool itself is in Java or .NET? Your application can be on any server platform you want. Any particular reason to want other server platforms, other than being nice to everyone?

Comment by cwei — November 6, 2008

i know this is unpopular here, but i use GWT. i already get most of this via static analysis (among other things) for ‘free’.

Comment by abickford — November 6, 2008

What I meant is that resources with a 304 status code in combination with cache control http headers can be made to never be requested after the 2nd time. In effect, what that means is that with a properly configured server, a js file can be made to download only once per user per day/month/deployment cycle/whatever.

I was just wondering out loud what would be more efficient (both development-time-wise and performance-wise), considering that I can’t think of any site whose main audience is primarily composed of first-time visitors and considering that most servers have fairly decent caching configurations.

On a second thought, I suppose that both strategies can be used together, for whoever does need to squeeze out performance to the last drop.

As for other frameworks, what I meant is that if Razor is easily integrated into frameworks, people don’t need to go through an extra step to optimize JavaScript. It’s analogous to PNG optimization: everyone knows it’s good but not many people actually do it, because it’s not automatic and it’s not something that comes out of the box for many (most?) frameworks.

Comment by LeoHorie — November 6, 2008

Tried using the Razor profiler, but I am getting the following exception.

INFO Nov 6, 2008 15:48:29(667)PM: Successfully connected to database at ‘jdbc:d
INFO Nov 6, 2008 15:48:29(667)PM: New database connection established (‘3’ available, ‘0’ busy): OK.
Exception in thread “com.coachwei.razor.service.RazorBackgroundTaskMgr-Thread” java.lang.UnsupportedClassVersionError: Bad version number in .class file
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(
at org.apache.catalina.loader.WebappClassLoader.findClassInternal(Webapp
at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoa
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoa
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoa
at java.lang.ClassLoader.loadClassInternal(
at com.coachwei.razor.service.RazorBackgroundTaskMgr.initDbAndLicenses(R

Comment by ragjunk — November 6, 2008

@LeoHorie: both are good points.

1. No one technique solves everything – using 304/caching etc. together with Razor and a variety of other techniques is highly recommended. Just a data point: according to Yahoo’s research, on average 40% of visitors are “clean cache” visitors – a number much higher than you would think.
2. Razor works with all JS toolkits.

@ragjunk: Looks like you are using an old version of the JRE – try updating to JRE 6 to see if the problem still happens. BTW – most people here probably won’t care about this level of detail, so maybe post followups to tx.

Comment by cwei — November 6, 2008

Looks like a very interesting tool.
Nice work.

Comment by Nosredna — November 6, 2008

I can see this working if and only if you write JavaScript in terms of global variables as functions in isolated blocks. Then you can profile each page, figure out which functions depend on which functions, and package them optimally.

But how can this work on a more complex codebase that, for example, declares functions “on the fly” – creating function objects from prototypes and exporting instances by calling their constructor? There is no “string literal” of a function that you can lazy load, as they are programmatically constructed into new object types depending on the situation.

Or, in the example of prototype.js – it has an initialization phase where it copies methods onto pre-defined objects. Even if the profiler can detect which prototype object “needed” to be extended for each page, is it smart enough to surgically detect all the dependencies?

It seems like this approach would cripple the dynamic nature of javascript.

Comment by sircambridge — November 6, 2008

@sircambridge: What you said is not true. Of course, if this cripples JavaScript, it is pointless. Fortunately it does not. For example, it works fine with PrototypeJS. In fact, if you take a look at one of the case studies at, the Ajax toolkit is PrototypeJS.

Perhaps a more detailed technical explanation would help. I’ll try to put up some more materials on

Comment by cwei — November 6, 2008

@cwei: Indeed no one technique solves it all. In fact, I think razor fits very well as the last-piece-in-the-equation that I’ve been secretly waiting for in the realm of javascript optimization (browser caching + request reduction + yui compression + the ability to remove library code I’m not using now with razor) :)

re: YSlow research: from what I can tell, Yahoo sets its cache headers to expire in 2 days (which sounds very low for things like images, imho). I wonder how much of a difference in results they would get with different expiration values.

Comment by LeoHorie — November 7, 2008

Leave a comment

You must be logged in to post a comment.