Thursday, August 30th, 2007

CompressorRater: Compare the squeeze

Category: Performance

The JavaScript camp can get a bit obsessed with squeezing, and Arthur Blake’s CompressorRater tells you how far you can go:

There are many tools available that can help you compress your JavaScript code, but it can be time-consuming and difficult to analyze which tool works best for a given situation. The goal of this web application is to report aggregated statistics on the general level of compression achieved by all these tools, as well as to allow developers to easily play with and compare the different tools on their own JavaScript code without having to set up all the tools themselves.

The following compression tools are compared, both with and without the effect of the additional gzip compression that is supported natively by modern web browsers.

The tool will try all of the usual suspects for you, in various modes. I ran gears_init.js through the sausage factory and here is what came out:

[CompressorRater results for gears_init.js]
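
For a sense of what the minifiers in that list actually do, here is a minimal before-and-after sketch. The function below is made up for illustration (it is not taken from gears_init.js), and the second version is roughly what a tool like the YUI Compressor would emit: comments and whitespace are stripped and local variables get one-letter names, while the behavior stays exactly the same.

    // Original source: readable names, comments, whitespace.
    function isGearsInstalled(windowObject) {
      // Look for the Gears factory object that the browser extension exposes.
      var gearsFactory = windowObject.google && windowObject.google.gears
          ? windowObject.google.gears.factory
          : null;
      return !!gearsFactory;
    }

    // Roughly what a minifier would emit: same behavior, far fewer bytes.
    function isGearsInstalled(a){var b=a.google&&a.google.gears?a.google.gears.factory:null;return!!b}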

Oh, and it has been a couple of days, so Julien has released a new version of the YUI Compressor (2.1).

Posted by Dion Almaer at 7:40 am

10 Comments »


Extremely cool tool. The best part about these shrinkers is that they continue to improve and we can trust them. Until internet speeds get much faster (at least in the US), we will need to shrink our JavaScript libraries if we want fast load times. Great article and project.

Comment by David Walsh — August 30, 2007

Also, see this article for an explanation of why you should serve your JavaScript using HTTP compression:

http://www.julienlecomte.net/blog/2007/08/21/gzip-your-minified-javascript-files/

Comment by Julien Lecomte — August 30, 2007

Am I completely missing something here, or doesn’t every JavaScript “compressor” require extra CPU work and therefore extra delay for every visitor? Even if the “compressed” code is cached, it still has to be uncompressed every time, taking more RAM and CPU cycles.

Why not just “pack” the code instead by removing whitespace and renaming variables to shorter names, and then just use gzip web compression (aka HTTP compression)? Then when the code is cached in the visitor’s browser, it’s ready to execute on every load.

I did some rough tests and I was able to make a “packed” gzip version of jQuery within 1k or less of the size of the currently distributed “compressed” version – and the packed version loads quicker!

Most web designers go to great lengths to shave even 100ms off a page load, so why add 100ms on the client side just so a library developer can boast that they are 1k smaller than another? It’s meaningless!

Comment by _ck_ — August 30, 2007
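
A quick way to check _ck_’s numbers yourself is to gzip both variants and compare byte counts. Here is a minimal sketch using Node.js and its zlib module; the file names are hypothetical stand-ins for a minified build and a base62-packed build.

    // Compare raw and gzipped sizes of a minified build vs. a packed build.
    var fs = require('fs');
    var zlib = require('zlib');

    ['jquery.min.js', 'jquery.packed.js'].forEach(function (name) {
      var raw = fs.readFileSync(name);
      var gzipped = zlib.gzipSync(raw, { level: 1 }); // level 1: cheapest to encode and decode
      console.log(name + ': ' + raw.length + ' bytes raw, ' + gzipped.length + ' bytes gzipped');
    });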

@ck

You are absolutely right. However, only Packer (as far as I know), and only with the base62 encoding option enabled, will output code that requires some extra CPU cycles on the client. JSMIN, the Dojo Compressor and the YUI Compressor only remove white-space characters and comments, and the last two do a light obfuscation as well. If you read the article I linked above, you’ll see that jQuery, for example, would be better off distributing their library using a different compressor and advocating the use of HTTP compression.

Regards

Comment by Julien Lecomte — August 30, 2007

Not as powerful, but simple: http://compressor.ebiene.de

Comment by Sergej — August 30, 2007

@_ck_

You’re completely missing something. Pack your script using any of the packers mentioned and upload it to your server. Do NOT do it on the fly. Do it once and upload it. Then turn on gzip compression on your server. Your server will gzip a file once it is requested and CACHE it. It will only gzip it once and then serve the cached version. There is NO performance hit. There is only performance improvement.

Comment by Marat Denenberg — August 30, 2007
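
Marat’s point can also be taken a step further: compress the file once, ahead of time, and have the server hand out the pre-gzipped copy so no compression work happens per request. A minimal sketch with Node.js follows; the file names and port are hypothetical, and a real server would also check the request’s Accept-Encoding header before sending gzip.

    var fs = require('fs');
    var http = require('http');
    var zlib = require('zlib');

    // Done once, e.g. as a build step: write library.min.js.gz next to the original.
    fs.writeFileSync('library.min.js.gz', zlib.gzipSync(fs.readFileSync('library.min.js')));

    http.createServer(function (req, res) {
      if (req.url === '/library.min.js') {
        res.writeHead(200, {
          'Content-Type': 'application/javascript',
          'Content-Encoding': 'gzip',          // the browser's native gunzip handles this
          'Cache-Control': 'max-age=31536000'  // let the client cache the file
        });
        fs.createReadStream('library.min.js.gz').pipe(res);
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);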

@_ck_

Sorry, ignore my previous comment. I noticed you were talking about base62 and stuff like that. From what I know, it is only about 200ms of overhead at most.

Comment by Marat Denenberg — August 30, 2007

There is also overhead for your browser to decompress gzip files, but this is extremely fast native code and, probably 99% of the time, well worth the speedup in file transfers. While packer was a brilliant idea on Dean’s part, gzip still definitely beats packer’s base62 compression in both compression ratio and speed.

Comment by Kris Zyp — August 30, 2007

Actually, the HTTP un-gzip performance hit is a *one time* hit the first time the browser gets the file; after that it is cached uncompressed. All other code-based JavaScript compression methods are a *repeated* performance hit every time a page loads the JavaScript, as it has to uncompress itself each and every time.

I used this packer http://dean.edwards.name/packer/ with shrink variables but no base62 encoding, and then level 1 gzip (fewest CPU cycles for both encode and decode), and it far outperforms any JavaScript compressor. jQuery becomes only half a K larger than its compressed counterpart, and it caches better and is ready to execute faster.

Really, all these compressors are just JavaScript coders showing off how cleverly they can code, but they are wasting time that could go toward other efforts. Gzip is proven, works faster, and is available in all modern browsers. Problem solved. (I just hope they aren’t trying to do the compression so they can bloat the libraries to over 100k and still boast about how small the distribution size is, which is meaningless.)

Comment by _ck_ — September 1, 2007
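
The distinction _ck_ is drawing comes down to where the decoding work happens. A gzipped response is inflated once by the browser’s native code and then cached; an eval-based packer ships the script as an encoded string plus a small JavaScript decoder that has to run again on every page view. The sketch below is not Packer’s actual output (real base62 packing uses a keyword table), just a simplified illustration of that self-extracting shape.

    // The same function as a plain script: the browser simply parses and runs it.
    var greet = function (n) { return "Hello, " + n; };

    // Self-extracting style: the real code travels as an encoded string and is
    // rebuilt by a decoder function, then handed to eval() on every page load.
    eval((function (encoded) {
      // Hypothetical trivial "decoder"; real packers do considerably more work here.
      return decodeURIComponent(encoded);
    })('var%20greet%3Dfunction(n)%7Breturn%20%22Hello%2C%20%22%2Bn%7D'));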

@ck

You are just rediscovering what we’ve all been advocating for quite a long time now: stay away from “advanced” JavaScript compression schemes that look attractive on paper but end up degrading the performance of your site. Instead, use a good (i.e. safe) minifier and gzip compression. Case closed.

Comment by Julien Lecomte — September 2, 2007
