Monday, December 4th, 2006

Compression and caching for faster load times

Category: Articles, Performance

Jesse Kuhnert, Tapestry/Dojo team member, has spent time on caching and compression mechanisms in an effort to give the best experience “for free” with Tapestry.

The result was:

  • Browser Caching: Previous versions of the framework weren’t aggressive enough in how bundled assets (images/JavaScript/CSS/etc.) were managed with HTTP headers. Although the Expires and If-Modified-Since headers were being used, that wasn’t a complete solution. All of these resources now get realistic, appropriate headers (ETag / Cache-Control / Expires / etc.) depending on the type of content and the browser being served. This area will keep being refined, but anyone currently serving this content from the core Tapestry jars (or their own) – with no other configuration – should see a significant performance boost from the added caching support.
  • Gzip Compression: The biggest (and scariest) change has been the addition of intelligently gzipping content where appropriate. All JavaScript/CSS/HTML content that Tapestry typically manages now gets run through gzip compression to help make those responses as snappy as possible.
  • Much Faster Load Times: The overall load time for pages should be much better now. The version of Dojo bundled with Tapestry is now served at roughly 50k – down from the default size of 200k.

I would love to see some benchmarks on the gzip compression side. I have read that for smallish files, on certain machines and networks, the overhead isn’t worth it.

Has anyone in the community done good work on when to gzip versus when not to?

Posted by Dion Almaer at 9:23 am




I’ve only done some limited “eyeball” testing, using Fiddler to time response times with and without gzip compression. (I didn’t have the new Firebug version at the time.)

The performance differences were extremely slim for smaller dynamic responses, but the compressed versions were still a few ms faster than the uncompressed ones.

It would be good to get a more definitive set of tests to be sure everything is working as efficiently as possible, though. (The CSS/JavaScript/etc. file-based compressions I do get cached in-memory until the files change on disk, so that particular aspect of things hasn’t had any problems with respect to CPU cycles.)

Comment by Jesse Kuhnert — December 4, 2006

Be careful when packing to remove line breaks – if you are missing a semicolon, your JavaScript might not be valid after the line breaks are removed. Of course, using JSLint should detect missing semicolons. Perhaps packer should have an option to remove comments and whitespace but leave the line breaks in.
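The hazard is easy to reproduce (my own example, not from any particular packer): automatic semicolon insertion quietly repairs statements at line breaks, so a packer that strips the breaks also strips the only thing ASI had to work with.

```javascript
// Two statements missing their semicolons: with the line breaks in
// place, automatic semicolon insertion (ASI) quietly repairs them.
const src = 'var a = 1\nvar b = 2\na + b';
console.log(eval(src)); // 3

// Naively replacing line breaks with spaces leaves code that no
// longer parses at all.
const packed = src.replace(/\n/g, ' ');
let error = null;
try { eval(packed); } catch (e) { error = e; }
console.log(error instanceof SyntaxError); // true
```

Which is exactly why running JSLint (or keeping the line breaks) before packing is the safer habit.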

Comment by Patrick Fitzgerald — December 4, 2006

If you are really itching to squeeze out as much compression as possible, there’s an undocumented feature of the Dojo compressor (undocumented simply because I’m not sure everyone will like it): run your build as you normally would, but add “gen-strip-docs” to the list of tasks to execute. The savings vary depending on the size of the dojo.js file being compressed. (And the questionable reason it’s not documented is what it does to compress all of the Dojo-related JS files found in your release directory.)

Comment by Jesse Kuhnert — December 4, 2006

[…] 12/4/06 Edit: Ajaxian has an article on gzip compression with some user responses. Interesting insights (in the comments section) on the law of diminishing returns with gzip compression used on ajax-enabled pages.  […]

Pingback by » apache mod_deflate reduces bandwidth usage by 27% on — December 4, 2006

It would be nice to see a benchmark of loading time versus compression rate. Gzip has different compression settings, and compression time varies a lot depending on the setting. I’ve used it successfully in the past when sending large data sets to the client (a Flash application loading multi-megabyte XML files which couldn’t be shrunk down), but I just picked the compression level that offered the best trade-off between file size and compression time, rather than the one that produced the quickest load.

Comment by Joeri — December 5, 2006

I’m curious whether anyone provided/found any documentation they could share on the relationship between compression rate and load time, as well as the point of diminishing returns that Jesse was talking about. I wonder if port80software has anything to say about these subtle, yet relevant issues.

Comment by Frederick Townes — December 22, 2006
