Monday, June 23rd, 2008

Today is the kickoff of the Velocity performance conference, and we are going to see a fair share of performance news over the next day or two.
To start out, Bill Scott (of Rico fame, ex-Yahoo, now at Netflix) has announced a new Firebug plugin, Jiffy, that adds a tab showing fine-grained performance data: the time between the onunload of the previous page and the first rendering, the time until onload, the time after onload, and more.
This is where Jiffy-Web comes in. Jiffy-Web is a fine-grained and flexible website performance tracking and analysis suite written by Scott Ruthfield and the team at Whitepages.com.
The Firebug plugin uses that data, which it gets from the DOM JSON object, to do the visualization.
Bill wrote a detailed post on Measuring User Experience Performance that goes into the details behind this tool.
He goes into detail on how to measure things, and what can get in the way. For example, onunload:
The most logical place to measure the start of a request (“from Click”) is on the originating page (see A in figure above). The straightforward approach is to add a timing capture to the unload event (or onbeforeunload). More than one technique exists for persisting this measurement, but the most common way is to write the timing information (like URL, user agent, start time, etc.) to a cookie.
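The unload-based capture described above can be sketched in a few lines. This is an illustrative example, not Jiffy's actual API; the cookie name and format here are assumptions.

```javascript
// Build the cookie value that persists timing info across navigation.
// "perf_start" is a hypothetical cookie name for this sketch.
function buildTimingCookie(url, startTime) {
  return 'perf_start=' + encodeURIComponent(url + '|' + startTime) +
         '; path=/';
}

// Wire it up in a browser; guarded so the sketch also loads outside one.
if (typeof window !== 'undefined') {
  window.addEventListener('unload', function () {
    document.cookie = buildTimingCookie(location.href, Date.now());
  });
}
```

The next page then reads the cookie back on arrival and has a "from Click" start time to subtract from.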
However, there is a downside to this methodology. If the user navigates to your home page from elsewhere (e.g., from a Google search), there will be no “start time” captured, since the unload event never happened on your site. So we need a more consistent “start time”.
We address this by providing an alternate start time. We instrument a time capture at the very earliest point in the servlet that handles the request at the beginning of the response (see B in figure above). This guarantees that we will always have a start time. While it does miss the time it takes to handle the request, it ends up capturing the important part of the round trip time — from response generation outward.
There are a number of ways to save this information so that it can be passed along through the response cycle to finally be logged. You can write out a server-side cookie. You can generate JSON objects that get embedded in the page. You could even pass along parameters in the URL (though this would not be desirable for a number of reasons). The point is you will need a way to persist the data until it gets out to the generated page for logging.
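Of those options, the embedded-JSON one is easy to picture. Here is a minimal sketch under assumed names (`serverTiming` is illustrative, not what Jiffy-Web actually emits): the server drops its request-handling start time into the page, and the client uses it when logging.

```javascript
// Server side (any templating system) would emit something like:
//   <script>var serverTiming = {"url": "/home", "start": 1214200000000};</script>

// Client side: elapsed time since the server began generating the response.
// Note this mixes the server clock and the client clock -- the caveat the
// article raises next.
function elapsedSince(serverTiming, now) {
  return now - serverTiming.start;
}
```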
Note that the absolute time captured here is in server clock time and not client clock time. There is no guarantee these values will be in sync. We will discuss how we handle this later.
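The article promises its own answer later, but one common way around clock skew (an assumption on our part, not necessarily Jiffy's approach) is to keep each measured interval on a single clock: a timestamp captured by an inline script at the very top of the page and the onload timestamp both come from the client clock, so their difference is skew-free.

```javascript
// Would run as the first inline <script> in the document head.
var pageStart = Date.now();

// Later, at onload, both values are from the same (client) clock,
// so server/client skew cancels out of the difference.
function renderTime(pageStart, onloadTime) {
  return onloadTime - pageStart;
}
```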
He also talks about practical issues he has found implementing this at Netflix, and how the data can reveal the real story:
Recently we fielded a different variation of our star ratings widget. While it cut the number of HTTP requests in half for large Queue pages (a good thing), it actually degraded performance. Having real-time performance data let us narrow down on the culprit. This feedback loop is an excellent learning tool for performance. With our significant customer base and large number of daily page hits, we can get a really reliable read on the performance our users are experiencing. As a side note, the median is the best way to summarize our measurements, as it nicely takes care of outliers (think of the widely varying bandwidths and browser performance profiles that can all affect measurements).
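That last point is worth making concrete. Computing the median is a one-liner-ish affair, and a single slow outlier (say, one dial-up user) barely moves it, whereas it would drag the mean way up:

```javascript
// Median of an array of timing samples (milliseconds).
function median(values) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(sorted.length / 2);
  return sorted.length % 2
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Four typical page loads plus one extreme outlier:
median([120, 130, 140, 150, 9000]);  // → 140, while the mean is 1908
```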