Saturday, March 15th, 2008

Progressive Enhancement with CSS support

Category: CSS, Testing

Via John Resig we just learned about a clever technique from the Filament Group in Boston called Progressive Enhancement with CSS support.

The study rightly claims that object detection to determine whether a user agent is capable of supporting a certain interface is not enough. You also need to make sure that the browser supports the right look and feel – in other words, that the CSS you will apply can be rendered as intended.

I’ve done similar things in the past, reading out the offsetWidth of an element to determine whether the browser is in standards or quirks mode, but Filament Group’s test script goes a lot further than this. It tests for the following CSS support:

  • Box model: make sure the width and padding of a div add up properly using offsetWidth
  • Positioning: position a div and check its positioning using offsetTop and offsetLeft
  • Float: float 2 divs next to each other and evaluate their offsetTop values for equality
  • Clear: test to make sure a list item will clear beneath a preceding floated list item
  • Overflow: wrap a tall div with a shorter div with overflow set to ‘auto’, and test its offsetHeight
  • Line-height (including unitless): test for proper handling of line-height using offsetHeight, primarily to rule out Firefox 1.0

For example, correct box model support is tested along these lines:

javascript

function testBoxModel() {
  var newDiv = document.createElement('div');
  document.body.appendChild(newDiv);
  newDiv.style.visibility = 'hidden';
  newDiv.style.width = '20px';
  newDiv.style.padding = '10px';
  // 20px content width plus 10px padding on each side should yield an offsetWidth of 40
  var divWidth = newDiv.offsetWidth;
  if (divWidth != 40) { document.body.removeChild(newDiv); return false; }
  document.body.removeChild(newDiv);
  return true;
}
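The other checks in the list work along similar lines. As a rough sketch of the float test (my own approximation of the idea, not Filament Group’s exact code), you could float two divs inside a hidden wrapper and compare their offsetTop values:

javascript

// Float check sketch: two floated divs that fit side by side should end up
// on the same line, i.e. report the same offsetTop
var wrapper = document.createElement('div');
wrapper.style.visibility = 'hidden';
wrapper.style.width = '100px';
wrapper.innerHTML = '<div style="float:left;width:20px;">a</div>' +
                    '<div style="float:left;width:20px;">b</div>';
document.body.appendChild(wrapper);
var floats = wrapper.getElementsByTagName('div');
var floatSupported = (floats[0].offsetTop == floats[1].offsetTop);
document.body.removeChild(wrapper);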

When the browser passes all the tests, the script adds an “enhanced” class to the body that you can use in your style sheet.
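The hook itself is tiny; a minimal sketch, assuming the checks are wrapped in functions like the testBoxModel example above:

javascript

// If every check passes, flag the body so the style sheet can scope its
// richer rules to selectors such as .enhanced .tabs or .enhanced .menu
if (testBoxModel() /* && the remaining checks */) {
  document.body.className += ' enhanced';
}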

Neat idea and very defensive programming.

Posted by Chris Heilmann at 4:16 am

7 Comments »


Some very interesting thoughts … but it would certainly increase the overall complexity, not to mention hurt the rendering speed of the pages. For mostly static sites this technique would probably be overkill, but for very dynamic applications I imagine that techniques such as this one could help remove a lot of the frustrations of supporting user agents with very different capabilities.

One thing to consider, though, is that user agents’ capabilities change over time, so you’re going to be constantly re-evaluating what “enhanced” really means for your site.

Relying on JavaScript to ensure sexy rendering might also be a barrier for some, and might not be considered defensive.

I’ve already adopted Chris’s technique of adding a class name to the body element when JavaScript is enabled, which gives me greater control in styling my content for JavaScript-disabled browsers.
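A minimal sketch of that body-class approach, with “js” as an example class name:

javascript

// Runs only when JavaScript is available, so the style sheet can key
// JS-dependent rules off body.js and keep a sensible default otherwise
document.body.className += ' js';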

Comment by Morgan Roderick — March 15, 2008

Contributors on comp.lang.javascript have been promoting testing CSS support like this for quite a few years. I used a technique like this in my recent article to test for display:none support before enabling a tabbed pane widget: http://peter.michaux.ca/article/7217
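A minimal sketch of that kind of check (not the article’s exact code):

javascript

// Create a probe element, hide it with display:none and confirm the
// browser really collapses it before enabling the tabbed pane widget
var probe = document.createElement('div');
probe.style.display = 'none';
document.body.appendChild(probe);
var displayNoneWorks = (probe.offsetHeight === 0 && probe.offsetWidth === 0);
document.body.removeChild(probe);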

Comment by PeterMichaux — March 15, 2008

I’m still somewhat put off by this approach; it just seems way too inefficient.

Seems to me that in a non-trivial app you could end up with a combinatorial explosion of these tests, ballooning the app size and startup time. Consider the awesome variety of ways that browsers across the spectrum fail on standards support: not just the box model, but the Acid test du jour times browser type times browser version.

And then there are failures that are hard or impossible to detect, like rendering errors, pathological performance gotchas, and strange memory leaks, not to mention mobile browsers where events don’t work like you expect (hello iPhone and mouse events).

The thing is, these tests run each and every time someone visits the page, whereas you could run these tests across a large sample of the browser space, save the results to a browser sniffing database, and then evaluate everything at compile time.

If you fail to recognize a browser when sniffing, either a) fall back to the tests or b) punt. The likely case is you’ll still be serving 99.9% of the market until it’s fixed.

At the least, use a cookie to remember that the tests have been run and what the results are, so you can select a code base pre-optimized for those results.
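A minimal sketch of that caching idea (the cookie name is arbitrary and runCapabilityTests is just a stand-in for the feature checks):

javascript

// Run the capability tests only once per browser and cache the verdict in
// a cookie; runCapabilityTests stands in for whatever checks you run
var match = /(?:^|; )enhanced=(\w+)/.exec(document.cookie);
var enhanced = match ? match[1] === 'true' : runCapabilityTests();
if (!match) {
  document.cookie = 'enhanced=' + enhanced + '; path=/';
}
if (enhanced) { document.body.className += ' enhanced'; }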

I just don’t like the idea of having to pay for these tests *every time*. It means suboptimal runtime performance, larger code size, and excessive network traffic, all to deal with a tiny percentage of the browser market that won’t be caught by sniffing.

Comment by cromwellian — March 16, 2008

@cromwellian: If you check out the article on Filament’s site, you’ll see that a cookie is dropped so that a browser is only tested one time (if it passes). This will tell the test to auto-pass on every following page load. This cookie could also be detected on the back-end to serve a high-end experience from the start.

Also, please take a look at the tests that occur and the resulting matrix of browsers that pass or fail. Chris’s mention of the box model test is only a portion of what takes place. We have made this division of experiences without the use of sniffing, but rather by testing for features and handling. Please note that ‘failing’ the test simply means a different, yet still functional, experience.

Simply sniffing may not tell us what we want to know. We don’t care who you are (device-wise), we just want to know which level of experience suits you best. Even A-grade browsers can be turned into B-grade browsers by changing preferences around.

We think the technique has merit and we’d love some help to make it even better. Please check out the comments on John Resig’s article as well, as many of these issues have been discussed there already.

Thanks!

Comment by ScottJehl — March 16, 2008

yuk

do not like at all

you’re going to end up having to write and support 2 or maybe even 3 versions of your css and you’ll end up getting all paranoid about things like “oh my site doesn’t look correct for people using IE4 from a cell phone, must now spend 10 hours correcting”.

A far better solution is to use a solid CSS framework or templates and basically disregard any browsers incapable of handling such basic standards.

Comment by stevesnz — March 16, 2008

@stevesnz: I’m not sure you really understand the purpose of this approach. The idea is for each developer to make their own decision on what combination of HTML, CSS and JS they are comfortable initially serving up to every device; the script then allows for ‘hooks’ to add (or remove) additional CSS and scripts for the browsers that have passed the test, in order to layer on a richer experience. We’re actually advocating just having these two versions (lo-fi and hi-fi) to keep the complexity to a reasonable minimum, but each developer can decide how much to layer the experience based on their ability to fully test and maintain additional slices of device capabilities.

The real point we’re trying to make is that if you simply serve up a whole pile of complex CSS and JS and don’t manage what happens to devices that will misunderstand this code, you are leaving a lot of people out in the cold. We believe the web should be about providing content and functionality to everyone, not just to the latest and greatest versions of popular browsers.

The world we’re coding for is getting more diverse and complex, not simpler, because of the wide range of mobile phones, interactive TVs, kiosks and gaming devices that are coming onto the market with web access and “non-standard” rendering or JS support. When you add to this environment the need to support those with disabilities and achieve Section 508 compliance, we don’t feel it’s responsible to “basically disregard any browsers incapable of handling such basic standards”. Even though they may not be using FF 2 or IE 7, these are still real people (and potential customers), and turning them away because you wanted to use a JS-dependent widget or complex CSS layout is really not an acceptable option.

Because Filament Group tends to specialize in building rich web apps, not simple content sites, we have made the choice to start by serving up a page that uses clean, semantic HTML and CSS (and sometimes a bit of JS) that we believe will work everywhere. We keep the code as lightweight as possible because many people have slower web connections and less memory. If the client passes the test, we can feel more confident that layering in all the code for a richer experience will not cause rendering or functionality issues. We want to play it safe with which browsers pass, because it would be a shame for a device to receive a perfectly usable lo-fi version, then be upgraded to the hi-fi version and have some little CSS or JS issue make the site unusable.

This approach is admittedly a work in progress (and it’s great to hear all the interesting dialog coming from our post) but it does provide a reasonable way to achieve something close to universal access for rich web experiences, all with a unified code base and manageable testing requirements. Hopefully, the web community can all help us refine this idea and reach the common goal of providing true accessibility to all without an unmanageable mountain of code.

Comment by ToddParker — March 16, 2008

To follow up on Scott’s post: in case anyone’s curious about how much of a performance impact running this test script has, head over to the Filament Group website, because it uses this technique for the whole site. I think you’ll see it does its business with nothing more than a tiny blink (worst case) on the first page load. After that, the cookie makes the script and CSS swapping pretty much undetectable. We’re always looking for ideas on how to make this even faster. Check it out at: http://www.filamentgroup.com

Comment by ToddParker — March 16, 2008
