Monday, April 19th, 2010
Lindsey Simon has shipped a powerful update to Browserscope, the community-driven tool to test and profile browsers. The new feature is exciting as it truly delivers the “community-driven” piece at scale: you can now add your own tests to the corpus, TestSwarm-style. The two are very different, of course: TestSwarm is about running *your* code in many browsers, while Browserscope is about testing the browsers themselves.
Here is what Lindsey says:
It seems like nearly every week we read about an interesting new hosted test for people to visit and run their browsers through (recent examples include html5test.com and findmebyip.com). Developers really love to poke at the black boxes they code for – and the matrix of browsers, OSes, and networks is enormous. One thing I, and I presume other developers, would love to see is the aggregated results for these tests by user agent. Considering this is exactly what we built Browserscope to accomplish for our test categories, and that a user-hosted test feature has been on our Roadmap, the Browserscope team is happy to announce that we’re opening up an alpha of our User Tests feature.
Conveniently, this past week a User Tests use case came up for me at work and so it’s been a driver for building this feature. We began working on a UI component that we wanted to test for speed of task completion. After building up a test page with a timer and some deltas it dawned on me just how cool it would be to open up this test to the world, and aggregate the results. The test is kind of strange in that the UI component is out of its context, and you can argue about the mechanics of the test itself, but I still feel like the results may be informative. Interestingly too, this test is exactly the kind of thing we would *not* want to feature on the homepage of Browserscope (it’s more of a performance test than a spec/binary test). And yet, the backend system with its median aggregation, scalability, and user-agent parsing library is a perfect fit. So check it out – and see how other people are doing on the test (courtesy of Browserscope).
This is definitely a release early/often feature, and we want to be explicit that things may change or break in the API while we’re in alpha mode. We may have to take the service offline briefly to fix things. But if you write tests for browsers and want to aggregate your results, sign in to Browserscope, register your test page and then read the HOWTO to start saving your results in the system. Please send any feedback to me or to our mailing list. We really hope to make this an easy system to use for the tests you’re already hosting.
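To make the flow concrete, here is a minimal sketch of what saving results from your own test page looks like. The shape of the results object and the beacon URL are assumptions drawn from the HOWTO mentioned above, and the test key is a placeholder; check the current docs before relying on either.

```javascript
// Hedged sketch: reporting results to Browserscope's User Tests beacon.
// Browserscope reads results from a global object mapping test names to
// integer scores (e.g. milliseconds, or 1/0 for pass/fail).
var _bTestResults = {
  'task_completion_ms': 420,   // hypothetical timing result
  'supports_feature_x': 1      // hypothetical pass/fail result
};

// Loading the beacon script sends _bTestResults to the aggregator.
// (Guarded so the snippet also runs outside a browser.)
if (typeof document !== 'undefined') {
  var _bTestKey = 'YOUR_TEST_KEY';  // assumed placeholder from registration
  var beacon = document.createElement('script');
  beacon.src = 'http://www.browserscope.org/user/beacon/' + _bTestKey;
  document.body.appendChild(beacon);
}
```

The key idea is that your page does whatever measurement it wants, drops the numbers into one global object, and the beacon script takes care of shipping them off for median aggregation by user agent.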
I hope that we see a bunch of tests in there. One advantage of the Browserscope approach is that it self-updates. A lot of the other sites are static, and when new browser versions come out the new feature data isn’t reflected. It could be cool, for example, if the readiness visualization pulled the data from a JSON feed from Browserscope :)
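As a rough illustration of that idea, here is how a visualization might turn such a JSON feed into a per-browser readiness table. The payload shape below is entirely hypothetical (invented for this sketch); the real feed’s structure would be whatever Browserscope exposes.

```javascript
// Sketch: consuming a (hypothetical) Browserscope JSON feed of median
// results keyed by user agent, as a readiness chart might.
var feed = {
  results: {
    'Firefox 3.6': { 'supports_feature_x': 1, 'task_completion_ms': 500 },
    'Chrome 4':    { 'supports_feature_x': 1, 'task_completion_ms': 380 },
    'IE 8':        { 'supports_feature_x': 0, 'task_completion_ms': 900 }
  }
};

// List the user agents that pass a given binary (spec-style) test.
function supportingBrowsers(feed, testName) {
  var out = [];
  for (var ua in feed.results) {
    if (feed.results[ua][testName] === 1) {
      out.push(ua);
    }
  }
  return out.sort();
}

console.log(supportingBrowsers(feed, 'supports_feature_x'));
```

Because the feed would regenerate as new browser versions report in, a chart built this way stays current without anyone hand-editing a support matrix.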
Thanks for doing this, Lindsey and team!
Posted by Dion Almaer at 7:00 am