
Friday, November 20th, 2009

Full Frontal ’09: Todd Kloots on ARIA and Accessibility

Category: Accessibility, HTML, Usability

Todd Kloots is talking accessibility and ARIA, with examples showing how YUI nicely supports these techniques. He explains how to improve in three areas: perception, usability, discoverability.

Can We Do ARIA Today?


Firefox and IE (he didn’t say which versions) have really good support for ARIA, as do Opera, Chrome, and Safari. Likewise, the screenreaders – JAWS, Window-Eyes, NVDA – all have good support. And the libraries – YUI, Dojo, jQuery UI – have good support baked in; one of the benefits of using such a library is that you get ARIA support automatically.

Improving Perception – ARIA and Screenreaders

Websites can have problems in perception when rendered with a screenreader; it’s hard to get the big picture about what the words refer to. With ARIA, we can close the gap in perception. This is another example of progressive enhancement – augment the item by adding properties, markup or Javascript if required:


  node.setAttribute("role", "menu")
  node.role = "menu" // alternative introduced by IE8. IE-only, so don't use!
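A minimal sketch of how the standards-based form above scales up to a whole widget – `enhanceMenu` and its arguments are invented for illustration, not a YUI API:

```javascript
// Hypothetical helper (not a YUI API): progressively enhance an
// existing list into an ARIA menu using only the standard
// setAttribute form, which works across browsers.
function enhanceMenu(listNode, itemNodes) {
  listNode.setAttribute("role", "menu");
  for (var i = 0; i < itemNodes.length; i++) {
    itemNodes[i].setAttribute("role", "menuitem");
  }
  // ARIA states are set the same way, e.g. when the menu opens:
  listNode.setAttribute("aria-hidden", "false");
}
```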

Improving Usability – Keyboard Focus, ARIA, and YUI support

Keyboard access. For some people, it’s a necessity, and for others it’s an option or preference (think Vim). To support it, users must be able to tab to an element to give it focus, so you should control tabbing with tabindex. A good application of controlled tabbing is, fittingly enough, moving through tabs. Another is modal dialogs: the browser doesn’t “know” a dialog is modal, so we have to manage focus ourselves to make sure it doesn’t slip out of the one thing users should be able to interact with!

Todd shows us just how many steps are required to perform a task in a complex application like Yahoo! mail, using just tabs to navigate through – 19 steps in this example, walking through the toolbar; and even more, when you consider the wider picture of entering the app in the first place. To help with this, he introduces a pattern whereby tabIndexes are updated dynamically to control what comes next, as you move through a toolbar. A negative tabIndex will ensure the element is skipped over.
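The roving-tabindex idea can be sketched in a few lines; the function names here are illustrative, not Todd’s actual code:

```javascript
// Roving tabindex: only the active toolbar item is reachable via Tab
// (tabIndex 0); every other item gets -1 so the Tab key skips it.
function rove(items, activeIndex) {
  for (var i = 0; i < items.length; i++) {
    items[i].tabIndex = (i === activeIndex) ? 0 : -1;
  }
}

// On an arrow-key press, compute the next active slot, wrapping
// around at both ends of the toolbar.
function nextIndex(current, delta, length) {
  return (current + delta + length) % length;
}
```

An arrow-key handler would call `rove(items, nextIndex(current, 1, items.length))` and then focus the newly active element.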

You can also use the :focus pseudo-class to ensure focus appearance is consistent for all elements. But, and it’s a big one, it’s not very well supported; even IE8 doesn’t support :focus on <a>, for example. Doing it manually with Javascript has problems, in particular performance. Fortunately, PPK has worked out how to handle focus and blur with event delegation, which is much more performant, and the resulting technique is built into YUI3.
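The trick behind event delegation for focus is that focus and blur don’t bubble, so a delegated listener has to use the capture phase (or IE’s bubbling focusin/focusout variants). A rough sketch, with invented names rather than PPK’s or YUI3’s actual API:

```javascript
// One pair of listeners on a container instead of one per element.
// Standards browsers: capture phase (third argument true) sees
// focus/blur even though they don't bubble. IE: focusin/focusout
// are bubbling equivalents.
function delegateFocus(container, onFocus, onBlur) {
  if (container.addEventListener) {
    container.addEventListener("focus", onFocus, true);
    container.addEventListener("blur", onBlur, true);
  } else if (container.attachEvent) {
    container.attachEvent("onfocusin", onFocus);
    container.attachEvent("onfocusout", onBlur);
  }
}
```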

Device-independence with markup was also advocated to further improve accessibility:

  <input role="menuitem" type="text"/>

Improving Discoverability – ARIA

Essentially, this is about “random access” and keyboard shortcuts; jumping straight to areas in this page and activating them. The key ARIA feature here is “landmark roles” to identify particular points on the page. This is still something where users aren’t aware of the feature, and Todd points out it’s not surprising as most screen reader users are self-taught (just under 75% according to the study he showed). Also, not every user is a geek, and the same applies to screen-reader users.

Posted by Michael Mahemoff at 11:10 am

3.6 rating from 21 votes

Full Frontal ’09: Robert Nyman on the Javascript Language

Category: JavaScript, Presentation

Robert Nyman walks through some of the more subtle low-level features of Javascript, and some of the idioms that have emerged.

Comparisons: Understanding identity (===) versus equality (==).

Boolean expressions: Understanding short-circuit logic (a && b won’t evaluate b if a is falsey);

Types: Type coercion ("1"+2+3); “falsey” values (false, null, 0) versus “truthy” ones; the importance of using parseInt and operators like instanceof.
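A few lines make these coercion rules concrete:

```javascript
// "+" concatenates as soon as one operand is a string, working
// left to right:
var coerced = "1" + 2 + 3;         // "123", not 6
var summed  = 2 + 3 + "1";         // "51": 2+3 happens first

// parseInt recovers a number (always pass the radix):
var n = parseInt("1", 10) + 2 + 3; // 6

// Loose equality coerces falsey values into surprises; === doesn't:
var loose  = (0 == "");            // true
var strict = (0 === "");           // false
```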

Functions: Anonymous functions; self-invoking functions – (function() { })(); using the arguments collection to get all the arguments to the current function (important to note it’s not a real array with all the array methods); and using arguments to overload functions.
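For instance, the usual idioms around `arguments` look like this:

```javascript
// `arguments` is array-like but not a real Array, so it lacks join,
// slice, and friends; borrowing Array.prototype.slice converts it.
function argsToArray() {
  return Array.prototype.slice.call(arguments);
}

// Overloading by inspecting how many arguments were actually passed:
function greet(name, greeting) {
  if (arguments.length < 2) {
    greeting = "Hello"; // default when the caller omits it
  }
  return greeting + ", " + name;
}
```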

Objects: Using object literal notation { a:b, c:d } instead of setting up properties individually; the equivalence of ben.arms and ben["arms"], and how useful the latter can be in conjunction with a function argument, i.e. letting the caller pass in a variable that names the property to set; using “in” to check if a property exists (if ("arms" in ben)).
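A quick sketch of those object idioms:

```javascript
// Object literal instead of assigning properties one by one:
var ben = { arms: 2, legs: 2 };

// ben.arms and ben["arms"] are equivalent; brackets shine when the
// property name arrives in a variable, e.g. from a function argument:
function setFeature(obj, name, value) {
  obj[name] = value;
}
setFeature(ben, "eyes", 2);

// "in" checks that a property exists, even if its value is falsey:
var hasArms = "arms" in ben; // true
```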

Inheritance: Using the prototype chain for inheritance from subclass to superclasses, up to Object. There are various implementations – e.g. Resig’s, Dean Edwards’ Base, Dan Webb’s; if you understand these implementations, then you understand Javascript. However, Robert argues for the native way of doing it – as Doug Crockford says, “I now see my early attempts to support the classical model in Javascript as a mistake”.
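The “native way” can be boiled down to Crockford’s beget pattern (later standardised as Object.create): inherit from an object directly through the prototype chain.

```javascript
// Crockford-style beget: make a new object whose prototype is `parent`.
function beget(parent) {
  function F() {}
  F.prototype = parent;
  return new F();
}

var animal = {
  legs: 4,
  describe: function () {
    return this.name + " has " + this.legs + " legs";
  }
};

// dog inherits legs and describe through the prototype chain:
var dog = beget(animal);
dog.name = "Rex";
```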

Global scope: Avoid using global scope where you can – for example, by nesting functions. Robert later points to the Yahoo! module pattern.

Binding this: Using call and apply; these are useful for setting this and also can pass arguments through from the current function to another one without having to manually copy them out.
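A short illustration of both uses:

```javascript
// call sets `this` explicitly and takes arguments one by one:
function describe() {
  return this.name + ": " + Array.prototype.slice.call(arguments).join(", ");
}
var viaCall = describe.call({ name: "post" }, "js", "talk");

// apply takes an array-like argument list, which lets one function
// forward its own `arguments` wholesale without copying them out:
function forwarded() {
  return describe.apply({ name: "log" }, arguments);
}
```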

Sugaring: Adding syntax sugar, e.g. extending String.prototype.
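For example, adding a trim method that older engines lacked, guarding so a native version wins:

```javascript
// Sugar via String.prototype: every string instance picks up the
// method through the prototype chain. Guard so we don't clobber a
// native implementation where one exists.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    return this.replace(/^\s+|\s+$/g, "");
  };
}
```

Either way, `"  hello  ".trim()` then gives `"hello"`, whether the method is native or ours.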

Currying: As illustrated by Doug Crockford’s curry implementation.
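In the spirit of that implementation (this is a paraphrase, not Crockford’s exact code), currying presets some arguments now and takes the rest later:

```javascript
// Returns a new function with `preset` arguments baked in; the
// wrapper concatenates them with whatever is passed later.
Function.prototype.curry = function () {
  var slice = Array.prototype.slice,
      preset = slice.call(arguments),
      fn = this;
  return function () {
    return fn.apply(this, preset.concat(slice.call(arguments)));
  };
};

function add(a, b) { return a + b; }
var add10 = add.curry(10); // add10(5) -> 15
```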

Posted by Michael Mahemoff at 7:06 am

3.4 rating from 20 votes

Full Frontal ’09: Chris Heilmann on Javascript Security

Category: JavaScript, Security

It’s another Javascript conference! Full Frontal has kicked off in Brighton this morning (fullfrontal09 on twitter). First up is Ajaxian’s own, and Yahoo’s, Chris Heilmann on Javascript security. The main theme: let’s use Javascript sensibly and not just blame the language when other things are creating the risks too.

Chris walks us through the history of Javascript. The days of building complex systems with document.write() are thankfully over, though some people still think that’s what it’s all about. After years of annoying people with all the js bling, Ajax came around and suddenly Javascript was seen as a tool for useful stuff. But… a little too much Ajax, perhaps? People used it where it didn’t need to be and how it shouldn’t be used – hence the security fears.

According to a pie chart Chris presents, browser problems are responsible for only 8% of vulnerabilities; the biggest problems are SQL injection and XSS, where the server should be locking things down.

Don’t judge the language by its implementation. It does have intrinsic issues like global variables, but the right implementation can keep things secure. As well as poor practice, the browsers bear responsibility – and the cool kids, Safari and Firefox, are the most vulnerable according to this survey. It gets a lot worse when you start playing with browser extensions.

So do we turn Javascript off? No, the experience of Google Maps etc. is just too good. We don’t have to learn just by “View Source” anymore; there are plenty of resources out there to learn how to do it and how to do it properly (not just with a magic code-gen tool). Use Javascript for the right things, not all things – e.g. slicker UIs, data-validation warnings, UI controls not native to HTML, visual effects not native to CSS.

What if you’re sharing content from third parties, as the Yahoo homepage now allows? Caja is presently the only way to do sandboxing in the browser. It doesn’t output pretty code right now, but it does cut out many risks. Caja prohibits eval(), iframes, * and _ hacks, and many other dangerous features. The latest YUI is Caja-compliant, and Chris reports John Resig is open to the same thing for jQuery, if it turns out there is demand out there.

Chris presents various examples of Javascript outside the browser – Air, web widgets, server side, even TV sets. It’s a very useful tool here, and shows it doesn’t have to be limited to the browser and the security risks that come with browsers.

Posted by Michael Mahemoff at 6:00 am

2.9 rating from 17 votes

Monday, November 16th, 2009

280Atlas: Paid Beta Available

Category: Cappuccino


The long-awaited 280Atlas keeps marching on toward its full release. The milestone the awesome 280North team reached this weekend was a paid beta.

The tool is Mac-only right now, and the team, interestingly, created their own framework for taking a Web app and making it run on the desktop. Note the scrollbars and feel of the application when you run it. Tom talks about the architecture on the beta blog:

At first glance Atlas appears to be a typical desktop application. However, under the hood it’s actually a Cappuccino application. This article talks a little bit about how and why we took this approach.

When we set out to build Atlas, using Cappuccino to create a web application was the obvious choice. However, most developers prefer to work with files on their local filesystem for many good reasons, including convenience, security, and offline access.

We considered a number of solutions, including syncing with the local filesystem or integrating with source control, but packaging the Atlas application as a downloadable desktop app was the easiest solution for everyone.

So how does it work? Atlas functions remarkably like a typical Cappuccino web application.

Instead of a web browser Atlas uses a custom (though generic) native application built around WebKit to bridge to Cappuccino. The application handles things like creating native windows instead of the “inner” windows you see when loading a Cappuccino application in a normal web browser, and using the native menu bar.

Atlas spawns a small web server, which serves the Atlas application, Objective-J and Cappuccino, and other static resources, as well as an application server to handle backend functionality, such as reading and writing to the filesystem.

The “backend” (actually running on the user’s computer) is built on Narwhal, a general purpose JavaScript platform, which implements the CommonJS standard APIs. Narwhal provides an environment to run JavaScript (and Objective-J, of course) outside the browser, in our case as part of a desktop application.

On top of Narwhal we have Jack, a web application platform modeled after Ruby’s Rack. Jack implements the JSGI portion of the CommonJS specification, and a lot more. To handle communication with the filesystem we use a Jack-based WebDAV server, called JackDAV. Atlas uses Cappuccino’s CPWebDAVManager class to talk to JackDAV.

Congrats to the entire 280 North team. They are all fantastic blokes and I wish them great success as they help Web developers with great and fun tools.


Posted by Dion Almaer at 1:19 am

3.3 rating from 29 votes

Tuesday, November 3rd, 2009

WebOS Developer Event – Roundup

Category: Mobile

Editor’s Note:
Michael did a great job jotting down notes at our developer event in London, and we appreciate him taking the time to do a writeup. Some of the notes have been taken out of context, so we wanted to clarify: We started with a talk on the future of the mobile Web, which covered the potential of the Web as the platform for devices, and why we were excited to join Palm.
We don’t comment on our specific SDK plans, and while we are personally excited about the Web gaining GPU acceleration via technologies like WebGL and CSS Transforms, and we would like to see webOS gain these capabilities to allow web developers to better leverage our fantastic hardware, we were answering a question about our personal opinion on what we’d like to see happen to the platform. We don’t believe the term “immediate” was even mentioned by us, and we are sorry that people have read too much into this particular topic.

Ben and Dion have just wrapped up a WebOS talk in London, in conjunction with O2 Litmus. They explained why Palm is using the web as an application platform (might be preaching to the converted on this website!) and covered some of the development issues.

Embracing The Web

Ben and Dion open by discussing a panel Ben was on some years ago with Dave Thomas. Ben answered that rich UIs were the most important trend, but Dave held up his mobile phone and said this was the future of software. And (according to Ben) Dave has been vindicated. Today we have many, many devices, and it’s tricky to target them both as developers (technically) and as businesses (commercially).

But what’s interesting is the web browsers that have snuck into many of these platforms. The Ajax revolution changed the game, making it possible for the first time to build real-world web apps. (Screenshots: GMail, GMaps, GOffice, Bespin, 280slides.) It’s only happening just now, and pretty soon the web will be a great place not just for portability of the code, but for portability of the distribution mechanism.

Tools like Fluid and Mozilla Prism give us a specialised browser for a web app. On the other end of the spectrum, tools like Adobe Air and Appcelerator Titanium let developers build full-fledged desktop applications.

But why the web? How about other technologies like Silverlight and Flash? Well, the browser really hasn’t changed that much (Netscape screenshot). But underneath, the engine has changed dramatically. We’re going from the hacky world of the first browsers to one where developers will benefit greatly from the Javascript engines, renderers, and APIs available to us.

For example, with canvas, the ability to do dynamic graphics (nice demos). Likewise, SVGs. Custom typography; Firefox is rolling out support for even obscure features of fonts. Later in the presentation, Ben demonstrates further capabilities: OpenGL and 3D CSS transformations (more nice demos); what Ben calls “the final frontier” of visual interfaces.

Faster Javascript engines mean more than just increased performance. As Steven Levy points out, when you increase something’s performance by an order of magnitude, you haven’t just increased the performance; you’ve created something new. So we have generational garbage collection, for example – a sign of the virtual machines maturing and becoming much faster.

Related to JavaScript speed is threading. We don’t have threads and, given the opinions of the creator of JavaScript, probably never will. But “browsers now have something that’s even better than threads”: web workers. They’re used in different ways; for example, Chrome extensions can run in their own process this way. Another example is running a database engine like SQLite in a separate process.

So does the web have the capabilities for today’s and tomorrow’s applications? There’s a huge user base and a huge developer base – more developers than any other platform. Some people still think of JavaScript as a toy, but when they get into it, they often realize it’s actually quite good – it’s not JavaScript they don’t like, but things like browser incompatibilities.

Development Details

Web developers can use the Mojo framework, which provides simple Prototype-based APIs. Notifications are HTML controls themselves, so you can put whatever HTML you want inside them. Security-wise, applications can get different powers.

Palm wants to sell phones, not proprietary APIs, so it’s involved with BONDI and the W3C widgets standardisation efforts – one of the things they need to hear from us, the developer community, is what we want. “Palm pays us, but they didn’t pay us enough to sell out”: Ben and Dion are developers, and they’re not going to tell other developers to use “our funny proprietary API, you’ll love it”. However, they can’t say when all this will happen, as they’re evolving the platform pragmatically and feel other things might have more immediate impact, e.g. OpenGL support.

The web distribution model is simple – the user surfs to a URL! But many people actually want a catalogue, and in fact some app catalogues are becoming spam catalogues. Some even boast about how many apps there are in the catalogue, Ben notes with a wink at the big I. With Palm, developers pay $50, which helps to avoid the spam problem. There will be one catalogue, but developers can control which markets they’re selling in, and get useful analytics and feeds about usage. Palm wants to reduce the friction for people buying an app, so it would consider (although nothing is definite) carrier billing and affiliate links.


The developer portal is currently being overhauled. Under consideration are ways to make things more transparent, e.g. bug tracking and planned features.

Ben and Dion anticipate that developers will probably be able to opt in to JavaScript obfuscation (or some other form of obfuscation). As Dion explains, View Source has been really important on the web, so there’s a tension and it’s likely multiple options will be available.

On ease of use: multitasking has been great; UI latency is still an issue even though the hardware is comparable to the 3GS. The problem is that the path to the GPU didn’t exist, but with CSS transforms that will be solved in the future. As for screen size: with the Pixi breaking the mould four months after the Pre, the happy world of coding to a single screen size on mobiles is going away. Ben says it would have been easier for Palm if it wasn’t the first to break the mould, but the reality is the mobile space will break out to big screens, phones, etc., so it won’t be one fixed resolution or even aspect ratio.

There will be lots of new tools to develop with, and they’ll work with WebKit. Ben and Dion (not speaking for Palm, they’re sure to add) are open to the thought of embracing Flash for native apps and keen to hear people’s thoughts on it.

The open source question… will Palm open source WebOS? Ben says “we (Ben and Dion) would love to do it”, but it’s not Android, targeting thousands of phones, and it still has to be considered.

Update: It turned out attendees scored free Pres after the event. Thanks, fellas :).

Announcement from O2 Litmus guys.

Developer mail to

Event hashtag: #o2palm

Posted by Michael Mahemoff at 3:01 pm

1.7 rating from 68 votes

Fast by Default and Web Performances

Category: CSS, JavaScript, Performance

It does not matter if we have the latest CPU able to devour every single bit of a web page: round trips and network delay are still the real bottleneck of any website, and Steve Souders knows it so well that he summarizes best practices in 66 slides.

And That’s Not All!

Steve’s slides mainly focus on JavaScript techniques for downloading files simultaneously without blocking either the download itself or, later, the layout rendering. He is generally right in his assumptions but, as with everything, there are exceptions. Please let me share my thoughts about performance here – not necessarily too hard to get right if we know exactly what we are doing, but somehow hard to make perfect as well!

Image Sprites Rule

A common technique to avoid unnecessary requests to the server is the use of image sprites. This means that rather than 16 images, for example, we could have a single 4×4 grid containing all the required images, positioned when necessary via CSS and better compressed. Nothing new? Well, a common side effect of this technique is that if the current page uses only 4 out of the 16 images, the total download size will be bigger than required – and the total number of milliseconds to a fully rendered layout will, again, be bigger!
On the other hand, once the image is cached it won’t be downloaded again for other pages that need other cells in the sprite. In summary, sprites have pros and cons: if we put every image referenced in the CSS into a single sprite but the user is interested in only one page, we are probably wasting bandwidth, and the initial impression will be of a slow website. A good compromise is to group related images in a single sprite, making sure that if one is required, the rest of the UI will use at least two thirds of the other images in the same sprite.

JavaScript And Sprites Rule

Even if the preceding point sounds obvious, we generally act in the opposite way with JavaScript. Not every library has been created to load incrementally, and jQuery, for example, is one of those libraries: even if we use just Sizzle and a few core methods, we usually include the full library.
OK, jQuery is lightweight by default, and I have used it as an example only because of its popularity, but it illustrates the point: JavaScript is usually served as one file with everything inside, and this is not the best we can do… remember the sprites rule? Only if we use at least two thirds of the library in a page does it make sense to include the entire library…
Other libraries such as MooTools, YUI, or Dojo allow us to choose the exact package we need for our purpose or, even better, are able to load dependencies incrementally at runtime… but is this always the better technique?
From a parallel-download point of view it is, but for overall responsiveness it may not be.
Imagine a page with 6 file/namespace dependencies: if those 6 files were served as a single one, minified and gzipped, a single request with a better, shared compression dictionary would beat 6 separate files. The sprites rule still applies: for a few more milliseconds, a single request and a better compression ratio, the overall responsiveness of the page will improve, thanks to every dependency being included up front rather than split across files and lazy requests.
In other words, if the page is not usable without those files, the user will have a bad or “slow” experience compared with a page that loads them in one shot.
The ideal scenario would be a non-blocking, lightweight loader at the top of the page, able to fetch grouped or optimized pieces of a library and execute code only at the end – something like this:


(function(){
    function script(src){
        // create the script tag and load it
        var script = d.createElement("script");
        script.type = "text/javascript";
        script.onload = script.onerror = onload;
        script.src = src;
        head.insertBefore(
            script,
            head.firstChild
        );
    }
    function onload(){
        // code already parsed, remove this script
        this.parentNode.removeChild(this);
        // call the callback, if present,
        // when every script has been loaded
        if(--length == 0)
            $.onload()
        ;
    }
    var // document shortcut
        d = document,
        // the head element or the documentElement (quirks)
        head = (
            d.getElementsByTagName("head")[0] ||
            d.documentElement
        ),
        // scripts length
        length = 0,
        // exposed object
        $ = {
            // method to call to load scripts
            load:function(){
                for(var
                    i = 0,
                    l = length = arguments.length;
                    i < l; ++i
                )
                    script(arguments[i])
                ;
                // chain to add an optional onload
                return $
            },
            // silly dual behavior for every taste
            // this.onload = function(){}
            // or
            // this.onload(function(){})
            onload:function(onload){
                $.onload = onload;
            }
        }
    ;
    return $;
})()
    // calling load function ...
    .load(
        // one or more files
        // order not guaranteed (parallel downloads)
        // suitable for namespaced libs
        // or totally unrelated files
        ''
    )
    // adding an onload callback ...
    .onload(function(){
        // jQuery is here
        // be sure page has been loaded
        $(function(){
            alert($('body').html());
        })
    })
;

Above is just a 420-byte (265 deflated) example function; fortunately, every library able to load incrementally will have a better and more powerful one. Is the suggestion/idea clear?

JavaScript And Evaluated Comments

On slide 16 we can learn about the latest amazing technique, whose aim is to speed up the whole parsing and executing process: JavaScript inside comments. Comments (conditional ones aside) are totally ignored by every JavaScript engine – not parsed at all – and for this reason the code is cheaper to include and, apparently, faster to evaluate later. To be honest, I have not studied the internals yet, and the reason a function call such as eval should be faster than natively included code is quite obscure to me, since the parser must pass over the code in either case and the latter still needs to be executed. The only guess I have is that eval skips something compared to native parsing, but I don’t know what… What is sure is that in Firefox with Firebug (or analogous debuggers) enabled, eval will be slower, due to the overhead the debugger itself causes – so may I suggest Function(strippedComment)() instead? It gives the global scope that natively included code would have, and less stress for debuggers!
OK, I went too far with the Function suggestion. My real point about this comments technique is that, since we need to retrieve the comment content as part of the text contained in the script, we cannot gzip/deflate the code unless the entire page has been compressed.
With the network round trip being one of the most costly operations for a mobile device, I don’t think this technique is universally valid beyond desktop browsers. Parallel downloads are almost a joke for today’s mobile phones, though hopefully a reality for common ADSL or fibre-optic connections.
As the best option for both scenarios, I think evaluating code stored in a string is more than reasonable, since it can easily be included as an external file and handled via namespaces.


var myLib = {
    /*generic library*/
    util:{}
};

(function($nmsp){

// a generic namespace loader
myLib.namespaceLoader = function(nmsp){
    if(!$nmsp[nmsp]){
        $nmsp[nmsp] = true;
        eval(myLib[nmsp]);
    }
};

})({});

// code apparently faster to evaluate
myLib.myLib_util_alert = "myLib.util.alert=function(s){alert(s)}";

// load the required namespace
myLib.namespaceLoader("myLib_util_alert");

// try the namespace
myLib.util.alert("BOOH!");

Well, the whole point is about the network round trip, isn’t it? At least we know that if we have a dynamic layout but a static script – hopefully served with ETag and caching solutions – the above suggestion makes sense as a valid alternative.

CSS And Sprites Rule

Following the sprites logic for the last time: CSS is rarely loaded incrementally. One call? The same style sheet for the whole website? Well, that could be a valid reason to serve a single file, but at least we should remove noise from our CSS. How? Simple: split the CSS by targeted browser.
In the recent Confessions of a style sheet hacker, Jason Garrison justifies the usage of hacks for a single browser: Internet Explorer 6.
How many hacks are necessary to make this browser behave like every other? Unless our layout is truly simple, lots of them! If we put that IE6 “noise” inside the CSS served to every browser, rather than in a file specific to IE6, every browser will load unnecessary styles, slowing down the parser with messy selectors and adding bytes, increasing bandwidth usage and download time as well.
Jason has already replied, and I would like to thank him, but I still think an extra call for a single problem case is worth more than overall noise for everyone.

My Performances Contribution

What a good occasion to introduce my latest project, whose aim is to improve the performance of serving every static client file!
The aim of PHP Client Booster is to use some good practices to make client file serving 2 times faster or more. A common mistake on PHP websites is to use

// don't stress your server with this
ob_start('ob_gzhandler');

even for static libraries, CSS, and the recently suggested @font-face compatible fonts… this can be a complete waste of resources, producing performance reductions rather than improvements, especially when the generated output could have been pre-compressed.
Compatible with every static file, serving a 304 as soon as possible, including only the necessary code, and compatible with load-balanced environments as well thanks to shareable ETag management, PHP Client Builder is a cross-platform tool able to pre-compress resources, serving by default a deflated version of a file, optionally a gzipped one, and finally the raw version of the original.
The reason I have chosen deflate as the default is that it does not burden the compressed file with initial extra bytes, and inflate may be slightly faster than gunzip.
Moreover, some old browsers had problems with gzip and its extra bytes but should not have problems with deflate. The tool, which needs a lot of documentation that I am planning to write, should not be obtrusive; it can be customized or added to an existing framework/environment, and it comes with a bench/ folder for “try it yourself” purposes. Comments, suggestions, or questions will be appreciated (for the whole post as well).

Posted by webreflection at 8:00 am

2.2 rating from 65 votes

Monday, November 2nd, 2009

A State of the Web via October Tweets

Category: Ajax

A lot of great news is coming in via Twitter. I make a lot of Ajax comments under @dalmaer and wanted to give you a roundup on the month of October via Tweets. Always interesting to take a glance at the month. What do you think?

Posted by Dion Almaer at 6:44 pm

2.1 rating from 117 votes

Firefox 3.6 appearance adds a lot of developer features

Category: Browsers, Firefox, Mozilla

Firefox 3.6 is already on the scene with the first beta release. The Mozilla team is moving faster and faster these days which is fantastic to see.

At the high level:

  • Users can now change their browser’s appearance with a single click, with built in support for Personas.
  • Firefox 3.6 will alert users about out of date plugins to keep them safe.
  • Open, native video can now be displayed full screen, and supports poster frames.
  • Support for the WOFF font format.
  • Improved JavaScript performance, overall browser responsiveness and startup time.
  • Support for new CSS, DOM and HTML5 web technologies.

But there is so much more: there is a ton of CSS work including background-size, gradients, and multiple background images; video can now have a poster frame; HTTP activity can be monitored; Web Workers can self-terminate with close(); drag and drop supports files via DataTransfer; and there’s window.onhashchange.
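As a quick sketch of what window.onhashchange enables (the route format and parser here are invented for illustration), the routing logic can stay a pure function, with the browser wiring guarded:

```javascript
// Turn "#inbox/42" into { view: "inbox", id: "42" }. Pure, so it can
// run and be tested outside the browser.
function parseHash(hash) {
  var parts = hash.replace(/^#/, "").split("/");
  return { view: parts[0] || "home", id: parts[1] || null };
}

// Browser-only wiring: re-render whenever the fragment changes.
if (typeof window !== "undefined" && "onhashchange" in window) {
  window.onhashchange = function () {
    var route = parseHash(window.location.hash);
    // ... render the view named by `route.view` ...
  };
}
```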

Posted by Dion Almaer at 6:30 am

2.7 rating from 69 votes

Thursday, October 29th, 2009

Would you like a _ with that $? New library gives JS what it should have

Category: JavaScript, Library

Jeremy Ashkenas and the DocumentCloud team have just released Underscore.js, a small library that provides all the functional programming helpers that you would expect from Prototype.js or Ruby, but without extending any core JavaScript objects.

Jeremy told us:

This makes it a natural fit alongside jQuery, without having to worry about the conflicts and redundant functionality that using Prototype and jQuery together would entail. For browsers that support the new Javascript 1.6 array functions, it delegates to the native implementations, so your “” can run at full speed, where available. It’s a tiny download, 4k when gzipped. Here’s the project page, with full documentation, live tests and benchmarks.

Some of the utilities:

each, map, reduce, detect, select, reject, all, any, include, invoke, pluck, max, min, sortBy, sortedIndex, toArray

first, last, compact, flatten, without, uniq, intersect, zip, indexOf

bind, bindAll, delay, defer, wrap, compose

keys, values, extend, clone, isEqual, isElement, isArray, isFunction, isUndefined

uniqueId, template
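The flavor of these helpers can be sketched with hand-rolled stand-ins (not Underscore's actual implementations) that follow the same design: plain functions taking the collection as an argument, rather than methods bolted onto Array.prototype:

```javascript
// Stand-in for pluck: extract one property from each object in a list.
function pluck(list, key) {
  var result = [];
  for (var i = 0; i < list.length; i++) result.push(list[i][key]);
  return result;
}

// Stand-in for max over a list of numbers.
function max(list) {
  var best = -Infinity;
  for (var i = 0; i < list.length; i++) if (list[i] > best) best = list[i];
  return best;
}

var stooges = [{name: "moe", age: 40}, {name: "larry", age: 50}];
pluck(stooges, "name"); // ["moe", "larry"]
max(pluck(stooges, "age")); // 50
```

Because nothing touches core prototypes, helpers like these coexist cleanly with jQuery or any other library.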

There have already been nice patches and suggestions from the community, and Kris Kowal helped make it CommonJS-compliant.

Obviously, other libraries have covered a lot of this ground before, but it is nice to see such a small core covering it.

Posted by Dion Almaer at 12:50 am

1.9 rating from 74 votes

Wednesday, October 21st, 2009

The State of Developer Tools

Category: Utility

Our very own Ben Galbraith took a dip down under to talk about the state of developer tools.

The session description is:

For many years, developing for the web left quite a bit to be desired when it came to the tools at developers' disposal, particularly in comparison with the sorts of development environments available for desktop applications.

But the rise of browser-native tools in Safari, Internet Explorer and Opera, browser-based add-ons like Firebug, web-based tools and more means that developers have a vast array of powerful tools to help develop, debug, profile and otherwise improve their applications. But just what's out there? And what can be done with them?

In this session, co-founder of The Ajax Experience conferences and now head of the Mozilla Foundation's new Tools team, Ben Galbraith will take us on an expedition through the developer tools landscape. Learn what's out there, and what they can do to make you more productive, your sites and applications better and faster, and your life as a developer more enjoyable.

Listen in to the audio recording of the session, and check out the other talks from the event.

Posted by Dion Almaer at 6:27 am

2.3 rating from 42 votes

Tuesday, October 20th, 2009

Microsoft Ajax Minifier VS YUI Compressor

Category: JavaScript, Microsoft, YUI

Only yesterday I discovered the Announcing Microsoft Ajax Library (Preview 6) and Microsoft Ajax Minifier posts, and since I use Visual Studio on a daily basis I could not resist an instant minifier test.

First of all, my apologies for the wrong tweet and the comment left in the related post. I had spotted a false positive during one of my tests, where object properties were accessed dynamically via string concatenation. The result I instantly noticed was something like:


o[a+b]=c;

Here a and b were two different strings already defined elsewhere. This practice could save a few bytes, but performance will be slower due to the run-time string creation needed to resolve the property or method, rather than direct access such as obj.prop.
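To make the trade-off concrete, here is a tiny illustration (variable names invented) of how the two spellings address the same property while only one pays for concatenation at run time:

```javascript
var obj = {prop: 1};
var a = "pr", b = "op";

// Dynamic form: the property name is rebuilt at run time on every access.
obj[a + b] = 2;

// Direct form: the same slot, resolved without string concatenation.
// After the assignment above, obj.prop holds 2 -- both spellings are
// equivalent in meaning, but not in cost.
```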

Nothing new so far, just the reason I reacted so fast. So, setting my error aside, let's compare these two minifiers.

Different Defaults

Main YUI Compressor settings are:

  1. --nomunge, minify only, do not obfuscate
  2. --preserve-semi, preserve all semicolons
  3. --disable-optimizations, disable all micro optimizations

These jar arguments need to be explicit, and they make YUIC minification less greedy.
Ajax Minifier behaves the opposite way: optimizations must be requested explicitly, such as hypercrunch (the YUI munge synonym) and an extra C to combine frequently repeated strings into variables.


// Ajax Minifier Best Option
ajaxmin.exe -hc input.js -o output.js

// YUI Compressor Best Option
java -jar yuicompressor-2.4.2.jar input.js -o output.js

Ajax Minifier Hypercrunch Plus C

One reason Ajax Minifier can claim a better ratio is the C option. While the hypercrunch option produces almost the same output as YUIC's munge, the extra parameter tries to work out how many times a literal string is repeated inside a block and, if it appears more than twice, assigns that string to a variable.


(function(){
    node["onclick"] = function onclick(){
        node["onclick"] = null;
        setTimeout(function(){
            node["onclick"] = onclick;
        }, 1000);
    };
})();

The above piece of code produces these outputs:


// Ajax Minifier
(function(){var a="onclick";node[a]=function b(){node[a]=null;setTimeout(function(){node[a]=b},1e3)}})()

// YUI Compressor
(function(){node.onclick=function a(){node.onclick=null;setTimeout(function(){node.onclick=a},1000)}})();

This example is a classic one where the YUI Compressor better understands the logic, removing repeated and unnecessary dynamic access, without performing literal checks or number translation. Did anybody spot the 1e3 rather than 1000?

Stripped Comments

Using a different block of code, we can spot new things to care about:


var o = (function(){
    var o = {};
    /*!
    important comment
    */
    var i = 0;
    o["on" + "click"] = function(){};
    o["on" + "mouse" + "over"] =
    o["on" + "mouse" + "out"] =
    o["on" + "mouse" + "move"] = o["on" + "click"];
    if(i)
        i++
    ;
    return o
})();

First of all, we have the well known comment syntax whose aim is to preserve comments such as licenses, copyrights, or anything else we would like to leave there.
This is indeed the output via YUI Compressor:


var o=(function(){var b={};
/*
important comment
*/
var a=0;b.onclick=function(){};b.onmouseover=b.onmouseout=b.onmousemove=b.onclick;if(a){a++}return b})();

Another thing to note is the same kind of optimization YUIC performed over static literal concatenation: in a few words, YUIC optimized that property and method access, indirectly improving performance. The only "waste of bytes" is a pair of brackets rather than the original if(a)a++; snippet, something Ajax Minifier does not change at all.


var o=function(){var c="mouse",b="on",a={},d=0;a[b+"click"]=function(){};a[b+c+"over"]=a[b+c+"out"]=a[b+c+"move"]=a[b+"click"];if(d)d++;return a}()

Even if we minify the source via -hl rather than -hc, Ajax Minifier won't optimize dynamic access. This suggests the minifier is not able to speed up or adjust our code, or that of third-party libraries.
At the same time, we can see that Ajax Minifier strips out comments without respecting the /*! convention. To be honest, I would never have expected something like this, especially for a library and a minifier "perfectly combined" with jQuery: is every .NET developer stripping out the jQuery credits? Hopefully not!

Ajax Minifier, Loads Of Options

  1. S, to suppress all output during compression, sending any errors to stderr
  2. A, to analyze the code
  3. Z, to add semicolons where necessary, rather than new lines
  4. L, to avoid combined literals
  5. Eo or Ei, to choose an ASCII or input-dependent encoding scheme
  6. W, to set the warning level
  7. P, to tidy up a compressed output, making it pretty (the Visual Studio meaning of pretty)
  8. R, to combine literals across more than one file, for example jQuery sources into jQuery for custom builds
  9. I, to echo the original source to the console
  10. D, to remove debugger statements
  11. 3, to avoid adding IE-specific code, respecting strict standards (???)

There are many others I could not test properly or am not sure about, as with the /J option, which let me write eval("alert(123)") without problems and without a single warning.

Code Analyzer

This is probably one of the most interesting Ajax Minifier options, something implicit via Rhino in YUI Compressor but not directly exposed. The code analyzer can give us some info without necessarily compressing the code.


var o = (function(){
    var o = function(){};
    var i = 0;
    if(i)
        i++
    ;
    return o
})();

If we pass this code via Ajax Minifier and /A option, we’ll obtain this result:

Crunching file 'test.js'...
test.js(7,5-10): improper technique warning JS1267: Always use full statement blocks: if(i)

Global Objects
  o [global var]

Function Expression anonymous_0 - starts at line 1, col 9 [0 references]
  i [local var]
  o [local function]

Function Expression "o" - starts at line 2, col 12

var o=function(){var o=function(){},i=0;if(i)i++;return o}()

It is interesting how Ajax Minifier knows that brackets are good practice; this time there is a warning.
Moreover, we can see how the tool follows JScript behavior in treating everything as a function expression, anonymous and/or assigned alike.
We can also get a quick summary of all the variables used in a script, local or global, helping us understand scopes and study possible conflicts.

Conditional Comments Nonsense

Last but not least, Ajax Minifier strips out JScript conditional comments by default. The result is that the code inside them is preserved, but in a totally obtrusive way: it will now run even when the browser is not Internet Explorer. Be careful!


var b = false;
/*@cc_on
b = true;
@*/

// will be
var b=false;b=true
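The classic one-liner IE test shows the hazard nicely: once the conditional comment markers are stripped, code meant only for IE runs everywhere. A sketch, simulating the stripped output as a string:

```javascript
// In IE, the conditional comment makes this read "var isIE = !false" (true);
// every other engine ignores the whole comment, so isIE stays false.
var isIE = /*@cc_on!@*/false;

// A minifier that strips the comment markers degenerates the source to the
// string below -- which now evaluates to true in EVERY browser.
var stripped = "var flag = !false; flag;";
var result = eval(stripped);
```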

No Winner Yet

I like bits and bobs of both: the YUI Compressor, stable and widely adopted, but unable to perform some Ajax Minifier optimizations; and the latter, not necessarily the best choice as we have seen, but a truly good alternative, especially for Visual Studio based developers.
I find the Ajax Minifier concatenated variable declarations clever (normal in YUIC, but not performed if there is a comment in between, as in var a=1;/**/var b=2;). Performance-wise, though, I consider the YUI Compressor the better tool, since the dynamic access Ajax Minifier ignores, and the literals it turns into variables when asked, could slightly increase run-time execution compared to the same output produced via YUI Compressor. No official tests yet, but I am planning some!

Posted by webreflection at 10:54 am

2.8 rating from 49 votes

Cloudera Desktop, MooTools Update

Category: MooTools, Showcase

Aaron Newton of MooTools fame is now at Cloudera, the awesome Hadoop startup, and has posted about the rich Cloudera Desktop project he has been working on.

In the post he discusses the implementation and how it uses new features in the new MooTools 1.2.4 release such as:

MooTools Depender

MooTools ships with a dependency map that powers its download builder. The modular nature of the library lends itself to custom builds, putting together a library specific to the task at hand. This allows MooTools to power, for instance, a mobile application with only a small amount of JavaScript. For the Cloudera Desktop, we knew we were going to end up with a LOT of JavaScript, and loading it all on startup didn’t make much sense. Instead, we authored the Depender application. It’s an easy-to-deploy, real-time library builder and dependency mapper. This allows our application to load with a minimum of JavaScript. When users launch specific applications, Depender loads any dependencies for that app that aren’t loaded already, and then displays the application. In addition to the server side component (available in both PHP and Python/Django), there are two client side components: a stand-alone version to be released in MooTools 1.2.4 and a server side application that ships with a client that talks to the server for you, which lets you do this slickness:


Depender.require({
    scripts: ['DatePicker', 'Logger'], //array or single string for one item
    callback: function() {
        //your code that needs DatePicker and Logger
    }
});
//later, you need to load more dependencies...
Depender.require({
    scripts: 'Fx.Reveal', //array or single string for one item
    callback: function(){
        //if, for some reason, Fx.Reveal is available already,
        //then this function will execute immediately, otherwise it will
        //wait for the requirements to load
        $('someElement').reveal();
    }
});

MooTools ART

MooTools at the moment doesn’t have an official, public UI system, but that’s changing, and in no small part due to our contributions to the MooTools ART project. MooTools ART is an in-development UI library that currently outputs canvas. It’s an abstraction of the canvas API and it allows developers to make style-able UI elements like buttons, windows, and icons. At the moment it only outputs to canvas (limiting its support to browsers other than Internet Explorer), but we’re working on wrappers for VML and SVG.

In addition to these drawing tools provided by the ART API is a widget-based system that has numerous features including keyboard management, event bubbling, custom styling, and more. This widget system is the foundation for many of our UI elements, though not all of them. While the basic ART API was developed by the core MooTools Team (of which I am a part), we’ve contributed most of the widgets available in the library built with that API, including a window manager, a history interface, pop-ups for alert, confirm, and prompt, split views and more.

What are the major changes in MooTools 1.2.4?

  • Browser feature detection favoured over browser user agent sniffing
  • Added Trident 6 (IE8) detection
  • Request can take an instance of URI as a url
  • JSON.stringify and JSON.parse native methods are now accessible
  • DomReady always fires before load event
  • Fix for creating a Request in early versions of IE6
  • Fixes and optimizations for Element.getOffsets

Posted by Dion Almaer at 8:30 am

2.8 rating from 51 votes

Monday, October 19th, 2009

jQuery Concrete; ConcreteUI programming in jQuery

Category: JavaScript, jQuery

Hamish Friedlander of SilverStripe has developed jQuery Concrete as a way to enable developers to easily add functions to groups of DOM elements based on the structure and contents of those DOM elements.

Hamish told us:

I’d like to announce the 0.9 (API stable) release of the Concrete and Selector libraries for jQuery.

Concrete provides a brand new model of JavaScript code organisation – a replacement for Object Oriented programming that is focused on adding functions to DOM elements based on their structure and content. It’s a merging of the model and view layer that initially seems weird, but can give very powerful results.

We’re standing on the shoulders of giants here, taking ideas from Prototype’s behaviour & lowpro and jQuery’s effen & livequery. Concrete extends these concepts to provide a complete alternative to traditional OO design – self-aware methods, inheritance, polymorphism, constructors and destructors, namespacing and getter/setter style properties, all without a single class definition.

Compared to jQuery.plugins and jQuery.UI widgets, Concrete provides:

  • code clarity – the code structure leads to more readable code
  • extensibility – specificity based method selection allows the injection of logic almost anywhere without monkey patching
  • greater organisational capabilities – syntax promotes modularity
  • reduced name clashes – powerful refactoring-free namespacing eliminates name pollution

A key component of Concrete is Selector – a CSS selector engine developed in conjunction with Concrete to give it maximum performance. For the use patterns it is designed for, it can out-perform jQuery’s native Sizzle selector engine by an order of magnitude.

Developed by Hamish Friedlander, and partially sponsored by SilverStripe, Concrete is in use in several projects internally, and is a key component of the CMS redevelopment for the next version of SilverStripe.

A taste:


$('div').concrete({
    highlight: function(){
        this.effect('bounce');
    }
});

$('div.important').concrete({
    highlight: function(){
        this._super();
        this.animate({background: 'white'});
    }
});

$('#namediv').highlight();

In addition to the source, a tutorial and a screencast are available to help introduce the new concepts.

Posted by Dion Almaer at 6:47 am

2.4 rating from 73 votes

Monday, September 28th, 2009

Going into details with the WebKit Page Cache

Category: Browsers, WebKit

Brady Eidson has a great one two punch on the WebKit page cache. First, Brady delves into the basics of the page cache:

The Page Cache makes it so when you leave a page we “pause” it and when you come back we press “play.”

When a user clicks a link to navigate to a new page the previous page is often thrown out completely. The DOM is destroyed, JavaScript objects are garbage collected, plug-ins are torn down, decoded image data is thrown out, and all sorts of other cleanup occurs.

When this happens and the user later clicks the back button it can be painful for them. WebKit may have to re-download the resources over the network, re-parse the main HTML file, re-run the scripts that dynamically setup the page, re-decode image data, re-layout the page, re-scroll to the right position, and re-paint the screen. All of this work requires time, CPU usage, and battery power.

Ideally the previous page can instead be placed in the Page Cache. The entire live page is kept in memory even though it is not on screen. This means that all the different bits and pieces that represent what you see on the screen and how you interact with it are suspended instead of destroyed. They can then be revived later in case you click the back button.

Then in part two we get deeper, and delve into the page show/hide events:

<html>
    <head>
    <script>
    function pageShown(evt)
    {
        if (evt.persisted)
            alert("pageshow event handler called.  The page was just restored from the Page Cache.");
        else
            alert("pageshow event handler called for the initial load.  This is the same as the load event.");
    }

    function pageHidden(evt)
    {
        if (evt.persisted)
            alert("pagehide event handler called.  The page was suspended and placed into the Page Cache.");
        else
            alert("pagehide event handler called for page destruction.  This is the same as the unload event.");
    }

    window.addEventListener("pageshow", pageShown, false);
    window.addEventListener("pagehide", pageHidden, false);
    </script>
    </head>
    <body>
    <a href="">Click for WebKit</a>
    </body>
</html>

Oh, and beware of plugins ;)

Secondly, a page might not be considered for the Page Cache because it’s difficult to figure out how to “pause” it. This happens with more complex pages that do interesting things.

For example, plug-ins contain native code that can do just about anything it wants so WebKit can’t “hit the pause button” on them. Another example is pages with multiple frames which WebKit has historically not cached.

Distressingly, navigating around these more advanced pages would benefit the most from the Page Cache.

Posted by Dion Almaer at 6:33 am

3 rating from 15 votes

Monday, September 21st, 2009

WebGL available in Firefox Nightly

Category: 3D, Firefox

We mentioned when WebGL landed in WebKit source, joining Firefox in supporting it.

Vladimir Vukićević of Mozilla has posted on how it now shows up in a nightly build instead of just source (which required a compiler flag, etc.).

This is incredibly exciting, as Jon Tirsen said:

Your next 3D shooter will sport a nice “Your browser is not supported please install Chrome, Safari or Firefox.” (Re: WebGL.)

Hopefully IE gets there too of course (Opera is in the group so we should see something there too).

Here is Vlad:

Along with the Firefox implementation, a WebGL implementation landed in WebKit fairly recently.  All of these implementations are going to have some interoperability issues for the next little while, as the spec is still in flux and we’re tracking it at different rates, but will hopefully start to stabilize over the next few months.

If you’d like to experiment with WebGL with a trunk nightly build (starting from Friday, September 18th), all you have to do is flip a pref: load about:config, search for “webgl“, and double-click “webgl.enabled_for_all_sites” to change the value from false to true.  You’ll currently have the most luck on MacOS X machines or Windows machines with up-to-date OpenGL drivers.

We still have some ways to go, as there are issues in shader security and portability, not to mention figuring out what to do on platforms where OpenGL is not available.  (The latter is an interesting problem; we’re trying to ensure that the API can be implementable on top of a non-GL native 3D API, such as Direct3D, so that might be one option.)  But progress is being quickly made.

When paired with high-performance JavaScript, such as what we’ve seen come from Firefox and other browsers, this should allow for some exciting fully 3D-enabled web applications.  We’ll have some simple demos linked for you soon, both here and on Mark’s blog.

Posted by Dion Almaer at 5:03 am

4.4 rating from 38 votes

Tuesday, September 15th, 2009

First sign of WebGL lands in WebKit

Category: 3D, Games

Jeffrey Rosen has taken a look at a preview of WebGL landing in the WebKit project. The demo above is an example of this work (here in HD):

WebGL is basically an initiative to bring 3D graphics into web browsers natively, without having to download any plugins. This is achieved by adding a few things to HTML5, namely, defining a JavaScript binding to OpenGL ES 2.0 and letting you draw things into a 3D context of the canvas element.
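Obtaining that 3D context follows the same pattern as the 2D canvas API. Context names varied across early experimental builds, so a fallback loop is common; the names in this sketch are assumptions based on those early implementations:

```javascript
// Try a list of context names until one succeeds; early WebGL builds
// exposed the context under different experimental names.
function get3DContext(canvas, names) {
  for (var i = 0; i < names.length; i++) {
    var ctx = canvas.getContext(names[i]);
    if (ctx) return ctx;
  }
  return null;
}

// Hypothetical usage in a browser:
// var gl = get3DContext(document.getElementsByTagName("canvas")[0],
//                       ["webgl", "experimental-webgl", "moz-webgl"]);
```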

It is interesting to compare this low level API to O3D, a scene graph API from Google (Google also supports WebGL, via the O3D team, and sees the APIs as complementary). They are very different APIs taking drastically different approaches. One gives you a new, higher level API that may appeal more to JS developers, whilst the other is very familiar to a certain set of developers and would make it easier to port existing work. Ideally, someone will Processing/jQuery-ize WebGL to give it some nice high level love too.

Fun times with 3D and the Web! Great to see WebKit and Gecko doing great things with WebGL already.

Posted by Dion Almaer at 6:18 am

4.1 rating from 39 votes