You searched for: 'native'

Thursday, September 10th, 2009

SVG Web: Making SVG work in IE and beyond

Category: SVG

I was excited to work with Brad Neuberg at Google. He is a pragmatic champion of the Open Web; a “do-er”. In the past we have seen his work wrangling the browsers via Really Simple History and Dojo Storage.

This time, we get the fierce Owlbear. That is the code name for the latest release of SVG Web:

SVG Web is a JavaScript library which provides SVG support on many browsers, including Internet Explorer, Firefox, and Safari. Using the library plus native SVG support you can instantly target ~95% of the existing installed web base.

Once dropped in you get partial support for SVG 1.1, SVG Animation (SMIL), Fonts, Video and Audio, DOM and style scripting through JavaScript, and more in about a 60K library. Your SVG content can be embedded directly into normal HTML 5 or through the OBJECT tag. If native SVG support is already present in the browser then that is used, though you can override this and have the SVG Web toolkit handle things instead. No downloads or plugins are necessary other than Flash which is used for the actual rendering, so it’s very easy to use and incorporate into an existing web site.
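Embedding is straightforward. Here is a minimal sketch based on the project's documented usage (the data-path attribute and file names come from the SVG Web docs; treat the paths as placeholders):

```html
<!-- Load the library; data-path tells it where svg.htc / svg.swf live -->
<script src="svg.js" data-path="libs/svgweb/"></script>

<!-- Embed SVG directly in the HTML via a SCRIPT block... -->
<script type="image/svg+xml">
  <svg width="200" height="200">
    <circle cx="100" cy="100" r="50" fill="green"/>
  </svg>
</script>

<!-- ...or reference an external file through the OBJECT tag -->
<object type="image/svg+xml" data="chart.svg" width="200" height="200"></object>
```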

Notable highlights of this release are:

  • View source – You can now right-click on a Flash-rendered SVG image and view the SVG source
  • Huge improvements to the resizing code – the size of SVG objects on the page now changes based on the zoom factor, resizing the window, etc.
  • Performance improvements
  • The demo.html file is much more robust now
  • getScreenCTM, matrix inversions, createSVGPoint, and SVGPoint.matrixTransform are now implemented
  • The internal JavaScript architecture has been cleaned up and simplified for Internet Explorer

For more information on SVG Web, check out:

Great work Brad, and the community that has rallied around the cause, such as Rick Masters and James Hight.

To take a peek at some of the hacks they have had to employ to get this working, check out the source.

Posted by Dion Almaer at 8:11 am

4.5 rating from 31 votes

Tuesday, September 1st, 2009

Web OS? Web VM? Value in both?

Category: Browsers, Editorial

The following post comes from my personal blog

Chrome OS created a whole slew of buzz around the Web finally being an OS. There are many other examples of this of course. On the desktop we have had the likes of moblin and Jolicloud for quite some time. On the phone we have webOS.

The Web as a virtual machine

In many ways, it seems obvious that this will happen. When I look at my desktop, I often find it looking like this:


which looks surprisingly similar to this:


Wait a minute. Can’t you look at the browser as being a virtual machine for the Web? A hugely successful one that has been ported to almost every platform known to man. It is a viral virtual machine that has become so successful due to its simple yet powerful constructs, which allowed the platforms to do the porting (oh, and it was friggin’ made to be free! Take that, Gopher!). As a virtual machine it grew so fast that it is quickly usurping the hosts themselves: the traditional operating systems. Applications are being written for the VM rather than the OS. This is very different to the diagram that has Windows running inside a VM on the Mac. In that case you are running applications written for Windows on the Mac. The virtualization crew have done wonders to make that emulation possible. With the Web you didn’t quite need those tricks; instead, thanks to Web standards, you just had to write a browser.

Going inside out

Normally, we have seen systems built natively and then we virtualize them. In the case of the Web, the “Web platform” is both the host environment and the VM. As this environment has become stronger and stronger, there is bound to come a time to ask “do we need our host anymore?” We are on the fringe of that time right now, when a huge bulk of the applications that people use are Web based.

I just ran an experiment in this area myself. The hard drive in my Macbook Pro died and while the kind folks at the local Apple store replaced it I had to pick up a random machine for use. What would life be like if I had to work on a random machine? I purposely didn’t restore from my own backup, and ended up with a week seeing how much of this cloud thing I use.

It turned out that all of the apps that weren’t somehow cloud based were now a pain in the arse. Obviously I could get my email with Gmail, and the other Web apps were there. However, it went further than that. I could download Tweetie and be instantly up and running where I left off (thanks to the fact that Twitter is just a Web service). I got to see which IM services did a good job keeping my server-based contacts, and what a pain it is when you have told the local client that flubber65 is Frank and suddenly, without it, you have no idea who anyone is (NOTE: if you build a service, for god’s sake save everything up there!).

It even made me happy for Bespin, as I was able to get to a bunch of my code. That has incentivized me to help make Bespin great :)

So, I learned that Web based services won. I was productive on a random machine in minutes, and cursed the situations where this wasn’t the case (mainly around development).

If the entire OS was Web based, with a rich cloud infrastructure, then I could literally login to any machine and be ready to roll. X Windows productivity will be back! :) This will greatly change the role of devices too. Seamless upgrades. Multiple devices sharing data. Fun times.

So, Web OS it is. Hopefully the Web platform will be complete enough, and where it isn’t this kind of force behind it will push it even faster (this is why I am excited to see the space heat up, and also wanting to keep a watchful eye to see how things get standardized with crazy timelines behind them).

What do we lose?

But, wait a minute. We have seen that virtual machines are actually pretty awesome. Having an entire system in a single file that I can suspend, backup, and run multiple off? Cool. I am running multiple “Web OS’s” every day (Firefox, Safari, Chrome). I can update them whenever I want, and they can compete.

We also see fantastic features where we can watch over all of the connections from the virtual machine to the host. Tweak how ethernet works, set limits on various usage, etc. Just as we are getting amazing things out of virtual machines… maybe we don’t need the Web platform to move from there just yet.


Chris Beard (Chief Innovation Officer at Mozilla) often talks about what it means for the browser to be the “user’s agent”. There is so much that the s/browser/virtual-web-machine/g could be doing for us, and we have only just begun.

We need to be wary of losing out on any of these thoughts if the Web becomes the OS on the computer (whatever that means, of course!).

As we think about Jetpack and granting enhanced functionality from Web apps to browser-chrome I am often thinking about a morphing browser. When I am on Gmail, I don’t need the URL bar and all of that jazz. Instead, add new Gmail specific UI for me to use, and let me tie into the local system for contacts etc.

In that vein, it has been nice to see a step in that direction when listening to Alexander Limi speak about UX in Firefox last week. He and the UX team showed some concept mockups for future versions of Firefox such as this:

By the “home” tab you can imagine other tabs that are for applications. Thin tabs that just have an icon (e.g. an email notification icon showing new message counts). These “app tabs” can be slightly magical. The user has to bless them, and at that time can grant special powers to access local services such as a File API, or a Webcam, or Geo location, or [insert cool thing that native apps can do]. This feels like a small step for the browser, but one that can open up a lot.

In conclusion, I am bullish about the Web being called the ‘OS’ on certain computing devices. I am most excited about seeing this driving innovation into the Web platform as a whole, and also exploring the great side of having a platform run as a virtual machine on your hardware. Gotta love the Web :)

Posted by Dion Almaer at 7:18 am

3.1 rating from 41 votes

Thursday, August 20th, 2009

jXHR: XHR API, JSON-P backend conduit

Category: Ajax

The Mullet: Business up front, party in the rear

Kyle has taken the XHR API that we all know (and love?) and married it with a JSON-P transport to make jXHR.

He tells us more:

I’ve put out a very simple little project called jXHR which does cross-domain Ajax via JSON-P calls (meaning, totally javascript based), but does so with an emulation of the native XHR API (“onreadystatechange”, “open”, “send”, etc). It also provides some basic error handling capabilities, which is something most JSON-P solutions don’t currently offer, at least not without complicated timers, etc.

The goal was to provide a simpler interface for making cross-domain Ajax calls with JSON-P, but in nearly the same way you make same-domain calls, by using the standard XHR API. Also, I wanted the solution to be framework independent for those who don’t or can’t use jQuery, Dojo, etc.

Frameworks such as Dojo allow you to choose a transport independent of their API (e.g. iframe transport versus XHR).

Posted by Dion Almaer at 6:55 am

4.8 rating from 70 votes

Wednesday, August 12th, 2009

W3C publish first working draft of File API

Category: Standards

The W3C has published a working draft for the File API, which gives us a much improved <input type="file"> and the programmatic ability to work with file uploads and the like.

There are actually a few pieces to this work, which do a good job of interfacing with other standards too:

This specification provides an API for representing file objects in web applications, as well as programmatically selecting them and accessing their data. This includes:

  • A FileList interface, which represents an array of individually selected files from the underlying system. The user interface for selection can be invoked via <input type="file">, i.e. when the input element [HTML5] is in the File Upload state, or through the FileDialog interface.

  • A FileData interface, which provides asynchronous data accessors for file data via callback methods.
  • A File interface, which includes readonly informational attributes about a file such as its name and its mediatype.
  • A FileError interface, which defines the error codes used by this specification.

The API to get access to selected files is trivial (document.getElementById("myFileInput").files.length etc) and then you can get the file data itself in various forms (data: URL, text, binary, Base64, new filedata:// URL).

An example usage of the filedata URL:


  // Sample code in JavaScript
  // Obtain fileList from <input type="file"/> using the DOM
  var file = fileList.files.item(0);
  if (file)
  {
    // ... Make asynchronous call
    file.getAsURL(handleURL);
  }
  function handleURL(url, error)
  {
    if (url)
    {
      var img = new Image();
      img.src = url;
      // Other stuff...
    }
    else
    {
      // error conditions
    }
  }

Fun to see this all come together. The editor is a fellow Mozillian, Arun Ranganathan … an all-round good chap :)

Some have talked about alternative solutions, such as using XHR to do the work, or DOM events to allow built-in progress events. The working group is listening; what would you like to see?

Posted by Dion Almaer at 6:25 am

4 rating from 38 votes

Wednesday, August 5th, 2009

Google buying On2; New twist in the hope for Open Video

Category: Google, Video

Today, video is an important part of many people’s everyday activities on the Internet and a big part of many Google products.

Because we spend a lot of time working to make the overall web experience better for users, we think that video compression technology should be a part of the web platform. To that end, we’re happy to announce today that we’ve signed a deal to acquire On2 Technologies, a leading creator of high-quality video compression technology.

This could be huge for the Open Video movement. It all depends on what Google does with the codecs going forward (and the deal going through of course).

Robin Wauters has more:

Some of its codec designs are known as VP3, VP4, VP5, TrueMotion VP6, TrueMotion VP7 and VP8. Its customers include Adobe, Skype, Nokia, Infineon, Sun Microsystems, Mediatek, Sony, Brightcove, and Move Networks. On2, formerly known as The Duck Corporation, is headquartered in Clifton Park, NY.

It would be great if Google decided to open-source On2’s VP7 and VP8 video codecs and free them up as worldwide video codec standards, making them alternatives to the proprietary and licensed H.264 codec. On2 has always claimed VP7 offers better quality than H.264 at the same bitrate.

Also noteworthy: Google could use the VP8 codec for YouTube in HTML5 mode, basically forcing its many users to upgrade to HTML5-compliant browsers instead of using Flash formats.

What would you like to see?

Posted by Dion Almaer at 9:29 am

4.5 rating from 31 votes

Monday, August 3rd, 2009

Cartagen: Rich mapping on the client side

Category: Canvas, Mapping

Ben Weissmann is one of the researchers at the MIT Media Lab’s Design Ecology group who’s working on Cartagen, a vector-based, client-side framework for rendering maps in native HTML 5. It’s impressive. Here he explains more:

Using JavaScript and HTML5’s canvas element, Cartagen takes data (including OpenStreetMap data) and renders it in the browser. This provides significant advantages over image-based mapping systems like Google Maps, because maps are generated on the fly, meaning that data can be mapped in real-time without needing extensive server-side rendering. Cartagen uses Geographic Style Sheets (GSS), which is a JSON- and CSS-based stylesheet language for styling maps. GSS can specify how the map is rendered, including fill and stroke color, widths, labels, and outlines. GSS can describe interactive behavior like context menus, click styles, and hover styles, as well as accept JavaScript functions instead of static values. As an example, we rendered the world using accurate geographic data, but styled as if it were a Warcraft map. The project is free and open source software: the source is available, and the project is actively seeking contributors. The project’s website has a live demo and more details.

The pieces such as GSS are really interesting. Here is part of the example maps GSS:

  highway: {
      strokeStyle: "white",
      lineWidth: 6,
      outlineWidth: 3,
      outlineColor: "white",
      fontColor: "#333",
      fontBackground: "white",
      fontScale: "fixed",
      text: function() { return this.tags.get('name') }
  }
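Rendering then becomes a matter of mapping those properties onto a canvas 2D context. A hypothetical sketch (the function and data shapes are mine, not Cartagen internals):

```javascript
// Apply GSS-style properties to a canvas 2D context and stroke one way.
function drawWay(ctx, points, style) {
  ctx.strokeStyle = style.strokeStyle;
  ctx.lineWidth = style.lineWidth;
  ctx.beginPath();
  ctx.moveTo(points[0].x, points[0].y);
  for (var i = 1; i < points.length; i++) {
    ctx.lineTo(points[i].x, points[i].y);
  }
  ctx.stroke();
}
```

Because each frame is drawn from raw data, restyling a map is just calling this with a different `style` object; no server round-trip for new tiles.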

Posted by Dion Almaer at 6:15 am

3.9 rating from 54 votes

Thursday, July 30th, 2009

The Doctor prescribes HTML 5 Audio cross-browser support

Category: Sound

The doctor is in, and this time the specialist is Mark Boas who walks us through HTML5 Audio in various browsers and how to get audio working on the various implementations that are in the wild today.

This early in the game especially, not all implementations are equal. For one, there is codec support:

But there are also various levels of API support (and even the fact that Opera currently supports Audio() but not the audio tag, for example).

Mark worked on the jPlayer jQuery plugin, which led him down this path; in conclusion it comes down to:

  1. Check for HTML 5 audio support and if not present fall back on Flash.
  2. Check the level of HTML 5 audio support and adapt your code accordingly for each browser.
  3. Check what file types are supported and link to appropriate formats of the files.

You can go to an audio checker in various browsers to see them poked and prodded.

Mark shares code such as:


  // Using JavaScript you can check for audio tag support like this:
  var audioTagSupport = !!(document.createElement('audio').canPlayType);

  // Checking for the Audio object looks more like this:
  try {
      // The 'src' parameter is mandatory in Opera 10, so we use an empty string "",
      // otherwise an exception is thrown.
      myAudio = new Audio("");
      audioObjSupport = !!(myAudio.canPlayType);
      basicAudioSupport = !!(!audioObjSupport ? : false);
  } catch (e) {
      audioObjSupport = false;
      basicAudioSupport = false;
  }

  // Need to check canPlayType first or an exception will be thrown for those
  // browsers that don't support it
  if (myAudio.canPlayType) {
      // Currently canPlayType(type) returns: "no", "maybe" or "probably"
      canPlayOgg = ("no" != myAudio.canPlayType("audio/ogg")) && ("" != myAudio.canPlayType("audio/ogg"));
      canPlayMp3 = ("no" != myAudio.canPlayType("audio/mpeg")) && ("" != myAudio.canPlayType("audio/mpeg"));
  }
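Mark's third step (linking to a format the browser can actually play) might look like this; the helper name and source list are illustrative, not from jPlayer:

```javascript
// Given an audio element/object and candidate sources, return the URL of
// the first format the browser claims it can play, or null to fall back
// on Flash.
function pickSource(audio, sources) {
  if (!audio.canPlayType) return null; // no HTML5 audio at all
  for (var i = 0; i < sources.length; i++) {
    var verdict = audio.canPlayType(sources[i].type);
    // Older implementations answer "no"; newer ones answer ""
    if (verdict !== "" && verdict !== "no") return sources[i].url;
  }
  return null;
}
```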

Posted by Dion Almaer at 6:12 am

3.6 rating from 14 votes

Tuesday, July 28th, 2009

CSS Gradients for All!

Category: CSS

Weston Ruter has created a very cool library that enables CSS gradients on non-WebKit browsers (at least, a subset). Incredibly cool:

CSS Gradients via Canvas provides a subset of WebKit’s CSS Gradients proposal for browsers that implement the HTML5 canvas element.

To use, just include css-gradients-via-canvas.js (12KB) anywhere on the page (see examples below). Unlike WebKit, this implementation does not currently allow gradients to be used for border images, list bullets, or generated content. The script employs document.querySelectorAll()—it has no external dependencies if this function is implemented; otherwise, it looks for the presence of jQuery, Prototype, or Sizzle to provide selector-querying functionality.

The implementation works in Firefox 2/3+ and Opera 9.64 (at least). Safari and Chrome have native support for CSS Gradients since they use WebKit, as already mentioned.

This implementation does not work in Internet Explorer since IE does not support Canvas, although IE8 does support the data: URI scheme, which is a prerequisite (see the support detection method). When/if Gears’s Canvas API fully implements the HTML5 canvas specification, then this implementation should be tweakable to work in IE8. In the meantime, rudimentary gradients may be achieved in IE by means of its non-standard gradient filter.

CSS Gradients via Canvas works by parsing all stylesheets upon page load (DOMContentLoaded) and searching for all instances of CSS gradients being used as background images. The source code for the external stylesheets is loaded via XMLHttpRequest; ensure that they are cached by serving them with a far-future Expires header to avoid extra HTTP traffic.

The CSS selector associated with the gradient background image property is used to query all elements on the page; for each of the selected elements, a canvas is created of the same size as the element’s dimensions, and the specified gradients are drawn onto that canvas. Thereafter, the gradient image is retrieved via canvas.toDataURL() and this data is supplied as the background-image for the element.
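The stylesheet-scanning step can be sketched as a simple pass over CSS text. This toy version (my own, far cruder than the library's real parser) just pairs selectors with declaration blocks that mention -webkit-gradient:

```javascript
// Find CSS rules whose declarations use WebKit's gradient syntax.
// Returns { selector, declaration } pairs; illustrative, not library code.
function findGradientRules(cssText) {
  var rules = [];
  var re = /([^{}]+)\{([^}]*-webkit-gradient\([^}]*)\}/g;
  var m;
  while ((m = re.exec(cssText)) !== null) {
    rules.push({ selector: m[1].trim(), declaration: m[2].trim() });
  }
  return rules;
}
```

Each matched selector is then queried against the page, and the gradient is drawn to a canvas sized to the element.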

An aside. I only just noticed the Gears Canvas API. It doesn’t quite do what you think… I always wanted to implement Canvas in Gears. It is also strange that Gears is so under the radar at Google these days. One blog post per year? I guess all of the work is going into Chrome / WebKit itself.

Posted by Dion Almaer at 6:00 am

2.9 rating from 32 votes

Tuesday, July 21st, 2009

Wouldn’t it be Swell to be able to drag and drop between Web and desktop

Category: Framework, HTML, JavaScript

Christophe Eblé has kindly written a guest post on Swell JS and his drag and drop manager that works with your desktop. Here he tells us more:

At Swell we were about to create a Drag & Drop Manager just like those in other JavaScript libraries such as jQuery, YUI, MooTools, and Scriptaculous, but we were not really satisfied with this decision.

What motivated us to adopt another strategy is that we didn’t want to provide yet another simulated solution but instead a drag & drop library that would use the real power of the web browser.

We’ve been faced with a lot of problems and we are still struggling with API differences. The pros and cons of using system drag & drop over simulated solutions:

Pros:

  • Accuracy and performance: mouse-move tricks to position an element under the pointer and detect target collision are things of the past; system DD is wicked fast!
  • System DD can be anything you like (simple images or complex DOM nodes) and can interact within your browser window, the chrome (search field, address bar…) or tabs (if the browser allows it; FF 3.5 does it right) and even your OS.
  • System DD, through the dataTransfer object, can carry powerful meta information that is not necessarily the dragged object itself; this can be arbitrary text (JSON data for example), URLs and, for the latest browsers, complex data types (see this).
  • System DD has true DOM Events.

Cons:

  • Browser differences in this subject are a real pain, I couldn’t list all the oddities here :)
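The dataTransfer hand-off mentioned above can be sketched with plain DOM-style handlers; the event object is mocked here so the flow is visible outside a browser:

```javascript
// On the drag source: stash metadata (here a URL) in dataTransfer.
function onDragStart(e) {
  e.dataTransfer.setData("text/uri-list", "");
}

// On the drop target: read it back. The payload travels with the drag,
// not the dragged node itself.
function onDrop(e) {
  return e.dataTransfer.getData("text/uri-list");
}
```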

The drag & drop implementation in Swell is still at an early stage, and not officially released, but here are some details on what we’re working on.

  • Provide a single way to create native drag & drop objects in every possible browser
  • Provide setDragImage method on browsers that don’t support it yet
  • Provide a DD Manager that acts as a container for all drag and drop events, on which you can place your handlers, cancel events, or quickly batch all the DD objects on the page (useful when you deal with dozens of DD objects)
  • Provide a way to associate complex metadata with each DD object
  • Provide a way to easily create visual feedback for your DD Objects
  • Tight integration with our event library
  • and much more…

There’s a Trac available on the project with a roadmap and release dates, a blog, and even an SVN repo that you can check out. Be careful: as I said above, the library is still in heavy development and is missing docs! We are looking for any kind of help and just hope to receive massive feedback ;).

See some examples in action:

In the video, we are providing a simple yet powerful application to import an RSS feed into your webpage. The classical way is to type in the feed URL and then get redirected to it, which commonly takes 3 to 5 steps. With this implementation you just have to drop the RSS icon onto an appropriate target, and that’s all folks!

We use YQL and JSONP to transform RSS into JSON and of course a Swell Element to dynamically attach the behavior to the webpage:


  var dd2 = new Swell.Lib.DD.Droppable('moo');

  dd2.ondrop = function(e) {
      var file = this.getData(e, 'DD_URL').split("\n")[0];
      if (/^http\:\/\//i.test(file) !== true) {
          return false;
      }
      $('debug').setHTML('loading…');
      var yqlRequest = '*%20from%20rss%20where%20url=%22'+file+'%22&format=json&callback=moo';
      html.script({
          'type' : 'text/javascript',
          'src'  : yqlRequest
      }).appendTo(document.getElementsByTagName('head')[0]);
  }

Posted by Dion Almaer at 6:06 am

4.5 rating from 59 votes

Friday, July 10th, 2009

Augmented Reality is just hot, and the Web can do it

Category: Showcase

Michael Zoellner has developed an augmented reality twitter mashup that is very cool indeed. AR is great stuff, and the Web platform is a great way to mash it up. Hopefully Apple will take the petition seriously:

The whole application is developed in WebKit (UIWebView / Safari Mobile). A native Cocoa wrapper delegates location, compass and accelerometer data to JavaScript in the UIWebView. The 3D scene is based on Safari Mobile’s brilliant 3D CSS transforms. The Ajax part is done with jQuery. After writing some native iPhone apps, this WebKit approach seems to be ideal for rapid development of applications independent of the iPhone UI.

Let’s see what Apple decides about the (semi SDK conform) video background. I already joined the AR on iPhone petition.

Posted by Dion Almaer at 5:12 am

3.8 rating from 18 votes

Thursday, July 9th, 2009

Pimping JSON – YQL now offers JSONP-X!

Category: JavaScript, JSON, Yahoo!

Yesterday’s announcement of Yahoo’s YQL now supporting insert, update and delete overshadowed another interesting new feature: JSONP-X output.

Here’s what it is and why it is useful: YQL data can be returned as XML, which is annoying to use in JavaScript (for starters because of cross-domain issues in Ajax). JSON is much easier, as it is native to JavaScript. JSON-P makes it even easier to use, as JSON data wrapped in a function call allows us to get the data by creating script nodes dynamically.

Where it falls apart is when you want to get back HTML from some system (not on your own server) and use it in JavaScript. You either need to convert the XML to JavaScript and create HTML elements from it or you need to loop through a JSON construct and assemble HTML from it. JSONP-X works around that step for you. In essence it is XML as a string returned inside a JSON object.


Take this XML output:

  <results>
    <div id="following">
      <span>
        <a href="">Codepo8</a>
      </span>
    </div>
  </results>



As JSON it becomes:

  {"results": [
    {"div": {
      "id": "following",
      "span": {
        "a": {
          "href": "",
          "text": "Codepo8"
        }
      }
    }}
  ]}



As JSON-P, wrapped in a callback:

  foo({"results": [
    {"div": {
      "id": "following",
      "span": {
        "a": {
          "href": "",
          "text": "Codepo8"
        }
      }
    }}
  ]})



And as JSONP-X, the XML returned as a string inside the JSON:

  foo({"results": [
    "<div id=\"following\"><span><a href=\"\">Codepo8</a></span></div>"
  ]})

The way to invoke the JSONP-X output is to provide a format parameter of xml and a callback parameter.

This allows me, for example, to get the list of people I follow on Twitter from my Twitter homepage and display it in another document with a few lines of JavaScript, without needing the API or a local proxy:

  <script type="text/javascript" charset="utf-8">
  function foo(o){
    var out = document.getElementById('container');
    var content = o.results[0];
    out.innerHTML = content.replace(/href="\//g,'href="');
  }
  </script>
  <script type="text/javascript" src="*[%40id%3D%27following_list%27]%22&format=xml&callback=foo">
  </script>

More details on this are available in this blog post.

Posted by Chris Heilmann at 10:48 am

2.3 rating from 40 votes

Tuesday, June 30th, 2009

Firefox 3.5: The fastest fox has landed

Category: Browsers, Firefox

It is great to feel the good vibes at Mozilla HQ today as we launch Firefox 3.5! It is always an interesting ride to see a browser develop, and realize how complex and large the work is.

Congrats to the browser developers out there who are working hard to make the Web better. With final versions of Firefox 3.5, Safari 4, and Chrome 2 out in the wild, things are picking up nicely.

The Firefox 3.5 release is exciting for me because it really benefits developers. We get Open Video, @font-face, cross-site XHR, Geolocation APIs, CSS Media Queries, Native JSON, Offline support, Web Workers, and so much more.

And, the world keeps moving. I have seen some very cool things in the nightly tree, and look forward to being around as the team works on the next great Firefox.


Steve Souders has posted on Firefox 3.5 getting 10 out of 11 in his UA Profiler tests.

Watch the downloads come in with this cool download tracker that uses Canvas and SVG, all thanks to Justin Scott. The stats so far show that if current trends hold we will beat the Firefox 3.0 download day, which is a surprise to all.

Sean Martell has created a nice wallpaper and persona to commemorate!

Posted by Dion Almaer at 10:35 am

4 rating from 83 votes

Wednesday, June 24th, 2009

MooTools: Saving the dollars, replacing document.write

Category: JavaScript

The religious battle over a simple $ has been fierce in the Web world. MooTools has decided to make the Dollar Safe Mode, which is similar to cousins such as jQuery.noConflict (in MooTools’ case it just looks at the $ function). Now you can just use if you want to play in the wild, or wrap things up in a closure to be nice:


  (function(){
      var $ =;

      this.X = new Class({
          initialize: function(element){
              this.element = $(element);
          }
      });
  })();

Please note that MooTools will probably remain incompatible with other frameworks that modify native prototypes, as there will probably be more name clashes. This isn’t a cross-framework compatible MooTools version by any means, nor does it want to be. The whole point is not to “steal” the dollar function from other libraries.

In other MooTools news, MooTools Core Dev Thomas Aylott (subtleGradient) shows another example of overriding document.write():

I created a replacement for document.write that saves the arguments and then injects them into the page after the DOM is ready. This is useful for embedding gists on your page, since you can use the additional filter option to reject stuff from being written to your page. This would also be really handy for sites that include JavaFX or certain ads or anything that requires the use of document.write on your page. By deferring the injection of that stuff until after the DOM is ready, your visitors see the page content before any of the extras like Java applets or ads begin to load.
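The technique can be sketched in a few lines. This is my own illustrative version, not Thomas's code: buffer the arguments, let a filter veto anything unwanted, and flush once the DOM is ready:

```javascript
// Replace doc.write with a buffering version; `filter` may veto markup.
function makeDeferredWrite(doc, filter) {
  var buffer = [];
  doc.write = function (markup) {
    if (!filter || filter(markup)) buffer.push(markup);
  };
  // Call the returned flush from a DOMContentLoaded handler to inject
  // everything that was "written" while the page was loading.
  return function flush(target) {
    target.innerHTML += buffer.join("");
    buffer.length = 0;
  };
}
```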

Posted by Dion Almaer at 6:28 am

3.6 rating from 38 votes

Tuesday, June 23rd, 2009

Sprite Me! Helping you sprite up, but maybe you shouldn’t?

Category: CSS, Performance

There have been many tools to help make image spriting easier, by packaging up your images into one large image and splitting it up again via CSS.

Steve Souders just showed off a new little tool he created, SpriteMe, at the Velocity conference that kicked off today. It makes working with sprites easier in several ways:

  • finds background images: SpriteMe generates a list of all background images in the page. Hovering over its URL displays the image. Each of the DOM elements that use that image is also listed. [DONE]
  • groups images: It’s hard to figure out which images can be sprited together, and how they should be laid out. For example, background images that repeat horizontally must fill the entire width of the sprite. Background images positioned left bottom must be at the right top of the sprite if their container might be bigger than the image. SpriteMe determines which images should be sprited together based on these constraints. [IN PROGRESS]
  • creates sprites: SpriteMe generates the sprite for each grouping of images. This is done using open source tools, such as CSS Sprite Generator. [TBD]
  • updates CSS: The final tricky part of using sprites is changing the CSS. Sometimes the CSS is a rule in a stylesheet. Or it might be a rule in an inline style block. Or it might be specified in an element’s style attribute. Because SpriteMe runs inside your web page, it can find the CSS that needs to be updated. It makes the changes in realtime, so you can visually check to confirm the sprites look right. You can export the modified CSS to integrate back into your code. [TBD]

Great, a simple new bookmarklet to work with sprites. It is always a good idea to sprite up, right? Not exactly.

Vlad Vukićević, a leader in the Mozilla community (who brought us cool stuff like Canvas 3D!), has spoken up on the internals of the browser, which shows you the trade-offs of the spriting approach:

The biggest problem with CSS sprites is memory usage. Unless the sprite image is carefully constructed, you end up with incredible amounts of wasted space. My favourite example is from WHIT TV’s web site, where this image is used as a sprite. Note that this is a 1299×15,000 PNG. It compresses quite well — the actual download size is around 26K — but browsers don’t render compressed image data. When this image is downloaded and decompressed, it will use almost 75MB in memory (1299 * 15000 * 4). If the image didn’t have any alpha transparency, this could be maybe optimized to 1299 * 15000 * 3, though often at the expense of rendering speed. Even then, we’d be talking about 55MB. The vast majority of this image is blank; there is nothing there, no useful content whatsoever. Just loading the main WHIT page will cause your browser’s memory usage to go up by at least 75+MB, just due to that one image.
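The arithmetic is easy to verify: decompressed bitmaps cost width × height × 4 bytes (RGBA), regardless of how well the PNG compressed:

```javascript
// Decompressed size of a 1299 × 15000 RGBA sprite.
var bytes = 1299 * 15000 * 4;          // 77,940,000 bytes
var megabytes = bytes / (1024 * 1024); // about 74.3 MB: "almost 75MB", as quoted
```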

That’s not the right tradeoff to make for a website.
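Vlad’s arithmetic is easy to verify yourself: the decoded, in-memory size is just width × height × bytes per pixel, and the PNG’s excellent compression ratio is irrelevant once the browser has to paint it.

```javascript
// Decoded (in-memory) size of an image. Compression only helps the download;
// the renderer works on raw pixels.
function decodedBytes(width, height, hasAlpha) {
  var bytesPerPixel = hasAlpha ? 4 : 3; // RGBA vs RGB
  return width * height * bytesPerPixel;
}

// The WHIT TV sprite from the quote: a ~26K download...
var sprite = decodedBytes(1299, 15000, true);
console.log((sprite / (1024 * 1024)).toFixed(1) + ' MB'); // ...decodes to ~74.3 MB
```

Dropping the alpha channel (1299 × 15000 × 3) only gets you down to the ~55MB Vlad mentions.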

What alternatives are there? None right now… but they are hopefully on the way. Some folks have been talking about the idea of packaging up images in zip files, so the browser can manage more than just the download process: it can load up only what it needs:

Many browsers have support for offline manifests already; it might be possible to extend that to allow downloading one file (like a jar/zip file) that contains a manifest of resources and equivalent URLs that are contained inside it.
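On the client side, that jar/zip idea might look something like the sketch below. The manifest format and the `resolve` helper are entirely hypothetical; nothing like this was specified at the time.

```javascript
// Hypothetical manifest for a zip-style resource bundle: it maps the URLs a
// page would normally fetch individually to entries inside one archive.
var bundleManifest = {
  archive: '/assets/site-resources.zip',
  entries: {
    '/img/logo.png':   'img/logo.png',
    '/img/nav-bg.png': 'img/nav-bg.png',
    '/css/site.css':   'css/site.css'
  }
};

// Resolve a URL: serve it from the bundle if present, else fall back to the
// network as usual. The browser only decodes entries it actually uses.
function resolve(manifest, url) {
  if (manifest.entries.hasOwnProperty(url)) {
    return { source: 'bundle', entry: manifest.entries[url] };
  }
  return { source: 'network', entry: url };
}
```

Unlike a sprite, unused entries would never be decoded into pixel memory.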

Rob Sayre, also of Mozilla, broached the subject:

Sprites have the advantage of working right now, but maybe there should be a way to serve up a multipart response with your sprite images as well. That would cut down on CSS rule count and maintenance, but still group the images in one HTTP request. Authors are already giving up the advantages of separate resources in return for speed, so maybe this is worth doing.

You can (in theory… haha) get some of these advantages with HTTP pipelining, but a multipart response would allow the server to optimize the response order, as authors do with sprites today.
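To make the multipart idea concrete, here is a rough sketch of splitting a `multipart/mixed` body on its boundary. A real parser also has to handle part headers, binary payloads, and strict CRLF rules, so treat this as an illustration only.

```javascript
// Minimal sketch: split a multipart/mixed body into its parts by boundary.
// Real multipart parsing is considerably stricter than this.
function splitMultipart(body, boundary) {
  var marker = '--' + boundary;
  return body
    .split(marker)
    .map(function (part) { return part.replace(/^\r?\n|\r?\n$/g, ''); })
    .filter(function (part) { return part !== '' && part !== '--'; });
}

var body =
  '--imgs\n' +
  'Content-Type: image/png\n\n<png bytes>\n' +
  '--imgs\n' +
  'Content-Type: image/gif\n\n<gif bytes>\n' +
  '--imgs--';
console.log(splitMultipart(body, 'imgs').length); // 2 parts, one HTTP request
```

One response like this could carry all of a page’s small images without any sprite-layout CSS gymnastics.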

Posted by Dion Almaer at 12:01 am

4 rating from 28 votes

Thursday, June 18th, 2009

JavaScript Compatibility Tests

Category: JavaScript, JSON

Robert Nyman has set up a really nice test suite for JavaScript 1.6, 1.7, and 1.8+ features such as getters/setters, Object.defineProperty, Object.getPrototypeOf, new String extras, and JSON.

It includes compatibility tables, and will try to run the tests on your browser to give you feedback. It also includes sample code to check web browser support that you can use in your own projects.
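Checks in the spirit of Robert’s suite are easy to hand-roll for your own projects; here is a small sketch covering a few of the features his tables track:

```javascript
// Hand-rolled feature detection for a few of the tested capabilities.
function detectFeatures() {
  var features = {};

  // JavaScript 1.6 Array extras
  features.arrayExtras = typeof Array.prototype.map === 'function' &&
                         typeof Array.prototype.filter === 'function';

  // ECMAScript 5-style property definition
  features.defineProperty = typeof Object.defineProperty === 'function';

  // Object.getPrototypeOf
  features.getPrototypeOf = typeof Object.getPrototypeOf === 'function';

  // Native JSON support
  features.json = typeof JSON === 'object' &&
                  typeof JSON.parse === 'function' &&
                  typeof JSON.stringify === 'function';

  return features;
}
```

In a 2009 browser lineup these would report very different results; Robert’s tables save you from discovering that the hard way.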

Nice job Robert!

Posted by Dion Almaer at 10:25 am

4.4 rating from 22 votes

Tuesday, June 9th, 2009

Titanium gets hardened with new beta that features Mobile and more

Category: Mobile

Appcelerator has announced a Titanium beta that builds on their desktop vision with new APIs and developer tooling, but also allows you to create mobile applications using HTML/CSS/JavaScript (and in fact Ruby, Python, …) that run on iPhone and Android.

You can take a look at what it takes to develop for desktop and mobile in this screencast from Kevin Whinnery:

I instantly thought of PhoneGap and asked Jeff Haynie about the differences. It appears that Titanium Mobile handles the device mapping in a different way. Instead of focusing just on device APIs and giving developers access to those APIs via JavaScript, it goes a little further. The Titanium tool will compile down your code and you will see that native widgets will be used in places. As the mobile version moves forward, they expect to do more of that kind of work, so instead of a WebView + APIs, you have native where you can, and WebView where you can’t.
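For contrast, the WebView-plus-APIs model that PhoneGap popularized usually boils down to a JavaScript bridge. The sketch below is a generic illustration of that pattern, not PhoneGap’s or Titanium’s actual code; all of the names are invented.

```javascript
// Generic sketch of a WebView-to-native bridge (no product's real API).
// The page's JS queues commands; the native shell drains the queue
// (e.g. via a custom URL scheme) and invokes real platform APIs.
var commandQueue = [];

function callNative(plugin, method, args) {
  commandQueue.push({ plugin: plugin, method: method, args: args });
}

// Called from the native side: take everything queued so far.
function drainQueue() {
  var pending = commandQueue;
  commandQueue = [];
  return pending;
}

// A page asking the device for its location:
callNative('geolocation', 'getCurrentPosition', {});
```

Titanium’s pitch is that compiling to native widgets avoids round-tripping everything through a bridge like this where it can.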

From watching the Titanium Developer tool at work, you can quickly see that it nicely takes care of you, the Web developer, as you go through the process of dealing with phones. Dealing with the SDKs from Apple and Android can be a messy business, and the tool really tries to make it much more seamless. The same can be said for the packaging and deployment process.

It is great to see yet another product that comes along to help Web developers take their existing skills and have them work on mobile platforms. Personally, the thought of going to Objective-C land isn’t a pleasant one for me, so this and PhoneGap make total sense. If other platforms such as Android and hopefully webOS catch on, then it will make even more sense to go cross-platform. Who better to write code for different devices than Web devs, since we have to deal with it every day ;)

Need a visual builder? You could take the upcoming 280 Atlas, use Interface Builder or Atlas itself, and deploy your app to the phone. Great times for the Web platform.

Posted by Dion Almaer at 5:16 am

4 rating from 28 votes