Tuesday, May 27th, 2008

Announcing AJAX Libraries API: Speed up your Ajax apps with Google’s infrastructure

Category: Ajax, Google, JavaScript, Library

AJAX Libraries API

I just got to announce the Google AJAX Libraries API, which exists to make Ajax applications that use popular frameworks such as Prototype, Script.aculo.us, jQuery, Dojo, and MooTools faster and easier for developers.

Whenever I wrote an application that used one of these frameworks, I would picture a user accessing my application, already having 33 copies of prototype.js in their cache, and yet downloading another one from my site. It would make me squirm. What a waste!

At the same time, I was reading research from Steve Souders and others in the performance space that showed just how badly we are doing at serving these libraries. As developers, we should set up caching correctly so we only send the file down when absolutely necessary. We should also gzip the files for browsers that accept them. Oh, and we should probably use a minified version to get that little bit more out of the system. We should also follow the practice of versioning the files nicely. Instead, we find a lot of jquery.js files with no version, often with little tweaks added to the end of the file, and caching not set up well at all, so the file keeps getting sent down for no reason.
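
As a concrete, hedged illustration of what "doing it right" looks like (the header values below are illustrative only, not copied from any particular server), a well-served, versioned library file might come back with response headers along these lines:

  HTTP/1.1 200 OK
  Content-Type: text/javascript; charset=UTF-8
  Content-Encoding: gzip
  Cache-Control: public, max-age=31536000
  Expires: Wed, 27 May 2009 00:00:00 GMT
  Last-Modified: Tue, 27 May 2008 00:00:00 GMT

Because the version is baked into the URL, the file can be cached essentially forever; a new version simply gets a new URL.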

When I joined Google I realised that we could help out here. What if we hosted these files? Everyone would see some instant benefits:

  • Caching can be done correctly, and once, by us… and developers have to do nothing
  • Gzip works
  • We can serve minified versions
  • The files are hosted by Google which has a distributed CDN at various points around the world, so the files are “close” to the user
  • The servers are fast
  • By using the same URLs, if a critical mass of applications use the Google infrastructure, the file may already be in the browser’s cache when someone comes to your application!
  • A subtle performance (and security) issue revolves around the headers that you send up and down. Since you are using a special domain (NOTE: not google.com!), no cookies or other verbose headers will be sent up, saving precious bytes (see the short before/after sketch just after this list).
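
To make those last two points concrete, here is a minimal before/after sketch (the local path is a hypothetical example; the Google URL is the real one shown later in this post):

  <!-- Before: served from your own domain, so your site's cookies ride along
       on every request, and the cached copy only helps visitors to your site -->
  <script src="/js/prototype-1.6.0.2.js"></script>

  <!-- After: served from the shared, cookieless googleapis.com domain; if another
       site already referenced this exact URL, the browser may have it cached -->
  <script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>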

This is why we have released the AJAX Libraries API. We sat down with a few of the popular open source frameworks and they were all excited about the idea, so we got to work with them, and now you have access to their great work from our servers.

Details of what we are launching

You can access the libraries in two ways, and either way we take the pain out of hosting the libraries, correctly setting cache headers, staying up to date with the most recent bug fixes, etc.

The first way to access the scripts is simply by using a standard <script src=".."> tag that points to the correct place.

For example, to load Prototype version 1.6.0.2 you would place the following in your HTML:

  1. <script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>

The second way to access the scripts is via the Google AJAX API Loader’s google.load() method.

Here is an example using that technique to load and use jQuery for a simple search mashup:

  1. <script src="http://www.google.com/jsapi"></script>
  2. <script>
  3.   // Load jQuery
  4.   google.load("jquery", "1");
  5.  
  6.   // on page load complete, fire off a jQuery json-p query
  7.   // against Google web search
  8.   google.setOnLoadCallback(function() {
  9.     $.getJSON("http://ajax.googleapis.com/ajax/services/search/web?q=google&;v=1.0&;callback=?",
  10.  
  11.       // on search completion, process the results
  12.       function (data) {
  13.         if (data.responseDate.results &&
  14.            data.responseDate.results.length>0) {
  15.          renderResults(data.responseDate.results);
  16.         }
  17.       });
  18.     });
  19. </script>

You will notice that the version used was just “1”. This is a smart versioning feature that allows your application to specify a desired version with as much precision as it needs. By dropping version fields, you end up wildcarding that field. For instance, consider a set of versions: 1.9.1, 1.8.4, 1.8.2.

Specifying a version of “1.8.2” will select the obvious version. This is because a fully specified version was used. Specifying a version of “1.8” would select version 1.8.4 since this is the highest versioned release in the 1.8 branch. For much the same reason, a request for “1” will end up loading version 1.9.1.

Note that these versioning semantics work the same way whether you use google.load or a direct script URL.
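
For example, taking the hypothetical version set above (1.9.1, 1.8.4, 1.8.2) and the Prototype URL pattern from earlier, a request for “1.8” would resolve to 1.8.4 in either form (a sketch only; those versions are illustrative, not actual Prototype releases):

  <!-- direct script URL: the dropped field is wildcarded, so this serves 1.8.4 -->
  <script src="http://ajax.googleapis.com/ajax/libs/prototype/1.8/prototype.js"></script>

  <!-- the loader resolves the version the same way -->
  <script src="http://www.google.com/jsapi"></script>
  <script>
    google.load("prototype", "1.8");
  </script>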

By default, the JavaScript that the loader sends back will be minified, if a minified version is available. Thus, for the example above we would return the minified version of jQuery. If you specifically want the raw JavaScript itself, you can add the “uncompressed” parameter like so:

  google.load("jquery", "1.2", {uncompressed:true});

Today we are starting with the current versions of each library, but from now on we will archive every version we host, so you can be sure they remain available.

For a full listing of the currently supported libraries, see the documentation.

Here I am, talking about what we are doing in two short slides:

The Future

This is just the beginning. We obviously want to add more libraries as you find them useful. Also, if you squint a little you can see how this can extend even further.

If we see good usage, we can work with browser vendors to automatically ship these libraries. Then, if they see the URLs that we use, they could auto load the libraries, even special JIT’d ones, from their local system. Thus, no network hit at all! Also, the browser could have the IP addresses for this service available, so they don’t have the hit of a DNS lookup. Longer lived special browser caches for JavaScript libraries could also use these URLs.

The bottom line, and what I am really excited about, is what this could all mean for Web developers if this happens. We could be relieved of the constant burden of having to re-download our standard libraries all the time. What other platform makes you do this?! Imagine if you had to download the JRE every time you ran a Java app! If we can remove this burden, we can spend more time fleshing out functionality that we need, and less time worrying about the actual download bits. I am all for lean, but there is more to life.

Acknowledgements

I want to acknowledge the other work that has been done here. Some libraries, such as jQuery and Dean Edwards’ Base, were already kind of doing this by hotlinking to their Google Code project hosting repositories. We thought this was great, but we wanted to make it more official, and open it up to libraries that don’t use our project hosting facilities.

Also, AOL does a great job of hosting Dojo already. We recommend using them for your Dojo needs, but are proud to also offer the library. Choice is good. Finally, Yahoo! placed the YUI files on their own CDN for all to use.

Posted by Dion Almaer at 9:42 am

80 Comments »


It would be great to also add Dean Edwards’ ie7.js and ie8.js libraries, as well as any minified prototype/protocolous version.

Comment by icoloma — May 28, 2008

I’m not a programmer but I have a simple suggestion:

Couldn’t it be possible to include in any library or script a unique ID, so that with a simple script you could check whether that library is already present in a user’s cache?

Comment by letheos — May 28, 2008

Hmm. ‘Staying up to date with the most recent bug fixes’ — this sounds like blindly upgrading the hosted libraries to new versions, which is a very bad idea.

As OpenAjax noted, it’s a much better idea to use version-specific URIs, allowing users to choose the versions they wish to use — otherwise version mismatches will occur between user apps and the Google-hosted libs, creating bugs and the classic “dependency hell” that would be familiar to anyone who remembers the days of “DLL hell”.

Comment by jmason — May 28, 2008

IMHO the real benefit is “The Future” part. That’s where sites could really benefit from caching due to the shared hosting.

Here’s my proposal: Google could pre-warm the browser caches around the world by – randomly, probably based on usage – loading one of the JS libraries on the Google result page. To prevent delaying the result page this could be done asynchronously, after a delay of, say, 2 seconds.

This would at the same time reduce the privacy issues as most sites would not issue requests to the googleapis site any more.

Comment by cschneid — May 28, 2008

thanks Google, it’s a simple idea but it would deliver major bonuses for faster page loading; plus it keeps everything tied down in terms of versioning.

it’s like my birthday present came early ;-)

Comment by indiehead — May 28, 2008

robnyman sounds like a giant child. Haven’t we learned already from the MooTools guys that always tooting your own horn and dissing on other libraries (especially jQuery) isn’t going to get you anywhere? What’s the deal? Additionally, DOMAssistant looks like a direct rip off of jQuery except for the fact that it’s not chainable.

As for Google’s idea (which obviously from the comments isn’t such a new concept), let the people who want to use it, use it, and as for the rest of us who know what we’re doing and want complete control (or until this really does become ubiquitous), we’ll keep on doing what we’re doing.

Comment by krunkosaurus — May 28, 2008

This is a great idea!

I was one of the many people suggesting this back last year, I’m glad Google signed on.

Three additions I would like to see:

1. Ext JS. It’s a big library that would benefit tremendously from strong caching.

2. Mark James’s beautiful CC / PD icons (http://www.famfamfam.com/lab/icons/)

3. A formal method for suggesting files to include, and a list somewhere of what has been suggested and the status (working it out with the developer, not a good fit, evaluating the licensing, online and active, etc.).

Comment by richardtallent — May 28, 2008

@krunkosaurus,

Thank you for disregarding my arguments, and not reading carefully…

First, I wasn’t dissing jQuery, I mentioned how it got some of its attention in the first place. The reason it was mentioned was also as an ironic wink to what happened back then.

Second, if you were to actually read up on DOMAssistant, you would see that it indeed supports chaining.

It is a very valid concern, in my world at least, that most people will completely avoid learning how to serve code the best way and just rely on Google, and that Google gets to decide what code will be offered to such a massive market of web developers.

Cachefile.net seems (seemed?) to have a much more humble and open approach, where they were all ears for the market as a whole and suggestions about code that should be shared.

Conclusively, if the opposite to being a child is your comment, just ignoring facts and very real issues, I’d rather be a child all the way.

Comment by robnyman — May 28, 2008

I know this goes against the grain of caching the library off a CDN, but I wonder if the benefits of this are negligible, because it requires multiple domain lookups, and the file size is fairly small at this stage of internet evolution compared to the lookup time. I hesitate to believe that libraries are so pervasive now that the average user will come into an application or site with a primed cache. Even with 100% adoption this seems like a lofty goal. In that light the convenience here is more about having a reliable, centralized external host for the library files. Which in and of itself isn’t such a bad thing. Personally what I’d find far more useful would be something to the effect of:

http://… google address …/jquery.LATEST.js
http://… google address …/jquery.LATEST.STABLE.js
etc.

Which would be a symlink or something similar to the latest published version of the library. Most library cores are stable and consistent enough in their behaviour now that I’d be comfortable with automating the update. This would give my applications and sites the optimization advantages of the latest builds of the libraries without having to constantly download and update my inclusions in the numerous sites and applications I manage.

Obviously variants of this theme could be applied such as latest version of a major build (ie. jquery.1.2.latest.js) where any version up to 1.3 would be updated automatically. This would further cement the stability of the auto-updating.

*preparing to be flamed… ducks head* :)

Comment by OwenL — May 28, 2008

@OwenL – Having done this experimentally in my enterprise solution to the hosted-ajax-library problem (symlinking ‘latest’ and ‘stable’ to numbered releases in a large distributed filesystem and hosted solution in an enterprise setting), I can say it’s a risky idea, since it makes your webpage code very vulnerable to the “infrastructure” breaking it. I think it makes more sense to do 1.x -> 1.x.y linking/aliasing, since in that case library owners should aim for proper minor-version compatibility, and things should still work… What we do is deploy 1.x.y, let people run against it for a fixed amount of time by load()ing it explicitly, and if regression testing and other observed behaviors permit, we then swing the link over from one “y” to another “y” (e.g. some library’s 1.6 -> 1.6.4 now becomes 1.6 -> 1.6.5).

For development, it makes sense to have a “x.nightly.js” but for production you should know what api (major.minor) version you’ve written your app for and the library owner should ensure API compatibility.

Anyway, this is based on my enterprise AJAX experience where we’ve had hosted ajax in place for about 1.5 years so it might not be applicable to everyone….

Comment by Dov Katz — May 28, 2008

Well, regarding the DNS lookup, caching works both ways. If DNS is working properly then the DNS lookup will also likely be cached. Also, browsers limit the number of simultaneous downloads from a single domain, so having the content spread across multiple domains may help speed up the page load. That may not help with JS, now that I think about it, since the browser will wait to do anything until the JS is loaded and executed.

One benefit has not been mentioned yet – getting gzipped js files to work across browsers is not always that easy[1]. If your site is optimized for static content you may not have the ability to serve compressed js files only to capable browsers. That means you’re forced to use packed js files, which are hard to debug.

Comment by newz2000 — May 28, 2008

I agree this is a good start, but I feel the url version references are a little too flexible, Dion, and will invariably negate some of the intended caching benefits.

Personally I’d rather see an HTTP RFC addition of file checksum/hashes to be ref’d against cached files based simply on version-less names, etc. But that’s just me.

And I’m not entirely sure I agree with Dean’s concern over stifling development (one of the few apparently).

Comment by bclaydon — May 28, 2008

@Robnyman
“It is a very valid concern, in my world at least, that most people will completely avoid learning how to serve code the best way and just rely on Google, and that Google gets to decide what code will be offered to such a massive market of web developers.”
It’s a bit like saying let’s not create helpers, frameworks, wysiwyg editors… people will rely on them and become less skilled…

If someone could not cache things properly before what makes you think they were about to start caching things properly ever… If they were not compressing their files… you’re saying now that google will compress one or two that they will host for them… they will never compress the rest of their files?

Odds are that being in contact with “the correct way of doing things” will inspire more people (or maybe just inform them that it’s even possible) than otherwise.

But for those who don’t care and will do things the crappy way and would have continued doing so no matter what… you are right… now that it’s all done for them… they will continue to do things the crappy way no matter what…
I don’t see what the point is though.

Yes, now you make an extra DNS lookup and possibly additional HTTP requests if you had a bundling solution… but only if the file is not cached.
For libraries with version-sensitive URLs… the lifespan of a file in the cache can be very long… in theory it should never change.

With browsers using these URIs as references to factory loaded libraries… there’s no reason you could not have a system that recognizes links to:
http://ajax.yahooapis.com/ajax/libs/prototype/1.6.0.2/prototype.js
as much as
http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js
or
http://mysite.com/ajax/libs/prototype/1.6.0.2/prototype.js

Comment by JeromeLapointe — May 29, 2008

…or
http://ajax.domassistant.com/ajax/libs/domassistant/2.7.1.1/domassistant.js

Comment by JeromeLapointe — May 29, 2008

@JeromeLapointe,

I basically agree with what you are saying. What I meant was that, yes, many people will refrain from learning or looking into how they themselves can cache, gzip and compress files on their own, and other types of files than JavaScript, since it’s just offered to them now.

If they instead, as you suggest, look at how Google do it and learn from it, great! I’m just skeptical, though, after having worked as a consultant for about a decade now, having seen how numerous customers work. I think it’s as important to educate web developers and spread the word, and not just offer hosting for certain JavaScript libraries, added and judged only by the Google gods (they should add many more JavaScript libraries to begin with, such as base 2 etc, and also add some sort of “suggest this code” feature).

While frameworks, WYSIWYG tools etc have their place, they are also, more often than not, part of creating something decent in a short amount of time, but not really optimal (since no code can foresee any type of possible context it will be used in).

And sure, files will be cached, but for those that aren’t, there’s still the DNS risk, and looking at how some web sites perform with Google Analytics and Google AdSense scripts, I’m just not sure that it will always be completely friction-less.

So, at the end of day, this might be a great solution for some, but for now my doubts linger in relation to factors such as people not learning proper set-ups, DNS dependencies and a too small JavaScript library subset on offer.

Comment by robnyman — May 29, 2008

Cheers that was very helpful

Comment by Aphrodisiac — June 5, 2008

Regarding the critical mass… is anyone using this yet?

Are any big players using it?

Comment by blueclock — June 6, 2008

@robnyman

I can see your point about making assumptions about it being frictionless.
But can you (or anyone else) tell me something… it seems to me that the real value of this idea is not in relying on Google to do the proper gzipping and hosting of the files, but rather the files will have a much higher chance of staying cached (especially if a critical mass is reached) if they’re all pointing to a SINGLE location, no?

Or, I can rephrase the question: if I host “mootools1.11.js” on my site, and an identical version exists on the client’s computer, it won’t be cached because the file is specific to my domain, correct? This is the problem that Google’s solution offers an answer for.

Comment by Bowser — June 11, 2008

Just posted my thoughts on “Web Service for Content Distribution Network” – http://tinyurl.com/6y3gqx . Would love to hear your thoughts.

Thanks,
Mukul.
http://mukulblog.blogspot.com

Comment by Mukul — June 12, 2008

This boasts a better ability to cache across web servers, even if you have a very well equipped caching system. The idea is that if the user came from another site that loaded the same framework from Google’s URL, and you make a request to that same URL, the user can load it straight from their cache instead of being told “No, use my version from x.com instead of y.com, even though the file is identical.”


Matt Foster
Ajax Engineer
Nth Penguin, LLC
http://www.nthpenguin.com

Comment by MattFoster — June 17, 2008

Won’t the DNS be on auto for this to happen?

Comment by Tribulus — September 17, 2008

This article is very good for developers.

Thanks for this article

Comment by GlobalFreelanceProjects — October 12, 2009

It looks like a very promising project, good luck with your work and keep us all informed on its progress and how its use has benefited small community groups.

Comment by pariuri — December 3, 2009

Good review…… Thanks for posting…….. I was searching this kind of projects…… Good work man….

Comment by aarthi123 — December 16, 2009

Google is starting to be a very important part of our lives. As a matter of fact, some of us could not live without the big G. By launching services like this one, Google assures us that its continuous spread is far from finished.

Comment by Bet365 — January 8, 2010

Actually, using Google’s AJAX Libraries API (or any JavaScript library CDN from Yahoo or Microsoft) is slower right now than just serving the files locally, except in extreme cases. It’s the network effect: this is only valuable if lots of other people are doing it too. Otherwise nothing gets cached. I did some research and found that the market penetration for these JavaScript library CDNs isn’t enough to make it worth the cost of contacting a third party.

You can read more here : http://zoompf.com/blog/2010/01/shoud-you-use-javascript-library-cdns/

Comment by BillyHoffman — January 15, 2010

What you say is very interesting, but it must require knowledge a little more advanced than mine. I am not so good at this.

Comment by masini — February 8, 2010

Thanks for sharing your knowledge.

Comment by josephanderson — July 8, 2010

Nice, this article is very helpful for us.

Comment by royalinternationalpackersmover — July 8, 2010

The files are hosted by Google which has a distributed CDN at various points around the world, so the files are “close” to the user. That is the point.

Comment by thestreamssstone — September 19, 2010
