Friday, July 27th, 2007

Kevin Lynch at the Ajax Experience

Category: Adobe, The Ajax Experience

Kevin Lynch started the keynote by talking about “Four Generations of Applications”:

- Mainframe
- Client/server
- Web applications
- Rich internet applications

He then discussed the architecture of rich internet applications, focusing on the challenges.

The first challenge he discussed was local storage; he highlighted how Google Gears is solving that problem and how AIR wants to cooperate with Gears.
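To make the local-storage idea concrete, here is a minimal sketch of the Gears local database API as documented at the time (my own example, not from the keynote). It assumes the page includes Gears’ gears_init.js; the table name and data are illustrative.

    // Assumes gears_init.js has been included, which defines google.gears.
    // Table name and contents are illustrative, not from the talk.
    if (window.google && google.gears) {
        var db = google.gears.factory.create('beta.database');
        db.open('notes-demo');
        db.execute('CREATE TABLE IF NOT EXISTS notes (body TEXT, created INTEGER)');
        db.execute('INSERT INTO notes VALUES (?, ?)', ['Offline draft', Date.now()]);

        var rs = db.execute('SELECT body FROM notes ORDER BY created DESC');
        while (rs.isValidRow()) {
            console.log(rs.field(0));   // print each stored note (requires a console, e.g. Firebug)
            rs.next();
        }
        rs.close();
    }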

The next challenge was searchability and deep linking. He proposed that we as a community use # as a standard for maintaining state in Ajax application URLs. Adobe has sent a proposal to the OpenAjax Alliance to standardize # and begin the process of standardizing deep linking.
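As a rough illustration of the fragment approach (my sketch, not code from the talk): the application writes its state into location.hash and restores it on load, polling for changes since browsers of this era have no hashchange event. The view names and the showView helper are hypothetical.

    // Illustrative sketch only; state format and showView() are assumptions.
    function saveState(view) {
        location.hash = '#' + encodeURIComponent(view);   // e.g. index.html#inbox
    }

    function restoreState() {
        var view = decodeURIComponent(location.hash.replace(/^#/, ''));
        if (view) {
            showView(view);   // application-defined rendering function (hypothetical)
        }
    }

    // No hashchange event in 2007-era browsers, so poll for back/forward navigation.
    var lastHash = location.hash;
    setInterval(function () {
        if (location.hash !== lastHash) {
            lastHash = location.hash;
            restoreState();
        }
    }, 200);

    window.onload = restoreState;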

Cross-domain access came up next. He reviewed the problem and why the cross-domain security policy exists. He then discussed the “crossdomain.xml” permissions file, which allows a site to declare exceptions to the cross-domain security policy. It looks like this:

    <cross-domain-policy>
        <allow-access-from domain="*.siteA.com"/>
    </cross-domain-policy>

It turns out this file already exists on 36% of Alexa’s top 100 sites in order to support cross-domain Flash behaviors. Because Flash already supports this mechanism, you can use a hidden Flash movie today as a cross-domain transport (see adobe.com/go/crossdomain).

Kevin hopes this same mechanism can be implemented in browsers, and he offered to work with standards bodies to make that happen.

Next, Kevin chatted about the Tamarin project, describing it as “JavaScript from the future” and reviewing key features such as:

- Much faster performance
- E4X
- Strong types
- Sealed classes
- Runtime exceptions
- (Highly optimized, fast) regular expressions

To tempt the audience, he showed off some E4X syntax, like:

- feed.channel.title.text()
- feed..title.text()
- feed..item[1].text()
- feed..item.(@id=="82").title.text()
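For readers who haven’t seen E4X, here is a small sketch of my own (not from the talk) showing the kind of feed those expressions walk. It runs in E4X-capable engines of the time, such as Firefox’s JavaScript 1.6+ or ActionScript 3; the element names and values are made up to match the expressions above.

    // Illustrative feed; names chosen to match the expressions above.
    var feed = <feed>
        <channel>
            <title>Example Channel</title>
        </channel>
        <item id="81"><title>First post</title></item>
        <item id="82"><title>Second post</title></item>
    </feed>;

    feed.channel.title.text();              // "Example Channel"
    feed..title.text();                     // every <title>, at any depth
    feed..item[1].title.text();             // "Second post" (items are zero-indexed)
    feed..item.(@id == "82").title.text();  // filter by attribute, then read its <title>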

Kevin showed off the adoption of Flash Player 9, showing that it was pushed out to 83% of the web in 9 months, calling it “the most ubiquitous platform in the world, even more than operating systems” and the “fastest deployment” ever for a new platform.

Kevin announced a new free, open-source Flash/Ajax Video kit that allows really simple syntax for playing movies.

	<div id="videoBox"></div>

	video = new FAVideo("videoBox", "myvideo.flv", 500, 500);
	video.play();

The playbar beneath the video can be hidden or customized:

	video.skinVisible = false;

He also showed how to create HTML controls:

	<p align="center">
		<a href="#" onclick="video.play()">PLAY</a>
		<a href="#" onclick="video.stop()">STOP</a>
		<a href="#" onclick="video.seek(video.getPlayheadTime() - 5)">REW</a>
		<a href="#" onclick="video.seek(video.getPlayheadTime() + 5)">FWD</a>
	</p>

This toolkit is available at adobe.com/go/favideo. He went to hbovoyeur.com to show off how you can use the video capabilities of the Flash player to do some really cool interactive stuff.

The next weakness Kevin highlighted was developer productivity. He said that a declarative approach to development is more productive than procedural mechanisms, and highlighted how Flex’s MXML gives a much richer declarative mechanism than HTML. To this end, he noted that Flex 3 is now open source, with a public bug database and daily builds; the project will be fully up and running by the end of the year under the Mozilla Public License.
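To give a feel for what “declarative” means here, the following is a minimal MXML sketch of my own (not from the keynote): the layout, the button, and its click behavior are all expressed as markup rather than procedural DOM calls. The component names are standard mx controls; the greeting logic is just an illustration.

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Minimal illustrative Flex application, not code shown in the talk. -->
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
        <mx:Panel title="Hello MXML">
            <mx:TextInput id="nameInput"/>
            <mx:Button label="Greet"
                       click="greeting.text = 'Hello, ' + nameInput.text"/>
            <mx:Label id="greeting"/>
        </mx:Panel>
    </mx:Application>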

Data synchronization is clearly a hard problem, and Kevin reviewed how Adobe is currently addressing it with LiveCycle Data Services. He pointed out that Ajax applications can talk to LiveCycle via HTTP or Adobe’s own RTMP protocol (RTMP adds push capabilities). He had a demo showing Dojo using the sync services, but sadly the demo was broken. He did show the code, which was about six lines of JavaScript to bind a data collection to the LiveCycle sync services.

And now the transition to AIR, which Kevin described as a way to bring web apps to the desktop. He highlighted that AIR adds these services to web applications (a small file-access sketch follows the list):

- File system access
- Network detection
- Notifications
- Application updating
- Drag-and-drop
- Local database
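As a taste of one of those services, here is a minimal sketch of local file access from an AIR HTML/JavaScript application (my example, not from the keynote). It assumes the app includes the AIRAliases.js helper that ships with the SDK, which exposes the runtime classes as air.*; the file name and contents are illustrative.

    // Assumes AIRAliases.js is loaded; file name and contents are illustrative.
    var file = air.File.applicationStorageDirectory.resolvePath("notes.txt");

    var stream = new air.FileStream();
    stream.open(file, air.FileMode.WRITE);      // create or overwrite the file
    stream.writeUTFBytes("Saved from an AIR app");
    stream.close();

    stream.open(file, air.FileMode.READ);       // read it back
    var contents = stream.readUTFBytes(stream.bytesAvailable);
    stream.close();

    air.trace(contents);                        // logs to the AIR debug console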

He also reviewed that AIR applications can be written in two styles: HTML or Flash. In both cases, you can seamlessly integrate PDF documents into the application. He also highlighted a capability I hadn’t seen before: support for deploying to “Device OS’s”. He then showed off some AIR applications:

- Simple Tasks, an Ajax application written by Jack Slocum of Ext JS, running as a local AIR app
- Finetune, a Flash application to stream music
- Buzzword, a high-quality word processor that shows off some very sophisticated layout and UI features. From his quick demo, it seemed more powerful than Apple’s Pages, and it adds collaborative features like co-editing with other users over the network (turn-based, not concurrent)
- Adobe Media Player, a way to play Flash video on the desktop (similar to the QuickTime Player)
- Pownce, a client for Kevin Rose’s new Twitter-esque service

Pownce is invite-only, but Adobe obtained 300 invites. First-come, first-served by emailing pownceme@adobe.com.

Adobe has an Ajax homepage at adobe.com/go/ajax.

Posted by Ben Galbraith at 11:49 am

9 Comments »

Always like Ben’s writeups, they are comprehensive. Many thanks for the great info! …Although crossdomain.xml for Flash has been around awhile, I like that Adobe is getting the word out about that. It would seem ‘cleaner’ for a straight Ajax app to be able to get info cross-domain in straight JS through the browser, but it’s not like it’s that much extra to include a Flash .swf and write a little extra code to accomplish a similar x-domain transport. (Likewise, I always like it when there is a ‘push’ from a group or company to increase motivation for other platforms towards a worthwhile technology)

Comment by Mark Holton — July 27, 2007

…btw, love hearing as much as possible about E4X, the syntax is clean, and the fact that XML has become a primitive in ES (JS/AS) is applauded. Wondering: does anyone have any links related to quantitative performance of E4X vs other DOM traversal methods?? Would like to see benchmarks { this one is a start (the Ajaxians posted awhile back):
http://ajaxian.com/archives/census-ria-data-loading-benchmarks
http://www.jamesward.org/census/ }

Comment by Mark Holton — July 27, 2007

Early web applications had a lot in common with mainframe apps. The browser acted like a block-mode terminal, only with pictures and lousy error handling.

Comment by Pixy Misa — July 27, 2007

Hi. Pulled ‘favideo’ yesterday & it works pretty well.

No mention in the notes about the proper syntax for pointing to an Adobe media and/or Red5-based RTMP stream.

I would expect it to do this.

Anyone know of some docs/release notes with an example of such?

Cheers GregH

Comment by greg huddleston — July 28, 2007

cross-domain-policy sounds great!

Comment by riper — July 30, 2007

IMHO, crossdomain.xml won’t solve anything but will increase the attack surface. Keep in mind that there is a big difference between XMLHttpRequests and JavaScript remoting. I cannot see why simple JavaScript includes should be restricted by the crossdomain.xml file. Therefore, given the fact that almost every service provides JSON output, you will increase the attack surface by making XMLHttpRequest calls available not only to the current origin but also to other websites. Correct me if I am wrong.

Comment by pdp — July 30, 2007

pdp, I need to correct you.

There are different attacks being addressed. Same-domain restrictions on data loading (e.g. XmlHttpRequest) are intended to protect the server hosting the *data*. Imagine, for example, servers behind a firewall. They should not be automatically query-able by external content running in a browser.

However, this restriction in the browser provided no means for servers to *permit* cross-domain data access. crossdomain.xml addresses this limitation with an explicit permission mechanism. JSON output also addresses this restriction, but opens up an opportunity for attack on the server hosting the *requesting application*.

Loading a .js file from another domain opens up your web app to attack. The loaded JS is essentially imported into your app’s domain, giving it full privilege to query your app’s server — complete with session credentials!

So, further use of crossdomain.xml for real data loading via XmlHttpRequest should reduce the use of cross-domain JSON for otherwise safe data loading operations.

Comment by Ethan Malasky — July 30, 2007

Ethan,

I understand what you are saying. However, what I am trying to say is that with or without crossdomain.xml, XMLHttpRequest objects and JavaScript remoting hacks work. Let me make it clearer. :)

Due to the Same Origin Policy, JavaScript can access only the current origin. Even if you implement the crossdomain.xml file, JavaScript again will be able to access the current origin. Why? Compatibility issues. We cannot move to the new technology overnight. With or without crossdomain.xml, JSON (or JavaScript remoting, if you like) will still work. The only thing that will change is an increased attack surface due to the trust relationship between apps. Let me explain.

Let’s say that we have an app on A.com and another one on B.com. B.com says that A.com can access its data. Effectively, this means that if I can get an XSS on A.com, I will be able to read the data on that domain, including the data on B.com, due to the trust relationship. Today this is not possible; I need two XSS vulns rather than one.

Again, correct me if I am wrong :)

Comment by pdp — July 30, 2007

pdp –

Ah, I see your point. You are not wrong.

In your example, B.com’s data is more vulnerable, since any client it permits (A.com) may have XSS vulnerabilities.

However, A.com is *less* vulnerable, since it no longer has to rely on JSON importing in order to access B.com’s data. It can load the data via XmlHttpRequest and not fear that man-in-the-middle attacks or ownage of B.com would render its own site (A.com) vulnerable.

That’s the point I was trying to make. As a whole, I think it’s an improvement, since the risk is assumed by the data provider, rather than the consumer. But depending on your role, it’d be reasonable to feel differently. If you’re the data provider, you might prefer that your clients assume all the risk.

-Ethan

Comment by Ethan Malasky — August 2, 2007
