Friday, June 27th, 2008

The fight for cross domain XMLHttpRequest

Category: Security, XmlHttpRequest

There is a thread going on about secure cross-domain requests. Microsoft came out with a paper saying that the W3C standard isn’t secure, and pushing the Microsoft XDR spec instead:

A few proposals and implementations exist like XDomainRequest in IE8, JSONRequest and the W3C’s Web Applications Working Group’s Cross Site XMLHttpRequest (CS-XHR) draft specification, which combines an Access control framework with XMLHttpRequest or other features. While XDomainRequest is focused on enabling anonymous access of third party public data, Cross Site XMLHttpRequest has added functionality and consequently enables a broader set of scenarios that may appeal to the developer who may choose to use cross domain authentication and access control among other features. As can be expected with securing a large cross section of cross domain scenarios, a number of concerns have been identified with the CS-XHR draft by the web development community, the IE team members and members of the Web Apps Working Group. For a list of our recent feedback on security on CS-XHR and our take on important security principles in cross domain, please read our Security Whitepaper on Cross Domain. The paper also covers best practices and guidance for developers who will choose to build on the current draft if it’s supported by a future browser.

The community quickly jumped on this in the comments, and beyond.

Anne van Kesteren said:

After half a year of waiting Microsoft finally posted their feedback on Access Control for Cross-Site Requests and specifically the way XMLHttpRequest Level 2 works with that. Microsoft blogged about this event. I suggest people read this rebuttal from Jonas on the paper Microsoft published. To be clear, while the specifications are not entirely finalized nobody has so far put forward a viable attack scenario that does not already apply when these technologies are not supported by user agents.

(Related: Working group fun and “Concerns” raised about W3C Access Control spec have been little more than FUD.)

As linked from Anne’s post, Jonas posted some nice feedback:

I’ll start with a mini FAQ to avoid repeating myself below:

Why is the PEP in the client rather than the server?

In order to protect legacy servers some of the enforcement will have to live in the client. We can’t expect existing legacy servers to all of a sudden enforce something that they haven’t before.

In fact, even XDR uses client-side PEP. It’s the client that looks for the XDomainRequest header and denies the webpage access to the data if the header is not there.

In fact, Access-Control does allow full PEP on the server if it so chooses by providing an “Origin” header.

Is Access-Control designed with “Security by design”?

Yes. In many ways. For example Access-Control does not allow any requests to be sent to the server that aren’t already possible today, unless the server explicitly asks to receive them.

Additionally Access-Control sets up a safe way to transfer private data. This prevents sites from having to invent their own which risks them inventing something less safe.

Thirdly, Access-Control integrates well with the existing HTTP architecture of the web by supporting REST apis and the Content-Type header. This allows existing security infrastructure to inspect and understand Access-Control requests properly.

What about DNS rebinding attacks?

Even with DNS rebinding attacks, Access-Control is designed not to allow any requests which are not already possible in today’s web platform as implemented in all major browsers.

The last point especially seems to have been misunderstood at Microsoft. It is not the case that DNS rebinding attacks affect Access-Control any differently than they affect the rest of the web platform.
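To make the Origin-based enforcement concrete, here is a rough sketch of what a cross-site request under the draft looks like from the page’s point of view. The endpoint URL is invented, and the exact Access-Control response header syntax has shifted between draft revisions, so treat the header names in the comments as indicative rather than normative.

// Client side: an ordinary XMLHttpRequest aimed at another origin.
// A supporting browser adds the Origin request header automatically;
// the page itself cannot forge or remove it.
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://api.example.com/public/data", true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // The browser only hands the response to the page if the server
    // answered with an Access-Control header permitting this origin.
    alert("got " + xhr.responseText.length + " bytes from the other origin");
  }
};
xhr.send(null);

A server that wants full PEP on its own side can simply inspect the incoming Origin header and refuse to generate the data at all, rather than relying on the client to discard the response.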


Posted by Dion Almaer at 9:24 am


18 Comments »


I agree with the continued fight to get a standard way of making cross-domain communication available to JavaScript. However, all these standards that keep flying back and forth are only now starting to creep into the bleeding edge of the major browsers, and even still, no one’s agreed on how it should actually work. FF3 has an implementation, and so of course IE has to break that and put in a different one. But what about the millions of users who will still be on FF2 and IE7 for at least a year or two more, if not longer? Do we have to keep relying on ugly and insecure methods like IFRAME proxies or SCRIPT tag injection?
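(For anyone who hasn’t run into it, the SCRIPT tag injection trick looks roughly like the sketch below. The URL and callback name are invented for illustration; the real problem is that whatever the remote server returns runs as code in your page.)

// SCRIPT tag injection (JSONP-style): works in nearly every browser,
// but the remote response executes as script in your page, so you must
// trust the other site completely.
function fetchCrossDomain(url, callbackName) {
  var script = document.createElement("script");
  // the remote service is expected to respond with: callbackName({...});
  script.src = url + (url.indexOf("?") >= 0 ? "&" : "?") + "callback=" + callbackName;
  document.getElementsByTagName("head")[0].appendChild(script);
}

window.handleResults = function (data) {
  alert("items received: " + data.items.length);
};

fetchCrossDomain("http://other-site.example.com/search", "handleResults");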

May I humbly suggest that there *are* some viable solutions that *do* implement similar security and still give that cross-domain communication ability. And they will work in (almost) *any* browser, definitely far more widespread than the bleeding edge stuff that’s being bandied about now.

For instance: flXHR (http://flxhr.flensed.com) – an API-compatible drop-in replacement for native XHR, with powerful new features like cross-domain communication (with server policy authorization), robust error callback handling, timeouts, and easy configuration/integration/adaptation.

The beauty of this solution is that you can drop in flXHR and replace the usage of the native XHR, and all other existing code needn’t be touched at all, because it works the same. This makes it easy to adapt into all manner of browsers and JS frameworks. See the demos (especially #7-12) here: http://flxhr.flensed.com/demo.php#demo7
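The general shape of that drop-in approach is something like the following sketch. The constructor name is just a stand-in for whatever wrapper you use (check the flXHR docs for its actual instantiation syntax):

// Any object that exposes the XHR interface (open/send/onreadystatechange/
// responseText/...) can stand in for the native one, so existing
// XHR-based code keeps working unchanged.
function createRequest() {
  if (window.CrossDomainXHR) {        // stand-in name for a wrapper such as flXHR
    return new window.CrossDomainXHR();
  }
  return new XMLHttpRequest();        // otherwise fall back to the native object
}

var req = createRequest();
req.open("GET", "http://api.example.com/feed", true);
req.onreadystatechange = function () {
  if (req.readyState === 4 && req.status === 200) {
    alert(req.responseText);
  }
};
req.send(null);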

Comment by shadedecho — June 27, 2008

It seems to me people will fight any idea Microsoft submits simply because it’s Microsoft. One thing I’ve learned over the years is to listen with enough humility to accept that there might be something of value in a suggestion such as Microsoft’s.

The question I have to ask is, “What is specifically wrong with having a ‘technically’ more secure solution despite the fact there isn’t a ‘viable attack scenario’ at the moment?”

This is a legitimate question. I want to know what the overall negative effect to the ‘draft’ this ‘proposed’ revision would have on the overall goal of the standard.

Comment by tysonofyork — June 27, 2008

@tysonofyork,

I think that people are very much listening. Jonas’ message to the thread was in a great tone if you ask me.

It would be nice to hear the concerns a touch earlier though, don’t you think? Instead of two years into the process?

Jonas says that Microsoft has a lot of great experience. We all want them to be active in the community.

Cheers,

Dion

Comment by Dion Almaer — June 27, 2008

Right now my Firefox 3 doesn’t allow me to access pages from other domains through Ajax. If it were allowed, I could have easily created my own search engine which would actually be Google in disguise. What do you think of that?

Comment by ideamonk — June 27, 2008

@Dion,
Yea. :) I suppose you are right. The overall tone from Jonas was generally a good one. My apologies.

Set me straight if I am wrong in thinking this, but isn’t it a draft? I take that to mean it’s not finished; that it’s open for revision.

I envision an open community where, even if you are two years late to the table, there’s an open seat waiting for you.

Maybe I’m just naive and crazy.

Comment by tysonofyork — June 27, 2008

As glad as I am that they’re drafting this… I wonder when other “drafts” will be set in stone. See CSS 2.0, for example.

I think the browsers should collaborate and just skip W3C. Then again, pigs can’t fly.

Comment by ibolmo — June 27, 2008

Who really needs another ‘World vs. Microsoft’ debate? I thought you guys post articles about video games on Fridays!

Comment by WillPeavy — June 27, 2008

ibolmo: I believe the current thinking of the W3C is that standards shouldn’t be published as complete until they have been adopted and implemented by the majority of user agents. That’s because problems and ambiguities often don’t show up until people try to implement and use the technologies.

The idea is that just because a standard is in draft, that doesn’t stop user agents providing experimental implementations. That’s what Mozilla, Opera and Webkit have been doing with CSS 3 and HTML 5, for example. The lessons learned from these experiments can then be fed back into the draft to improve it. When the draft and implementations have reached an appropriate level of maturity, the draft can be published as complete, and the implementations switched to non-experimental status.

The problem with this approach is that it allows Microsoft, by deliberately misunderstanding it, to respond to requests to support things like CSS 3 by saying “We don’t have to support it, because it’s not a published standard”. They pick and choose a few simple things to implement, so as to maintain the impression of moving forward to address standards, whilst avoiding the more advanced things like Canvas in favour of pushing their own technologies such as Silverlight. Even when CSS 3 and HTML 5 finally are published as recommendations, they can still say they need a few years to develop an implementation, because unlike the other browser vendors they haven’t bothered to develop experimental implementations during the draft phase.

Also, the browser vendors did collaborate to skip the W3C. Apple, Mozilla and Opera formed the WHATWG and developed HTML 5 due to their concerns over the direction the W3C was pushing XHTML 2. It worked too, as HTML 5 has now been accepted as a proposed standard and has gained quite a lot of traction already.

Comment by Amtiskaw — June 27, 2008

@Amtiskaw I’m not disagreeing with you, nor do I think you’re disagreeing with me. I think we both understand the situation. I am just criticizing that the W3C doesn’t push/force Microsoft to adapt (but then again, can anyone do this?). By the way, I think Silverlight is a Flash competitor, not a Canvas competitor. They have used VML against SVG and Canvas, thus far. But I do agree with your statement.

Comment by ibolmo — June 28, 2008

@Amtiskaw
Great insight…
I think there can be only two possible reasons why MSFT all of a sudden starts to care about Open Web standards. Either they have figured that they cannot fight the entire rest of the world alone and they are looking to “get in” on the Open Web thing since they realize Silverlight is going to flop. In which case this is great news :)
OR they are trying to FUD the Open Web standards hoping people will flock to the more “secure solution” – read; Silverlight… :(
I hope this is #1, though I fear and believe it’s #2…

Comment by polterguy — June 28, 2008

ibolmo: Like you said, the W3C has no real power to force Microsoft (or anyone) to do anything. The standards it publishes are just recommendations. The best way to force Microsoft into adopting standards is to reduce its domination of the browser market. By getting about 25% of the market, Firefox, Safari and Opera are now at least able to keep Microsoft honest, and stop them heading completely off in a proprietary direction. However, Microsoft’s majority share still allows them to dictate the pace of advancement and essentially gives them a veto over which standards get used in the real world (which is what they’re trying to do with CS-XHR). If Mozilla, Apple, etc, can increase their market share further, this might change.

Silverlight’s main competitor is certainly Flash, but the bigger issue they’re fighting over is the future of web applications, and that’s where SVG and Canvas come in. Right now, if you want to develop graphically intensive internet apps your primary choices are two non-open systems: Flash and Silverlight. But what if you want to use only open technologies? Your only option is to use a hodge-podge of things like SVG, VML, Canvas, exCanvas, etc, and make a lot of effort to work around browser incompatibilities. This is possible for big companies like Google, but for smaller developers it’s not really viable. However, imagine if every browser were to support HTML 5, SVG, Canvas and ECMAScript 4. Now you’ve got a powerful cross-browser, cross-platform toolset that could compete with Flash and Silverlight, all using open standards.

polterguy: Actually I believe Microsoft’s renewed interest in web standards is down to two things: Firstly, they realised the sheer amount of anger amongst many web developers over the poor quality of Internet Explorer. This wasn’t just the usual Free Software advocates, who Microsoft know they’ll never win around; it was ordinary 9-5 software developers who got incredibly frustrated dealing with the bugs and inconsistencies of IE in their daily jobs. This anti-MS feeling was, and still is, dangerous for them, as people who are pissed off with you or your products tend to go to competitors, like Flash, if they have a choice.

Secondly, they’ve been investigated and fined heavily by the European Commission for anti-competitive behaviour. Unlike the US Department of Justice, the EC has shown it has the teeth, and more importantly the will, to bring MS to book over its practices. In order to head off the threat, they are making a lot of noise about supporting standards and interoperability. This is good news, but don’t count on it lasting forever. You can be sure that behind the scenes, Microsoft is lobbying furiously to increase its influence over European politicians, so it can use them to neutralise the EC. This is exactly what they did to deflect the DOJ, and it worked. If they succeed, you can bet that their supposed conversion to open standards will disappear very quickly.

Comment by Amtiskaw — June 28, 2008

It was not my intention to start a flame war. Not at all! Sorry guys. We spend so much time pointing the finger, blaming this company and that, and the only thing it accomplishes is division and misdirection. Shouldn’t we be focusing on the impact of the solution, not who submitted it? My question was to know the impact on the standard itself, not on the other companies that have already implemented a draft. I don’t care that it was Microsoft. Let’s look at the actual proposition and critique it. Let’s look at it without bias and see if it’s worthy of addition or not.

Comment by tysonofyork — June 28, 2008

@Amtiskaw: I disagree that it’s not possible for the “regular joe” to use canvas. If you include excanvas, and just pay a little attention to the lowest common denominator, it’s quite straightforward to use canvas in a cross-browser fashion. So far I’ve written two google gadgets that use canvas (with excanvas). I’ll admit the first one was a learning experience, and it does degrade in IE, but the second one renders identically (and fast) in all browsers, and this didn’t require a lot of browser-specific debugging. I’m a big flash fan, but I’ve come away from the canvas experience concluding that for basic vector graphics (animated charts and the like), you should use canvas, not flash or silverlight.
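The basic pattern looks roughly like this (a minimal sketch; excanvas.js is assumed to be included for IE, usually via a conditional comment):

// Standard canvas drawing; with excanvas.js loaded in IE, the same 2D
// API calls are emulated on top of VML.
var canvas = document.createElement("canvas");
canvas.width = 150;
canvas.height = 100;
document.body.appendChild(canvas);
if (window.G_vmlCanvasManager) {
  // excanvas only auto-initialises canvases present in the markup at load
  // time, so dynamically created ones are handed to it explicitly
  canvas = G_vmlCanvasManager.initElement(canvas);
}
var ctx = canvas.getContext("2d");
ctx.fillStyle = "#3366cc";
ctx.fillRect(10, 10, 120, 60);
ctx.strokeStyle = "#333333";
ctx.beginPath();
ctx.moveTo(10, 90);
ctx.lineTo(130, 90);
ctx.stroke();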

Comment by Joeri — June 29, 2008

@Joeri, the biggest problem is that Canvas, SVG, and VML were not meant to be animated. This is even more true when you emulate Canvas on IE with VML (DOM gets huge very easily if you’re not careful).

Some of us are more fortunate with our computer specs, but be careful with some of your audience who may not be as fortunate. Of course, this is a small case, but nevertheless be considerate.

I also agree that working with excanvas is not difficult. The only problem is that the emulation can only go so far. VML has its limitations, so one has to become creative and use other Microsoft proprietary features to make things work (their CSS filter attribute, for instance).

Comment by ibolmo — June 29, 2008

I’d rather have the browsers implement something that is not standard than nothing at all. It sucks that libraries have to clean up the mess. But, at least there is something to clean up.

Comment by JustinMeyer — June 30, 2008

Ajaxian got hacked yet again… hurray spam hackers

Comment by Jon Hartmann — June 30, 2008

I think Microsoft is on the right side of this one. We should be asking the other browser makers to adopt XDR just as they adopted XHR. XDR is a lot smarter in its design than XHR. The Web Applications Working Group is taking a familiar but bad design and making it worse.

There is clearly a lot of resentment over Microsoft’s past behavior. But looking just at the merits of XDR, it gets enough things right that we should go forward with it. It is a good thing.
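For reference, the XDR object in the IE8 beta is used roughly as in the sketch below: an anonymous GET of public data, with no cookies or custom headers, readable only if the server opts in with a response header (XDomainRequestAllowed in the beta). The URL is made up.

// IE8 XDomainRequest: cross-domain GET with no credentials and no
// custom headers; the data is only exposed if the server opts in.
if (window.XDomainRequest) {
  var xdr = new XDomainRequest();
  xdr.onload = function () {
    alert(xdr.responseText);
  };
  xdr.onerror = function () {
    alert("request was refused or failed");
  };
  xdr.open("GET", "http://api.example.com/public/data");
  xdr.send();
}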

Comment by crock — June 30, 2008

I think XMLHttpRequest for FF/other browsers and ActiveXObject for Microsoft exist only to keep any one party from having a monopoly.

Hence, we have to accept them both, as we generally do. In this context, no one should depend on browser-specific code generation or development.

Just think about the future :)

Comment by Ajaxester — July 11, 2008
