Thursday, July 13th, 2006

Two-Way Web: Can You Stream In Both Directions?

Category: Comet, Remoting

Comet is mostly considered a server-to-browser thing, but how about a permanent connection in the opposite direction, from browser to server? I’ve been talking about this on my blog and received some interesting thoughts from Alex Russell.

There are two key issues:

(1) The server needs to start outputting its response before the incoming request has finished. With a specialised server, this problem could be overcome.

(2) (More serious, as we can’t control the browser.) The browser would need to upload data in a continuous stream. You can do it with Flash/Java, but I can’t see how to do it with standard JS/HTML. If you use XHR, you’re going to call send() and wave goodbye to the entire request…there’s no support for sequencing it. The same goes if you submit a regular form, change an IFrame’s source, etc. Even if you could somehow delay the reading of content so it’s not immediately uploaded, the browser would probably end up not sending anything at all, as it would be waiting to fill up a packet.
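Since send() ships the whole body at once, the closest plain JS can get is the workaround Alex describes below: flushing a queue of small messages as separate, minimal POSTs. A minimal sketch, with all names and the newline framing purely illustrative:

```javascript
// Sketch: XHR's send() transmits the entire body in one go, so we
// emulate a client-to-server "stream" by queueing small messages and
// sending each batch as its own POST. Names here are illustrative.
var queue = [];

function enqueue(msg) {
  queue.push(msg);
}

// Join pending messages into one small, newline-delimited request body.
function drainQueue() {
  var body = queue.join("\n");
  queue = [];
  return body;
}

function flush(url) {
  if (queue.length === 0) return;
  var xhr = new XMLHttpRequest();
  xhr.open("POST", url, true);
  xhr.send(drainQueue()); // the whole body goes out here; nothing can be appended later
}
```

Calling flush() on a short timer approximates a continuous upload, at the cost of one request per batch.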

Now I’ve seen various people mention the possibility of HTTP keep-alive, but I’ve never actually seen any concrete demos or techniques to take advantage of it from a script. So if you know of any …

Anyway, Alex Russell says it’s probably not possible, but we can get around it anyway:

So I’ve spent some time investigating this (as you might expect), and at
the end of the day there’s not much to be done aside from using Flash
and their XMLSocket interface. That’s an obvious possibility given the
high-performance Flash communication infrastructure we have in Dojo.
Doing bi-directional HTTP probably won’t happen, though, but I don’t
think that’s cause for despair. In my tests, we can get really good
(relative) performance out of distinct HTTP requests so long as the
content of the request is kept to a minimum and the server can process
the connection fast enough. HTTP keepalive exists at a level somewhat
below what’s currently exposed to browsers, so if the client and server
support it, frequent requests through stock XHR objects may very well be
using it anyway. We’ll have to do some significant testing to determine
what conjunctions of servers/clients might do this, however.

As an interesting side note, he also pointed to some work going on to build an open Comet protocol.

Posted by Michael Mahemoff at 5:42 pm




Using such a streaming technology would break the very standard we are working with every day: HTTP. HTTP does not allow the server to respond before it has received the full request, and in my opinion Comet already stretches the standard too much, although I don’t see any good Server Side Push alternative right now…
If you really have to do fancy stuff like keeping a full-duplex connection open, JS just isn’t the right choice for you.

Christian ‘Snyke’ Decker

Comment by Snyke — July 13, 2006

It seems to me that this is another case of looking at everything as a nail, now that the Ajax hammer has been discovered.

Really, the browser environment doesn’t need continuous streaming of data in both directions. One of the greatest benefits of asynchronously sending pieces of data is the lowered data throughput required (compared to sending entire pages every time new data is requested). If you want streaming duplex connections, use a different environment, such as a custom application or, as already mentioned, Flash or Java. Trying to wedge advanced client/server functionality into JS/HTML is more than just asking too much of the client-pull based technology; it could potentially turn web users away from sites that make extensive use of JavaScript.

I’m not saying that persistent two-way network communication doesn’t have its place — I’m just saying that JS/HTML isn’t that place. Furthermore, it’s bad design (IMHO) to violate the Model-View-Controller separation that you have with a JS frontend and a dynamic-page/SQL-DB backend. Throwing a persistent connection into JS/HTML implies putting more of the model into the JS frontend than necessary.

Comment by kbdamm — July 13, 2006

someone fix this comment code!! download a .php file during submission?? why is this still doing that?

Comment by kbdamm — July 13, 2006

Agree with Snyke; it seems that in a web environment a full-duplex connection is mostly not required. Where there is a special requirement, it can be implemented with JS or Flash or QuickTime.

Comment by Charlie Cheng — July 13, 2006

The first browser to support Comet-like behaviour (server -> client, client -> server) would get a subset of websites being designed specifically for that browser; or some functionality on the site would be only for users of that browser. Perhaps we should promote our requirements to the browser makers in these terms?

Comment by Dr Nic — July 14, 2006

Sorry guys, I can’t buy the argument that the browser “doesn’t need” this. I can think of many applications, so you can say “can’t” or “shouldn’t” do this, but if we worked according to what browsers and HTTP were actually designed for, Ajax – and even more so, Comet – wouldn’t exist.

A lot of theory about the Right way to do things has to go out the window when you have zero control over which clients people are using.

Dr Nic, yes, it would be good to see browser support. It wouldn’t actually have to introduce any new protocols because, as Alex pointed out, it would be possible for the browser to simply use keep-alive under the covers when it notices there are frequent requests. That’s not as good as proper duplex, because you have one connection in each direction, but it still gives you low latency.

Comment by Michael Mahemoff — July 14, 2006

good luck *sigh*

Comment by m3nt0r — July 14, 2006

I think the question about if we should or shouldn’t do this is not just a matter of boldness or need or advancement. Is more about if it’s worth doing it this way.

I mean, HTTP was designed with certain goals in mind. It’s ok to push the boundaries of the protocol to find new ways of solving problems. Sure. But what if our goals differ so much from HTTP’s that it would be better to use something else, to define a new protocol with these new goals in sight? (Or maybe not a completely new protocol but an extension to HTTP)

Ajax is not that far from how HTTP is supposed to work. But permanent connections? Well, maybe those are.

Comment by Gonzalo — July 14, 2006

What’s the difference between what you are talking about in the post above and what mod-pubsub and KnowNow do?

(Other than mod-pubsub seems to be dormant…)

Comment by Bob Haugen — July 14, 2006

I can’t wait for developers to keep one (at least) connection open for each web request. I never liked my servers anyway. May they burn in hell.

Seriously, if you want to try and hack the protocol to do what you “need” it to do, go ahead. Push for an extension of the protocol, and wait 10 years for it to be implemented, and 5 more years for a browser to support it. Or use Flash, and shut up.

Comment by Dan — July 14, 2006

Hey Dan,

Keepalive, while not well supported these days, has been part of a standards-track RFC since ’99. The flash option is, unfortunately, very brittle. Just getting reasonable performance across the JS/Flash boundary in a production environment is non-trivial. Debugging it is even worse.

As for “not liking your web servers”, this is exactly why we’re using perlbal and Twisted Python for cometd. Your *current* web server might not be able to handle tons of open connections, but that’s a temporary failure of optimization. OSes have been upgraded underneath the web stack to support these kinds of apps (Windows, Linux, and Solaris can all handle it)…the only things left to upgrade are the web servers and the contracts they make with the application tier.

So you’re right. It *does* take a long time to upgrade the web’s infrastructure to handle this kind of workload…but the upgrade is almost done. We’re much further along this path than most people recognize.


Comment by Alex Russell — July 16, 2006

[…] As was briefly mentioned in this previous post, there’s a framework in development for those users out there looking to the skies and wanting to use Comet – Cometd. Cometd is a scalable HTTP-based event routing bus that uses a push technology pattern known as Comet. […]

Pingback by Testing The Web Dot Com » Blog Archive » Cometd: Bringing Comet to the Masses — July 17, 2006

A persistent client-to-server connection is not as important as the server-to-client one, for one main reason: the client can already send a message to the server whenever it wants, but for the server to send a message to the client you need something different (Comet or whatever you want to name it). What are you trying to solve by having a persistent client-to-server socket? Performance? Have you actually hit a performance problem with just sending individual messages? Under the covers it may well use keep-alive anyway, as long as HTTP proxies aren’t getting in the way too much.

Just to add to the ideas that fall into the category of ‘will probably never work in just JavaScript’: at the protocol level, HTTP supports something called pipelining, which is sending subsequent requests on the same keep-alive socket before the previous response has been sent/received. In theory, if you could get things to play nicely, you could have a single HTTP socket doing bi-directional messaging – the client sends new HTTP requests on the same socket while the server is still responding to the first, in a typical Comet ‘hanging GET’ style. The server could either continue sending the first response (modifying the content based on the other requests received) or, on receiving a new request, end the first response and start sending a new one. I realise it probably won’t work, due to any kind of keep-alive requiring an HTTP response to have a content-length defined. I haven’t tried any of this, since it’s just not practical once you factor actual browsers and proxies into the mix.
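For illustration, this is roughly what pipelining looks like on the wire: several requests written back-to-back on one keep-alive socket before any response has come back. Browsers don’t expose this to scripts; the function below (a hypothetical sketch, not a working client) just builds the raw bytes:

```javascript
// Sketch: construct the raw bytes of several pipelined HTTP/1.1 GET
// requests on one keep-alive connection. Purely illustrative; a real
// client would write this to a socket and then parse the responses
// back in order.
function buildPipelined(host, paths) {
  return paths.map(function (path) {
    return "GET " + path + " HTTP/1.1\r\n" +
           "Host: " + host + "\r\n" +
           "Connection: keep-alive\r\n" +
           "\r\n";
  }).join("");
}
```

All the requests go out immediately; the responses must come back in the same order, which is one of the reasons pipelining interacts badly with proxies.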

Comment by Martin Tyler — July 18, 2006

What’s the benefit of a persistent client->server connection? The reason the server->client messaging connection needs to be persistent is that the connection direction is client->server, so to get true low-latency message notification you need a persistent connection.
The same is not true in the other direction – the server is listening for connections all the time, and you can create a new socket for client->server messaging. Obviously keep-alive can help with minimizing open sockets (to a minimum of 2 per client if near-continuous bi-directional messaging is needed).
This is the model used by Caplin’s Liberator product which was designed from the ground-up for comet style communication. It can easily handle 10,000 concurrent clients…

Comment by Patrick Myles — August 8, 2006

I have implemented full-duplex client-server and server-client communication in JavaScript over HTTP in Anyterm, where it is used to provide a “terminal emulator in a web page”. When I first thought of the idea it felt like a hack and I guessed that it would break browsers and servers, but in practice it turns out to work reasonably well. The limit of two connections between any client and server suits it exactly.

Client-to-server data is sent using regular HTTP requests. The corresponding HTTP responses are empty. Server-to-client data is sent using HTTP responses. When the client gets a response it returns a request, which is empty. At start-of-day the client has to send an initial empty request to “prime” the system.

If there is no activity on the server-to-client channel for a while it is necessary to avoid timeouts by sending an empty message.
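The alternation Phil describes can be sketched as two tiny helpers (hypothetical names, not Anyterm’s actual API): client-to-server data rides in request bodies answered by empty responses, and server-to-client data rides in response bodies answered by empty follow-up requests:

```javascript
// Sketch of the half-duplex alternation described above. All names
// are illustrative, not Anyterm's real interface.

// Client-to-server data goes in the next request body; if there is
// nothing to send, an empty "primed" request keeps a response channel
// open for the server.
function nextRequestBody(pendingInput) {
  return pendingInput.join("");
}

// Server-to-client data arrives in the response body; an empty body is
// just the acknowledgement of an upload (or a keep-alive ping).
function applyResponse(body, screenLines) {
  if (body.length > 0) screenLines.push(body);
  return screenLines;
}
```

In use, the client loops: send nextRequestBody(), wait for the response, call applyResponse(), and immediately issue the next (possibly empty) request, which is exactly the “prime the system” step Phil mentions.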

I’m now using the same technique in my JavaScript webmail application, Decimail, where it’s used to communicate a new-mail notification from the server to the browser without the need for polling.

Yes, it feels a bit hacky, but it does work.

Comment by Phil Endecott — August 31, 2006
