Thursday, September 15th, 2005

Ajaxian Fire and Forget Pattern

Category: Editorial

Have you ever had a use case where you wanted to fire off an asynchronous request as a “fire and forget”? You didn’t care about hearing back from the server (not even to hear that all is well).

We had a use case like this recently, and ran into some interesting issues.

The root problem in our case was that a service we were hitting via Ajax was insanely slow. A number of side effects came out of this, and it made us want to write this post about things to think about, such as:

  • Fork and Spin on the Server
  • Concurrent HTTP limits to a given domain
  • Abort, captain. Abort.

Keeping the Server Side Short

Normally with an Ajax request, you want to keep the response time as fast as possible, and you do not want to be waiting on the server for any work that doesn’t have to be done.

This is just as important as for a normal page request, if not MORE important, as you often have many Ajax requests going on, and the user expects them to be pretty much immediate. Their expectation is different from a page response, which they expect to wait for.

We have seen a fair number of projects where people ignore this.

If the work done on the server takes a while, you have a couple of options:

  • Good ole caching: If you take a second look at the response, you may find a place to cache the data. This caching can take place in many layers: in the client, so you don’t even have to make the request; on the server side in the web tier; or on the server side in the services tier. Each has pros and cons depending on your use case, and each needs proper architecture.
  • Queue work: If we are fortunate enough that the response only needs a subset of the server-side functionality, we can fork off work for the server to run separately from the request/response lifecycle. This fits fire-and-forget mode perfectly, as your response logic becomes “capture request info, validate if needed, put the info on a message queue, respond”. Now you have a request/response lifecycle that takes almost no time at all.
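The “capture, validate, enqueue, respond” shape above can be sketched as follows. This is a minimal illustration, not our production code: `handleRequest`, `workQueue`, and `processQueue` are made-up names, and a real system would use a proper message queue rather than an in-memory array.

```javascript
// The queue that decouples the request from the heavy work.
var workQueue = [];

// The request handler only validates and enqueues; it returns immediately,
// so the request/response lifecycle stays short.
function handleRequest(payload) {
  if (!payload || !payload.userId) {
    return { status: 400 };          // validate if needed
  }
  workQueue.push(payload);           // put the info on a queue
  return { status: 202 };           // respond right away
}

// A separate worker drains the queue on its own schedule,
// outside the request/response lifecycle.
function processQueue(doWork) {
  while (workQueue.length > 0) {
    doWork(workQueue.shift());
  }
}
```

The caller gets an answer as soon as the item is queued; whether the real work takes 10 ms or 10 seconds no longer affects the Ajax response time.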

Concurrent HTTP Requests

Various browsers and systems impose limits on the number of concurrent HTTP requests to a given domain. You may have noticed this on a site with a bunch of images to download: if you went wild telling the browser to save images in the background, you could see that only 3-5 images come down at a time, with the others queued until the first ones fully download.

To get around this problem, you can often cheat the system by spreading your requests across several hostnames that all resolve to the same server, since the limit is applied per host.
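A minimal sketch of that trick, with hypothetical hostnames (in practice each alias would be a DNS entry pointing at the same server, and the browser applies its per-host connection limit to each alias separately):

```javascript
// Made-up hostname aliases for illustration.
var hosts = ['images1.example.com', 'images2.example.com', 'images3.example.com'];
var nextHost = 0;

// Round-robin each request across the aliases so no single
// hostname hits the browser's concurrent-connection cap.
function shardedUrl(path) {
  var url = 'http://' + hosts[nextHost] + path;
  nextHost = (nextHost + 1) % hosts.length;
  return url;
}
```

With three aliases and a limit of, say, 4 connections per host, you can have up to 12 requests in flight instead of 4.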


Again, if you can fire and forget your Ajax request, you do not want to bother waiting for the full response from the server (if it is as slow as one of our cases). Ideally you would have been able to keep the server-side short and all would be well. But if you really have to take it to the next level, you can use a hack that we put into place.

We noticed that if we fired off a lot of XHR requests, we would max at 5, and the others died. After playing around with some low level items (with readyState and such) we found a cheeky solution that is working for us.

In our callback, we call abort on the xhr object right away.

xhr.onreadystatechange = function() { xhr.abort(); }

You may think that it would be cleaner to at least check for a readyState of 2 (so you know that send() has really happened), but in practice that didn’t work too well. In theory this means that we may lose some requests, but in testing all of them are making it to the server and it is working!
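Putting the pieces together, the hack can be wrapped in a small helper. This is a sketch, not a definitive implementation: `fireAndForget` is our own name, and it assumes an environment that provides XMLHttpRequest (any browser, in our case).

```javascript
// Fire off a request and immediately abandon it, freeing the
// connection slot instead of waiting for a slow response.
function fireAndForget(url, body) {
  var xhr = new XMLHttpRequest();
  // Abort as soon as the browser reports any state change;
  // we never read the response, so we never care about it.
  xhr.onreadystatechange = function () { xhr.abort(); };
  xhr.open('POST', url, true);
  xhr.send(body || null);
}
```

Because the callback aborts unconditionally, the caller must genuinely not care about the result, including errors; this is only appropriate for true fire-and-forget traffic.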

Posted by Dion Almaer at 2:13 pm




I’ve never heard of it, but am happy to see some XHR abort() function.

Comment by Chris Charlton — October 4, 2005


We are using Ajax to get data dynamically from the server side.

If an xmlhttp request is made while the previous xmlhttp request is still being processed, DB connections are not getting closed.
A couple of requests like this empties the DB connection pool.

Every time a new xmlhttp request is made, we abort the previous request, and we close the connection in a finally block. Still, connections are not getting closed.

This is our code:

//AJAX code in javascript
xmlHTTP.abort();
xmlHTTP.open('post', url, true);
xmlHTTP.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xmlHTTP.send(queryString);

//Java code
ReportLogic reportLogic = ReportLogic.getInstance(); //ReportLogic does the db query
Connection con = null;
List profilesList1 = null;
try {
    con = getConnection();
    profilesList1 = reportLogic.getReport();
} catch (Exception e) {
    return mapping.findForward("global-exception");
} finally {
    if (con != null) {
        try { con.close(); } catch (SQLException sqle) { /* nothing more we can do */ }
    }
}

Can anybody figure out the problem?

– Kiran M N

Comment by Kiran M N — October 6, 2005

The abort() function doesn’t work as expected. Read more:


Comment by Marius Zilenas — March 8, 2006

Leave a comment

You must be logged in to post a comment.