Thursday, November 10th, 2005

Scaling Connections for Ajax

Category: Java, Server

Most of the Ajax world is focused on the client side, and “ooooh ahhhh” effects :)

Greg Wilkins is sitting on the other side of the fence, thinking about the implications of a popular Ajax application on the server side.

Earlier, Greg described how Jetty 6 uses Continuations and javax.nio to limit the number of threads required to service Ajax traffic.

This time around he moves beyond threads, and talks about buffers.
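The thread-limiting idea from the earlier article rests on java.nio's Selector: one thread can service many connections instead of dedicating a thread to each. Here is a minimal stdlib sketch of that idea — using in-process pipes as stand-ins for client connections, not Jetty's actual Continuation API:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        // Two pipes stand in for two client connections.
        Pipe a = Pipe.open();
        Pipe b = Pipe.open();
        for (Pipe p : new Pipe[] { a, b }) {
            p.source().configureBlocking(false);
            p.source().register(selector, SelectionKey.OP_READ);
        }

        // The "clients" send data.
        a.sink().write(ByteBuffer.wrap("ping-a".getBytes()));
        b.sink().write(ByteBuffer.wrap("ping-b".getBytes()));

        // One thread services both connections: no thread-per-connection.
        int served = 0;
        ByteBuffer buf = ByteBuffer.allocate(64);
        while (served < 2) {
            selector.select();
            for (SelectionKey key : selector.selectedKeys()) {
                buf.clear();
                ((Pipe.SourceChannel) key.channel()).read(buf);
                served++;
            }
            selector.selectedKeys().clear();
        }
        System.out.println("served " + served + " connections on one thread");
    }
}
```

With thousands of mostly idle Ajax polling connections, this multiplexing is what keeps the thread count (and its stack memory) from exploding.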

  • Operating Systems & Connections: Choose your TCP/IP stack wisely. Remember when you used to buy a TCP/IP stack? :)
  • Connection Buffers: Significant memory can be consumed if a buffer is allocated per connection. Little memory is saved by simply shrinking the buffer size, which is a good reason to use generously sized buffers.
  • Split Buffers: Jetty 6 uses a split-buffer architecture and dynamic buffer allocation. An idle connection has no buffer allocated to it; once a request arrives, a small header buffer is allocated. Most requests have no content, so often this is the only buffer required. If the request has a little content, the header buffer is used for that content as well. Only if the received header indicates that the request content is too large for the header buffer is an additional, larger receive buffer allocated.
  • Gather writes: Because the response header and response content are held in different buffers, gather writes are used to combine them into a single write to the operating system. As efficient direct buffers are used, no additional data copying is needed to combine header and response into a single packet.
  • Direct File Buffers: Of course there will always be content larger than the buffers allocated, and if the content is large it is highly desirable to avoid copying the data into a buffer at all. For very large static content, Jetty 6 supports mapped file buffers, which can be passed directly to the gather write along with the header buffer for the ultimate in Java I/O speed.
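The last two points can be sketched with plain java.nio — this is an illustrative stand-in, not Jetty's actual code, and the file names and canned response are made up: a small direct header buffer plus a mapped file buffer are handed to the OS in one gathering write.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class GatherWriteDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical static file standing in for large response content.
        Path content = Files.createTempFile("content", ".html");
        Files.write(content, "<html>hello world</html>".getBytes(StandardCharsets.US_ASCII));

        // Map the file read-only: its bytes are never copied into a heap buffer.
        ByteBuffer body;
        try (FileChannel in = FileChannel.open(content, StandardOpenOption.READ)) {
            body = in.map(FileChannel.MapMode.READ_ONLY, 0, in.size());
        }

        // A small direct buffer holding just the response header.
        byte[] headerBytes = ("HTTP/1.1 200 OK\r\nContent-Length: "
                + body.remaining() + "\r\n\r\n").getBytes(StandardCharsets.US_ASCII);
        ByteBuffer header = ByteBuffer.allocateDirect(headerBytes.length);
        header.put(headerBytes).flip();

        // Gathering write: header and mapped body reach the OS in one call,
        // so no copy is needed to combine them (a file channel stands in for
        // the socket a real server would write to).
        Path out = Files.createTempFile("response", ".txt");
        try (FileChannel ch = FileChannel.open(out, StandardOpenOption.WRITE)) {
            ByteBuffer[] parts = { header, body };
            while (header.hasRemaining() || body.hasRemaining()) {
                ch.write(parts);
            }
        }
        System.out.println(new String(Files.readAllBytes(out), StandardCharsets.US_ASCII));
    }
}
```

The same `write(ByteBuffer[])` call works on a non-blocking `SocketChannel`, which is where the single-packet benefit actually shows up.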

Great to hear from server-side folk like Greg of Jetty on implications like these.

What implications have you come across?

Have you tweaked your server settings a lot to get that extra bit of performance out of the server side of your Ajax app?

Posted by Dion Almaer at 5:16 pm

3 Comments »


Almost a year ago we started on an application for live football for the largest online newspaper in Norway, VG Nett (http://www.vg.no). It uses AJAX calls to the server to identify new events in each of the games and updates parts of the page as a result. The service has been a huge success.

In regard to your scaling issues, we have used a cache layer to skim off the worst traffic. In the most popular Norwegian and English matches, there are up to 6000 hits per second (most of them from AJAX synchronization calls) pounding the cache layer. This is not a problem for the application itself, but it can be a problem from time to time for the caches, as they consume a lot of CPU handling this many hits.

The greatest issue with using AJAX has been the fact that IE caches JavaScript calls, so we have to append a changing parameter to each XMLHttp call to the server (e.g. http://url.com/?random=ABC). This makes the call bypass IE’s cache, but it also bypasses the caching layer in front of the back-end application, which can be a problem if not carefully designed for. Fortunately, IE only caches for 90 seconds, so you can use a loop of random params (e.g. A, then B, then C, then D, then A again, and so on).

The AJAX implementation has increased the number of requests but reduced the bandwidth used, as we only fetch new data, not all the CSS and HTML each time.

The J in AJAX also provided us with the tools to generate cool events such as blinking and a pling-sound when a team scores a goal.

The application resides on http://sport.vg.no/live/

Comment by Geir Berset — November 11, 2005

Geir,

Thanks for the response. Great to hear of the success on the application too.

Out of interest, what made you choose to beat the IE cache with the random extra param versus setting cache-busting headers on the server side?

Cheers,

Dion

Comment by Dion Almaer — November 11, 2005

Dion,

We could not use cache-busting headers on the server side, as they would penetrate not only the cache in IE but also the cache layer in front of the application. We still had to rely on the external cache layer for performance with so many external users.

Hope that answers your question.


Geir.

Comment by Geir Berset — November 14, 2005

Leave a comment

You must be logged in to post a comment.