Thursday, June 10th, 2010

Facebook has a BigPipe to smoke competitors on performance

Category: Facebook, Performance

Remember when you would make fun of Facebook for its poor performance? You would see 400 scripts loading, some of which served no purpose at all. That is all in the distant past now.

Makinde Adeagbo gave a great talk at JSConf about the copious amount of code the team was able to delete while speeding up the site. With folks like him and Tom Occhino on the case, you know good things are happening.

If you do a view source on the Facebook home page these days, you see a lot of this:

<script>big_pipe = new BigPipe(null, 4, null, true);</script>
<script>big_pipe.onPageletArrive({"id":"pagelet_intentional_stream","phase":1,"is_last":false,"append":false,"bootloadable":{"ufi-tracking-js":["F+B8D","CDYbm","A5j5z","3NVRu"],"UIIntentionalStreamRefresh":["F+B8D","CDYbm","EMOa3","zwScZ","fWhta","EzjZW"]},"css":["jFmkz","z9ULo","lShFv","bh3tE","1AZL5","OxGjK"],"js":["F+B8D","CDYbm","A5j5z","fWhta","uUXWA"],"resource_map":{"fWhta":{"name":"js\/a62kak05d08cgw8o.pkg.js","type":"js","permanent":false,"src":"http:\/\/\/rsrc.php\/z1AQ7\/hash\/qkma6pho.js"},"lShFv":{"name":"css\/sprite\/autogen\/e6h3iy.css","type":"css","permanent":false,"src":"http:\/\/\/rsrc.php\/zALI5\/hash\/cngu73tz.css"},"bh3tE":{"name":"css\/sprite\/autogen\/3jkv60.css","type":"css","permanent":false,"src":"http:\/\/\/rsrc.php\/z4M49\/hash\/7wet04gi.css"},"OxGjK":{"name":"css\/1b9p1ur0qpog8cgw.pkg.css","type":"css","permanent":true,"src":"http:\/\/\/rsrc.php\/zC6TL\/hash\/1quse983.css"},"3NVRu":{"name":"js\/ufi\/tracking.js","type":"js","permanent":false,"src":"http:\/\/\/rsrc.php\/z8CIM\/hash\/7c5lvnd6.js"},"EMOa3":{"name":"js\/lib\/util\/user_activity.js","type":"js","permanent":false,"src":"http:\/\/\/rsrc.php\/z2MJ2\/hash\/7q88hxyg.js"},"EzjZW":{"name":"js\/stream\/UIIntentionalStreamRefresh.js","type":"js","permanent":false,"src":"http:\/\/\/rsrc.php\/z7LZY\/hash\/5vjds43u.js"}},"requires":[],"provides":["pagelet_controller::home_intentional_stream"],"onload":["window.__UIControllerRegistry[\"c4c0ebcac26d1c478579b3\"] = new UIPagelet(\"c4c0ebcac26d1c478579b3\", \"\\\/pagelet\\\/home\\\/intentional_stream.php\", {\"is_multi_stream\":true,\"is_prefetch\":false,\"first_load\":null}, {});; ;","share_data={max_recipients:20}","window.__UIControllerRegistry[\"c4c0ebcac36a540af71b6d\"] = new UIIntentionalStream($(\"c4c0ebcac36a540af71b6d\"), \"nile\", 1276034077, 1276032692, 5, \"lf\", 10, 0, \"[]\", \"[]\", false, 300);;
//..
</script>
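The client library behind those calls is not public, but the payload shape above suggests roughly what it does. Here is a minimal sketch: only the name onPageletArrive and the payload fields ("id", "css", "js", "resource_map", "onload") come from Facebook's markup; everything else is an assumption about how such a client might be written.

```javascript
// Minimal sketch of a BigPipe-style client. Only the method name and
// payload fields come from the markup above; the internals are guesses.
function BigPipe() {
  this.arrived = []; // record of processed pagelets, for illustration
}

BigPipe.prototype.onPageletArrive = function (pagelet) {
  // Resolve the short resource ids ("jFmkz", ...) to real URLs
  // via the pagelet's resource_map.
  var map = pagelet.resource_map || {};
  var resolve = function (id) { return map[id] ? map[id].src : id; };
  var cssUrls = (pagelet.css || []).map(resolve);
  var jsUrls = (pagelet.js || []).map(resolve);

  // A real client would inject <link> tags for cssUrls, insert the
  // pagelet's HTML into the placeholder div matching pagelet.id, then
  // download jsUrls and run the "onload" snippets. Here we just record.
  this.arrived.push({ id: pagelet.id, css: cssUrls, js: jsUrls });
};
```

Note the ordering this implies: CSS is resolved and applied before the pagelet's JavaScript, so content becomes visible as early as possible.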

This is BigPipe, and it is explained by this Facebook Note:

To exploit the parallelism between web server and browser, BigPipe first breaks web pages into multiple chunks called pagelets. Just as a pipelining microprocessor divides an instruction’s life cycle into multiple stages (such as “instruction fetch”, “instruction decode”, “execution”, “register write back” etc.), BigPipe breaks the page generation process into several stages:

  1. Request parsing: web server parses and sanity checks the HTTP request.
  2. Data fetching: web server fetches data from storage tier.
  3. Markup generation: web server generates HTML markup for the response.
  4. Network transport: the response is transferred from web server to browser.
  5. CSS downloading: browser downloads CSS required by the page.
  6. DOM tree construction and CSS styling: browser constructs DOM tree of the document, and then applies CSS rules on it.
  7. JavaScript downloading: browser downloads JavaScript resources referenced by the page.
  8. JavaScript execution: browser executes JavaScript code of the page.

The first three stages are executed by the web server, and the last four stages are executed by the browser. Each pagelet must go through all these stages sequentially, but BigPipe enables several pagelets to be executed simultaneously in different stages.
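The overlap is the whole trick: while the browser is styling one pagelet, the server can still be generating the next. A rough server-side sketch (purely illustrative; Facebook's implementation is PHP and not public) that flushes each pagelet's script chunk the moment its data fetch finishes, instead of buffering the whole page behind the slowest pagelet:

```javascript
// Illustrative only: write each pagelet chunk as soon as its (simulated)
// data fetch completes, rather than waiting for the slowest pagelet.
function servePagelets(pagelets, write) {
  // pagelets: [{ id, fetch }] where fetch() returns a Promise of HTML.
  // write: callback receiving each chunk as it is ready (e.g. res.write).
  var pending = pagelets.map(function (p) {
    return p.fetch().then(function (html) {
      write(
        '<script>big_pipe.onPageletArrive(' +
        JSON.stringify({ id: p.id, content: html }) +
        ');</script>'
      );
    });
  });
  return Promise.all(pending); // resolves once every pagelet has flushed
}
```

With this shape, a news-feed pagelet that takes 300ms to generate does not delay a navigation pagelet that is ready in 20ms.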

The note's accompanying diagram uses Facebook’s home page as an example to demonstrate how web pages are decomposed into pagelets. The home page consists of several pagelets: “composer pagelet”, “navigation pagelet”, “news feed pagelet”, “request box pagelet”, “ads pagelet”, “friend suggestion box”, “connection box”, etc. Each of them is independent of the others: when the “navigation pagelet” is displayed to the user, the “news feed pagelet” can still be being generated at the server.

In BigPipe, the life cycle of a user request is the following: The browser sends an HTTP request to the web server. After receiving the HTTP request and performing some sanity checks on it, the web server immediately sends back an unclosed HTML document that includes a <head> tag and the first part of the <body> tag. The <head> tag includes BigPipe’s JavaScript library to interpret pagelet responses to be received later. In the <body> tag, there is a template that specifies the logical structure of the page and the placeholders for pagelets.
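That “unclosed document” is just ordinary chunked HTTP: the server writes the head and a body full of empty placeholder divs first, and only closes the document after the last pagelet has been flushed. A sketch of that first flush (the skeleton, script path, and placeholder markup are my assumptions, not Facebook’s actual output):

```javascript
// Sketch of BigPipe's first flush: an intentionally unclosed HTML
// document. Skeleton and paths are assumed, not Facebook's markup.
function initialFlush(pageletIds) {
  return (
    '<html><head>' +
    // Library that will handle the big_pipe.onPageletArrive calls:
    '<script src="/js/bigpipe.js"></script>' +
    '</head><body>' +
    // One empty placeholder div per pagelet, filled in as chunks arrive:
    pageletIds
      .map(function (id) { return '<div id="' + id + '"></div>'; })
      .join('')
    // Deliberately no </body></html>: pagelet <script> chunks follow,
    // and the document is closed only after the last one flushes.
  );
}
```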

Performance results

The graph below shows performance data comparing the 75th-percentile user-perceived latency for seeing the most important content on a page (e.g., the news feed is considered the most important content on the Facebook home page) under the traditional model and under BigPipe. The data was collected by loading the Facebook home page 50 times in browsers with a cold cache. The graph shows that BigPipe cuts user-perceived latency roughly in half in most browsers.

Posted by Dion Almaer at 6:10 am




I wonder what the reason is for Firefox being so slow in both cases. Obviously its JavaScript engine is faster than IE’s, so is it the HTML parser that is too slow, or the styling engine, or…?

Comment by Joeri — June 10, 2010

“After receiving the HTTP request and performing some sanity check on it, web server immediately sends back an unclosed HTML document that includes an HTML tag and the first part of the tag.”

This is insane. Editors, please fix your escape mechanisms so that we don’t need to view the source to see that here you first talk about the ‘head’ tag and then the ‘body’ tag.

Comment by nbr — June 10, 2010

Using as a test, I am not seeing anything impressive, and Firefox is not any slower than Chrome (4GHz i3, so there is near-zero render overhead). In fact, with tracing on, Chrome is 100–150ms slower than Firefox. They should state the specs of the PC they tested on.

They do however have a MASSIVE number of external objects loading on each page, including over 20 external javascripts, 10 CSS, tons of images. Kinda insane. So I guess they need every speedup trick possible. They also seem to be using lazy image loading on some images.

Comment by ck2 — June 10, 2010

my god, this is awful. I don’t see why we should be amazed.

Comment by darkoromanov — June 10, 2010

So, they’re looking for exploits that they can use to fix things instead of going to the root of the problem and fixing their bloated frontend code? This just seems inherently wrong.

Comment by PaulArmstrong — June 10, 2010

I didn’t notice any better performance, just trouble editing my friends list. I have around 3,700 friends and want to delete around 3,000 of them, but it was impossible to look anything up or go to the next page; it keeps returning the same page again and again…
From time to time I get the same output as the previous page, and sometimes two pages, which is really bad.
Soooo, please fix BigPipe for paging, because that mess is really not acceptable. :)
Try it.

Comment by mimoccc — June 10, 2010

@Joeri I suspect FF simply doesn’t do as good a job at lookahead downloads (lookahead past script tags since a script tag technically needs to be executed before continuing to parse the page). IE8 does put stuff on the wire decently fast.

Comment by andersbe — June 10, 2010

@darkoromanov: BigPipe was in addition to simplifying the front-end code, as described in the linked talk by Makinde at JSConf:

Comment by schrep — June 10, 2010

Wow, no love for facebook ;-)
I’m not condoning fb’s privacy policies (or lack thereof), but these techniques seem pretty awesome for parallelizing loading of massive scale mashups. Perhaps one could use something like labjs to emulate bigpipe’s loading behavior?
If fb wants to earn some (sorely needed) community karma they should github bigpipe.js asap.

Comment by rdza — June 10, 2010

I did this test: it is easy to see how the page starts to download additional CSS and JS when it finishes flushing the pagelets

the keyword here is “flush”

Comment by martinborthiry — June 10, 2010

Isn’t that called “Flushing content early”? xD

Comment by SleepyCod — June 10, 2010

This is an interesting “solution” but I don’t see how it is useful outside of Facebook and may actually make things a bit worse because it allows you to not optimize your back-end.

It basically breaks down into 2 parts:

1 – They are now forced into a code pattern that requires progressive enhancement. Considering where they came from, this is huge (but not necessarily useful for anyone else).

2 – The parallelization of BigPipe itself which essentially hides a “slow back-end” problem. It just interleaves the responses from different modules into the base html itself. The opportunity in savings for any site here is just the time for the base HTML to load which doesn’t usually justify something this complex to solve unless you’re serving personalized (non-cacheable) data for as many people as they are.

Very cool technique and glad it solves a problem in their use case but it’s not nearly as game-changing as they make it out to be.

Now, if they could only fix the performance of the widgets everyone else embeds in their pages to integrate with them!!!

Comment by PatrickMeenan — June 11, 2010

I’ve just released our own BigPipe implementation at

All feedback is greatly appreciated.

Comment by garo — October 1, 2010

Is any open code of BigPipe available?

Comment by Friseur — March 4, 2011
