Nginx, SPDY, and Automattic

Yesterday, Valentin Bartenev, a developer at Nginx, Inc., announced SPDY support for the Nginx web server. SPDY is a next-generation networking protocol developed by Google and focused on making the web faster. More information on SPDY can be found on Wikipedia.

At Automattic, we have used Nginx since 2008. Since then, it has made its way into almost every piece of our web infrastructure. We use it for load balancing, image serving (via MogileFS), serving static and dynamic web content, and caching. In fact, we have almost 1000 servers running Nginx today, serving over 100,000 requests per second.
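For a sense of what those roles look like, here is a minimal sketch of an Nginx configuration covering load balancing, static serving, and caching. The server names, IPs, and paths are illustrative, not our production config:

```nginx
http {
    # Cache proxied responses on local disk (illustrative path and zone name)
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:64m max_size=1g;

    # Load balancing across a pool of backend web servers
    upstream web_backend {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;

        # Static content served directly from disk
        location /static/ {
            root /srv/www;
        }

        # Dynamic content proxied to the pool, with caching
        location / {
            proxy_pass http://web_backend;
            proxy_cache app_cache;
            proxy_cache_valid 200 5m;
        }
    }
}
```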

I met Andrew and Igor at WordCamp San Francisco in 2011. For the next six months, we discussed the best way for Automattic and Nginx, Inc. to work together. In December 2011, we agreed that Automattic would sponsor the development and integration of SPDY into Nginx. The only real requirement from our end was that the resulting code be released under an open source license so that others could benefit from all the hard work.

For the past six months, Valentin and others have been implementing SPDY support in Nginx, and for the past month or so, we have been continually testing SPDY, fixing bugs, and improving stability. Things are almost ready for production, and we hope to enable SPDY for all of our sites in the next few weeks. Today, this site is SPDY-enabled if you are using a recent version of Chrome or Firefox and accessing it over SSL. You can download the Chrome extension here and the one for Firefox here.
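With the SPDY patch applied, enabling the protocol is a small change to an existing SSL server block. Something like the following (certificate paths and server name are illustrative):

```nginx
server {
    # Adding 'spdy' to the listen directive enables the SPDY module
    # alongside SSL; clients without SPDY support fall back to HTTPS.
    listen 443 ssl spdy;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        root /srv/www;
    }
}
```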

Thanks to the Nginx team for all their hard work implementing SPDY, and thanks to all of my Automattic co-workers who helped us test SPDY.  I hope to post some real-world performance numbers in the next few weeks as we complete our SPDY deployment and gather more data. We are also looking forward to SPDY support being part of the official Nginx source in the near future.

“We’d like to say big thanks to the team at Automattic and especially to Pyry Hakulinen who has been great in helping us test and debug this first public version of SPDY module for nginx. Automattic is a great partner, and we will continue to work with Barry and his team on improvements to nginx and to nginx/SPDY in particular.”

Andrew Alexeev – Nginx, Inc.

Load Balancer Update

A while back, I posted about some testing we were doing of various software load balancers. We chose Pound and have been using it for the past 2-ish years. We started to run into some issues, however, so we began looking elsewhere. Some of these problems were:

  • Lack of true configuration reload support made managing our 20+ load balancers cumbersome.  We had a solution (hack) in place, but it was getting to be a pain.
  • When something would break on the backend and cause 20-50k connections to pile up, the thread creation would cause huge load spikes and sometimes render the servers useless.
  • As we started to push 700-1000 requests per second per load balancer, things seemed to slow down. It was hard to get quantitative data on this because page load times depend on so many things.

So…  A couple weeks ago we finished converting all our load balancers to Nginx.  We have been using Nginx for Gravatar for a few months and have been impressed by its performance, so moving over was the obvious next step.  Here is a graph that shows CPU usage before and after the switch.  Pretty impressive!

Before choosing Nginx, we looked at HAProxy, Perlbal, and LVS. Here are some of the reasons we chose Nginx:

  • Easy and flexible configuration (true config “reload” support has made my life easier)
  • Can also be used as a web server, which allows us to simplify our software stack (we are not using Nginx as a web server currently, but may switch at some point).
  • The only software we tested that could handle 8000 (live traffic, not benchmark) requests/second on a single server

We are currently using Nginx 0.6.29 with the upstream hash module, which gives us the static hashing we need to proxy to Varnish. We are regularly serving about 8-9k requests/second and about 1.2 Gbit/sec through a few Nginx instances and have plenty of room to grow!
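The static-hashing setup can be sketched roughly as below, using the third-party upstream hash module's `hash` directive so each request URI consistently maps to the same Varnish instance. The pool IPs and ports are hypothetical, not our actual topology:

```nginx
# Hash on the request URI so a given URL always hits the same
# Varnish cache, keeping each cache's hit rate high.
upstream varnish_pool {
    hash $request_uri;
    server 10.0.1.21:6081;
    server 10.0.1.22:6081;
    server 10.0.1.23:6081;
}

server {
    listen 80;

    location / {
        proxy_pass http://varnish_pool;
    }
}

# After editing, a graceful apply (no dropped connections) is:
#   nginx -t && nginx -s reload
```

The config "reload" mentioned above is what replaced our Pound workaround: Nginx validates the new configuration and swaps it in while existing connections finish on the old worker processes.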