WordPress.com DDoS Details

As you may have heard, on March 3rd and into the 4th, 2011, WordPress.com was targeted by a rather large distributed denial of service (DDoS) attack. I am part of the systems and infrastructure team at Automattic, and it is our team's responsibility to a) mitigate the attack, b) communicate status updates and details of the attack, and c) figure out how to better protect ourselves in the future. We are still working on the third part, but I wanted to share some details here.

One of our hosting partners, Peer1, provided us with these InMon graphs to help illustrate the timeline. What we saw was not a single attack, but 6 separate attacks beginning at 2:10AM PST on March 3rd. All of these attacks were directed at a single site hosted on WordPress.com's servers. The first graph shows the size of the attack in bits per second (bandwidth), and the second graph shows packets per second. The different colors represent source IP ranges.

The first 5 attacks caused minimal disruption to our infrastructure because they were smaller in size and shorter in duration. The largest attack began at 9:20AM PST and was mostly blocked by 10:20AM PST. The attacks were TCP floods directed at port 80 of our load balancers. These types of attacks try to fill the network links and overwhelm network routers, switches, and servers with "junk" packets, preventing legitimate requests from getting through.

The last TCP flood (the largest one on the graph) saturated the links of some of our providers and overwhelmed the core network routers in one of our data centers. In order to block the attack effectively, we had to work directly with our hosting partners and their Tier 1 bandwidth providers to filter the attacks upstream. This process took an hour or two.

Once the last attack was mitigated at around 10:20AM PST, we saw a lull in activity. On March 4th around 3AM PST, the attackers switched tactics. Rather than a TCP flood, they switched to an HTTP resource consumption attack. Enlisting a botnet consisting of thousands of compromised PCs, they made many thousands of simultaneous HTTP requests in an attempt to overwhelm our servers. The source IPs were completely different from the previous attacks, but mostly still from China. Fortunately for us, the WordPress.com grid harnesses over 3,600 CPU cores in our web tier alone, so we were able to quickly mitigate this attack and identify the target.
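Unlike a raw packet flood, the HTTP phase leaves evidence in the access logs, so identifying the target is essentially a log-aggregation problem. Here is a minimal sketch of the idea (the log format, file path, and hostnames are hypothetical sample data, not our actual setup): count requests per Host header and the flooded site floats to the top.

```shell
# Hypothetical sample access log (time, Host header, path) for illustration.
cat > /tmp/ddos-sample.log <<'EOF'
10:00:01 victim.example.com /
10:00:01 victim.example.com /
10:00:02 normal.example.com /about
10:00:02 victim.example.com /
EOF

# Count requests per Host header; the most-hammered vhost sorts first.
awk '{ count[$2]++ } END { for (h in count) print count[h], h }' \
    /tmp/ddos-sample.log | sort -rn | head -5
```

In practice you would run something like this against the live balancer logs and look for a vhost whose request rate is wildly out of proportion to its normal traffic.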

We see denial of service attacks every day on WordPress.com and 99.9% of them have no user impact. This type of attack made it difficult to initially determine the target, since the incoming DDoS traffic did not have any identifying information contained in the packets. WordPress.com hosts over 18 million sites, so finding the needle in the haystack is a challenge. This attack was large, in the 4-6 Gbit/sec range, but not the largest we have seen. For example, in 2008, we experienced a DDoS in the 8 Gbit/sec range.

While it is true that some attacks are politically motivated, contrary to our initial suspicions, we have no reason to believe this one was. We are big proponents of free speech and aim to provide a platform that supports that freedom. We even have dedicated infrastructure for sites under active attack. Some of these attacks last for months, but that dedicated infrastructure allows us to keep the targeted sites online without putting our other users at risk.

We also don’t put all of our eggs in one basket. WordPress.com alone has 24 load balancers in 3 different data centers that serve production traffic. These load balancers are deployed across different network segments and different IP ranges. As a result, some sites were only affected for a couple of minutes (when our provider’s core network infrastructure failed) throughout the duration of these attacks. We are working on ways to improve this segmentation even more.

If you have any questions, feel free to leave them in the comments and I will try to answer them.

Dell MD3000 Multipath on Debian

We are in the process of deploying some new infrastructure to store the 150+GB of new content (media only, not including text) uploaded to WordPress.com daily.

[Graph: WordPress.com data in GB]

After some searching and testing, we have decided to use the open source software MogileFS, developed in part by our friends at Six Apart. Our initial deployment is going to be 180TB of storage in a single data center, and we plan to expand this to include multiple data centers in early 2010. In order to get that amount of storage affordably, the options are limited. We thought about building Backblaze devices, but decided that the ongoing management of these in our hosting environment would be prohibitively complicated. We eventually settled on Dell’s PowerVault MD series. Our configuration consists of:

  • 4 x Dell R710 ( 32GB RAM/2 x Intel E5540/2 x 146GB SAS RAID 1)
  • 4 x Dell MD3000 (15 x 1TB 7200 RPM HDD each)
  • 8 x Dell MD1000 (15 x 1TB 7200 RPM HDD each)

Each Dell R710 is connected to a MD3000 and then 2 MD1000s are connected to each MD3000. The end result is 4 self-contained units, each providing 45TB of storage for a total of 180TB.

Illustration by Joe Rodriguez

Our proof of concept was deployed on a single Dell 2950 connected to a MD1000, and things worked relatively flawlessly. We could use all of our existing tools to monitor, manage, and configure the devices when needed. Little did I know the MD3000s would be such a pain 🙂 Since we are using MogileFS, which handles the distribution of files across various hosts and devices, we wanted these devices set up in what I thought was a relatively simple JBOD configuration. Each drive would be exported as a device to the OS, then we would mount 45 devices per machine and MogileFS would take care of the rest. Didn’t exactly work that way.
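The intended layout can be sketched in a few lines of shell. This is a hypothetical dry run (the device names, filesystem choice, and mount paths are illustrative, and the commands are printed rather than executed): each exported drive gets a filesystem and a MogileFS-style mount point.

```shell
# Hypothetical dry run of the JBOD layout: every exported drive gets a
# filesystem and a mount point like /var/mogdata/devN for MogileFS.
# Only three sample devices are shown; the real boxes would loop over
# all 45. Commands are echoed, not executed.
i=1
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    echo "mkfs.ext3 -q ${disk}1"
    echo "mkdir -p /var/mogdata/dev${i}"
    echo "mount ${disk}1 /var/mogdata/dev${i}"
    i=$((i + 1))
done
```

Drop the `echo`s to run it for real against a verified device list; the per-host device numbering is what MogileFS uses to address each disk.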

When the hardware was initially deployed to us, it was configured in a high-availability (HA) setup, with each controller on the MD3000 connected to a controller on the R710. In theory, if a controller fails, the storage is still accessible. The problem with this type of setup is that in order to make it work flawlessly, you need to use the Dell multi-path proxy and mpt drivers, not the ones provided by the Linux kernel, and Dell’s provided stuff doesn’t work on Debian. Initially, without multipath configured, some confusing stuff happens: we had 90 devices detected by the OS (/dev/sdb through /dev/sdcn), but every other device was unreachable. After some trial and error with various multipath configurations, and some help, I ended up with this:

apt-get install multipath-tools

Our multipath.conf:

defaults {
        getuid_callout "/lib/udev/scsi_id -g -u -s /block/%n"
        user_friendly_names on
}

devices {
        device {
                vendor DELL*
                product MD3000*
                path_grouping_policy failover
                getuid_callout "/lib/udev/scsi_id -g -u --device=/dev/%n"
                features "1 queue_if_no_path"
                path_checker rdac
                prio_callout "/sbin/mpath_prio_rdac /dev/%n"
                hardware_handler "1 rdac"
                failback immediate
        }
}

blacklist {
        device {
                vendor DELL.*
                product Universal.*
        }
        device {
                vendor DELL.*
                product Virtual.*
        }
}
multipath -F                        # flush any existing multipath device maps
multipath -v2                       # re-detect paths and build the maps
/etc/init.d/multipath-tools start   # start the path-checker daemon

This gave me a bunch of device names in /dev/mapper/* which I could access, partition, format, and mount. A few things to note:

  • user_friendly_names doesn’t seem to work. The devices were all still labeled by their WWID even with that option enabled
  • The status of the paths as shown by multipath -ll seemed to change over time (from active to ghost). Not sure why.
  • Even with all of this set up and working, I still was seeing the occasional I/O error and path failure reported in the logs

After a few hours of “fun” with this, I decided that it wasn’t worth the hassle or complexity. Since we have redundant storage devices anyway, we would just configure the devices in “single path” mode, mount them directly, and forego multipath. Not so fast: according to Dell engineers, “single path mode” is not supported. Easy enough, let’s unplug one of the controllers, creating our own “single path mode”, and everything should work, right? Sort of.

If you just go and unplug the controller while everything is running, nothing works. The OS needs to re-scan the devices in order to address them properly, and the easiest way to make that happen is to reboot (sure this isn’t Windows?). After a reboot, the OS properly saw 45 devices (/dev/sdb – /dev/sdau), which is what I would have expected. The only problem was that every other device was inaccessible! It turns out that the MD3000 tries to balance the devices across the 2 controllers, and half of the drives had been assigned a preferred path of controller 1, which was unplugged. After some additional MD3000 configuration, we were able to force all of the devices to prefer controller 0, and everything was accessible once again.
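As an aside, a reboot isn’t always the only option: the Linux kernel exposes a sysfs trigger that can often re-enumerate SCSI devices in place. This is a sketch under the assumption of a reasonably modern kernel with /sys mounted; it only prints the scan files it finds, and to actually rescan you would write "- - -" (wildcard channel/target/LUN) into each file as root.

```shell
# Enumerate the kernel's per-HBA SCSI rescan triggers. To perform a real
# rescan (as root), replace the echo with:  echo "- - -" > "$scan"
for scan in /sys/class/scsi_host/host*/scan; do
    [ -e "$scan" ] || continue
    echo "rescan trigger: $scan"
done
echo "scan files enumerated"
```

Whether a rescan picks up a yanked controller cleanly depends on the driver, so I wouldn’t trust it over a reboot for a change like this without testing.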

The only other thing worth noting here is that the MD3000 exports an additional device that you may not recognize:

scsi 1:0:0:31: Direct-Access DELL Universal Xport 0735 PQ: 0 ANSI: 5

For us this was LUN 31, and the number doesn’t seem to be user-configurable, but I suppose other hardware may assign a different LUN. This is a management device for the MD3000 and not a device that you can or should partition, format, or mount. We just made sure to skip it in our setup scripts.
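A hypothetical sketch of how a setup script might skip it: filter out any device whose SCSI model reads "Universal Xport" before partitioning. The device/model pairs below are sample data; on a live box you would read them from /sys/block/&lt;dev&gt;/device/model.

```shell
# Skip the MD3000's management LUN by model string before touching disks.
# Sample data stands in for reading /sys/block/<dev>/device/model.
printf '%s\n' \
    "sdb ST31000340NS" \
    "sdc Universal Xport" \
    "sdd ST31000340NS" |
while read -r dev model rest; do
    case "$model $rest" in
        "Universal Xport"*) echo "skipping $dev (management LUN)" ;;
        *)                  echo "preparing $dev" ;;
    esac
done
```

Matching on the model string rather than the LUN number means the script keeps working even if other hardware assigns the management device a different LUN.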

I suppose if we were running Red Hat Enterprise Linux, CentOS, SUSE, or Windows, this would have all worked a bit more smoothly, but I don’t want to run any of those. We have over 1000 Debian servers deployed and I have no plans on switching just because of Dell. I really wish Dell would make their stuff less distro-specific — it would make things easier for everyone.

Is anyone else successfully running this type of hardware configuration on Debian using multipath? Have you tested a failure? Do you have random I/O errors in your logs? Would love to hear stories and tips.

I have some more posts to write about our adventures in Dell MD land. The next one will be about getting Dell’s SMcli working on Debian, and then after that a post with some details of our MogileFS implementation.

* Thanks to the fine folks at Layered Tech for helping us tweak the MD3000 configuration throughout this process.

AMD Barcelona vs. Intel Nehalem

We are looking at switching some of our servers from AMD Opteron Barcelona quad-core processors to the new Intel 5520 Nehalem processors. Both are 4-core CPUs, but the Intels utilize hyper-threading, so the OS sees 8 cores per CPU. It wasn’t that long ago that the first thing you did with a hyper-threading-enabled CPU was switch it off in the BIOS, but I have heard good things about Intel’s reincarnation of hyper-threading, so I decided to give it a shot.

I ran some real-world stress tests against these servers, adding them into the WordPress.com web pool and seeing how many requests per second they could serve before becoming 100% CPU-bound and effectively falling over. The types of requests served are varied: a lot are rendering web pages, but there are also quite a few image resizing operations thrown in as well, since we spread this image work evenly over the 2,500 cores in our web tier. Everything is PHP executed via FastCGI. I was a bit skeptical that there would be much of a difference between the two processors, but the numbers proved me wrong: the Nehalems are impressive.

2 x AMD Opteron 2356 Barcelona Quad-core 2.3GHz
40 requests/second at 87.5% CPU utilization

2 x Intel 5520 Nehalem Quad-core 2.26GHz
78 requests/second at 94% CPU utilization
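Put another way, the numbers above work out to nearly double the per-server throughput:

```shell
# Ratio of measured requests/second, Nehalem vs. Barcelona (from above).
awk 'BEGIN { printf "Nehalem/Barcelona throughput ratio: %.2f\n", 78 / 40 }'
# prints "Nehalem/Barcelona throughput ratio: 1.95"
```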

A few things that I thought were interesting:

  • On a per request basis, there isn’t much of a difference between the two. They both generate a given page in roughly the same amount of time.
  • As CPU utilization approaches 100%, the Intels scale rather linearly, while the AMDs seem to struggle above 85%.
  • The load averages were pretty high during these tests (35+ on the Intel box), but request times didn’t seem to suffer.

Has anyone else seen the same sort of results, or maybe something to the contrary? These 2 configurations are roughly the same price, making it seem like a no-brainer to choose the Intels for web applications.

New Datacenter for WordPress.com

Towards the end of 2008, we brought online a new datacenter to serve the over 5.5 million blogs now hosted on the WordPress.com platform. Adding the data center in Chicago, IL gives us a total of 3 data centers across the US that serve live content at any given time. We have decommissioned one of our facilities in the Dallas, TX area. Our friends at Layered Technologies were kind enough to shoot this footage for us (think The Blair Witch Project) and the always awesome Michael Pick took care of the editing. Here’s a peek at what a typical WordPress data center installation looks like…

For those interested in technical details here is a hardware overview of the installation:

150 HP DL165s: dual quad-core AMD 2354 processors, 2GB-4GB RAM
50 HP DL365s: dual dual-core AMD 2218 processors, 4GB-16GB RAM
5 HP DL185s: dual quad-core AMD 2354 processors, 4GB RAM

And here is a graph of what the current CPU usage looks like across about 700 CPU cores. As you can see, there is plenty of idle CPU for those big spikes, or in case one of the other 2 data centers fails and we have to route more traffic to this one.


Redundancy and power outages

Scott Beale reports that many Web 2.0 websites were affected by today’s power outage at 365 Main in San Francisco. While unfortunate, as a systems guy I have to assume things like this are going to happen. They shouldn’t happen, but they can and they will. At the data center level, there should be multiple levels of redundancy that minimize the probability of a power outage. Things such as multiple power circuits, redundant UPSes, and generators are standard. For a complete power outage to occur there should have to be multiple simultaneous system failures. I looked for a statement from 365 Main as to what the problem was, but couldn’t find one.

The system architecture behind WordPress.com and Akismet is designed to take entire data center failures into account. For WordPress.com, we serve live content in real-time from 3 data centers (33% from each data center) and in the event of a data center failure, traffic is automatically re-routed to the 2 remaining data centers. Syncing content in real-time between multiple data centers has not been easy, but at times like this I am sure that we made the right decision.
