If you weren’t paying attention last week, the Internet has gone to war.
ABC News proclaimed “Welcome to Infowar, Version 1.0”. Fox warned of the “growing data war”. And the Guardian provided minute-by-minute coverage of the opening salvos of this first “Internet-wide Cyber War”.
Of course, all of the above headlines refer to the rash of DDoS attacks against the Wikileaks web site and the retaliatory strikes against hosting and commercial institutions that severed ties with the organization.
So are we now in a permanent state of cyber-war? As the San Francisco Chronicle asks, do sixteen-year-old hackers now control the fate of humanity from their laptops?
Well, this blog post uses detailed statistics on the last year of DDoS attacks across the Internet to provide some perspective. I’ll compare the Wikileaks and retaliatory DDoS attacks to historical baselines of attack activity and discuss broader DDoS trends.
In general, getting accurate data about Internet attacks can be a challenge: a) companies avoid publicly discussing most attacks, and b) the attacks can be difficult to measure, or at least to compare consistently. For example, engineering mailing list discussions of ISP security and DDoS attack trends generate a bewildering variety of responses. In one instance, two engineers at the same ISP debated the size of the largest botnet observed attacking their company: one estimated a few thousand hosts, while the other put it at millions. When later pressed on the source of their data, both engineers readily admitted they were really just guessing (neither had any infrastructure or tools to actually measure the number of attacking botnet hosts).
In an effort to better quantify DDoS attack trends, two years ago Arbor added support for the export of detailed measurements of confirmed DDoS attacks to our commercial products and ATLAS anonymous statistics (deployed in roughly 75% of all Internet carriers). This blog post provides a first look at quantitative measurements of over 5,000 confirmed (via operator classification or mitigation status) attacks over the last year across 37 large carriers and content providers around the world. We believe this is the largest data set of validated DDoS events ever collected. I presented an earlier version of this blog post at this Fall’s NANOG (link to the presentation here) and we’re currently working on an academic paper version.
Before diving into the statistics, a bit of background — our data includes both survey results and two overlapping measurement data sets: alerts and mitigations. At a high level, alert data include the magnitude and fingerprint of a DDoS (i.e. IP header fields and router / interface topological origins of the attack). Mitigation statistics include finer-grain detail on the payload of the attack, including spoofed source IPs, number of valid (i.e. not spoofed) source IPs, connection attempts, bps and pps rates per attacking IP, etc.
In general, we evaluate DDoS attacks using two metrics: the scale and the sophistication of the attack. At the high end in 2010, we observed a number of DDoS attacks in the 50+ Gbps range. These large flooding attacks often exceed the inbound aggregate bandwidth capacity of data centers and carrier backbone links (often OC192 / 10 Gbps). Mitigation of these high-end attacks can be a challenge: carriers generally need specialized, high-speed mitigation infrastructure and sometimes the cooperation of other providers to block the attack traffic. The below graph plots the growth of DDoS flooding attacks over the last decade (hard to imagine that 400 Mbps was an impressive attack back in 2002).
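As a back-of-the-envelope illustration (using only the two data points mentioned above, so treat the figure as rough), the jump from a headline-worthy 400 Mbps attack in 2002 to 50+ Gbps in 2010 implies a compound growth rate north of 80% per year:

```python
# Rough compound annual growth rate (CAGR) implied by the two data
# points cited in the text: ~400 Mbps (2002) -> ~50 Gbps (2010).
attack_2002_mbps = 400.0
attack_2010_mbps = 50_000.0
years = 2010 - 2002

cagr = (attack_2010_mbps / attack_2002_mbps) ** (1.0 / years) - 1.0
print(f"Implied annual growth: {cagr:.0%}")  # roughly 83% per year
```

In other words, peak flood sizes have been nearly doubling every year over the decade.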
On the other end of the DDoS spectrum, we encounter attacks focused not on denying bandwidth, but on the back-end computation, database, and distributed storage resources of large web services. For example, service or application level attacks may focus on a series of web or API calls that force an expensive database transaction or calls to slow storage servers. The attackers then use botnets to inundate the web service with thousands of clients issuing a steady stream of these particularly expensive web / API calls. Other application attacks attempt to overwhelm SIP, HTTP or TCP state (e.g. Slowloris). In many of the more sophisticated application DDoS attacks, attackers perform reconnaissance of the web service for weeks or months before the attack (identifying weak links in the infrastructure). Unlike massive DDoS traffic floods, application attacks can be far more subtle and may only register as increased load on servers or a precipitous drop in five-minute real-time sales revenue charts. Like 10+ Gbps flooding attacks, sophisticated application attacks may require specialized, high-speed infrastructure to detect and mitigate.
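To make concrete why these attacks are hard to spot by traffic volume alone, here is a minimal sketch (the endpoint names and cost weights are invented for illustration) of scoring clients by the back-end cost of their requests rather than by raw bandwidth:

```python
from collections import defaultdict

# Hypothetical "cost weights" per endpoint -- an expensive search or
# report query burdens back-end databases far more than a static page.
ENDPOINT_COST = {"/search": 50, "/report": 100, "/index.html": 1}

def score_clients(requests):
    """Sum per-client back-end cost over a window of (client_ip, url) pairs."""
    cost = defaultdict(int)
    for ip, url in requests:
        cost[ip] += ENDPOINT_COST.get(url, 1)
    return cost

# A client issuing a steady trickle of expensive calls scores far higher
# than one fetching the same number of cheap static objects.
window = [("10.0.0.1", "/report")] * 20 + [("10.0.0.2", "/index.html")] * 20
scores = score_clients(window)
print(scores["10.0.0.1"], scores["10.0.0.2"])  # 2000 20
```

The two clients generate identical request rates and similar bandwidth, which is exactly why a bits-per-second view of such an attack shows almost nothing.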
So if we’re in a Cyber-War, then very large (50+ Gbps) traffic floods and sophisticated application attacks are the front lines. Which brings us back to the question of Wikileaks and the retaliatory hacktivist attacks. Were these attacks massive high-end flooding DDoS or very sophisticated application level attacks?
Despite the thousands of tweets, press articles and endless hype, most of the attacks over the last week were both relatively small and unsophisticated. In short, other than the intense media scrutiny, the attacks were unremarkable. I note that our ATLAS-based observations agree with data from the operators directly involved in mitigating the attacks.
For example, below is a graph of DDoS activity against multiple Wikileaks hosting sites on the third day (December 1) following the initial release of “Cablegate” documents. The DDoS traffic (in red) never grew beyond 3-4 Gbps. Today, mitigating attacks of this scale is fairly routine for tier 1/2 ISPs and large content / hosting providers (more of an annoyance than an imminent critical infrastructure threat, or “easy peasy” to block as one Internet engineer explained). Also see earlier blog posts (link available here) for more analysis of the Wikileaks attacks.
The retaliatory hacktivist attacks took a slightly different approach, with mostly low-level application layer attacks against a range of companies perceived as anti-Wikileaks, including banks, hosting and credit card companies. The loosely organized Anonymous group called hundreds of volunteer activists to arms with messages like:
"TARGET: WWW.xxxxx.COM: WEAPONS http://xxx.xx.ru FIRE FIRE FIRE!!! PAYBACK!"
[I replaced the target and Russian download site with xx's].
Approximately 20% of the DDoS HTTP requests in one retaliatory attack last week came from a new variant of LOIC named, predictably, LOIC-2. The new LOIC version (a “total rewrite of LOIC”) supports additional “hive” remote control command channels, including RSS, Twitter, and Facebook (the original LOIC only supported IRC). More significantly, LOIC-2 supports two new “slow” attack methods (i.e., DDoS strategies where the client deliberately elongates HTTP transaction times to burden the victim server).
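To make the “slow” idea concrete, here is a simplified sketch of the server-side countermeasure (not any specific product’s logic; the rate threshold and grace period are illustrative): drop connections whose request bytes dribble in below a minimum rate after a short grace period.

```python
def should_drop(elapsed_seconds, bytes_received, min_bytes_per_sec=10, grace=5):
    """Flag a connection as a likely slow-HTTP attack if, after a short
    grace period, it is delivering request bytes below a minimum rate."""
    if elapsed_seconds <= grace:
        return False
    return bytes_received / elapsed_seconds < min_bytes_per_sec

# A normal client sends its full request almost immediately...
print(should_drop(2, 400))    # False
# ...while a Slowloris-style client trickles in a few bytes a minute,
# holding a server connection slot open indefinitely.
print(should_drop(60, 120))   # True (2 bytes/sec)
```

The attack works precisely because each slow connection is individually cheap for the client but ties up a finite connection slot on the server.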
In addition to LOIC, ATLAS observed Slowloris-like TCP attacks and several other tools / scripts generating web or TCP DDoS traffic. A smaller component of the hacktivist campaign included DDoS flooding using ICMP Smurf and LOIC operating in UDP flood mode (sending traffic to UDP port 80).
More recently, Anonymous supporters released two more sophisticated HTTP flooding tools: High Orbit Ion Cannon (HOIC) and Geosynchronous Orbit Ion Cannon (GOIC). The new tools support multi-threaded HTTP flooding, simultaneous attacks against up to 265 web sites, plug-ins and an “easy to use interface”. However, HOIC and GOIC did not appear to play a significant role in the DDoS attacks last week.
While the last round of attacks led to brief outages, most of the carriers and hosting providers were able to quickly filter the attack traffic. In addition, these attacks mostly targeted web pages or lightly read blogs, not the far more critical back-end infrastructure servicing commercial transactions. By the end of the week, Anonymous followers had mostly abandoned their attack plans as ineffective.
Overall, both the attack traffic and the hundreds of volunteers running the software on their PCs were not terribly sophisticated. Most volunteers clearly did not realize the tools do not anonymize their PC source IP address nor that word processors store incriminating meta-data in revolutionary manifestos. In short, not exactly the work of evil criminal masterminds.
So ultimately, I’d suggest the last week of DDoS attacks surrounding Wikileaks supporters and opponents falls far short of a “cyberwar”. While it makes a far less sexy headline, cyber-vandalism may be a more apt description. In a similar vein, a Foreign Policy Op-Ed called hacktivist DDoS the digital equivalent of a sit-in by youth around the world.
All of the above is not to say DDoS is not a serious problem. The number and firepower of botnets grow dramatically each year, as does the sophistication of application attack toolsets. HOIC and succeeding generations of volunteer botnet-controlled PCs may evolve to pose a significant Internet-wide threat. However, the DDoS threat has traditionally come more from increasingly professional criminal hackers than from volunteer activists.
With discussion of cyberwar out of the way, I’ll compare the Wikileaks and related attacks to some of the broader trends we are observing in ATLAS DDoS statistics. The chart below shows the distribution of DDoS attack vectors in the 5,000 validated attacks in the ATLAS dataset. Note that this dataset represents a subset of all attacks, as not all providers have enabled anonymous export of data and many providers are running earlier versions of the product (i.e., lacking anonymous DDoS statistics export support). See the NANOG presentation (link available here) for more details on the methodology.
As discussed earlier, brute-force flooding continues to dominate DDoS attacks (60%). Generally, these attacks (including the initial strike against the Wikileaks web site) resemble the early days of DDoS circa 2000, except more distributed (better botnets) and with greater use of amplification. As in 2000, most flooding DDoS attacks attempt to overwhelm upstream bandwidth, firewall / load balancer state, or resources on web / application farms.
Though traditional DDoS flooding attacks remain popular, most of the recent DDoS activity has included some level of application or TCP layer attack components. Involved in 27% of the confirmed attacks over the last year, application layer attacks are also the fastest growing DDoS attack vector. Open source tools like LOIC / HOIC and a large library of more advanced commercial criminal software target firewall, load balancer and end-system web, database, and TCP state. A tutorial by security consulting company Securitech provides a nice overview and examples of these layer 3+ attacks.
Finally, “Other” in the above chart is a bit of a grab-bag, including operator defined policy around allowed traffic levels for things like ASN, GeoIP (countries), ATLAS filters, large lists of ACLs and payload (e.g. DNS, URL) regular expressions. Although designed as a line-speed DDoS mitigation appliance, some providers use the Arbor TMS to effect policies similar to next-generation firewall or carrier-grade IPS. Our analysis generally cannot distinguish between DDoS mitigations and policies enacted for other carrier security strategies.
As discussed earlier, the Wikileaks flooding DDoS components fell into the small or mid range of our yearly survey data (links available here). The chart below shows statistics on flooding DDoS bandwidth, packets per second and duration for the 5,000 validated attacks. The average DDoS comes in at 300 Mbps and 200 Kpps, lasting several hours. Given the heavy-tailed nature of the DDoS attack distribution, though, the mean is skewed by a relatively small number of extremely large DDoS attacks (including one 22 Gbps and 9 Mpps IP fragment attack against a single web farm lasting four days). The median of 30 Kpps suggests that the majority of DDoS attacks by number of incidents remain fairly low bandwidth (and likely reflects providers offering DDoS mitigation services for hundreds of small customers).
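The gap between the mean and the much lower median is exactly what a heavy-tailed distribution produces. A toy example (made-up numbers, not the ATLAS data) shows how a single monster attack drags the mean while leaving the median untouched:

```python
import statistics

# Hypothetical attack sizes in Mbps: many small attacks plus one outlier,
# loosely mimicking the heavy-tailed shape described in the text.
attacks_mbps = [20, 30, 40, 50, 60, 80, 100, 150, 22_000]

print(f"mean:   {statistics.mean(attacks_mbps):.0f} Mbps")    # ~2503 Mbps
print(f"median: {statistics.median(attacks_mbps):.0f} Mbps")  # 60 Mbps
```

This is why reporting both statistics matters: the mean describes where the bytes are, the median describes what a typical incident looks like.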
The next table focuses on the number of unique sources involved in DDoS flooding attacks. Despite the availability of massive botnets, most confirmed attacks in our study involve relatively few, well-connected IPs — the average is 80 sources generating an average of 162 Mbps and 48 Kpps each. Even the 95th percentile of attacks involves only 300 sources. Why so few botnet hosts in these attacks? I suspect the answer is a) a hundred well-connected hosts is more than sufficient to overwhelm many mid-size web farms (you just don’t need more than this) and b) botnets are an increasingly valuable resource to be used judiciously as discussed in this Security Week article.
Though more than 100,000 users downloaded the LOIC software last week, the actual peak number of simultaneous Wikileaks retaliatory attackers was significantly lower. ATLAS data suggests the number of attackers was in the hundreds (i.e., instead of thousands or tens of thousands). In other words, the number of source IPs observed in the Wikileaks retaliation attacks fell into the mid or higher end of the 5,000 validated DDoS attacks from the last year.
Of course, just tracking statistics per IP does not tell us whether these are real or spoofed source addresses. And indeed, the increasingly unrealistic data as we approach the max (4 Gbps per source IP!) in the above chart suggests some degree of either source spoofing (e.g. poorly written attack tools always using the same source address) or large numbers of hosts behind NAT / mega-proxies. About 10% of attacks fall into this category of unrealistic source IP statistics.
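A sanity check of this kind can be sketched as follows (illustrative threshold only; the real analysis presumably uses more careful per-attack heuristics): flag attacks whose implied per-source rate exceeds what a single real host could plausibly send.

```python
def implausible_sources(total_bps, source_count, max_bps_per_source=100e6):
    """Return True if the implied per-source rate exceeds a plausible
    ceiling for one real host (suggesting spoofing or NAT / mega-proxies)."""
    return total_bps / source_count > max_bps_per_source

# 4 Gbps apparently from a single source IP: almost certainly spoofed
# traffic, or many hosts hidden behind one NAT / proxy address.
print(implausible_sources(4e9, 1))     # True
# 300 Mbps spread over 80 sources (~3.75 Mbps each) is entirely plausible.
print(implausible_sources(300e6, 80))  # False
```

The 100 Mbps ceiling is an assumption for this sketch; in practice the plausible per-host rate depends on the era and the access networks involved.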
The next table focuses on TCP layer DDoS attack statistics. The first column shows the number of TCP connection attempts per second in each attack, and the second column provides the median, mean, 95th percentile and max number of connections that actually pass a range of validation algorithms (i.e. “prove” that the TCP connection is from a real host). Ranging from several hundred thousand to millions of connection attempts per second, the data in the above chart suggests most of these SYN floods either use attack tools with incomplete stacks or spoof the source IP address (which is pretty much what you would expect). In the specific case of the Wikileaks retaliatory attacks, we believe most of the traffic was not spoofed and used the attackers’ actual source IPs.
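One classic way to “prove” a TCP source is real is the SYN-cookie idea: hand the client a value derived from its address that it can only echo back if it actually receives packets there. The sketch below captures the principle only (real implementations encode the cookie inside the TCP sequence number along with timestamp and MSS bits; the secret and addresses here are hypothetical):

```python
import hashlib
import hmac

SECRET = b"rotate-me-periodically"  # hypothetical per-server secret

def make_cookie(src_ip, src_port):
    """Derive a short challenge from the claimed source and a server secret."""
    msg = f"{src_ip}:{src_port}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:8]

def validate(src_ip, src_port, echoed_cookie):
    """A spoofed source never sees the cookie, so it cannot echo it back."""
    return hmac.compare_digest(make_cookie(src_ip, src_port), echoed_cookie)

cookie = make_cookie("198.51.100.7", 44321)
print(validate("198.51.100.7", 44321, cookie))      # True: real host
print(validate("198.51.100.7", 44321, "00000000"))  # False: spoofer's guess
```

The server commits no per-connection state until the cookie comes back, which is what makes this class of validation survive SYN floods of the sizes described above.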
Finally, the last table below provides statistics on two types of application-layer attacks: HTTP and SIP. In general, HTTP attacks involve highly targeted floods of requests for complex, computationally expensive web or service queries. Examples of well-known attacks include Slowloris and Slow Post. From the data, web attacks involve relatively low bandwidth (the 95th percentile is 10 Mbps). Further, web attacks involve a larger number of hosts (414 at the 95th percentile) than zombie-driven and other types of flooding attacks. Both SIP and HTTP layer attacks tend to be long-lived, targeting infrastructure for days and sometimes weeks. Unlike HTTP attacks, SIP attacks tend to be larger (average 200 Mbps and 77 Kpps) and more closely resemble flooding attacks, as hackers attempt to overwhelm SBCs or soft-gateways.
So what conclusions can we draw from all of the above data?
Like the initial Wikileaks attacks, most DDoS continue to rely on brute force flooding to exhaust link capacity or overwhelm load balancer, firewall and web server state. Further, despite the conventional wisdom in the security community that spoofing is no longer common (because botnets are so prevalent), analysis of 5,000 validated DDoS attacks suggests a significant percentage of attackers still take advantage of a lack of BCP-38 and generate large volumes of spoofed DDoS traffic.
While the Wikileaks and retaliatory attacks may not represent the start of “cyberwar”, governments clearly view cyberspace as the battlefield of the future. The trend towards militarization of the Internet and DDoS used as a means of protest, censorship, and political attack is cause for concern (the world was a simpler place when DDoS was mainly driven by crime, IRC spats and hacker bragging rights). Overall, DDoS fueled by the growth of professional adversaries, massive botnets and increasingly sophisticated attack tools poses a real danger to the network and to our increasing dependence on the Internet.
Credit to Joe Eggleston, Jose Nazario, Jeff Edwards, Roland Dobbins and Mike Hollyman for their contributions to this analysis.