Five Sinkholes of newGOZ

By: Dennis Schwarz and Dave Loftus

It has been a few weeks since news broke of the Zeus Gameover variant known as newGOZ. As has been reported, the major change in this version is the removal of the P2P command and control (C2) component in favor of a new domain generation algorithm (DGA).

The DGA uses the current date and a randomly selected starting seed to create a domain name. If the domain doesn’t pan out, the seed is incremented and the process is repeated. We’re aware of two configurations of this DGA which differ in two ways: the number of maximum domains to try (1000 and 10,000) and a hardcoded value used (0x35190501 and 0x52e645).
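The mechanics described above can be sketched in a few lines. This is a hypothetical illustration only: the real newGOZ hashing and domain-construction routine is not reproduced here, and the function names are our own. Only the date input, the incrementing seed, the per-configuration domain cap, and the hardcoded value are taken from the description above.

```python
import hashlib
from datetime import date

def generate_domain(day: date, seed: int, magic: int) -> str:
    # Hash the date, seed, and hardcoded value together. The real
    # algorithm's mixing and TLD selection are not modeled here.
    data = f"{day.isoformat()}:{seed}:{magic:#x}".encode()
    digest = hashlib.md5(data).hexdigest()
    # Map part of the digest to an alphabetic label (illustrative only).
    label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:16])
    return label + ".com"

def candidate_domains(day: date, start_seed: int, max_domains: int, magic: int):
    # Config 1: max_domains=1000,  magic=0x35190501
    # Config 2: max_domains=10000, magic=0x52e645
    seed = start_seed
    for _ in range(max_domains):
        # A real client would try to contact each domain and stop once
        # one pans out; here we simply enumerate the candidates.
        yield generate_domain(day, seed, magic)
        seed += 1
```

Because the candidate list depends only on the date and seed range, a researcher running the same logic can register domains ahead of the botnet, which is what makes sinkholing practical.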

Date-based domain generation algorithms make for excellent sinkholing targets due to their predictability, and they give security researchers the ability to estimate the size of the botnets that use them. With this in mind, we have gathered five days' worth of newGOZ sinkhole data. Our domains are based on the first configuration, since it seems to be the most used in the wild.

As with all sinkhole data, many variables can affect the accuracy of victim counts, such as network topology (NAT and DHCP), timing, and other security researchers. However, we feel that the data provides a good estimate of the current scope of this new threat.

Monday, July 14


Four days after the discovery of newGOZ, our first sinkhole saw 127 victims. To corroborate our initial data set, SecureWorks reported seeing 177 victims connect to their sinkhole a few days earlier on July 11.

Friday, July 18


An 89% increase to 241 victims.

Monday, July 21


Over the weekend we saw a 78% increase to 429 victims, mostly in the eastern half of the United States.

Friday, July 25


On July 22, Malcovery Security reported a large spam campaign distributing newGOZ via the Cutwail botnet. This campaign appears to have been very successful. On July 25, we saw a 1,879% increase to 8,494 victims: the rest of the United States is covered.

Monday, July 29


Over the weekend, and 19 days after its discovery, our fifth and final sinkhole for this post saw a 27% decrease to 6,173 victims. This is most likely due to victims cleaning up their machines after that last spam campaign. Latin America, South Africa, Southeast Asia, and New Zealand started filling in.


In aggregate and over three weeks, our five sinkholes saw 12,353 unique source IPs from all corners of the globe:


The most infected country was the United States followed by India. The top 10 were:


In addition, a number of organization types were affected, the top being:



Pondering the five days' worth of newGOZ sinkhole data above, some thoughts come to mind:

First, will the threat actor continue to use the same DGA configuration that they’ve been using so far? Empirically, there seem to be more security research sinkholes populating the DGA namespace than actual C2 servers. There is also the second DGA configuration, which hasn’t received much use yet. Additionally, as we’ve seen, the actor is willing to replace the C2 mechanism altogether.

Second, will the botnet continue to grow and at what rate? The sinkhole data for July 25 suggests that the second Cutwail spam campaign was relatively successful. Will future waves continue this trend?

Finally, with the infection numbers at a fraction of what they were in the P2P version of Zeus Gameover, how long will the threat actor focus on rebuilding their botnet before they return to focusing on stealing money?

The Citadel and Gameover Campaigns of 5CB682C10440B2EBAF9F28C1FE438468

By: Dennis Schwarz -

As the infosec community waits for the researchers involved to present their Zeus Gameover takedown spoils at the next big conference, ASERT wanted to profile a threat actor that uses both Citadel, “a particularly sophisticated and destructive botnet”, and Gameover, “one of the most sophisticated computer viruses in operation today”, to steal banking credentials.

Citadel Campaign

When a threat actor decides to start a Citadel campaign, they buy the builder software, build the malware, distribute it into the wild, and then, unfortunately, usually profit. A “login key”, in Citadel parlance, identifies a specific copy of the builder. This key is also copied into the generated binaries, forming a link between the malware builder and the malware it produces. Login keys are supposed to be unique, but because builders have been leaked to the public, some aren’t. For all intents and purposes, though, malware researchers use login keys to distinguish between distinct Citadel campaigns.
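Because the login key travels from builder to binary, clustering samples by key is straightforward once the keys have been extracted. A minimal sketch, assuming we already have (sample MD5, login key) pairs pulled from unpacked binaries; the function name and input layout are our own:

```python
from collections import defaultdict

def campaigns_by_login_key(samples):
    """Group malware samples by the Citadel login key embedded in them.

    samples: iterable of (sample_md5, login_key) pairs, assumed to have
    been extracted from unpacked binaries beforehand.
    """
    groups = defaultdict(list)
    for sample_md5, login_key in samples:
        groups[login_key].append(sample_md5)
    return dict(groups)
```

Each resulting group approximates one campaign, with the caveat noted above that leaked builders can make a key span multiple actors.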

On October 29, 2013, security researcher Xylitol tweeted that login key 5CB682C10440B2EBAF9F28C1FE438468 was not associated with any of the defendants in Microsoft’s Citadel botnet lawsuit:


ASERT has the following command and control (C2) URLs linked with that campaign. Most of these were hosted in a netblock owned by EuroByte:

MD5 Command and Control URL


Using archived copies of the campaign’s configuration files from and ZeuS Tracker it can be seen that the threat actor was using 28 webinjects to target 14 financial institutions in the Netherlands and Germany:

set_url: **
set_url: **html*
set_url: **
set_url: **
set_url: **
set_url: **
set_url: **
set_url: **
set_url: **
set_url: **
set_url: *mijn**
set_url: *
set_url: *
set_url: **action_prepareStepTwo=Inloggen
set_url: *
set_url: *
set_url: **
set_url: *
set_url: *
set_url: **action_prepareStepTwo=Inloggen
set_url: *
set_url: https**
set_url: https*de*portal/portal*
set_url: https*paypal*


As an example and reference for later, here are a few snippets of one of the webinjects:


Per ZeuS Tracker and VirusTotal passive DNS data, it seems as if this particular campaign started fizzling out around the end of 2013.

Zeus Gameover Campaign

As noted by security researcher Brian Krebs, the “curators of Gameover also have reportedly loaned out sections of their botnet to vetted third-parties who have used them for a variety of purposes.” Analyzing webinject data from the global configuration file that was being distributed on the peer-to-peer network shortly before its takedown on June 2, 2014, it looks as if the threat actor behind Citadel login key 5CB682C10440B2EBAF9F28C1FE438468 had joined the ranks of Gameover’s coveted third parties. Historical versions of the config show that this collaboration goes back to at least January 2014.

In the analyzed configuration, there were 1,324 total webinjects targeting many financial institutions. Twelve of these were associated with the profiled actor and are the focus here. First, the banking credentials extracted by this group of injects were being exfiltrated to an IP address that had previously hosted a C2 panel of the above Citadel campaign. Second, eight financial institutions were targeted, seven of which were a subset of the previous campaign:

match: ^https.*?de.*?portal/portal.*?
match: ^https://.*?
match: ^https://.*?
match: ^https://.*?*?
match: ^https://.*?*?
match: ^https://.*?*?
match: ^https://.*?*?
match: ^https://.*?*?
match: ^https://.*?
match: ^https://.*?
match: ^https://.*?
match: ^https://.*?*?


Finally, the coding style, function/variable naming, and formatting of the webinjects themselves were akin to the above and looked to have been retrofitted from Citadel to work with Gameover:
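Mechanically, such a retrofit is plausible because the two formats differ mainly in pattern syntax: Citadel `set_url` entries use `*` wildcards, while the Gameover entries above are anchored regular expressions built on `.*?`. A sketch of the conversion (our own helper written for illustration, not tooling recovered from the actor):

```python
import re

def set_url_to_match(pattern: str) -> str:
    """Convert a Citadel-style set_url wildcard into a Gameover-style
    anchored regex: literal text is escaped and each '*' becomes a
    non-greedy '.*?'."""
    return "^" + ".*?".join(re.escape(part) for part in pattern.split("*"))
```

Applying it to the Citadel entry `set_url: https*de*portal/portal*` yields `^https.*?de.*?portal/portal.*?`, which matches the corresponding Gameover `match` entry seen above.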


The drop site itself is a Ruby on Rails application that logs and displays the data sent from infected hosts:



Each entry can be formatted a bit better by clicking “Show”:


Some of the logging text seen in these screenshots—for example: “Wait tan from holder”—can be correlated back to the earlier snippets of the webinjects.

The initial entries in the list are dated around March and June of 2012, but these may be old or erroneous, as there is a jump to December 2013 and then consistent logging from there. At the time of this writing there were approximately 1,089 entries.

In addition, up to five Jabber IDs can be configured in the application and then messaged on receipt of freshly stolen credentials:


At the time of writing, the configured Jabber IDs were:


But there wasn’t much open source intelligence on these.


Pondering the data available, this threat actor ran a fairly targeted Citadel campaign focusing on a small set of banks in the Netherlands and Germany. Based on ZeuS Tracker data, most of the Citadel C2s became active after the start of Microsoft’s lawsuit on June 5, 2013, which likely explains the exclusion of 5CB682C10440B2EBAF9F28C1FE438468 from the legal notices.

The Citadel campaign looks like it closed up shop at the end of 2013. In December 2013, logging on the out-of-band Gameover drop site started in earnest, so this might be when the threat actor moved to stealing banking credentials via Gameover.

So far, it seems as if this threat actor has escaped the clutches of the great Citadel take-down and, since the drop site is still receiving stolen credentials, has evaded the Zeus Gameover take-down as well. In the spirit of “see something, say something” and with the recency of the legal action, ASERT has provided the data available to our law enforcement contacts.

The MegaUpload Shutdown Effect

By: Jose -

The popular file sharing site MegaUpload was shut down by the US FBI and Department of Justice on Thursday, January 19, and executives from the company were taken into custody. This story is very well covered by the Wall Street Journal and includes a copy of the indictment for your reading.

As you would expect, this was a wildly popular site with users from all over the world. So much so that even notable celebrities appear in a video discussing MegaUpload, almost endorsing it. Previous work by Arbor Networks showed that content providers and hosting sites like MegaUpload are the new “Hyper Giants”. With enough global data, you can actually see the traffic drop when the shutdown occurs. Based strictly on the traffic rates, it appears that the shutdown started just after 19:00 GMT on January 19, with traffic plummeting over the next two hours. The graphic here shows three main client regions – Asia-Pacific, Europe, and the US.
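Estimating a cutover time like the 19:00 GMT figure can be sketched as a simple baseline comparison over a traffic time series. The window and threshold here are arbitrary assumptions for illustration, not the method used to produce the graphic:

```python
def find_drop(rates, window=12, threshold=0.5):
    """Return the first timestamp where traffic falls below
    threshold * the trailing-window average, or None if it never does.

    rates: list of (timestamp, bits_per_second) samples in time order.
    """
    for i in range(window, len(rates)):
        baseline = sum(bps for _, bps in rates[i - window:i]) / window
        timestamp, bps = rates[i]
        if bps < threshold * baseline:
            return timestamp
    return None
```

Run against per-region series, a detector like this would flag each region's drop independently, which is how staggered effects (like the South America report below) show up.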

Over the past 24 hours, the top countries (in aggregate) using MegaUpload were the United States, France, Germany, Brazil, Great Britain, Turkey, Italy, and Spain, although dozens more countries are represented.

As for the traffic drop off, we’re not the only ones to notice. As seen on Twitter, South America experienced a dramatic traffic drop at about the same time, presumably due to this MegaUpload shutdown. Furthermore, we’re seeing reports of a fake MegaUpload site that is supposedly a malware infection site.

Friends of mine from elsewhere in the world have been joking that the Internet seems to be running a bit smoother today. That may be, given how much bandwidth appears to have been freed up.


Torrent Sites and The Pirate Bay: DDoS Afoot?

By: Jose -

Around the time of the convictions late this week of the folks behind The Pirate Bay, a well-known BitTorrent tracker distribution site (also see The Pirate Bay Trial: The Official Verdict – Guilty on TorrentFreak), we started seeing reports of DDoS attacks on other torrent tracker sites. Never one to miss an opportunity to look for massive DDoS attacks against important sites, I went looking in our archives to see if this was indeed the case.

It’s interesting to note that TPB may indeed be a key to illicit torrents and pirated material (although if it’s shut down, something will surely replace it). See P2P researchers fear BitTorrent meltdown from the Iconoclast website. According to the piece, TPB is so heavily connected, with over half of all torrents tracked there, that disrupting it may have an impact on all BitTorrent traffic for a while:

Raynor told TorrentFreak that if The Pirate Bay goes down, many of the other trackers might collapse as well. “If The Pirate Bay goes down the load will automatically shift to others. This is because most of the Pirate Bay swarms also include other trackers. When Pirate Bay goes down it would overload others until they fall also. Meaning even more stress and further casualties. This is likely to end in a BitTorrent meltdown.”

It makes sense that someone, perhaps a vigilante or just a “griefer”, is hoping to shut down pirate activities by going after major torrent tracking sites. This bears some analysis.

All in all, except for one site getting attacked by a Black Energy botnet run out of China, we can’t corroborate this spate of attacks. That site has, in fact, been getting pounded by this botnet since mid-March 2009. But none of the other major sites appear to be receiving such packet love.

As such, while this may be happening we’re not seeing it hit the massive scale that we would expect.

Conficker Did Not Melt the Internet

By: Jose -

But it is busy.

Last week’s April 1 trigger date for the new routines in Conficker.C/D (depending on the vendor) was mis-reported by some press agencies as the date many in the CWG said the Internet would melt down. Not quite. The press has been busy with the story, playing the hype-leads-to-fizzle angle. Sure enough, the Internet kept on trucking. Here’s a view from one BGP peer during that time; no big change in traffic from the days prior:


Some folks even said Conficker was sleeping.

Not quite. It’s not DDoSing, but it is apparently doing its thing, or “Confickering” as a friend said. It has dropped a new E variant over P2P. Some reports see it talking to a Waledac domain, but not everyone does.

As for why a possible Waledac (aka Storm) connection, we do not know.

So, what happened? It looks like the DNS lockout has been working. I suspect the attackers also noticed the great de-peerings of bad ISPs that have been going on for a while and decided against hosting a C&C in one of those, going the P2P route instead. For details on how the bots know how to talk to other bots over P2P, see the LEXSI CERT blog.

We have no additional info to give beyond the links above. If you’re following this story, this is a major development.

Other Conficker Stuff

IBM/ISS has interesting stats on Conficker infections based on their P2P models. Some in the press have said these numbers mean 4% of the Internet is infected with Conficker. Not quite. As Vint Cerf pointed out, it helps to know the denominator when you make those sorts of population claims. As I’ll point out, it also helps to know what you’re measuring. We know Conficker has a big population, but we also know our measurements are only a lower limit (based on IP visibility, etc.).

We’re seeing a rise in TCP/445 scanning. Three weeks of ATLAS data for the top 50 scanners is shown below, with a clear rise over this period:


We think this is related to new Conficker activity in this timeframe.
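Ranking top scanners out of raw flow records can be sketched as a fan-out count per source. The tuple layout below is an assumption for illustration, not the ATLAS schema:

```python
from collections import defaultdict

def top_scanners(flows, port=445, n=50):
    """Rank source IPs by how many distinct destinations they probed
    on the given TCP port; high fan-out is the scanning signature.

    flows: iterable of (src_ip, dst_ip, dst_port) tuples.
    """
    fan_out = defaultdict(set)
    for src_ip, dst_ip, dst_port in flows:
        if dst_port == port:
            fan_out[src_ip].add(dst_ip)
    ranked = sorted(((src, len(dsts)) for src, dsts in fan_out.items()),
                    key=lambda item: item[1], reverse=True)
    return ranked[:n]
```

Counting distinct destinations, rather than raw packets, keeps a single busy file server from being mistaken for a scanner.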

UPDATE April 10

Oh and Mafiaboy, Conficker was not a ruse.

iWorkServices == P2P iBotnet

By: Jose -

If you wanted iWork ’09 and didn’t want to pay for it, you may have grabbed a pirated copy. That may not have been all you got. If you wanted your Mac to be a part of a P2P botnet, then you’re in luck!

It turns out the package you may have downloaded over BitTorrent, a massive 450MB ZIP installer, is really just a huge Trojan horse package that installs a simple P2P bot tool on your box. Running the installer will not install iWork but instead the official-sounding “iWorkServices”. This is not what you think it is. The binary has these characteristics:

MD5 (iWorkServices) = 046af36454af538fa024fbdbaf582a49
SHA1(iWorkServices)= 55d754b95ab9b34bdd848300045c3e11caf67ecf
SHA(iWorkServices)= 6b83df2636a4813ef722f3fad7c65b5419044889
file size: 413568 bytes
iWorkServices: Mach-O universal binary with 2 architectures
iWorkServices (for architecture ppc):   Mach-O executable ppc
iWorkServices (for architecture i386):  Mach-O executable i386

When run as root, it creates a couple of files and directories to get set up:


This will now run whenever your box boots. The installer makes sure that the script is runnable:

chmod 755 /System/Library/StartupItems/iWorkServices/iWorkServices

And the script just launches the binary:

/usr/bin/iWorkServices &

Not very sophisticated. On startup it creates a “dot” directory under /tmp:


It fires up some connections:

It will keep trying until it connects. It also grabs a list of seed P2P peers from the file itself by decrypting the running file (thwarting static analysis) and manages the known peers as you would expect. It generates a port to listen on as needed (although it’s not quite clear to me how it would handle being behind a NAT device).
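The bootstrap behavior described, trying seed peers in turn until one answers and picking a listening port as needed, can be sketched as follows. The seed addresses and the port-selection rule are placeholders; the malware's actual values come out of the decrypted file and are not reproduced here:

```python
import random
import socket

def pick_listen_port(low=1024, high=65535):
    # Placeholder port-selection rule; the bot's real rule is unknown.
    return random.randint(low, high)

def connect_to_any(seed_peers, timeout=5):
    """Try each (host, port) seed peer in turn until one accepts a TCP
    connection; return the connected socket, or None if all fail."""
    for host, port in seed_peers:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue
    return None
```

A real client would loop over this until success and merge newly learned peers back into its list, per the behavior described above.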

The bot software itself appears to use a Kademlia-related P2P protocol with the expected commands to manage the peer list, but also to provide a remote shell, download and run arbitrary code, and give full access to the box:


What’s more, there is an embedded Lua interpreter, giving a very sophisticated command language some additional structure.

So, what’s this botnet been up to? DDoS it seems, via a downloaded and executed PHP script. Clever.

Looking to find if anyone else is monitoring this botnet …

Bear in mind that this is just like all of the other OS X malware: you have to willingly install it. It’s much more of a Trojan Horse than a virus or worm.

Related info:

Edited to fix the name of the product this Trojan package masqueraded as.

Fast Flux and New Domains for Storm

By: Jose -

At last week’s FIRST conference in Vancouver I presented some of our ATLAS fast flux data. The slides aren’t yet available, but the ongoing reports in ATLAS have been updated to continuously reflect some of the analysis we did. Some of the new reports include the lifetimes of each network, and the “distinct networks” section, which identifies related domains through shared botnet membership. ATLAS users can also get the updated blocklist of fast flux domains for use in stopping such attacks.

Just in time, too, the Storm Worm has begun using new fast flux domains. Messages look like this:

> Date: Sun, 29 Jun 2008 00:56:18 +0700
> From:
> Subject: You make my world special

> My heart belongs to you ht tp:/ /

Here’s a list of all of the domains we’ve identified so far.        NS        NS       NS       NS     NS       NS

Storm has changed its tactics constantly in the past year and a half, and this “love theme” is nothing new. We’ll see how long this theme lasts.

UPDATE 1 July 2008

Here’s a full list of domains:      NS       NS     NS        NS        NS      NS      NS      NS NS  NS NS        NS  NS       NS NS NS        NS       NS     NS       NS NS   NS NS

A Case Study in Transparency

By: Kurt Dobbins -

One of the overriding themes in the Network Neutrality debate, and what triggered much of the recent activity with Comcast and the FCC, has to do with transparency.  Or in the recent words of FCC Chairman Kevin Martin, “Consumers must be completely informed about the exact nature of the service they are purchasing”.  When it comes to transparency about service plans, and the business necessity behind them, I can think of many good examples, but one service provider stands above the rest, an ISP in the UK called PlusNet. In the spirit of transparency, PlusNet is an Arbor customer.

PlusNet offers an array of residential broadband services, called “Broadband Your Way” shown in the following diagram, ranging from a “pay as you go” service for light users – casual users who are typically migrating from a dial service to always-on broadband – to high-end broadband subscribers that enjoy the heavy use of gaming, VoIP, and peer to peer file sharing. PlusNet also has a specific plan for gaming that offers quality broadband with low ping and latency, called “Broadband Your Way PRO.”



Each of the service options has some form of traffic management associated with it, so each plan can appeal to a different demographic: from a light user that does not use file sharing to a heavy user that wants file sharing and streaming 24×7. Rather than have a one-plan-fits-all service, PlusNet offers consumers a plan that fits their service and economic requirements.

What makes PlusNet really interesting is that they clearly explain each of the service options, and even go on to explain that there is no such thing as “unlimited” broadband bandwidth; i.e., the network needs to be managed during peak busy hours to ensure fairness and to deliver real-time and interactive applications with a good quality of experience. PlusNet employs three methods of ensuring fairness on their network during peak busy hours:


1) Traffic Management: For certain plans, maximum bandwidth rates for peer to peer file sharing and other file downloading services are managed during peak busy hours. Each service plan comes with a higher or lower degree of traffic management.

2) Prioritization: For all plans, interactive applications like Web and real-time applications such as VoIP are given priority over non real-time or “background” applications.

3) Changing Human Behavior: For all usage-based plans where there is a monthly Usage Allowance, subscribers are given an economic incentive to use the network during non-busy off-peak hours. Any usage during off-peak hours is considered “free” and does not count against the monthly allowance.
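The third method amounts to simple allowance accounting: only peak-hour bytes count. A sketch, with the peak window chosen arbitrarily for illustration rather than taken from PlusNet's actual schedule:

```python
def allowance_used(transfers, peak_start=8, peak_end=23):
    """Sum the megabytes that count against the monthly allowance.

    transfers: iterable of (hour_of_day, megabytes) records. Usage
    outside the peak window is 'free' and ignored; the window bounds
    here are illustrative, not PlusNet's real schedule.
    """
    return sum(mb for hour, mb in transfers if peak_start <= hour < peak_end)
```

Billing only the peak window is what creates the economic incentive to time-shift heavy downloads into the hours when the network has spare capacity.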


PlusNet fully discloses maximum downstream and upstream bandwidth rates for specific application types, such as peer to peer file sharing, as well as what applications are prioritized, for each service option. Because the need for some form of traffic management is driven by the ISP cost model, PlusNet also discloses how UK ISPs pay for bandwidth in order for their customers to understand the business drivers for employing traffic management techniques during peak hours as well as future plans for capacity planning and traffic management. The consumer continues to be informed about the services they are purchasing.

But explanatory details in a service plan are not always enough. “Seeing is believing”, as they say, so PlusNet even publishes network traffic graphs depicting how the network is used during peak and off-peak hours, clearly demonstrating the benefits of their traffic management policies. Winning awards is also a nice way to demonstrate better service!

By prioritizing interactive applications like Web and Streaming, PlusNet ensures a great customer experience during peak busy hours, as shown in the graph below for the hours between 8pm and 9pm.



Conversely, by managing peer to peer file sharing during peak hours, in conjunction with encouraging consumers to do file sharing at night (off hours) when it is “free” and not counted against any monthly usage allowance, PlusNet gets better utilization of its network bandwidth by time-shifting file sharing into the off-peak hours when there is unused capacity on the network. The effects are dramatic, as shown in the following graph for the hours between 4am and 5am, and they allow PlusNet to keep its costs lower by deferring expensive upgrades to bandwidth capacity.


So, regardless of where the Network Neutrality debate ends up, one thing is certain: ISPs will be required to inform consumers about the exact nature of the service they are purchasing. ISPs can learn a valuable lesson in transparency by taking a closer look at the PlusNet model.

Ono and ISP Coziness

By: Danny McPherson -

Some of you may have seen the coverage that Ono picked up today because of its ability to optimize P2P transaction speeds by enabling more topologically optimal distribution, all while requiring no interaction with the ISP. On one hand, I’m happy about this, as the whole P4P thing, with its dependence on topology intelligence, doesn’t seem a viable long-term option. However, given where the bottlenecks are in the current system, Ono leaves some room for concern as well.

Specifically, in measurements we’ve seen, the peak-to-trough bandwidth ratio on the fixed broadband access edge, in both cable and DSL, is around 4x (although the exact value isn’t particularly relevant to this discussion). So, for example, if there were 1 Gbps of peak utilization, trough utilization would be around 250 Mbps. Because ISPs need to plan for peak loads during the capacity planning process, they’ll typically engineer capacity expansion around that 1 Gbps peak, plus some variable that accommodates incremental bandwidth utilization increases based on historical growth, as well as projected new markets, subscriber acquisition, etc.

Needless to say, much of this peak load is driven by P2P and other protocols. So, when folks come up with solutions for improving P2P transfer rates, for example, by a professed 207% as with Ono, that 1 Gbps might now be 2.1 Gbps, and the peak-trough ratio may now be 6x or 8x, versus 4x. Arguably, this exacerbates the problem where it’s most obvious, in the access network, and in particular, in the cable access network where downstream bandwidth is shared among multiple subscribers. Now, given these peak burst rates, ensuring fairness among users of the network is even more critical.
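The arithmetic above can be checked directly. This assumes, as the paragraph does, that the P2P gain lands almost entirely in the peak while trough traffic stays flat:

```python
# A 4x peak-to-trough ratio with a 1 Gbps peak implies a 250 Mbps trough.
peak_gbps = 1.0
peak_to_trough = 4
trough_gbps = peak_gbps / peak_to_trough  # 0.25 Gbps

# Per the post, a professed 207% transfer rate turns ~1 Gbps into ~2.1 Gbps.
new_peak_gbps = peak_gbps * 2.07

# With the trough flat, the ratio roughly doubles.
new_ratio = new_peak_gbps / trough_gbps  # ~8.3x, versus the original 4x
```

In practice only part of the peak is P2P, which is why the post hedges the new ratio at "6x or 8x" rather than a hard 8x.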

Other applications have improved transaction rates considerably as well. For example, most browsers open multiple connections to web servers in order to download multiple images and text in parallel, and iTunes and Google Maps open tens of connections to sidestep TCP’s inherent per-session (v. per-user) congestion control and optimize aggregate transaction rates. When your single SMTP (email) connection is contending for network resources with 300 TCP connections from your neighbor’s ‘optimized’ application, ensuring fairness among subscribers by the ISP is critical IF contention for resources exists, in particular for access loop capacity.
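The 1-versus-300-connection example works out as follows under per-connection fair sharing. This is an idealized model for illustration, not any particular ISP's scheduler:

```python
def per_user_share(link_capacity, connections_per_user):
    """Bandwidth each user receives when a link is shared fairly per
    TCP connection rather than per user: each connection gets an equal
    slice, so a user's share scales with their connection count."""
    total_connections = sum(connections_per_user.values())
    return {user: link_capacity * count / total_connections
            for user, count in connections_per_user.items()}
```

For example, `per_user_share(301.0, {"you": 1, "neighbor": 300})` leaves the single SMTP connection with 1 unit while the neighbor's 300-connection application takes the other 300, over 99% of the link, which is precisely the per-session versus per-user fairness problem described above.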

The implications of this aren’t felt just on the network, from an available bandwidth and packet forwarding quality of service perspective, but also by devices like NATs and stateful firewalls that need to track all of these connections. Applications that arguably exploit TCP’s per-session, Internet congestion-friendly characteristics to optimize the local user’s experience are becoming more and more common. More focus on fairness across users, as opposed to fairness across transport connections, is sure to be a critical issue in transport protocol design and network architecture in the coming months.

I believe that if Ono-like solutions enable topologically optimal distribution of content, that’s a good thing. However, there will always be a bottleneck, and ensuring it’s in the place that scales best and is most manageable is critical.

Vuze, TCP RSTs, and Educated Guesswork

By: Danny McPherson -

Triggered by this report (pdf) from Vuze, and Iljitsch’s ars technica article, my friend Eric Rescorla (ekr) posted on his Educated Guesswork blog this morning some bits regarding how many TCP transactions end in RSTs. I’m glad he did this (he saved me the work), as the variances in the data and the methodology employed have been frustrating me since the Vuze report was published. I’ve heard many ISPs taking issue with the report, and several doing so publicly (e.g., AT&T), and while all the appropriate disclaimers are provided by Vuze, a typical consumer might heavily weigh the results in this report when selecting ISPs, or presupposing which ISPs might employ blunt instrumentation in attempts to throttle P2P traffic.

I commend Vuze for attempting to add some actual empirical data points to the P2P throttling discussion, and for making both summarized raw data (ZIP) and their plug-in openly available. I firmly believe empirical evidence is a fine thing, assuming the stated “facts” represented by that evidence are verifiable, and specifically, that the methodology used to collect that evidence is indeed measuring the Right Thing. This is where I take issue with the methodology employed to collect this empirical evidence, as do Eric and Iljitsch, and I believe the “first results” in the report are misleading, at best.

Given that the objective of the Vuze plug-in, as stated in the report, was to “add relevant data to the traffic throttling debate, and encourage that decisions be made based on facts”, I trust they’ll be updating both their report and methodology to accommodate any misrepresentations that the data might provide.
