Happy Holidays: Point of Sale Malware Campaigns Targeting Credit and Debit Cards

By: cwilson -

Inside Recent Point-of-Sale Malware Campaign Activities

Curt Wilson, Dave Loftus, Matt Bing

An active Point of Sale (PoS) compromise campaign designed to steal credit and debit card data using the Dexter and Project Hook malware has been detected. Indicators of compromise are provided for mitigation and detection purposes. Prior to the publication of this Threat Intelligence document (embedded at the end of this post), members of the FS-ISAC, major credit card vendors, and law enforcement were notified.

It appears that there are at least three distinct versions of Dexter:

  1. Stardust (looks to be an older version, perhaps version 1)
  2. Millenium (note spelling)
  3. Revelation (two observed malware samples; has the capability to use FTP to exfiltrate data)

In early November 2013, ASERT researchers discovered two servers hosting Dexter and other PoS malware, including Project Hook. The Dexter campaign appears more active, especially in the Eastern Hemisphere, and is therefore the main focus here. Dexter, first documented by Seculert in December 2012, is Windows-based malware used to steal credit card data from PoS systems. The exact method of compromise is not currently known; however, PoS systems suffer from the same security challenges as any other Windows-based deployment. Network and host-based vulnerabilities (such as default or weak credentials accessible over Remote Desktop, or open wireless networks that include a PoS machine), misuse, social engineering, and physical access are likely infection vectors. Additionally, the potential brittleness and obvious criticality of PoS systems may contribute to the reportedly slow patch deployment process on PoS machines, which increases risk. Smaller businesses are likely an easier target due to weaker security. While the attackers may receive less card data from smaller retailers, infections may be more numerous and last longer due to the lack of security reporting and security staff in such environments.

Figure 1: Dexter (Purple) and Project Hook (Orange) infections in the Eastern Hemisphere


Figure 2: Dexter (Purple) and Project Hook (Orange) infections in the western hemisphere


For the full document, which includes a list of compromise indicators and information about the back-end infrastructure, please download the full public report:

Dexter and Project Hook Break the Bank


DDoS attacks targeting traditional telecom systems

By: cwilson -

DDoS affects many types of systems. Some have used the term TDoS (Telecommunications Denial of Service) to refer to DDoS or DoS attacks on telecommunications systems. This is just another application of a DDoS attack; it was mentioned by law enforcement in 2010 and has since been discussed on a variety of blogs. Typical motives range from revenge, extortion, and political or ideological causes to distraction from a larger set of financial crimes. Just as the Dirt Jumper bot has been used to create distractions by launching DDoS attacks on financial institutions and financial infrastructure while fraud is taking place (via the Zeus Trojan or other banking malware or attack techniques), DDoS aimed at telecommunications is being used to create distractions that allow other crimes to go unnoticed for longer.

Recently, ASERT came across a few advertisements for traditional DDoS services that also included phone attack services starting at $20 per day. Screenshot (translated from Russian):


The original advertisement was posted around the end of 2011. On June 27, 2012, the DDoS service provider placed another advertisement focusing only on the telephone-flooding capabilities:

Another DDoS provider has advertised this at $30 per hour:

And a third provider also advertising such attacks charges $5 per hour, $20 for 10 hours, and $40 per day (roughly translated from Russian).

When discussing a recent ideological telecommunications-based DDoS attack upon a law enforcement entity around April of 2012, the attackers revealed some details about their approach. In that case, their attack script was based around Asterisk and put to use on a compromised server.

ASERT has helped mitigate SIP flooding attacks on several occasions. Often, SIP flooding attacks occur because attackers are running brute-force password-guessing scripts that overwhelm the processing capabilities of the SIP device, but we have also seen pure flooding attacks on SIP servers. Once attackers obtain credentials for a VoIP or other PBX system, that system can become a pawn in their money-making schemes, used for DoS, vishing, or other types of attacks. Default credentials are one of the weaknesses attackers leverage to gain access to VoIP/PBX systems. Organizations should ensure that their telecommunications credentials are strong enough to resist brute-force attack, and that the ability to reach the telephone system is limited as much as possible, reducing the attack surface and encouraging the attacker to move on to the next victim.
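The brute-force pattern described above can often be spotted in PBX logs before the device is overwhelmed. Below is a minimal sketch of the counting logic only, using a made-up log format; real PBX logs (e.g. Asterisk's) look different, and the marker string and IP position are assumptions for illustration.

```python
from collections import Counter

# Hypothetical log format -- real PBX logs differ; only the counting
# logic matters here.
FAIL_MARKER = "Registration from"  # assumed marker for a failed REGISTER

def flag_bruteforcers(log_lines, threshold=10):
    """Count failed SIP REGISTER attempts per source IP and flag
    any IP meeting or exceeding the threshold."""
    failures = Counter()
    for line in log_lines:
        if FAIL_MARKER in line and "failed" in line:
            # assume the source IP is the last whitespace-separated token
            failures[line.split()[-1]] += 1
    return {ip for ip, n in failures.items() if n >= threshold}
```

In practice the flagged IPs would feed a firewall rule or fail2ban-style jail rather than just a report.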

In other instances, I have seen telephone systems connected to the Internet that were very brittle – even a simple port scan could bring them to their knees. In such cases, an attacker who could reach the controller could bring down an organization’s phone system quickly. Proactive security testing can help identify such brittle systems ahead of time, before an attacker latches onto the vulnerability.

Any system is subject to availability attacks wherever an application-layer or other processor-intensive operation exists, and the networks that supply these systems can be attacked via link saturation and state-table exhaustion. Telecommunications systems are no exception, as we have seen. Clearly, there is money to be made in the underground economy, or these services would not be advertised.

Thanks to Roland Dobbins of Arbor ASERT for operational insight.




Conficker Working Group Lessons Learned Document

By: Jose -

On the Conficker Working Group’s website, the Lessons Learned document has finally been made public. Sponsored by the US DHS, with key efforts to get it written from Rick Wesson and David Dagon, the document was prepared in large part by interviewing key people in the CWG. The purpose was to explore all of the issues we encountered in the CWG, which was an unprecedented effort. In short, the document helps illuminate challenges the information security community as a whole faces in the coming years.

As a member of the CWG, there are a number of takeaways for me. I think they illuminate a path for work in the coming years for many of us, which we will have to address collaboratively.

First, it should be clear that technology alone isn’t the solution here. One of the focuses of the CWG was to ensure that all of the AV, IDS and related companies had timely access to the samples to write signatures against. These technologies and companies represent the front line of defense for all of us, end users, enterprises, and ISPs. As should be clear from the infection data, the numbers haven’t plummeted, suggesting that gaps in addressing the problem exist. We have to explore how to get defenses and cleanup to more people more efficiently, if not preventing the infection in the first place. As someone in the CWG said, “we can’t patch our way out of these worms.”

Secondly, the world needs even better global coordination for such events, and clear authority to act for certain groups. In the case of the CWG, some organizations – such as ICANN – assumed authority for coordination when no one had a clear mandate. In all cases, everyone trod carefully, with the goal of protecting users foremost. You can see how contentious this winds up being by looking at the DNS-CERT discussions at ICANN, where issues like roles and responsibilities raise a lot of objections. Figuring out which groups will choose issues to tackle, and coordinating that globally, remains an open question.

A third – and technical – issue made visible in the experiences of the CWG is that we need tools to quickly tackle complex malware. Our tools are labor and time intensive, things that are in short supply when addressing the volume of threats we face in 2011. There’s a clear set of technical needs and accomplishments that can easily be funded here.

I think the CWG report is worth a study for these and many more reasons. I’m proud to represent Arbor as we battle the worm and protect the global Internet.

Another after-action report came from ICANN, which was instrumental in the response. That report was published in May 2010 and is largely a timeline of events. The two together are very worthwhile reading if you are involved in the operational security community.

Distributed SSH Brute Force Attacks

By: Jose -

Recently a couple of news reports have come in that suggest that someone has changed how they do SSH brute force attacks:

The change is this: instead of the hosts in the SSH botnet pounding away as fast as possible from the same IP over and over, failing and failing and failing, these attackers have moved to what they should have been doing all along: coordination. They’re only trying one or two logins from a single IP before moving on; another IP from the botnet then tries a new login. The same IP may re-appear, but only after a while. This defeats some of the simple rate-based triggers for local protection. What’s more, they’re only trying very specific SSH servers; they don’t seem to be trying everything in the book.
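Since a per-IP failure counter misses this low-and-slow pattern by design, one way to catch it is to pivot on the targeted username instead: many distinct source IPs, each making only a handful of attempts. A rough sketch of that idea follows; the thresholds are illustrative, not tuned.

```python
from collections import defaultdict

def detect_distributed_bruteforce(events, min_ips=20, max_per_ip=3):
    """events: iterable of (src_ip, username) tuples, one per failed login.
    Flag accounts hit from many distinct IPs, each trying only a few
    times -- the signature of a coordinated, rate-limit-evading botnet."""
    by_user = defaultdict(lambda: defaultdict(int))
    for ip, user in events:
        by_user[user][ip] += 1
    flagged = set()
    for user, ips in by_user.items():
        if len(ips) >= min_ips and max(ips.values()) <= max_per_ip:
            flagged.add(user)
    return flagged
```

Note that a single noisy scanner hammering one account from one IP deliberately does not trip this check; the classic rate-based trigger still handles that case.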

The answer to this is to use a blacklist, working on the theory that someone else has seen this IP scanning and trying logins and failing. Here’s a list of blacklists you can use (import them with caution, use at your own risk, etc).

These lists MAY help you prevent attempts from this botnet (and many others). I’ve worked with the person (let’s call him C) who both gathered this list and did further analysis of this distributed, patient scanning. We compared Arbor’s SSH scanner-and-bruter blacklist against his own blacklist and came up with about 12% overlap. Not great, and I wonder how much overlap there will be in the future (i.e., if we go forward one day, would the Arbor SSH blacklist have prevented a bruter from trying logins). I would suggest contributing to those blacklists to help everyone; there are a lot of SSH bots out there at this point!
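An overlap figure like the 12% above can be computed along these lines. This is only one plausible metric (intersection over the first list); the original comparison may have used a different denominator.

```python
def blacklist_overlap(list_a, list_b):
    """Fraction of list_a's unique entries that also appear in list_b.
    A figure like the ~12% overlap mentioned above presumably came
    from a set comparison of this general shape."""
    a, b = set(list_a), set(list_b)
    if not a:
        return 0.0
    return len(a & b) / len(a)
```

Deduplication matters here: blacklists accumulate repeats, and raw line counts would skew the ratio.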

Also, here’s a 2d snapshot of ATLAS’ SSH blacklist: http://atlas-public.ec2.arbor.net/public/ssh_attackers

What we’re lacking so far is a capture of the tools on the box, the bot code. I analyzed a case earlier this week where an SSH server was broken into via SSH scanning and it was just a typical IROFFER network. This looks far more substantial than that.

If you have tracks matching this AND you want to help analyze this, please be in contact.

Many thanks to C for his great analysis of the events so far. He, too, is looking for “what comes next”.


Timeline: Atrivo/Intercage Depeering, Dissolution

By: Jose -

I’m no slacker, really, I’ve just been very busy with a lot of things behind the scenes. One of the things that’s consumed my time has been the Atrivo/Intercage saga. Here’s a timeline I assembled for myself recently. It’s based on the NANOG mailing list, some private lists, the CIDR Report tools, BGP analysis, and some private emails, as well as this blog post.

  • Pre-history
    • Oodles of badness, much of it with a line through Intercage
  • 28 Aug, 2008
    • HostExploit report
  • 28 Aug, 2008
    • WaPo Krebs piece
  • 30 Aug, 2008
    • GBLX de-peers
  • 12 Sep, 2008
    • No more upstreams
    • Atrivo CIDRs appear elsewhere (Cernel, Pilosoft, etc)
    • WVFiber provides connectivity
  • 20 Sep 2008
    • Pacific Internet Exchange gets involved …
  • 21 Sep 2008
    • Atrivo again off the air
  • 22 Sep 2008
    • Atrivo back online, UnitedLayer provides upstream
  • 25 Sep 2008
    • Atrivo takes itself offline, says it will be out of business with no customers

Corrections welcome; this is roughly accurate, I think.

So, some thoughts on this whole thing: no one is behind bars for what appears to have been blatantly criminal software that was hosted on this network; no one knows who was behind the operation’s malicious “customers”; no one has investigated this, it seems. And now the badness is popping up elsewhere.

We’ll have to continue to monitor this one and map the badness. We now know more rogue networks that are welcoming the hosting, and so this cycle will start again.

This is not a long-term victory.

Drive By Downloads: Links and Insights

By: Jose -

I spend a lot of my time looking at malicious code and where it gets loaded, but I don’t get to spend much time digging into big, widespread attacks or specialized exploits. However, here’s a few links from my reading this morning that help keep me informed since I can’t spend all of my time digging too deeply into every event.

On DDoS Attack Activity

By: Danny McPherson -

We’ve been doing analysis on the DDoS attack and network traffic distribution data some of our Peakflow SP customers are providing and I figured I’d share a bit of a teaser. The data is shared with Arbor via an optional module within Peakflow SP, so if you’re wondering how it’s gathered have a look here.

We’ve got 26 SP deployments participating at the moment (and still growing) and have been archiving attack and traffic data daily for about four months now. Some stats on the data gathered thus far:

  • the data is representative of only inter-provider traffic and attack activity (customer and internal attack activity explicitly excluded)
  • about 0.5 Tbps in aggregate
  • about 500 routers, 30,000 unique interfaces
  • ~126 day collection period
  • ramp from 12 to 26 participants during that period
  • 120,231 attacks reported (954 attacks/day average)

A daily high of 1,991 attacks was observed on 11/8/2006. There are also some discernible drops in aggregate attack activity around Christmas and New Year’s; perhaps the miscreants were distracted by the holidays?
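As a quick check, the stated daily average follows directly from the totals above:

```python
# Figures from the stats listed above
total_attacks = 120_231
collection_days = 126

avg_per_day = total_attacks / collection_days
print(round(avg_per_day))  # 954, matching the stated 954 attacks/day
```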

Plot of Aggregate Attacks Per Day

Lots of interesting information can be gleaned from the data. For example, TCP attacks lead the pack at the moment, followed by ICMP and UDP-based attacks. Of the TCP-based attacks, SYN floods are the most prominent attacks, followed closely by Null and “Christmas tree” attacks.
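The SYN, Null, and "Christmas tree" categories can be distinguished directly from the TCP flags byte of an observed packet. A rough sketch of that classification follows; real detection aggregates over flows and rates rather than labeling single packets, and the category names here are informal.

```python
# TCP control flag bits, per RFC 793
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def classify_tcp_flags(flags):
    """Label a packet's TCP flags byte with the flood types named above."""
    if flags == 0:
        return "null"                              # no flags set at all
    if flags & (FIN | PSH | URG) == (FIN | PSH | URG):
        return "xmas"                              # FIN+PSH+URG all lit up
    if flags & SYN and not flags & ACK:
        return "syn"                               # bare SYN, as in a SYN flood
    return "other"
```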

Attack and routing data is shared via XML, a typical attack fragment looks like this:

Attack 122002

This attack was one of the larger attacks observed over the current period, with a maximum packet rate of ~6.2M packets per second. It appears to have been source-IP and port spoofed (hence the 0-65535 fields versus something more specific). The SP reporting the attack above observed it ingressing the network via 19 different routers and 52 different interfaces – pretty well distributed. It was a Null TCP attack (no flags) targeting a, umm, “popular” IRC server (TCP/6667), whose IP has been anonymized here. Not surprisingly, given the scale and distribution of this attack, several of the other participating SPs reported some of the attack flows via their networks as well.

Many other attack and traffic attributes are available, from packet sizes, ports, and protocols to detailed Network- and Transport-Layer attack vectors for each reported attack, including sources and targets. Both the specific attacks and the aggregate data, correlated over time, will provide some interesting perspective.

If you’ve got thoughts, questions, or comments, ping Craig Labovitz or me. Otherwise, stay tuned, as you’ll be seeing a great deal more analysis of this and related data in the near future…

Snipers from Southeast Asia

By: Jose -

In 2006 we saw an increase in the number of attacks targeting Windows file formats, specifically MS Office formats. We’ve also seen some attack WinAmp and other media players, and even a few targeting AV software and its file formats, but MS Office appears to be the main target of interest. Indications are that hundreds of such flaws are lurking in Office and are being slowly revealed by attackers doing their own research. Others have blogged about this, such as on the Symantec Security Response Weblog. This is going to continue for a while, so start looking at MS Office security measures sooner rather than later.

Suffice it to say, Office documents represent a great breeding ground for such attacks. They provide rich functionality, enabling linked content and actions inside a normal document, and on the human side they represent a great social engineering vector. This is, by all accounts, going on. 2006 saw these Office flaws sometimes paired with very targeted, very specific and high power attacks, sometimes to great effect. In these cases the adversary appears to profile the organization and carefully craft a few messages (and documents) to select individuals. This provides the attacker with three things:

  1. The right victim if you’re interested in highly sensitive information
  2. Limited detection capabilities and a limited chance to draw attention to your attack, after all you’re working with a 0-day here!
  3. Finally, with properly crafted messages, specific to the individual, you have a higher chance of them viewing it (and launching the exploit)

The bad guys know this, and they’re slowly revealing their Microsoft Office bugs this way, choosing very specific targets and sending only a handful of messages. This makes detection slower than normal as people try to understand the attack. The document has often been carefully crafted to drop a small executable that downloads a new payload. In most cases these attackers aren’t using stock malware, but they are using some techniques and approaches that aren’t unique (though not too common either). All of this is exactly what you would expect from a 0-day, and all of it is designed to avoid signature-based detection and to deny you the chance to alert your security provider.

This sort of attack will continue. It will continue to come from determined adversaries who are talented and doing their own vulnerability research, and it will continue to be very targeted. There’s no shortage of bugs, no shortage of reasons to commit these attacks (corporate espionage, state espionage, etc.), and clearly no shortage of talent at operationalizing them.

The latest attack, as described by the MSRC blog, is a continuation of this MO: a new MS Word vulnerability, dropped malware, and only a handful of targets. This attack drops a downloader, which grabs another file. The downloader in at least one case modifies the registry to keep itself available.

So far the good news about these attacks is that millions of users may not really be threatened. The actual vulnerability details are held pretty quietly right now, and operationalizing them appears to be difficult for many garden-variety attackers (as opposed to the setSlice() or WMF setAbortProc() attacks). However, the timing is something else we’ve been seeing for a while – released to avoid being patched immediately, due to MS’s patch release cycle. No word on when this one will be patched.

Our new ATF policy for Download.Sniper detects it. The policy looks for the traffic generated by the file dropped by the malicious documents as they hit servers either in China or Korea (depending on the document). We’ll continue to watch this one and see how we can further improve this detection, of course. AV detection is limited but growing. Currently the malicious files grabbed by the downloader are still on the website.

Multi-stage Phishing

By: Jose -

I got an interesting phish this morning for Amazon. What makes it interesting is that it uses not one but two different redirectors, one from Yahoo! and one from Google, and then what appears to be a bot in Chinese IP space before it finally lands on the phishing site. The URL in the mail and a simple representation are shown below.

People do this to avoid simple URL filters that don’t look beyond the first host. In that case, they’ll see “rds.yahoo.com”. Some of the smarter ones may see that the Yahoo! site is redirecting to a Google site and still pass it; Yahoo! tends to use a lot of internal redirections for their web-content via RSS, so it makes sense that some people have started to look at where the Yahoo! redirect goes. In this case it goes somewhere benign (at first), Google, so a simple checker would allow it. The third stage of the phish lands on a bot in China, which itself has a simple meta refresh to the ultimate destination.
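Following such a chain programmatically means unwrapping each hop: pulling the destination out of a redirector's query string, then spotting the meta refresh on the final bot. A sketch of both steps is below; the `url` parameter name is an assumption for illustration, since real redirectors vary in how they encode the destination.

```python
import re
from urllib.parse import urlparse, parse_qs

def embedded_url(redirector_url, param="url"):
    """Pull the destination out of a redirector link such as
    http://example.com/go?url=http://next-hop/ (parameter name assumed)."""
    qs = parse_qs(urlparse(redirector_url).query)
    return qs.get(param, [None])[0]

# Matches <meta http-equiv="refresh" content="...; url=TARGET">
META_REFRESH = re.compile(
    r"""<meta[^>]+http-equiv=["']?refresh["']?[^>]+url=([^"'>\s]+)""",
    re.IGNORECASE)

def meta_refresh_target(html):
    """Extract the target of a meta-refresh tag, the technique the
    third (bot) stage described above used for its final hop."""
    m = META_REFRESH.search(html)
    return m.group(1) if m else None
```

A full crawler would loop these two extractors (plus HTTP 3xx handling) until no further hop is found, recording every intermediate host along the way.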

Not terribly scary, but it does mean that if you analyze URLs for a living, you have to really follow the whole path. Scammers and other folks are abusing these open redirectors to their benefit, and your tools need to adapt. More importantly, people hosting open redirectors need to respond, as well. Remember, we went through this sort of thing before: whitelists are preferred over blacklists, and check all input for conformance to your standard. No sense being a party to a problem when you can do something about it.

I have a whole host of phishing mails that showcase various techniques used by scammers in a Phishing Corpus set I maintain. You can study these for any good, upstanding purpose, such as writing a better phish detector.

Tracking Moving Objects

By: Jose -

A few images from the past few workdays of my life, and some explanation:

vulnerability tag cloud

To the left is a tag cloud associated with vulnerabilities. These are pouring into an ASERT-internal application we use to track activity in news and vuln reports, as well as malware reports from third parties. We have tagging built into it, and it even auto-tags things based on matching tags and words that it knows. Very useful. The tag cloud here easily demonstrates that informix, MS, exploits, and remote attacks dominate the picture this morning.
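The auto-tagging such a tracker does can be as simple as keyword matching. A toy version of that idea follows; the tag-to-keyword map is hypothetical, not the internal application's actual vocabulary.

```python
def auto_tag(text, tag_keywords):
    """Assign every tag whose keywords appear in the text.
    tag_keywords maps a tag name to the words that imply it."""
    words = set(text.lower().split())
    return {tag for tag, kws in tag_keywords.items()
            if words & {k.lower() for k in kws}}
```

A production version would also stem words and weight matches, but even naive matching keeps a tag cloud like the one above populated with no manual effort.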

We spend some of our time analyzing malware so we can determine threats and such. We found that we couldn’t rely on external reports for enough detail. So, I used some of our internal tools over the weekend to analyze the new Mocbot that employed the MS06-040 vulnerability. Below, you can find some screenshots:

Here’s a screenshot of an ASERT-internal application that conducts active DNS tracking; I’m specifically observing some of the bot’s IRC servers and associated IPs. We can see that the authors are moving them around, and we have a good idea of when:

DNS Track of a Mocbot IRC Server
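Detecting that kind of movement from periodic resolution snapshots is mostly a diffing exercise. Here is a sketch of the comparison step only; it assumes the snapshots have already been collected (actual resolution would use a DNS library, not shown).

```python
def ip_moves(snapshots):
    """snapshots: list of (timestamp, frozenset_of_ips) for one hostname,
    in time order. Return the timestamps at which the resolved IP set
    changed -- i.e. the moments the operators moved the server."""
    moves = []
    prev = None
    for ts, ips in snapshots:
        if prev is not None and ips != prev:
            moves.append(ts)
        prev = ips
    return moves
```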

What about Mocbot’s internals? I used IDA Pro to discover some things for an ATF fingerprint that we published later that day, just in time for the world to start its work week. The bot’s not that interesting when you get down to it, aside from employing a new exploit vector:

Mocbot in IDA Pro

Mocbot modifies several Windows registry keys. Here’s a few, basically attempting to disable the firewall, networking, and AV. Once it’s in, it doesn’t want to go, and it doesn’t want anyone else there, either:

Mocbot in RegEdit32

These are just some of the tools we use – both internal and private as well as third-party tools – that help us stay abreast of the security threat landscape. There’s a lot of good research and work being done, and a ton o’ threats. Managing (and automating) that information flow has become paramount, primarily because we just don’t have the time to manually inspect everything, and part of being efficient is to develop tools to assist those efforts. I think we’re getting there.
