
DNS Updates – The cat, a now empty bag, and poison

Last week I posted a piece on the Internet Effects of the DNS bug disclosure, looking at a week’s worth of DNS traffic. Some folks had assumed that mass patching and updating would cause an uptick in DNS traffic (due to cache refreshes), but our Internet statistics showed no such uptick. I assumed it was because people weren’t patching. Fast forward a few days: the bug has been leaked, Kaminsky has confirmed the attack, and exploit code is out. The pressure to patch is rising.

Here’s a quick summary of Internet-scale measurements and observations from my reading this morning to give you an idea of global progress on this:

  • The Austrian CERT has a paper analyzing patch application, suggesting it’s been slow.
  • Our own Danny notes that ISPs have been slow to update their core DNS servers, mostly due to performance issues with the patches and the heavy testing they go through prior to massive infrastructure changes. Size requires stability, stability requires testing, and when testing turns up issues, you have to work through them.
  • GovCERT in The Netherlands has released The Kaminsky Code, a fact sheet (PDF, in Dutch) about the issue and how to handle it if you’re a provider or need to update your DNS infrastructure.
  • The fine folks at DNS-OARC have some measurements showing ephemeral port randomization on the increase, but it’s still not dominant (there’s a quick way to gauge your own resolver in the sketch after this list).
  • The folks at ONZRA have released CacheAudit to detect cache poisoning attacks.
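
If you want a rough read on your own resolver’s behavior, here’s a minimal sketch of the kind of check I mean. It’s Python with scapy, and it assumes you’ve already captured the resolver’s outbound port-53 traffic to a file (the filename is hypothetical): it tallies the source ports the resolver used and estimates their entropy.

    import math
    from collections import Counter
    from scapy.all import rdpcap, UDP, DNS

    # Hypothetical capture of the resolver's outbound traffic; adjust the path.
    packets = rdpcap("resolver-outbound.pcap")
    ports = [pkt[UDP].sport for pkt in packets
             if pkt.haslayer(UDP) and pkt.haslayer(DNS) and pkt[UDP].dport == 53]

    if not ports:
        raise SystemExit("no outbound DNS queries found in the capture")

    counts = Counter(ports)
    # Shannon entropy of the observed source ports: a resolver pinned to a
    # single port shows ~0 bits, while a well-randomized one approaches the
    # entropy of its whole ephemeral range.
    entropy = -sum((c / len(ports)) * math.log2(c / len(ports))
                   for c in counts.values())
    print(f"{len(ports)} queries, {len(counts)} distinct source ports, "
          f"~{entropy:.1f} bits of observed entropy")

A resolver stuck on one fixed source port will score near zero bits here; a patched, randomizing one should spread across a wide swath of ephemeral ports.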

I think Danny has some measurements he’ll be releasing soon showing increases in scanning and attack activity (specifically BIND version probes) since the exploit went live, so I’ll leave it be for now.
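
For reference, those probes are typically just version.bind CHAOS TXT queries, the same fingerprinting trick scanners have used for years. Here’s a minimal sketch of one using dnspython; the resolver address is a placeholder, and you should obviously only point this at servers you operate.

    import dns.message
    import dns.query
    import dns.rdataclass
    import dns.rdatatype

    # Ask the server to report its version string via the CHAOS class.
    query = dns.message.make_query("version.bind", dns.rdatatype.TXT,
                                   rdclass=dns.rdataclass.CH)
    response = dns.query.udp(query, "192.0.2.53", timeout=2)  # placeholder address
    for rrset in response.answer:
        print(rrset)  # e.g. a TXT record carrying the server's version string

Plenty of operators hide or fake this string, which is reasonable hygiene whether or not it slows anyone down.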

All that said, now that the cat is out of the bag, I’ll talk about my position when this first came out, which was that the vulnerability was something we already knew about: I still think the root cause of the issue (failure to randomize source ports, whether out of laziness or over performance concerns, and poor use of the 16 bits of entropy in the TXID) was obvious to anyone who’s ever sniffed a lot of traffic and looked closely at DNS in particular. But Dan’s attack and his methods are very neat, and some of the side effects he found are quite elegant. Nice going.
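
To put rough numbers on why those two weaknesses matter together, here’s a back-of-the-envelope sketch. The figures are assumptions, not measurements: I’m guessing at how many forged answers an attacker can land per race and at how much extra entropy randomized ports buy, just to show how the search space grows.

    import math

    TXID_BITS = 16         # entropy of the DNS transaction ID
    PORT_BITS = 14         # rough extra entropy from randomized ephemeral ports (assumed)
    FORGED_PER_RACE = 100  # spoofed answers landed before the real reply arrives (assumed)

    def races_needed(search_space, per_race=FORGED_PER_RACE, target=0.5):
        """Races until the cumulative chance of one matching forgery hits `target`."""
        p = per_race / search_space  # chance any single race succeeds
        return math.ceil(math.log(1 - target) / math.log(1 - p))

    print("TXID only:               ", races_needed(2 ** TXID_BITS))
    print("TXID + randomized ports: ", races_needed(2 ** (TXID_BITS + PORT_BITS)))

With only the 16-bit TXID in play, the attacker reaches even odds after a few hundred races; stacking port randomization on top pushes that into the millions, which is exactly what the patches are buying you.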

UPDATE (Saturday, July 26) … I should note two things that have been lost in all of this. First, folks should patch. Just because the bug isn’t new (like I said before, it was obvious to anyone sniffing traffic, and obvious long before now) doesn’t mean you shouldn’t do something about it. What Dan has done is make it easy. Exploit code is out and has been used; that should compel you to do what you can to address this issue.

Second, I should say clearly for the record that I appreciate Dan’s efforts to be responsible. He did the right thing, got the right people involved, and handled this pretty fairly. It’s a shame this blew up in his face, but it wasn’t for lack of trying. I hope his experience doesn’t cause other folks with similarly large issues to forgo responsible disclosure, participation in the vendor notification process, and working to get things addressed.