US FTC Shutdown of 3FN Effect on Spam: None?

By: Jose -

I’ve been monitoring spam using ATLAS for a while now, keeping track of spamming botnets and the like via a spam trap. The shutdown of 3FN by the US FTC supposedly had an effect on spam levels; we saw that kind of effect last fall with the 75% drop in spam after McColo was de-peered. Are we seeing it again with 3FN?

I’m not so sure.

Below is a graphic showing the top 50 countries sending messages to our spam traps, uniqued by sending IP and subject (a metric designed to measure how big a spam botnet is, as opposed to raw mail volume). Note that the US FTC announcement was June 4, 2009.

[Figure: top 50 countries by unique (sending IP, subject) pairs seen at our spam traps, around the 3FN shutdown]

On June 4 we see a big spike in spam activity (more bots sending more unique messages). This isn’t what we expected.
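
For the curious, here’s a minimal sketch of the uniquing described above. This is an illustrative reconstruction in Python, not the actual ATLAS pipeline, and the sample data is invented.

from collections import defaultdict

def unique_senders_by_country(messages):
    """messages: iterable of (country, source_ip, subject) tuples."""
    seen = defaultdict(set)
    for country, ip, subject in messages:
        # Count each (IP, subject) pair once, so a single bot spewing the
        # same message repeatedly doesn't inflate the count.
        seen[country].add((ip, subject))
    return {country: len(pairs) for country, pairs in seen.items()}

sample = [
    ("US", "192.0.2.1", "Buy now"),
    ("US", "192.0.2.1", "Buy now"),   # duplicate, counted once
    ("US", "192.0.2.2", "Buy now"),
]
print(unique_senders_by_country(sample))  # {'US': 2}

Counting distinct pairs approximates the number of active bots rather than raw volume, which is why the spike reads as more bots sending, not just more mail.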

I’m still investigating, but I don’t think we’ll see the kind of drops we saw last fall this time around.

Torrent Sites and The Pirate Bay: DDoS Afoot?

By: Jose -

Around the time of this week’s convictions of the folks behind The Pirate Bay, a well-known BitTorrent tracker site (also see The Pirate Bay Trial: The Official Verdict – Guilty on TorrentFreak), we started seeing reports of DDoS attacks on other torrent tracker sites. Never one to miss an opportunity to look for massive DDoS attacks against important sites, I went looking in our archives to see if this was indeed the case.

It’s interesting to note that TPB may indeed be a key to illicit torrents and pirated material (although if it’s shut down, something will surely replace it). See P2P researchers fear BitTorrent meltdown from the Iconoclast website. According to the piece, TPB is so heavily connected (over half of the tracked torrents involve it) that disrupting it may have an impact on all BitTorrent traffic for a while:

Raynor told TorrentFreak that if The Pirate Bay goes down, many of the other trackers might collapse as well. “If The Pirate Bay goes down the load will automatically shift to others. This is because most of the Pirate Bay swarms also include other trackers. When Pirate Bay goes down it would overload others until they fall also. Meaning even more stress and further casualties. This is likely to end in a BitTorrent meltdown.”

It would make sense for someone, perhaps a vigilante or just a “griefer,” to try to shut down pirate activities by going after major torrent tracking sites. This bears some analysis.

All in all, except for free-torrents.org getting attacked by a Black Energy botnet run out of China (using a C&C at hack-off.ru), we can’t corroborate this spate of attacks. In fact, free-torrents.org has been getting pounded by this botnet since mid-March 2009. But none of the other major sites appear to be receiving such packet love.

As such, while this may be happening, we’re not seeing it at the massive scale we would expect.

US Government Moves Fast on DNSsec

By: Jose -

I honestly didn’t think I would live to see it, and this interview with Mockapetris about DNSsec adoption didn’t help.

$ dig +dnssec president.gov

; <<>> DiG 9.3.5-P1 <<>> +dnssec president.gov
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 33216
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 6, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;president.gov.                 IN      A

;; AUTHORITY SECTION:
gov.                    10800   IN      SOA     A.GOV.ZONEEDIT.COM. govcontact.ZONEEDIT.COM. 1226785404 3600 900 1814400 86400
gov.                    10800   IN      RRSIG   SOA 5 1 259200 20081215220741 20081115220741 45162 gov.
    UREQjZUJ9/40y/kZytGcBX0jonfNf/yiu0XKDHlVWeKjLkOFqqwY9cf2 gON/ThzPpWRF7aJyo785PQDhYttg5cjDfSF0GKKhsnNcZjYC3u1nluH6
    noQVYGsQ7MpZrNiQnbzg83I4a8z5DIdj1rksaQddAMmR2kIsB0Jh3Duj zq6tfmCcqyQVxzXPUO9rhq87yuYM9gEttm+zlyqBO+TZrykd5u0OMIXK
    YNHchhYX/KYebwfgUq0jo4AZRyVx8fVNu0WXsedjLMtByokwI26u5TpU DsfDUYabOXWjXn40Dg5Se9msQUzKBXgFZEHTCBQ8N9JN9Z9gM+pY5JO5 7mNDvg==
gov.                    10800   IN      NSEC    2010CENSUSJOBS.gov. NS SOA RRSIG NSEC DNSKEY
gov.                    10800   IN      RRSIG   NSEC 5 1 86400 20081215220741 20081115220741 45162 gov.
    s5iu9X5tFvRZCZqkayZbbAXQfSi3Kjj8sh4qyFdDnIqXKXLB/fFRH2rw 2E3QDFLE6mLRbfvwJzJ16xwrtUuVliUK0H0ktP3jU03zcYcK8nRjtsn7
    jPTmD+qcaXc1lbGzdi2srTKrPAqbVdetBQgQ9rDV+ZPMzcUZ5LUqcOVe tqgKGiKbB2xGEZySK0R+dyAmPkhhlcyqpfJtYcyd+nTP2XJ5EqRM9S14
    8A1vb0zZgJwrBaJNEOZL9ZHSyWLRiCqlegu4qyDnVWBC2uKB8Nkwdl9a RR7IgZ4D4K2vgbqprk7U7G+xSp8CMVfK4wAgTVM7MG23U0R3PndrS217 rQa2KQ==
PRESERVEAMERICA.gov.    10800   IN      NSEC    PRESIDENTIALSERVICEAWARDS.gov. NS RRSIG NSEC
PRESERVEAMERICA.gov.    10800   IN      RRSIG   NSEC 5 2 86400 20081215220741 20081115220741 45162 gov.
    U7zNw6u1syRBTv9uuU2mFEBANbCkJuVNprtU/K0rn3NgCmlt5MNQPKmV oobpjqfoolqPIPeU5TgM3L+CokDvhSXzuM8pmwQwlqD0l/oH3JE5K3zT
    kLsevS2piYeotJAPE4mWl4wZgAkSwuHluwaOqVhjGL6nU01ide5q45HQ lDgjpcTe4VHh38szXOoBNMCDTD6+nvpguniULV6gWj6Cat2cp6vetZc8
    xnxhUXcCBgZbU5Qx876bDy3m1KIoc2A7kgWCDuEuvurvQjXR8UCijigf pIAtVGrXZMOg+TNOk+5eIY/B4oOOY1bdAZHwvVD223BOO8QLdyHycT8S oh8oJA==

;; Query time: 106 msec
;; SERVER: 10.1.2.41#53(10.1.2.41)
;; WHEN: Mon Nov 17 15:40:42 2008
;; MSG SIZE  rcvd: 1088

I know that the Executive Office of the President, Office of Management and Budget Memo M-08-23, dated August 22, 2008, stated:

The Federal Government will deploy DNSSEC to the top level .gov domain by January 2009. The top level .gov domain includes the registrar, registry, and DNS server operations. This policy requires that the top level .gov domain will be DNSSEC signed and processes to enable secure delegated sub-domains will be developed. Signing the top level .gov domain is a critical procedure necessary for broad deployment of DNSSEC, increases the utility of DNSSEC, and simplifies lower level deployment by agencies.

But I did not expect to have “dig +dnssec” showing me that the .gov zone was signed and working before, well, December 2009.
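
If you want to check this sort of thing programmatically rather than eyeballing dig output, here’s a minimal sketch assuming the dnspython library. The zone_is_signed helper is a hypothetical name of my own; it only tests for published DNSKEY records and performs no actual signature validation.

import dns.resolver  # assumes the dnspython package

def zone_is_signed(zone: str) -> bool:
    """Crude signal that a zone is DNSSEC-signed: it publishes DNSKEYs.
    This does NOT validate signatures or the chain of trust."""
    try:
        answer = dns.resolver.resolve(zone, "DNSKEY")
        return len(answer) > 0
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

print(zone_is_signed("gov."))  # True, if the zone is signed as shown above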

Hats off to the folks involved in getting this moving ahead very swiftly.

You can follow this project’s progress at the FISMA website.

Timeline: Atrivo/Intercage Depeering, Dissolution

By: Jose -

I’m no slacker, really; I’ve just been very busy with a lot of things behind the scenes. One of the things that’s consumed my time has been the Atrivo/Intercage saga. Here’s a timeline I assembled for myself recently. It’s based on the NANOG mailing list, some private lists, the CIDR Report tools, BGP analysis, and some private emails, as well as this blog post.

  • Pre-history
    • Oodles of badness, much of it with a line through Intercage
  • 28 Aug, 2008
    • HostExploit report
  • 28 Aug, 2008
    • WaPo Krebs piece
  • 30 Aug, 2008
    • GBLX de-peers
  • 12 Sep, 2008
    • No more upstreams
    • Atrivo CIDRs appear elsewhere (Cernel, Pilosoft, etc.)
    • WVFiber provides connectivity
  • 20 Sep 2008
    • Pacific Internet Exchange gets involved …
  • 21 Sep 2008
    • Atrivo again off the air
  • 22 Sep 2008
    • Atrivo back online, UnitedLayer provides upstream
  • 25 Sep 2008
    • Atrivo takes itself offline, says it will be out of business with no customers

Corrections welcome; I think this is roughly accurate.

So, some thoughts on this whole thing: no one is behind bars for what appears to have been blatantly criminal software that was hosted on this network; no one knows who was behind the operation’s malicious “customers”; no one has investigated this, it seems. And now the badness is popping up elsewhere.

We’ll have to continue to monitor this one and map the badness. We now know of more rogue networks willing to provide the hosting, and so this cycle will start again.

This is not a long-term victory.

$10M in 10 Minutes? – Security Implications of UAL Bankruptcy Snafu

By: Danny McPherson -

Unless you’re living under a rock, it’s likely you’ve already shaken your head at least once at the impact an archived, six-year-old newspaper article had on United Airlines’ (UAUA) stock today. If you are living under a rock, then read this.

For anyone even remotely security-minded, reading stories like this brings so many attack vectors to mind that one could ramble for hours, but since, coincidentally, I’m about to jump on a UAL flight to LHR in a couple of minutes, I’ll just share a couple of thoughts.

Given the near-immediate reaction to “leaks” in today’s Internet age, to say nothing of misinformation and plain “old information”, one might surmise that an attacker could easily compromise a few targeted assets – not at a financial institution, or government, or exchange, but at a media outlet – and cause significant, potentially cascading financial impact. You could certainly short a stock with such a ploy, or simply buy low and sell high (given Nasdaq’s “the rest of the trades will stand” response) – with the trading volumes we’ve seen here, a couple million dollars might easily fly under the radar!

I asked a journalist-type friend of mine on IM a moment ago what he thought about this and he replied “That’d never happen here!” – then he quickly recanted and said “Well, at least, let’s hope it never happens.”

With yet another express lane to financial gain, I suspect this won’t be the last you’ll hear of such an attack vector…

Hello from KL

By: Jose -


Greetings from Kuala Lumpur, Malaysia. I’m here for the IMPACT Alliance summit. We’ve spent the past few days discussing strategy and approaches for combating global Internet threats, including cyber terrorism, fraud, and the like, through coordinated international cooperation. There’s a lot of ground to break and a lot of challenges ahead, but everyone sees the need and is committed to the cause at hand. I gave our view of the political DDoS landscape in a presentation yesterday. Along the way we’ve been strengthening our contacts and growing our cooperation around the world.

Net Neutrality’s Unintended Consequence

By: Danny McPherson -

Most of the discussion surrounding net neutrality to date seems to revolve around fixed broadband consumers complaining that ISPs are unjustly mucking with their traffic, discriminating between different traffic types, and providing prioritization without their consent – and that the providers shouldn’t be. Rob took a bit of a consumer-centric view to practically addressing net neutrality in his blog post here; I’m going to take more of an ISP-centric view.

ISP actions have centered around attempts to ensure fairness across subscribers (e.g., where contention may exist because of asymmetries in available bandwidth, or a shared medium such as cable environments), and to optimize network resources, in particular bandwidth utilization at various points in the network. Generically speaking, these network resources exist in one of three primary locations:

  1. Access network capacity: This relates to the bandwidth available between the subscriber’s cable modem and the MSO’s Cable Modem Termination System (CMTS) in cable-based Internet services, or between the subscriber’s DSL modem and the Digital Subscriber Line Access Multiplexer (DSLAM) in DSL networks. One of the primary distinctions between these two access network architectures is that, because cable employs the existing shared HFC cable plant, cable access is “shared” across multiple subscribers. Downstream traffic (network -> subscriber) is multiplexed onto a TV channel that is shared between a group of cable modem subscribers. DSL, on the other hand, is “dedicated” in the sense that each digital subscriber loop is allocated a discrete channel from a set of available channels. The upshot of the cable access model is that during periods of heavy utilization, subscribers within a given group may experience performance degradation because of high-bandwidth users in the same group. With both cable and DSL, available bandwidth has traditionally been much narrower upstream (subscriber -> network) than downstream, hence asymmetric DSL (ADSL) services and the similar asymmetry in cable offerings. The primary reason for this was so that, unlike in a symmetric service, higher frequencies could be allocated downstream, providing subscribers nearly 2x download speeds. This, of course, makes assumptions about what applications subscribers are using (it assumes larger download demands versus upstream P2P traffic, video uploads, etc.).
  2. Internal network capacity: Internal capacity includes traffic from the DSLAM or CMTS to on-net access servers, local caches, or content distribution infrastructure (often including email and other ISP-offered services), as well as traffic carried to the points at which the ISP interconnects with other ISPs and content providers. Also, in both cable and DSL architectures, subscribers can’t communicate directly, so traffic must be switched locally by the DSLAM or CMTS, or routed between devices further upstream within the broadband ISP’s network. P2P and other local user-to-user traffic would often be included here.
  3. Transit “Internet” capacity: Because ISPs that offer broadband services typically focus on access markets, they’re often considered eyeball-heavy. That is, they’ve got lots of users, but little content. What this means is that most of their users access content that doesn’t reside on their network. In order to provide connectivity and access to this content, they either engage in bilateral interconnection agreements with content companies and other types of ISPs, or acquire transit services from ISPs that provide this connectivity. Lower-priced transit services from ISPs may range from ~$100-$300/Mbps or more monthly recurring, just for IP access, in addition to transport infrastructure costs, which include the local loop and often long-haul backbone circuits upon which IP traffic is carried. And they’ve got all the associated capital investment, of course. Scaling this capacity for millions of users is a significant cost; a back-of-the-envelope sketch follows this list. Recall that unlike a utility, your Internet transactions could be with devices just down the street, or across the globe. Infrastructure has to be provisioned to accommodate these any-to-any connectivity models.
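
To make the transit-cost point concrete, here’s the back-of-the-envelope sketch promised in item 3. The subscriber count, per-subscriber peak demand, and the $150/Mbps price are invented for illustration, not drawn from any actual ISP.

# All numbers below are illustrative assumptions.
SUBSCRIBERS = 100_000
PEAK_KBPS_PER_SUB = 100      # assumed average per-subscriber demand at peak
TRANSIT_USD_PER_MBPS = 150   # assumed monthly price, mid-range of the
                             # ~$100-$300/Mbps figure cited above

peak_mbps = SUBSCRIBERS * PEAK_KBPS_PER_SUB / 1_000
monthly_transit = peak_mbps * TRANSIT_USD_PER_MBPS

print(f"peak demand: {peak_mbps:,.0f} Mbps")             # 10,000 Mbps
print(f"monthly transit bill: ${monthly_transit:,.0f}")  # $1,500,000

And that’s IP transit alone, before the transport infrastructure, local loop, long-haul circuits, and capital investment mentioned above.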

Broadband access has obviously not been what most folks would call a high-margin market, with profits in the 5-10% range (after removing less-fortunate outliers). Given that most of these ISPs are publicly traded companies with shareholder expectations to meet, something has to give. The two things broadband ISPs have traditionally been most concerned with are subscriber growth and minimizing subscriber churn. Growing average revenue per user (ARPU), and ideally the associated profit margins, has been a key driver for services convergence in the form of triple play (voice, video, and data). However, with the higher-revenue services come higher consumer expectations, in particular for service availability, performance, and security. Furthermore, minimizing call center volume is key to optimizing profitability: with traditional data services yielding only 5-10% profit on low-ARPU services, a single customer help desk call can often snuff out profitability for three years or more!

ISPs simply can’t continue growing subscriber numbers and access speeds without continuing to invest heavily in Internet transit and interconnection bandwidth, internal network capacity, and access infrastructure. Looking for ways to offset the operational costs associated with these various investments is critical. Some ways of doing this include initiatives such as P4P, or investing in infrastructure that time-shifts lower-priority traffic into network utilization “troughs,” giving more real-time protocols such as VoIP the network performance characteristics they demand. Peak-to-trough access network utilization ratios today are often 4x or more, and because networks have to be engineered to perform without capacity problems during peak utilization periods, capacity planning for ISPs becomes a painfully expensive ordeal. This is why traffic management is a critical function for ISPs; liken it to the PSTN’s Mother’s Day problem.
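
Here’s a small sketch of why that peak-to-trough ratio hurts, and what time-shifting lower-priority traffic can buy. The demand curve and the share of deferrable traffic are invented numbers for illustration.

# Hourly demand over a day (Gbps) and the share of it that tolerates delay.
# Both are invented numbers for illustration.
hourly_demand_gbps = [2, 2, 2, 2, 3, 4, 5, 6, 6, 6, 6, 7,
                      7, 7, 7, 8, 8, 8, 8, 7, 6, 5, 4, 3]
deferrable_share = 0.3  # assumed fraction of bulk/P2P traffic that can wait

peak = max(hourly_demand_gbps)
average = sum(hourly_demand_gbps) / len(hourly_demand_gbps)

# Networks must be provisioned for the peak. If deferrable traffic is
# shifted into troughs, required capacity drops toward the non-deferrable
# peak, bounded below by the average load (the traffic still has to move).
shifted_peak = max(peak * (1 - deferrable_share), average)

print(f"peak-to-trough ratio: {peak / min(hourly_demand_gbps):.1f}x")  # 4.0x
print(f"capacity needed: {peak} Gbps; with time-shifting: {shifted_peak:.1f} Gbps")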

I don’t believe exploiting protocol behaviors by doing arguably unethical things like injecting spoofed, spurious TCP RSTs is the right answer, as that’s just going to piss subscribers off, increase call center volumes and subscriber churn, further compromise transparency, potentially break applications, and result in non-deterministic network behavior. Unless there are incentive structures in place to encourage users to change behaviors (e.g., how their P2P protocols utilize network bandwidth), nothing is going to change.

Which, finally, leads me to my point (yes, there is one). Consumer advocates say ISPs shouldn’t discriminate, shouldn’t attempt to time-shift traffic, and basically shouldn’t gate subscriber traffic in any way. So, under the auspices of the net neutrality proponents, if you’re an ISP, pretty much all you can do is one thing: change your billing models to change subscriber behaviors and generate more revenue to fund further build-out of your infrastructure. That is, bill customers differently. Be it strict metered (usage-based) services, or more sophisticated usage-based services that introduce peak versus off-peak, on-net versus off-net, distance-sensitive, or domestic versus international billing! It certainly costs more for a US broadband subscriber to obtain content or engage in a VoIP conversation with someone in London, or Singapore, or Cairo, than with someone across the street. Who bears these costs, costs which have traditionally been “transparent” to subscribers? Think about it: this all-you-can-eat model can’t continue, and users don’t change behavior without being incentivized to do so.
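
As a thought experiment, here’s a minimal sketch of what one of those more sophisticated usage-based models might look like. The rates, the on-net discount, and the peak/off-peak split are all invented for illustration; this is nobody’s actual tariff.

# Illustrative metered billing: peak vs. off-peak, on-net vs. off-net.
# Every rate below is an invented example.
RATES_USD_PER_GB = {
    ("peak", "off_net"): 0.50,
    ("peak", "on_net"): 0.25,
    ("off_peak", "off_net"): 0.10,
    ("off_peak", "on_net"): 0.05,
}

def monthly_bill(usage_gb, base_fee_usd=20.0):
    """usage_gb: dict of (period, locality) -> GB transferred that month."""
    return base_fee_usd + sum(
        RATES_USD_PER_GB[key] * gb for key, gb in usage_gb.items()
    )

usage = {("peak", "off_net"): 30, ("off_peak", "on_net"): 100}
print(f"${monthly_bill(usage):.2f}")  # $40.00: the trough traffic is nearly free

The point of such a model isn’t the specific rates; it’s that subscribers finally see a price signal for when and where their bits travel, which is exactly the incentive structure argued for above.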

Why do the mobile folks have such high ARPU and better profitability per subscriber? Because of the slew of billing models they have that make users accountable for the network resources they consume. Ever use data roaming on your mobile handset in another country and discover at the end of the month how incredibly expensive it can be? There’s a good reason for this.

Interestingly, until recently technology wasn’t at a place where these types of IP-based metered billing models were a viable option. Today they are, and ISPs are being given little alternative. And trust me, your Internet access bill is NOT going to get cheaper as a result. We’re starting to see basic metered models emerge already.

Net neutrality advocates, be careful what you wish for…

5 Ways to Molest Internet Users

By: Danny McPherson -

A good bit of the attention garnered by DMK’s ToorCon presentation focused on how ISPs are employing Provider-In-The-Middle Attacks (PITMAs) to collect ad-related revenue from their customers, and how security “of the web” ends up being fully gated by the security of the ad server folks. While I completely agree with this, I would emphasize (as DMK did subtly note) that, even for the attacks DMK outlined, you do NOT have to be in the ISP/packet data path at all to molest Internet users, just in the DNS “control path”.

While certainly not meant to be an exhaustive list, here are five techniques that various folks in the DNS control path can employ to perform similar or adjacent, questionably ethical activities.

  • Domain tasting: Exploit the add grace period (AGP) and perform domain tasting. Register a domain you think is clever or closely associated with something useful (e.g., googgle.com); you’ve got 5 days to see how many hits you get on a site associated with the newly registered domain. If you garner enough activity to cover the domain’s registration fee, keep the domain. If not, return it under the AGP policy and find some new ones, or perhaps consult a more-clever colleague for other recommendations. Expand upon this with domain kiting.
  • Domain name front running: Do you run a whois server? Are you a DNS registrar? If so, engage in domain name front running. Field all the queries checking for availability of new domains, and if they’re not registered, take ’em: register them yourself! Then you can park spam-like crap there, or force those unsuspecting, clueless Internet users who used your site to check availability to register the domains with you or not at all.
  • Domain name front running enabled by non-existent domain (NXDOMAIN) data: Determine what the most common typos or queried domain names are. Register them, park’m somewhere, and collect click revenue. If you’re anywhere in the DNS query resolution path, from the local resolver to the root, you’re in the money! And you’ve even got a good source of historical data for forecasting hit rates, no need for that unnecessary domain tasting business, it’s just overhead. Got integrity issues with this? There are folks that will buy the NXDOMAIN data from you if you prefer the hands-off approach.
  • Become a DNS services provider and hijack customer subdomains: Cash in on customer subdomains. Make it legal by writing some subtle contractual language (e.g., Schedule A, number 11) buried deep in the service agreement, then park a bunch of crap on generic pages within your customers’ domains and generate some new revenue sources.
  • Synthesize DNS query responses that would otherwise result in NXDOMAIN: Operate DNS resolvers? Or authoritative DNS servers? Or TLD servers? Replace responses that would normally be NXDOMAIN with wildcard answers pointing to sites full of ad-related crap. Sit back and get fat as the money rolls in! This is most akin to what DMK was speaking of, though you might find various related mechanisms in the preceding technique as well; a small probe for detecting this behavior follows the list.
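
The last behavior in the list is easy to probe for. Here’s a minimal sketch, again assuming the dnspython library: ask a resolver for a random label that almost certainly doesn’t exist, and see whether it synthesizes an answer instead of returning NXDOMAIN.

import random
import string

import dns.resolver  # assumes the dnspython package

def resolver_rewrites_nxdomain(resolver_ip: str) -> bool:
    """Query a name that should not exist; an answer back means synthesis."""
    label = "".join(random.choices(string.ascii_lowercase, k=20))
    probe = f"{label}.com"  # 20 random characters: effectively guaranteed unregistered
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [resolver_ip]
    try:
        answer = resolver.resolve(probe, "A")
        return len(answer) > 0  # got an address for a nonexistent name
    except dns.resolver.NXDOMAIN:
        return False            # honest NXDOMAIN
    except dns.resolver.NoAnswer:
        return False

print(resolver_rewrites_nxdomain("192.0.2.53"))  # substitute your resolver's IP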

What’s your DNS resolution provider’s policy with regard to handling query data or fielding responses for non-existent domains? What’s your DNS service provider’s policy? What’s your ISP’s policy? Note that not all providers maintain their own resolvers; some may use the resolvers provided by upstream ISPs, or perhaps companies expressly focused on “DNS services”.

This discussion relates closely to the “Internet transparency” comments I made yesterday. I don’t believe any of the net neutrality discussions to date include DNS providers or resolution services, nor am I convinced they should. However, I believe the scope of this issue is much larger than just the ISPs themselves.

IPv4 Exhaustion::Trading Routing Autonomy for Security

By: Danny McPherson -

To see who’s been paying attention, let’s kick this off with a quiz. What do the following three items have in common?

  1. Allocation authentication (i.e., titles) for Internet numbers (i.e., IPv4 & IPv6 addresses, AS Numbers)
  2. Inter-domain routing security on the Internet
  3. IPv4 exhaustion

If your answer was along the lines of:

  • 2 requires 1
  • 3 requires 1 if IP addresses become resources
  • 3 has implications on 2, in particular from a scalability perspective

Then you’re pretty close.

One of the big problems with securing inter-domain routing on the Internet is that no central authoritative verifiable source exists for who owns a particular routing domain identifier or set of IP addresses, and which routing domain(s) are authorized to advertise that address space. Now, I know you’re thinking, what about IANA, and Regional Internet Registries (RIRs) such as APNIC, ARIN and AfriNIC, and all that stuff? Well, true, IANA allocates blocks of addresses and AS numbers to RIRs, who subsequently allocate those numbers to Local Internet Registries (LIRs), National Internet Registries (NIRs), or directly to ISPs or other network operators.

However, that’s pretty much as far as it goes. That is, allocations from RIRs have no impact on what’s actually routed on the Internet, or on who uses what IP address space. We’d all like to think they do, especially the RIRs, and a sort of MAD model suggests it to be the case, but it isn’t. To illustrate this point, consider The CIDR Report, which suggests 317 potentially bogus AS numbers and 392 potentially bogus route announcements are being advertised in the global routing system today. Note that ‘bogus’ in this case means the resource was never allocated by IANA or an RIR, or no record of the allocation exists.

Basically, if you’ve got an AS number and a BGP session with a willing peer or two, you could advertise pretty much whatever IP space you’d like into the routing system and start using it – as illustrated by the YouTube and Africa Online Kenya route hijacks as of late. Heck, you could even register it as yours in one of the 50 or so Internet Routing Registries (IRRs), assuming they don’t verify actual RIR allocations (e.g., as RIPE does). It’s not something I’d recommend, but if there’s no contention for that advertisement and use of that address space (i.e., no one else is using it, legitimately or not), then it’s all yours – until the space is legitimately allocated (or not) and someone else starts using it; then there’s contention.

Now, as you might suspect, if you’re an RIR and your whole reason for existence is management of Internet number resources, you might consider this a threat. Or, if you’re me, you might consider it something that’s been fundamentally broken and in need of attention for a long time now, but mostly ignored because the appropriate folks didn’t have the right incentive to invest in the egg part of this chicken/egg problem.

Enter the egg incentive::IPv4 exhaustion. Consider the current exhaustion projections:

One might surmise that the value of an IPv4 address is about to increase considerably. Not only is the value going to increase, a market for trading IP number resources is about to emerge. Don’t believe me? Have a look at the last ten thousand or so emails on ARIN’s public policy mailing list (PPML), which is full of network engineers turned economists. But wait, isn’t management of IP resources the responsibility of the RIRs? They don’t actually have any control over this today, so how could they possibly maintain some semblance of control?

Ahh, enter Resource PKI and SIDR, with community and specifically RIR work on Resource Certification. In short, this work is aimed at providing an infrastructure that enables certification of “Right of Use” for IP addresses and AS numbers with X.509 Resource Certificates. If this infrastructure exists, it can be used by RIRs to maintain control of IP number resources. It could also be used by folks for informational purposes, to define routing policies based on a verifiable source, or even be employed directly by the routing system itself through protocols such as SBGP.
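
To make that concrete, here’s a minimal sketch of the kind of check such an infrastructure enables: validating a route announcement’s origin AS against signed “Right of Use” records. The ROA-like tuples and validity states below are an illustrative simplification of mine, not any real RPKI API.

import ipaddress

# Hypothetical validated records: (certified prefix, max length, origin AS).
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64496),
]

def origin_validity(prefix: str, origin_as: int) -> str:
    """Return 'valid', 'invalid', or 'unknown' for an announcement."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in ROAS:
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True  # some certificate speaks for this space
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"

print(origin_validity("192.0.2.0/24", 64496))    # valid
print(origin_validity("192.0.2.0/24", 64511))    # invalid: wrong origin AS
print(origin_validity("198.51.100.0/24", 64496)) # unknown: no record at all

A router or filter generator consuming such data could then discard “invalid” announcements, which is precisely the operational control, and the revocation lever, described below.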

Upon full deployment of such a system, the fundamental change is that the IP resource allocation hierarchy that exists today, which is sort of an out-of-band function with no direct consequence on what’s actually routed, could now have direct control over what’s routeable, what’s actually routed on the Internet, and perhaps most importantly, what’s not. So, if you don’t pay your RIR membership fees, your address allocations could actually be revoked, and this could trickle its way into the routing system, where filters might be augmented to discard your route announcements, or into a protocol like SBGP, where it’s actually automated.

This to me represents a fundamental change – RIRs will be taking on an operational role they’ve never had. If their systems are compromised or unavailable, or have some policy mandated by government or other entities, it could have considerable consequences.

With that, let’s take a step up. With this RPKI thing who’d be the trust anchors (TAs) in the certificate hierarchy? Well, IANA gives address space to RIRs, so maybe they should be the root? Or should RIRs be the root TA for space they’ve been allocated from IANA? Or is it a multi-TA system with the RIRs and IANA? Surely IANA will need to be the root for at least the legacy space they allocated that pre-dates RIRs, as well as reserved IPv4 and IPv6 space, and all that space that has yet to be allocated to RIRs.

OK, so IANA and/or the RIRs “hold the keys”. But wait, doesn’t IANA fall under an ICANN umbrella? Doesn’t ICANN operate under an agreement from the US government, specifically, the Department of Commerce, something from which they’d prefer to become more independent? Wasn’t there some reluctance from the DNSSEC community because of perceived Internet governance by the US?

If this RPKI thing exists, and folks use it to secure the routing system, could sanctions or embargoes now include what essentially results in revocation of a country’s Internet address space and associated Internet connectivity privileges? That’s certainly a feature that does NOT exist in today’s Internet routing system. And such a capability is actually far more powerful than that of a DNS corollary, as you could have multiple root DNS systems on the same IP infrastructure (ask China), but you can’t have multiple disjoint IP allocation structures in the same routing system (which one might argue we have today).

I’ve been rambling for a bit, I should probably wrap this up. Takeaways:

  • SIDR and RPKI work are being driven by the RIRs for reasons well beyond that of simply enabling secure routing on the Internet
  • IPv6 is coming, IPv4 address space exhaustion is for real, and we’ll all likely be feeling some pain from this very soon
  • If you operate a network, you’d better be paying attention, as some fundamental changes to your world are on the horizon.

FWIW, I think the SIDR and related work is, as evidenced, necessary. It’s just that we need to be well aware of what we’re trading off.

Net Neutrality Gumbo

By: Danny McPherson -

I made it up to San Francisco Saturday for the USF-hosted symposium titled The Toll Roads? The Legal and Political Debate Over Net Neutrality. I dropped in just to listen, as most of the folks there, and the discussions on the agenda, were more of the legal, academic, economic, and even political science flavors. I did see several folks I knew, but technical folks were definitely in scarce supply. I figured I’d share a random set of my notes from the meeting here, until the USF folks make podcasts available.

There have been several incidents that seem to have thrown fuel on the fire as of late, many of these oft-revisited by several of the panelists. These incidents included:

  • 2002: The High Tech Broadband Coalition (HTBC) petitioned the FCC to prevent cable companies from imposing restrictions on the connectivity enabled for broadband subscribers. The discussion was triggered by cable firms such as Comcast and Cox Communications adding user contract provisions that limited the types of access that might be provided, and the types of devices broadband subscribers were permitted to use at the customer premises.
  • 2005: Madison River Communications, LLC blocked VoIP access (e.g., Vonage) via port filtering. MRC was fined $15k by the FCC and promised to play nice in the future; the Consent Decree cited section 201(b) of the Communications Act of 1934.
  • 2006: Verizon Wireless’s temporary blocking of NARAL’s opt-in text messaging program.
  • Comcast’s throttling of BitTorrent and similar applications.
  • AT&T’s discussion of DRM enforcement, and the IFPI’s 2008 Digital Music Report, which is scattered with sections like “Making ISP Responsibility a Reality in 2008” and “Time for Governments and ISPs to Take Responsibility”.

You can find lots more, for instance here, but the ones above were mostly all that were cited explicitly during the symposium.

A bunch of charts from the Organization for Economic Co-operation and Development (OECD) Broadband Portal were put up, mostly with the apparent intent of illustrating that current broadband service models in the United States are lagging internationally in both speed and availability, namely because the current system’s enabling of monopoly and duopoly practices is short-sighted at best, and such practices have huge implications for both affordability and accessibility. Some folks, such as Bob Frankston, took a bit more radical line, arguing that this is all simply a symptom of artificial scarcity, as much of his work outlines.

Another interesting topic discussed was that of Reverse Net Neutrality, in particular ESPN360’s behavior: the infamous “We’re sorry, but you don’t have access to ESPN360. Please contact your Internet Service Provider and ask them to partner with ESPN360”. ESPN360 was only permitting access to its site from ISPs that partnered with it, or wrote it a check, or distributed its content via mobile video or other means, or something of the sort.

I quite liked the lunch keynote by Rachelle Chong, Commissioner, California Public Utilities Commission. In December of 2007 she authored “The 31 Flavors of the Net Neutrality Debate: Beware the Trojan Horse”, where she essentially argues that imposing any rules would likely have negative consequences. During the Q&A, someone in the audience asked a set of questions that contained about three “what ifs”, to which she replied [paraphrased] “The world of what ifs.. Hrm.. I live in the world of real companies that make real money, and whatever it is that we do, or attempt to regulate, has a real impact on this.” She supported a practical perspective, but clearly understood the bigger issue as well.

The insights provided by the economist types (Scott Wallsten, Tom Koutsky, and Lawrence Spiwak) were all helpful as well. I believe most of these folks settled on the more pragmatic cost-benefit analysis side. I believe it was Lawrence (Larry) who said “Consumer frustration is NOT a market failure. Carrier stupidity is NOT a market failure.” George Ou shared the observation that this whole issue is being poisoned by political partisanship, and that no one talks about actual legislation.

An argument for more transparency was made by some, stating that ISPs and carriers should provide more information about what they block or throttle, and why. Others quickly latched onto this, stating that ISPs and carriers should also be required to provide network design information such as full interconnection policies, oversubscription ratios, and other such information.

In his opening statements, Timothy Wu, a professor at Columbia Law School, seemed to posit that in today’s information age we all rely on private firms, such as carriers, ISPs, and search engines, to provide information, and that intermediaries imposing policy is an issue. However, I didn’t hear Tim and folks in the same camp huffing about $-biased search results returned by search engines, or the ESPN360 issue, for that matter.

Colette Vogele, who focuses on intellectual property law, specializing in media, technology, and the arts, seemed most concerned with the possibility that any traffic preference models employed by ISPs would hinder the growth of smaller firms and individuals, thereby Keeping the Little Man Down. In particular, she cited Alive In Baghdad and Political Lunch, two such outfits operating on shoestring budgets that could not make their content available if ISPs imposed preferential treatment on Internet content.

Others mentioned content and mobile as primary concerns as well (beyond wired broadband), although the majority of the discussions were clearly about wireline broadband, and were very U.S.-centric.

I was surprised I didn’t hear more arguments about the implications for end-user security (e.g., the fact that most of the world seems to be of the opinion that ISPs should do more to protect end users AND the greater Internet), or critical infrastructure availability, or emergency services implications and the like. I think there are many more details under the hood than most folks would care to acknowledge. For example, is it reasonable to throttle web or peer-to-peer traffic in order to ensure that an emergency services transaction receives the necessary network resources? How about network control protocols that provide Internet destination reachability information? Or access authentication? Or alerting and performance management? And, of course, what about the fact that applications and users vary widely in their politeness to others, as Richard Clarke of AT&T intuitively pointed out.

While I don’t think we ever settled anywhere near a common understanding of what Net Neutrality encompasses, nor what permissible discrimination would entail, or what transparency might reasonably require, amazingly, I do feel a bit more informed about arguments and motivators for many of the folks involved in this debate, and some of what’s currently being lumped into the Net Neutrality gumbo.

I also feel a bit more strongly about the need for more technologists to be involved in the debate, as there’s an obvious lack of technical expertise regarding how things actually work, and how what folks might care to overlook or marginalize today might have grave implications on tomorrow.
