Lessons learned from the FCC decision

By: Kurt Dobbins -

In the FCC ruling today, the commission criticized both the technology and the transparency of Comcast's network management practices. Network management in and of itself is not a problem, as the Commission affirmed in its principles protecting consumer access to the Internet. The particular method used by Comcast is at the heart of the problem.

In his written statement to the Senate Committee on Commerce, Science and Transportation in April, FCC Chairman Kevin Martin referred to this particular method of traffic management as a “blunt means to reduce peer-to-peer traffic by blocking certain traffic completely.”

The blunt means referred to how some Deep Packet Inspection (DPI) platforms manage traffic when placed out of line. If a device is out of line, the only way to control traffic is to directly or indirectly signal the sender to slow down or terminate the communication session. Terminating communication sessions is essentially a form of denying access to an application or content, which violates one of the four principles established to encourage broadband deployment and to preserve and promote the open and interconnected nature of the public Internet. The other, more alarming, problem is that the termination was made to appear to come from somewhere other than Comcast itself.

Beyond the technology, Comcast fell short in how they notified consumers. As the FCC Chairman indicated in his statement before the Senate Committee on Commerce, Science and Transportation on April 22, 2008, “Consumers must be fully informed about the exact nature of the service they are purchasing and any potential limitations associated with that service.”

We should take away two things from today’s decision. First, traffic management itself is not the issue. Second, providers have to be fully transparent with consumers about the reasons why they are managing the network and what that means to them. It is our hope that Congress will allow service providers to use what Chairman Martin referred to previously as “more modern equipment” to ensure the best customer experience rather than move to increase regulation of the Internet. As he indicated today,

“Our action today is not about regulating the Internet. Network neutrality rules are unnecessary because the commission already has the tools to enforce (open standards).”

Comcast was trying to solve a very real problem

Comcast had very valid reasons for managing their network, and the problems they face are universal across all service providers.

Providers planned their networks around orderly, well-understood protocols; what they face today is a tsunami of innovative applications that run over their networks but that they do not control.

According to the Telecom Industry Association's 2008 Telecommunications Market Review and Forecast, bandwidth consumption doubled in 2006 and quadrupled in 2007. The explosion in the volume and diversity of traffic has created real network challenges. The emergence of over-the-top applications such as YouTube, online gaming, and P2P file sharing is dramatically impacting network resources.

Over-the-top applications are not 'fair sharers' of network resources. For example, video and P2P file sharing are heavy consumers of bandwidth, while VoIP and gaming are not. Providers are facing a situation where 10% of customers can use up to 80% of the bandwidth. This is hardly a "fair" situation for the 90% who are bound to suffer severe service quality issues if the network is not managed during peak hours.

Network management can be ‘fair’ and fully transparent

For these very sound reasons, service providers are looking at numerous technologies to better manage their network resources and ensure service quality and fairness for all subscribers. PlusNet, an Arbor customer, is really executing what we believe will be the model for the future. They are delivering fully transparent, tiered services for business and consumer broadband customers, with plans and price points to meet everyone's needs.

As Telephony reported in a recent profile titled "The consumer-friendly version of DPI," PlusNet is demonstrating that technology can be used today to promote fairness while still preserving subscribers' freedom to access content and services of their choice. Transparency is the key lesson of the Comcast controversy, and it is for PlusNet as well. PlusNet's model validates the FCC's decision today and emphasizes the importance of transparency.

PlusNet discloses maximum downstream and upstream bandwidth rates for specific application types, such as peer-to-peer file sharing, as well as which applications are prioritized for each service option. In addition, consumers are given an economic incentive to use the network during off-peak hours. The ISP even publishes network traffic graphs demonstrating the benefits of its traffic management policies. PlusNet offers clearly defined service packages that meet the usage and economic needs of their customers, and they provide full transparency as to when and why they are managing bandwidth.

Is managing the network self-serving? Do consumers really care? In an independent survey of 4,000 broadband households in the UK, PlusNet was named "Best Consumer ISP" by a leading Internet industry association. PlusNet was also named the "Best Broadband Provider in the UK."

The mechanisms to solve the problems of network congestion in a fair and equitable manner are available today.  The obligation is now on service providers to be fully transparent about how and why they are managing their networks.

Hunting Unicorns: Myths and Realities of the Net Neutrality Debate

By: Kurt Dobbins -

Image Source: http://www.unicorns.com
In many ways, the emotionally charged debate on Network Neutrality (NN) has been a lot like hunting Unicorns. While hunting the mythical horse could be filled with adrenaline, emotion, and likely be quite entertaining, the prize would ultimately prove elusive. As a myth, it is entertaining; but when myths become reality, all bets are off. The public and private Network Neutrality debate has been filled with more emotion than rational discussion, and in its wake a number of myths have become accepted as reality. Unfortunately, public policy, consumer broadband services, and service provider business survival hang in the balance.

Myth 1: The Internet can be “neutral” towards all types of applications

A neutral network implies that every packet and every application is treated the same, even under conditions of congestion. Let the network be agnostic and randomly drop packets. Let the network treat business class customer traffic exactly the same as residential traffic. Let non real-time traffic impact the service of real-time applications like voice.

The fact is that not all applications are the same; different applications have different tolerances for neutrality. A voice application is much more sensitive to packet latency, jitter, and packet loss than is a file sharing application, which can adapt its rate of transmission and recover lost packets. Applications like VoIP and gaming demand real-time priority because they require real-time interaction. Even an interactive application like web browsing, while not necessarily real-time, can benefit from prioritization. Do you ever wonder why your Web browsing right after dinnertime seems a bit sluggish? It has a lot more to do with network congestion than it does with the meal you just ate.

Myth 2: Network management is unfair

The words "fairness" and "freedom" have been bandied about a great deal by the proponents of Network Neutrality. If it is somehow "unfair" to manage the network, we will see an era of true unfairness, not to mention unhappiness, with dire consequences for the future of our economy.

Unmanaged networks result in serious degradation of service availability and quality for all users. They also mean that customers will pay more for less, as providers are forced to continually build out their networks to stay ahead of massive bandwidth consumption growth. Case in point: before it began managing network traffic, one broadband service provider had to double its access network capacity every six weeks just to keep up with bandwidth demand, with no new subscriber growth. Capacity costs with no new subscriber revenue will ultimately be passed on to users.

But managing the network does not mean taking away the “freedom” to access content and applications of your choice. It just means freedom within fairness. The best of both worlds. Nor does it mean closing subscriber accounts.

Myth 3: Network management violates privacy

When a service provider deploys technology to manage its network to improve capacity and quality of service, all it cares about is the type of application – video streaming, gaming, web, or email, for example – not the content itself.

More importantly, managing the network in this fashion does not use or require:

  • Any Personally Identifiable Information (PII);
  • Knowledge of a user’s content;
  • Knowledge of a user’s URL browsing history;
  • Knowledge of a user’s Internet search activity;
  • Knowledge of a user’s email topics or content;
  • Storing content accessed by a subscriber;
  • Capturing and playing back any communications exchange.

And lastly, managing the network in this fashion does not install or require any specific software on user machines.

Myth 4: DPI is just a P2P “Throttling” Technology

DPI is more than just a peer-to-peer management tool. Unfortunately, the particularly "blunt means" used by Comcast has led to some serious misrepresentation and misunderstanding of how the technology is actually used in today's networks. Few who have been following this debate would guess that DPI is at the heart of ensuring fairness on the network.

DPI is a critical network element that provides information on how the network is being used, when it is used, and optionally, by which applications and groups of subscribers.

On a tactical level, this information supports decisions about capacity planning, investments in access networks and peering networks, and how to improve service quality, especially during peak hours when the network may be congested. On a strategic level, it provides the ability for service providers to transform their business models and their service brands, by offering a variety of service tiers and consumption-based billing models.

Overall, DPI provides the tools necessary to manage the network to ensure fairness, reduce costs and optimize revenues.

A Case Study in Transparency

By: Kurt Dobbins -
One of the overriding themes in the Network Neutrality debate, and what triggered much of the recent activity with Comcast and the FCC, has to do with transparency. Or, in the recent words of FCC Chairman Kevin Martin, "Consumers must be completely informed about the exact nature of the service they are purchasing." When it comes to transparency about service plans, and the business necessity behind them, I can think of many good examples, but one service provider stands above the rest: an ISP in the UK called PlusNet. In the spirit of transparency, PlusNet is an Arbor customer.

PlusNet offers an array of residential broadband services, called "Broadband Your Way," shown in the following diagram, ranging from a "pay as you go" service for light users – casual users who are typically migrating from dial-up to always-on broadband – to plans for high-end broadband subscribers who enjoy heavy use of gaming, VoIP, and peer-to-peer file sharing. PlusNet also has a specific plan for gaming, called "Broadband Your Way PRO," that offers quality broadband with low ping times and latency.



Each of the service options has some form of traffic management associated with it, so each plan can appeal to a different demographic: from a light user that does not use file sharing to a heavy user that wants file sharing and streaming 24×7. Rather than have a one-plan-fits-all service, PlusNet offers consumers a plan that fits their service and economic requirements.

What makes PlusNet really interesting is that they clearly explain each of the service options, and even go on to explain that there is no such thing as “unlimited” broadband bandwidth; i.e., the network needs to be managed during peak busy hours to ensure fairness and to deliver real-time and interactive applications with a good quality of experience. PlusNet employs three methods of ensuring fairness on their network during peak busy hours:


1) Traffic Management: For certain plans, maximum bandwidth rates for peer to peer file sharing and other file downloading services are managed during peak busy hours. Each service plan comes with a higher or lower degree of traffic management;

2) Prioritization: For all plans, interactive applications like Web and real-time applications such as VoIP are given priority over non real-time or “background” applications.

3) Changing Human Behavior: For all usage-based plans where there is a monthly Usage Allowance, subscribers are given an economic incentive to use the network during non-busy off-peak hours. Any usage during off-peak hours is considered “free” and does not count against the monthly allowance.
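The off-peak incentive in (3) is easy to see in a toy accounting sketch. This is purely illustrative, with a hypothetical peak window and allowance, not PlusNet's actual billing logic:

```python
# Hypothetical peak window: 4pm to midnight counts against the allowance;
# anything outside it is "free" and is not counted.
PEAK_HOURS = range(16, 24)

def chargeable_usage(transfers, allowance_gb):
    """transfers: list of (hour_of_day, gigabytes) tuples.
    Returns (gigabytes counted against the allowance, allowance remaining)."""
    counted = sum(gb for hour, gb in transfers if hour in PEAK_HOURS)
    return counted, max(allowance_gb - counted, 0.0)

# 5 GB transferred in the evening counts; 10 GB at 3am does not.
transfers = [(20, 3.0), (21, 2.0), (3, 10.0)]
counted, remaining = chargeable_usage(transfers, allowance_gb=15.0)
print(counted, remaining)  # 5.0 10.0
```

Because only peak-hour bytes draw down the allowance, a subscriber who schedules large downloads overnight keeps their allowance intact, which is exactly the behavioral shift the plan is designed to encourage.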


PlusNet fully discloses maximum downstream and upstream bandwidth rates for specific application types, such as peer to peer file sharing, as well as what applications are prioritized, for each service option. Because the need for some form of traffic management is driven by the ISP cost model, PlusNet also discloses how UK ISPs pay for bandwidth in order for their customers to understand the business drivers for employing traffic management techniques during peak hours as well as future plans for capacity planning and traffic management. The consumer continues to be informed about the services they are purchasing.

But explanatory details in a service plan are not always enough. "Seeing is believing," as they say, so PlusNet even publishes network traffic graphs depicting how the network is used during peak and off-peak hours, clearly demonstrating the benefits of their traffic management policies. Winning awards is a nice way to demonstrate better service, too!

By prioritizing interactive applications like Web browsing and streaming, PlusNet ensures a great customer experience during peak busy hours, as shown in the graph below for the hours between 8pm and 9pm.



Conversely, by managing peer-to-peer file sharing during peak hours, and by encouraging consumers to do their file sharing at night (off hours) when it is "free" and not counted against any monthly usage allowance, PlusNet time-shifts the bandwidth used for file sharing into the off-peak hours when there is unused capacity, getting better utilization of its network. The effects are dramatic, as shown in the following graph for the hours 4am to 5am, and they allow PlusNet to keep its costs lower by deferring expensive upgrades to bandwidth capacity.


So, regardless of where the Network Neutrality debate ends up, one thing is certain: ISPs will be required to inform consumers about the exact nature of the service they are purchasing. ISPs can learn a valuable lesson in transparency by taking a closer look at the PlusNet model.

Ono and ISP Coziness

By: Danny McPherson -

Some of you may have seen the coverage that Ono picked up today because of its ability to optimize P2P transaction speeds by enabling more topologically optimal distribution – all while requiring no interaction with the ISP. On one hand, I'm happy about this, as the whole P4P thing, with its dependence on topology intelligence, doesn't seem a viable long-term option. However, given where the bottlenecks are in the current system, Ono leaves some room for concern as well.

Specifically, in measurements we've seen, the peak-to-trough bandwidth ratio on the fixed broadband access edge, in both cable and DSL, is around 4x (although the x value itself isn't particularly relevant to this discussion). So, for example, if there were 1 Gbps peak utilization, trough utilization would be around 250 Mbps. Given that ISPs need to plan for peak loads during the capacity planning process, they'll typically engineer capacity expansion for peak loads of 1 Gbps, plus some variable that accommodates incremental bandwidth utilization increases based on historical growth, as well as projected new markets, subscriber acquisition, etc.

Needless to say, much of this peak load is driven by P2P and other protocols. So, when folks come up with solutions for improving P2P transfer rates – by a professed 207% in the case of Ono – that 1 Gbps might now be 2.1 Gbps, and the peak-to-trough ratio may now be 6x or 8x versus 4x. Arguably, this exacerbates the problem where it's most obvious: in the access network, and in particular in the cable access network, where downstream bandwidth is shared among multiple subscribers. Given these peak burst rates, ensuring fairness among users of the network is even more critical.
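The back-of-envelope arithmetic can be sketched as follows, using the illustrative numbers above and treating the professed 207% figure as a flat multiplier on peak load (a worst case, since not all peak traffic is P2P):

```python
# Illustrative capacity-planning arithmetic; numbers come from the text,
# not from measurements.
peak_gbps, trough_gbps = 1.0, 0.25      # today's 4x peak-to-trough
speedup = 2.07                          # Ono's professed 207% transfer-rate figure

new_peak = peak_gbps * speedup          # ~2.1 Gbps peak to engineer for
new_ratio = new_peak / trough_gbps      # ~8x if the trough stays put
print(round(new_peak, 2), round(new_ratio, 1))  # 2.07 8.3
```

The point of the sketch: since capacity is engineered for peak, a P2P speedup that leaves the trough untouched pushes the whole build-out cost upward.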

Other applications have improved transaction rates considerably as well. For example, most browsers open multiple connections to web servers in order to download multiple images and text in parallel, and iTunes and Google Maps open tens of connections in order to circumvent TCP's inherent per-session (v. per-user) congestion control mechanisms and optimize aggregate transaction rates. When your single SMTP (email) connection is contending for network resources with 300 TCP connections from your neighbor's 'optimized' application, ensuring fairness among subscribers by the ISP is critical IF contention for resources exists, in particular for access loop capacity.
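The per-connection (rather than per-user) fairness problem is simple arithmetic: if a congested link is split roughly evenly across TCP flows, a user's share scales with the number of connections they open. A minimal sketch with the numbers from this example (link size is hypothetical):

```python
def per_user_share(link_mbps, conns_per_user):
    """Per-connection fairness: each TCP flow gets an equal slice of the link,
    so a user's aggregate share is proportional to their connection count."""
    total_conns = sum(conns_per_user.values())
    return {user: link_mbps * n / total_conns
            for user, n in conns_per_user.items()}

# One SMTP connection contending with a neighbor's 300 'optimized' connections
# on a hypothetical 100 Mbps shared segment.
shares = per_user_share(100.0, {"smtp_user": 1, "p2p_neighbor": 300})
print(round(shares["smtp_user"], 2), round(shares["p2p_neighbor"], 1))  # 0.33 99.7
```

Per-connection fairness thus hands over 99% of the link to the many-connection user, which is exactly the asymmetry per-subscriber fairness mechanisms are meant to correct.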

The implications of this aren't felt just on the network, from an available bandwidth and packet forwarding quality of service perspective, but also by devices like NATs and stateful firewalls that need to track all of these connections. Applications that arguably exploit TCP's per-session Internet congestion-friendly characteristics in order to optimize the local user's experience are becoming more and more common. More focus on fairness across users, as opposed to fairness across transport connections, is sure to be a critical issue in transport protocol design and network architecture in the coming months.

I believe that if Ono-like solutions enable topologically optimal distribution of content, that’s a good thing. However, there will always be a bottleneck, and ensuring it’s in the place that scales best and is most manageable is critical.

U.S. Broadband Not Soooo Bad..

By: Danny McPherson -

Yesterday ITIF published a special report titled Explaining International Broadband Leadership. The report digs into why the U.S. ranked a dismal 15th in broadband performance among OECD nations, as indicated in reports available here, and what variables might spur expanded broadband performance. In addition, it discusses the models other nations are employing that appear to be successful and that the U.S. might learn from.

I’ll spare you a recap of the executive summary, but here are some additional things I liked about the report:

  • The analysis accounted for statistical outliers in things like average price data, while the OECD results were susceptible to such outliers
  • I’m a fan of the pragmatic approach the authors take, for example “it’s time for the United States to move beyond free market fundamentalism on the right and digital populism on the left, and begin to craft pragmatic, realistic public policies that focus on the primary goal….”
  • The acknowledgment, consideration and dissection of economic, social, geographic and political factors and variances between nations that impact broadband performance
  • Highlighting the intuitively obvious things, such as the fact that over 50 percent of South Koreans live in large, multi-tenant apartment buildings, which makes it significantly cheaper on a per-subscriber basis to roll out fast broadband there compared to the United States, where many people live in single-family suburban homes
  • Documenting that, as a share of total households, almost three times as many homes can subscribe to fiber-optic broadband in the United States as in the EU as a whole
  • And, the conclusion that the U.S. actually sucks less than the OECD report indicates in several categories – for example, the OECD measures penetration on a per capita basis rather than a per household basis, and when measured on a household basis, the U.S. rank improves somewhat, to 12th (from the OECD's 15th place ranking)

Coincidentally, Kurtis Lindqvist, one of my IAB colleagues, gave a presentation closely related to this topic as it applies to Sweden, which fared quite well in both the ITIF and OECD reports, during the INEX meetings in Dublin earlier this week.

In summary, if you've got the time, it's well worth the read. With any luck, it'll contribute to a national broadband strategy that focuses on and accommodates both supply and demand, and factors in the experiences of other nations in this area, to improve broadband performance.

Vuze, TCP RSTs, and Educated Guesswork

By: Danny McPherson -

Triggered by this report (pdf) from Vuze, and Iljitsch’s ars technica article, my friend Eric Rescorla (ekr) posted on his Educated Guesswork blog this morning some bits regarding how many TCP transactions end in RSTs. I’m glad he did this (he saved me the work), as the variances in the data and the methodology employed have been frustrating me since the Vuze report was published. I’ve heard many ISPs taking issue with the report, and several doing so publicly (e.g., AT&T), and while all the appropriate disclaimers are provided by Vuze, a typical consumer might heavily weigh the results in this report when selecting ISPs, or presupposing which ISPs might employ blunt instrumentation in attempts to throttle P2P traffic.

I commend Vuze for attempting to add some actual empirical data points to the P2P throttling discussion, and for making both summarized raw data (ZIP) and their plug-in openly available. I firmly believe empirical evidence is a fine thing, assuming stated “facts” so represented by that evidence are verifiable, and specifically, the methodology used to collect that evidence is indeed measuring the Right Thing. This is where I take issue with the methodology employed to collect this empirical evidence, as do Eric and Iljitsch, and believe the “first results” in the report are misleading, at best.
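To see why the methodology matters, consider a toy version of the measurement: counting what fraction of flows end in a RST tells you nothing about who sent the RST or why, since many stacks and middleboxes emit RSTs in normal operation. A hypothetical sketch (not Vuze's plug-in logic):

```python
def rst_fraction(final_flags):
    """final_flags: the TCP flag seen on each flow's last observed segment,
    e.g. 'FIN' for a normal close or 'RST' for an abortive one."""
    if not final_flags:
        return 0.0
    return sum(1 for f in final_flags if f == "RST") / len(final_flags)

# Toy data: a high RST fraction alone does not demonstrate ISP throttling,
# because the metric cannot distinguish injected RSTs from endpoint RSTs.
print(rst_fraction(["FIN", "RST", "FIN", "RST", "RST"]))  # 0.6
```

A measurement that wanted to implicate the ISP would need to attribute each RST to its true sender (e.g., via TTL or sequence anomalies), which is precisely what the raw fraction cannot do.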

Given that the objective of the Vuze plug-in, as stated in the report, was to "add relevant data to the traffic throttling debate, and encourage that decisions be made based on facts," I trust they'll be updating both their report and methodology to accommodate any misrepresentations that the data might provide.

Blunt Instruments

By: Kurt Dobbins -

In his written statement to the Senate Committee on Commerce, Science and Transportation on Tuesday of this week, FCC Chairman Kevin Martin referred to a particular method of traffic management as a “blunt means to reduce peer-to-peer traffic by blocking certain traffic completely.”

The blunt means referred to how some Deep Packet Inspection (DPI) platforms manage traffic when placed out of line. If a device is out of line, then one of the few ways to control traffic is to directly or indirectly signal the sender to slow down or terminate the communication session. Terminating a session is the low-hanging fruit because:

1) It’s easy to send a TCP reset message to the sender;

2) How harmful can it be to reset a peer-to-peer connection? Most peer-to-peer file sharing clients have hundreds of sessions open!

For DPI, out of line versus in line has been an ongoing debate. Overall, out of line DPI is easier to deploy in a network and easier to engineer as a product. Because out of line placements "tap" into the optical links or receive a mirrored copy of each packet from another network device, there is less risk in inserting them into the network. The DPI processing is not in the critical service delivery path of subscriber traffic, so if the DPI element fails, it is not service-affecting. "Hey, the out of line DPI element crashed! No worries! The subscriber's real packets are somewhere else!" By being out of line, there are significantly fewer performance constraints to engineer into the product, because the packet seen by the DPI is only a copy. "One half second of latency? No worries! Dropped packets? No worries!"
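For illustration only, here is roughly what the forged packet in that scenario looks like at the byte level: a minimal 20-byte TCP header with the RST bit set, built with Python's struct module. All field values below are hypothetical, and a real injector would also have to spoof the IP header and compute a valid checksum over the TCP pseudo-header:

```python
import struct

def tcp_rst_header(src_port, dst_port, seq):
    """Build a bare 20-byte TCP header with the RST flag set (illustrative).
    Ports and seq would be copied from the live flow being torn down; the
    seq must fall within the receiver's window for the RST to be accepted."""
    offset_flags = (5 << 12) | 0x04   # data offset = 5 words, RST bit (0x04)
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,
                       seq, 0,          # sequence number, ack number
                       offset_flags,
                       0,               # zero receive window
                       0, 0)            # checksum (left zero here), urgent ptr

hdr = tcp_rst_header(6881, 51413, 1000)   # hypothetical P2P flow endpoints
print(len(hdr), hdr[13] & 0x04)           # 20-byte header, RST bit set
```

The receiving stack cannot tell this header apart from one its peer actually sent, which is why the termination "appears to come from somewhere other than" the device that injected it.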

Out of line is not the only traffic management option. As FCC Chairman Martin alluded in his written statement, "… more modern equipment can be finely tuned to slow traffic to certain speeds based on various levels of congestion." This type of finely tuned traffic management requires DPI to be placed in line. In line DPI can gently slow aggressive peer-to-peer applications during periods of congestion, while simultaneously ensuring that every active subscriber has an equal share of the bandwidth. In line DPI can promote fairness while still preserving the freedom of subscribers to access content and services of their choice.
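The "equal share" behavior described here is commonly modeled as max-min fair allocation: every subscriber gets their full demand if it is below the fair share, and heavier users split whatever is left. The following sketch is a generic illustration of that idea, not any vendor's actual algorithm, and the demands are hypothetical:

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation: no user gets more than they ask for, and
    users whose demand exceeds the fair share split the remainder equally."""
    alloc, remaining, cap = {}, dict(demands), float(capacity)
    while remaining:
        share = cap / len(remaining)
        satisfied = {u: d for u, d in remaining.items() if d <= share}
        if not satisfied:               # everyone left is bottlenecked
            for u in remaining:
                alloc[u] = share
            return alloc
        for u, d in satisfied.items():  # grant light users their full demand
            alloc[u] = d
            cap -= d
            del remaining[u]
    return alloc

# 10 Mbps shared by a light VoIP user, a web user, and a heavy P2P user.
result = max_min_fair(10.0, {"voip": 1.0, "web": 2.0, "p2p": 20.0})
print(result)  # {'voip': 1.0, 'web': 2.0, 'p2p': 7.0}
```

Note how the light users are untouched while only the aggressive flow is slowed, which is the distinction between this kind of management and blunt session termination.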

Traffic management made possible by in line DPI should resonate with the Chairman. Why? In 2005 the FCC established four principles to promote the open and interconnected nature of the public Internet:

• Consumers are entitled to access the lawful Internet content of their choice;
• Consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement;
• Consumers are entitled to connect their choice of legal devices that do not harm the network;
• Consumers are entitled to competition among network providers, application and service providers, and content providers.

The Commission also noted that these principles are subject to reasonable network management, so it would seem to support in line traffic management, applied during periods of congestion, that ensures fair access to available bandwidth, content, and services without blocking or denying service.

But deploying DPI in line is a whole different ball game. Every "live" packet of every subscriber is directly processed by the DPI platform. An in line placement has to be certified for the service delivery path of subscribers; it has to have failover and redundancy so there is no service loss in case of failure; and it has to have low latency and no dropped packets in order to preserve the service quality of real-time applications like voice and video. Engineering this type of traffic management, say at 10G full duplex bandwidth rates, with low latency, no packet loss, and full failover capability is both costly and difficult. But more importantly, it takes a certain mindset and discipline to build in line DPI, as evidenced by the European Advanced Networking Test Center AG (EANTC) P2P test report.

The benefits of DPI for traffic management, while well known to service providers, are not well understood by consumers, and the entire debate about Network Neutrality has been misshaped by the lack of transparency. I’ll talk about transparency in a future blog.


Net Neutrality’s Unintended Consequence

By: Danny McPherson -

Most of the discussion surrounding net neutrality to date seems to revolve around fixed broadband consumers complaining that ISPs are unjustly mucking with their traffic, discriminating between different traffic types, and providing prioritization without their consent – and that the providers shouldn't be. Rob took a bit of a consumer-centric view to practically addressing net neutrality in his blog post here; I'm going to take more of an ISP-centric view.

ISP actions have centered around attempts to ensure fairness across subscribers (e.g., where contention may exist because of asymmetries in available bandwidth, or in shared media such as cable environments) and to optimize network resources, in particular bandwidth utilization at various points in the network. Generically speaking, these network resources exist in one of three primary locations:

  1. Access network capacity: This relates to the bandwidth available between the subscriber's cable modem and the MSO's Cable Modem Termination System (CMTS) in cable-based Internet services, or between the subscriber's DSL modem and the Digital Subscriber Line Access Multiplexer (DSLAM) in DSL networks. One of the primary distinctions between these two access network architectures is that, because cable employs the existing shared HFC cable plant, cable access is "shared" across multiple subscribers. Downstream traffic (network -> subscriber) is multiplexed onto a TV channel that is shared between a group of cable modem subscribers. DSL, on the other hand, is "dedicated" in the sense that each digital subscriber loop is allocated a discrete channel from a set of available channels. The upshot of the cable access model is that during periods of heavy utilization, subscribers within a given group may experience performance degradation because of high-bandwidth users in the same group. With both cable and DSL, available bandwidth has traditionally been much narrower upstream (subscriber -> network) than downstream, hence asymmetric DSL (ADSL) services and similar asymmetry in cable offerings. The primary reason for this was that, unlike a symmetric service, higher frequencies could be allocated downstream, providing subscribers nearly 2x download speeds. This, of course, makes assumptions about what applications subscribers are using (it assumes larger download volumes versus upstream P2P traffic, video uploads, etc.).
  2. Internal network capacity: Internal capacity includes traffic from the DSLAM or CMTS to on-net access servers, local caches, or content distribution infrastructure, often including email and other ISP-offered services, as well as traffic being carried to the points at which the ISP interconnects with other ISPs and content providers. Also, in both cable and DSL architectures, subscribers can't communicate directly, so traffic must be switched locally by the DSLAM or CMTS, or routed between devices further upstream within the broadband ISP's network. P2P and other local user-to-user traffic would often be included here.
  3. Transit "Internet" capacity: Because ISPs that offer broadband services typically focus on access markets, they're often considered eyeball heavy. That is, they've got lots of users, but little content. What this means is that most of their users access content that doesn't reside on their network. In order to provide connectivity and access to this content, they either engage in bilateral interconnection agreements with content companies and other types of ISPs, or acquire transit services from ISPs that provide this connectivity. Lower-priced transit services from ISPs may range from ~$100-$300/Mbps or more, monthly recurring, just for IP access – in addition to transport infrastructure costs, which include the local loop and often the long-haul backbone circuits upon which IP traffic is carried. And they've got all the associated capital investment, of course. Scaling this capacity for millions of users is a significant cost. Recall that, unlike a utility, your Internet transactions could be with devices just down the street or across the globe. Infrastructure has to be provisioned to accommodate these any-to-any connectivity models.
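Using the transit price range quoted above, the scaling cost is easy to sketch (illustrative numbers only; this excludes transport, local loop, and capital costs, and real contracts typically bill on 95th-percentile usage):

```python
def monthly_transit_cost(peak_gbps, usd_per_mbps):
    """Rough monthly IP transit bill: peak utilization (in Mbps) times the
    per-Mbps price. A simplification of real 95th-percentile billing."""
    return peak_gbps * 1000 * usd_per_mbps

# A hypothetical eyeball network with 10 Gbps of peak transit demand,
# priced at the $100-$300/Mbps range quoted in the text.
low = monthly_transit_cost(10, 100)
high = monthly_transit_cost(10, 300)
print(low, high)  # 1000000 3000000
```

A seven-figure monthly bill for a modest 10 Gbps of peak demand is why eyeball-heavy ISPs care so much about peering, caching, and shaving their peaks.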

Broadband access has obviously not been what most folks would refer to as a high-margin market, with profits in the 5-10% range (after removing less-fortunate outliers). Given that most of these ISPs are publicly traded companies with shareholder expectations to meet, something has to give. The two things broadband ISPs have traditionally been most concerned with are subscriber growth and minimizing subscriber churn. Growing average revenue per user (ARPU), and ideally the associated profit margins, has been a key driver for services convergence in the form of triple play (voice, video, and data). However, with the higher-revenue services come higher consumer expectations, in particular for service availability, performance, and security. Furthermore, minimizing call center volume is key to optimizing profitability: with traditional data services yielding only 5-10% profits on low-ARPU services, a single customer help desk call can often snuff out profitability for 3 years or more!

ISPs simply can’t continue growing subscriber numbers and access speeds without continuing to invest heavily in Internet transit and interconnection bandwidth, internal network capacity, and access infrastructure. Finding ways to offset the operational costs associated with these various investments is critical. Some approaches include initiatives such as P4P, or investing in infrastructure that time-shifts lower-priority traffic into network utilization “troughs”, providing more real-time protocols such as VoIP with the network performance characteristics they demand. Peak-to-trough access network utilization ratios today are often 4x or more, and because networks have to be engineered to perform without capacity problems during peak utilization periods, capacity planning for ISPs becomes a painfully expensive ordeal. This is why traffic management is a critical function for ISPs; liken it to the PSTN’s Mother’s Day problem.
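A rough sketch of why that 4x peak-to-trough ratio hurts: capacity must be provisioned for the peak (plus engineering headroom), so on average much of what the ISP pays for sits idle. All numbers below are assumptions except the 4x ratio cited above.

```python
# Why peak-to-trough ratios make capacity planning expensive (hypothetical numbers).
trough_gbps = 10.0        # assumed off-peak demand
peak_to_trough = 4.0      # ratio cited in the post ("often 4x or more")
headroom = 1.25           # assumed engineering margin so peaks don't congest

provisioned_gbps = trough_gbps * peak_to_trough * headroom
avg_demand_gbps = (trough_gbps + trough_gbps * peak_to_trough) / 2  # crude average

utilization = avg_demand_gbps / provisioned_gbps
print(f"Provisioned: {provisioned_gbps:.0f} Gbps, average utilization: {utilization:.0%}")
```

Time-shifting bulk traffic into the troughs flattens the curve, which raises that average utilization and lets the same provisioned capacity serve more subscribers.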

I don’t believe exploiting protocol behaviors by doing arguably unethical things like injecting spoofed, spurious TCP RSTs is the right answer, as that’s just going to piss subscribers off, increase call center volumes and subscriber churn, further compromise transparency, potentially break applications, and result in non-deterministic network behavior. Unless there are incentive structures in place to encourage users to change behaviors (e.g., how their P2P protocols utilize network bandwidth), nothing is going to change.

Which, finally, leads me to my point (yes, there is one). Consumer advocates say ISPs shouldn’t discriminate, shouldn’t attempt to time-shift traffic, and basically shouldn’t gate subscriber traffic in any way. So, under the auspices of the net neutrality proponents, if you’re an ISP, pretty much all you can do is one thing: change your billing models to change subscriber behaviors and generate more revenue to fund further build-out of your infrastructure. That is, bill customers differently. Be it strict metered (usage-based) services, or more sophisticated usage-based services that introduce peak versus off-peak, on-net versus off-net, distance-sensitive, or domestic versus international billing! It certainly costs more for a US broadband subscriber to obtain content or engage in a VoIP conversation with someone in London, or Singapore, or Cairo, than with someone across the street. Who bears these costs, costs which have traditionally been “transparent” to subscribers? Think about it: this all-you-can-eat model can’t continue, and users don’t change behavior without being incentivized to do so.
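As a minimal sketch of what such a bill might look like, here is a toy rate table that distinguishes peak versus off-peak and on-net versus off-net traffic. Every rate and usage figure here is invented for illustration, not drawn from any real ISP tariff.

```python
# Hypothetical usage-based billing: $/GB varies by time of day and destination.
RATES = {  # all rates invented for illustration
    ("peak",     "on_net"):  0.05,
    ("peak",     "off_net"): 0.20,
    ("off_peak", "on_net"):  0.01,
    ("off_peak", "off_net"): 0.05,
}

def monthly_bill(usage):
    """usage: list of (period, locality, gigabytes) tuples."""
    return sum(RATES[(period, locality)] * gb for period, locality, gb in usage)

bill = monthly_bill([
    ("peak",     "off_net", 40),   # evening video from a distant content network
    ("off_peak", "off_net", 100),  # overnight P2P bulk transfer
    ("peak",     "on_net",  20),   # on-net IPTV
])
print(f"${bill:.2f}")
```

Note how the rate structure itself becomes the incentive: the overnight bulk transfer moves a lot more data than the evening video yet costs less, which is exactly the behavioral shift time-of-day pricing is meant to buy.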

Why do the mobile folks have such high ARPU and better profitability per subscriber? Because of a slew of billing models that make users accountable for the network resources they consume. Ever use data roaming services on your mobile handset in another country and discover at the end of the month how incredibly expensive it can be? There’s a good reason for this.

Interestingly, until recently technology wasn’t at a place where these types of IP-based metered billing models were a viable option. Today, they are, and ISPs are being given little alternative. And trust me, your Internet access bill is NOT going to get cheaper as a result. We’re already starting to see basic metered models emerge.

Net neutrality advocates, be careful what you wish for…

5 Ways to Molest Internet Users

By: Danny McPherson -

A good bit of the attention garnered by DMK’s ToorCon presentation focused on how ISPs are employing Provider-In-The-Middle Attacks (PITMAs) to collect ad-related revenue from their customers, and how security “of the web” ends up being fully gated by the security of the ad server folks. While I completely agree with this, I would emphasize (as DMK did subtly note) that, even for the attacks DMK outlined, you do NOT have to be in the ISP’s packet data path at all to molest Internet users, just in the DNS “control path”.

While certainly not meant to be an exhaustive list, here are five techniques that various folks in the DNS control path can employ to perform similar or adjacent questionably ethical activities.

  • Domain tasting: Exploit the add grace period (AGP) and perform domain tasting. Register a domain you think is clever or closely associated with something useful (e.g., googgle.com); you’ve got 5 days to see how many hits a site parked at the newly registered domain gets. If the activity garnered would cover the domain’s registration fee, keep it. If not, return it under the AGP policy and try some new ones, or perhaps consult a more-clever colleague for recommendations. Expand upon this with domain kiting.
  • Domain name front running: Do you run a whois server? Are you a DNS registrar? If so, engage in domain name front running. Field all the queries checking for availability of new domains, and if they’re not registered, register them yourself! Then you can park spam-like crap there, or force those unsuspecting, clueless Internet users who used your site to check availability to register the domains with you or not at all.
  • Domain name front running enabled by non-existent domain (NXDOMAIN) data: Determine what the most common typos or queried domain names are. Register them, park’m somewhere, and collect click revenue. If you’re anywhere in the DNS query resolution path, from the local resolver to the root, you’re in the money! And you’ve even got a good source of historical data for forecasting hit rates; no need for that unnecessary domain tasting business, it’s just overhead. Got integrity issues with this? There are folks that will buy the NXDOMAIN data from you if you prefer the hands-off approach.
  • Become a DNS services provider and hijack customer subdomains: Cash in on customer subdomains. Make it legal by writing some subtle contractual language (e.g., Schedule A, number 11) buried deep in the service agreement, then park a bunch of crap on generic pages within your customers’ domains and generate some new revenue sources.
  • Synthesize DNS query responses that would result in NXDOMAIN: Operate DNS resolvers? Or authoritative DNS servers? Or TLD servers? Replace responses that would normally result in NXDOMAIN with wildcard answers pointing to sites that contain a bunch of ad-related crap. Sit back and get fat as the money rolls in! This is most akin to what DMK was speaking of, though you might find various related mechanisms in the preceding techniques as well.

What’s your DNS resolution provider’s policy with regard to handling query data or fielding responses for non-existent domains? What’s your DNS service provider’s policy? What’s your ISP’s policy? Note that not all providers maintain their own resolvers; some may use the resolvers provided by upstream ISPs, or perhaps by companies expressly focused on “DNS services”.
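One way to check the last technique above for yourself is to resolve a name that should not exist: an honest resolver returns NXDOMAIN (a resolution error), while a wildcarding one hands back an ad server’s address. The sketch below is a hypothetical probe, not a definitive test; the probe domain is invented, and the resolver function is injectable so the logic can be exercised without touching the network.

```python
import random
import socket
import string

def random_bogus_name():
    # A 20-character random label is vanishingly unlikely to be registered.
    label = "".join(random.choices(string.ascii_lowercase, k=20))
    return f"{label}.example-nxdomain-probe.com"  # hypothetical probe domain

def resolver_rewrites_nxdomain(resolve=socket.gethostbyname):
    """Return True if the resolver hands back an address for a name that
    should produce NXDOMAIN, i.e., it appears to synthesize responses."""
    try:
        resolve(random_bogus_name())
    except OSError:   # socket.gaierror: an honest resolution failure
        return False
    return True       # got an answer for garbage: likely wildcarding
```

Calling `resolver_rewrites_nxdomain()` with no argument probes whatever resolver your system is actually configured to use. A single probe is only suggestive; some providers rewrite only certain TLDs or only web-looking queries.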

This discussion relates closely to the “Internet transparency” comments I made yesterday. I don’t believe any of the net neutrality discussions to date include DNS providers or resolution services, nor am I convinced they should. However, I believe the scope of this problem to be much larger than just the ISPs themselves.

Net Neutrality: Are You Ready For It?

By: Rob Malan -

May you live in interesting times. I have seen the futures, both of them: one in which net neutrality isn’t adopted by the Internet carriers, and one where it is. I have a particularly bad track record of predicting how the majority of the electorate will respond to external stimuli. Since there continues to be no shortage of emotion surrounding the net neutrality debate, it’s reasonable to play out the scenario where we do have ‘free’ access to the network, where ‘free’ means that the carriers will do absolutely nothing, aside from queueing, to the packets coming and going from your network connection. I’ve touched on why this ‘no management’ management strategy is insane from a fairness perspective in an earlier post (what ‘no management’ means when multiple subscribers are contending for an oversubscribed resource, i.e., fairness between subscribers) and will come back to it another day. This post investigates what ‘no management’ means for a single broadband consumer in isolation (assuming no contention for resources upstream with their neighbors).

To play this scenario out, one needs to remember that broadband carriers are businesses. First fact: bandwidth demands are going through the roof. Internet usage has evolved well beyond the pricing model established when almost all bandwidth was consumed serving email and web pages. Video delivery has changed this forever. Second fact: the easiest way for carriers to fund the capacity to carry your video bits is to charge you more money. If the electorate votes that the carriers can’t touch their packets, then the carriers have only two options: manage the network via dropped packets on their increasingly under-provisioned infrastructure, which is not a way to make happy customers; or increase their capacity. The only way they can increase capacity is to spend a lot more capital (routers, cables, trenches, etc.) and a lot more on operational outlays like peering and transit costs; remember that carriers don’t have a fixed-price, all-you-can-eat arrangement with their upstream network peers and transit providers.

Here’s the punchline: I’ve traveled the world talking to carriers and have seen the future. The only solution they can execute under a net neutral policy and stay solvent is metered billing for both upstream and downstream packets to a subscriber.

No problem, right? Hands off my damn packets! Vive la revolution! Off with their heads!

What you may not have considered is that if the carrier isn’t managing your traffic, and you want control over your bill (you thought roaming charges were fun), let alone over what’s happening with your household devices, you will need to manage your own traffic. Having built systems that help people manage packets from both a security and a quality perspective, I can honestly say it’s a hard problem. It’ll be like trying to manage your electricity bill without any idea what appliances you have, how much power they use, or when and how they turn on and off.

Some questions that you don’t care about when you have fixed-price network access suddenly become much more relevant when you are paying per bit:

  • Did your son use up all of your bits downloading something to his laptop over a p2p network and you didn’t know about it?
  • Do you have enough bits left over for that Hi-Def movie download?
  • How big is that download going to be anyway?
  • Are your teenagers (or adults) spending their evenings watching web videos instead of network television?
  • Do you have a botnet on one of your PCs that is injecting spam or generating outbound scanning and attacks?

Are you sure you can answer all of these precisely? Can your Mom and Dad? Your grandparents? Do you really want to wait until the end of the month to find out? Knowing how many bits certain applications use would be pretty helpful, as would knowing which family members are using them. You may want tools to help you police and manage your network so that you don’t run up ‘roaming’-style overage charges.
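The questions above boil down to keeping a per-device ledger against a monthly quota. A minimal sketch of what such a household tool might track follows; the quota, device names, and usage figures are all invented for illustration.

```python
# Hypothetical per-device usage ledger against a monthly quota (numbers invented).
QUOTA_GB = 250

usage_gb = {}  # device name -> gigabytes used this month

def record(device, gb):
    """Accumulate usage attributed to a device."""
    usage_gb[device] = usage_gb.get(device, 0) + gb

def remaining():
    """Gigabytes left in this month's quota."""
    return QUOTA_GB - sum(usage_gb.values())

record("sons-laptop", 120)   # the P2P download you didn't know about
record("settop-box", 60)     # web video instead of network television
record("family-pc", 15)

print(f"Remaining this month: {remaining()} GB")
if remaining() < 8:          # assumed rough size of one Hi-Def movie download
    print("Not enough left for that Hi-Def movie.")
```

Even this toy version assumes you can attribute traffic to devices at all, which in practice means instrumenting your home router; that attribution problem is exactly the hard part.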

A second dimension to the network management problem you will face is making the network ‘work’ in your house: since you don’t want the carrier to touch your packets upstream, you’ve got to do all the management in your own network sphere of control. You may have all the bits you can pay for, but you are still limited by the bandwidth upstream in the carrier’s network. It’ll be your job to control and provision your converged IP network spigot: making voice, Hi-Def IPTV, and P2P bulk file downloads work across multiple concurrent devices with different traffic demands, including realtime game consoles, set-top boxes, VoIP handsets, laptops, and desktops. Mom… put down that policy server!

Last but not least… it’s not just your own traffic you will need to worry about! Unless you want a lot of cars parked near your curb, you’d better make sure you’ve locked down your household network. You will pay for every packet that the neighbors (or those parked curb-side) use. Security research moves pretty quickly, so I hope you plan on keeping up with the latest and greatest here. Nothing’s more difficult from an IT perspective than rock-solid security.

From a company perspective, I’m agnostic about which way the electorate votes here. There is an equally large role for companies like Arbor to play on either fork we take. Believe me when I say that it will only be more interesting for us if suddenly there are several hundred million more people interested in how they may need to manage and secure their networks. However, I personally would rather the carriers take on the role they prefer: building out their networks with increased capacity that is managed in a fair and transparent manner to the benefit of their subscribers. Managing a network is a hard job for actual network and security engineers; I’m positive that the majority of the people affected by the outcome of the net neutrality ‘debate’ don’t want their end-of-the-month bill to depend on their ability to take it up as a hobby.
