Five Sinkholes of newGOZ

By: Dennis Schwarz and Dave Loftus -

It has been a few weeks since news broke of the Zeus Gameover variant known as newGOZ. As has been reported, the major change in this version is the removal of the P2P command and control (C2) component in favor of a new domain generation algorithm (DGA).

The DGA uses the current date and a randomly selected starting seed to create a domain name. If the domain doesn't pan out, the seed is incremented and the process is repeated. We're aware of two configurations of this DGA, which differ in two ways: the maximum number of domains to try (1,000 versus 10,000) and a hardcoded value (0x35190501 versus 0x52e645).
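
For illustration only, here is a minimal sketch of the seed-increment loop described above. The domain construction shown (hashing the date, seed, and hardcoded value and mapping the digest onto a name) is a generic stand-in rather than the actual newGOZ algorithm; what matters is the loop structure and the two configuration parameters:

import hashlib
from datetime import date

MAX_DOMAINS = 1000        # 10,000 in the second configuration
HARDCODED = 0x35190501    # 0x52e645 in the second configuration
TLDS = [".com", ".net", ".org", ".biz"]   # purely illustrative

def generate_domain(seed, today):
    # Generic stand-in: mix the date, the seed, and the hardcoded value
    # into a digest and map it onto a host name.
    data = "%s-%d-%d" % (today.isoformat(), seed, HARDCODED)
    digest = hashlib.md5(data.encode()).hexdigest()
    name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:16])
    return name + TLDS[seed % len(TLDS)]

def candidate_domains(start_seed, today=None):
    today = today or date.today()
    seed = start_seed
    for _ in range(MAX_DOMAINS):   # give up after the configured maximum
        yield generate_domain(seed, today)
        seed += 1                  # the seed is incremented on each retry

# A bot walks this sequence until a domain resolves to a live C2.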

Date-based domain generation algorithms make for excellent sinkholing targets due to their predictability, and they give security researchers the ability to estimate the size of the botnets that use them. With this in mind, we gathered five days' worth of newGOZ sinkhole data. Our domains are based on the first configuration, which appears to be the one used most in the wild.

As with all sinkhole data, many variables can affect the accuracy of victim counts, such as network topology (NAT and DHCP), timing, and other security researchers. However, we feel the data provides a good estimate of the current scope of this new threat.

Monday, July 14

[Image: july_14_map]

Four days after the discovery of newGOZ, our first sinkhole saw 127 victims. To corroborate our initial data set, SecureWorks reported seeing 177 victims connect to their sinkhole a few days earlier on July 11.

Friday, July 18

[Image: july_18_map]

An 89% increase to 241 victims.

Monday, July 21

[Image: july_21_map]

Over the weekend we saw a 78% increase to 429 victims, mostly in the eastern half of the United States.

Friday, July 25

[Image: july_25_map]

As reported by Malcovery Security on July 22, a large spam campaign distributed newGOZ via the Cutwail botnet. This campaign appears to have been very successful: on July 25, we saw a 1,879% increase to 8,494 victims, and the rest of the United States filled in.

Monday, July 29

[Image: july_29_map]

Over the weekend, and 19 days after newGOZ's discovery, our fifth and final sinkhole for this post saw a 27% decrease to 6,173 victims. This is most likely due to victims cleaning up infections from the last spam campaign. Latin America, South Africa, Southeast Asia, and New Zealand start filling in.
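
The day-over-day percentages quoted above follow directly from the reported victim counts. A quick check of the arithmetic, for illustration:

counts = {"Jul 14": 127, "Jul 18": 241, "Jul 21": 429,
          "Jul 25": 8494, "Jul 29": 6173}

days = list(counts)
for prev, cur in zip(days, days[1:]):
    change = (counts[cur] - counts[prev]) / counts[prev] * 100
    print("%s -> %s: %+.1f%%" % (prev, cur, change))

# Prints +89.8%, +78.0%, +1880.0%, -27.3%, closely matching the
# figures reported for each sinkhole day above.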

Aggregates

In aggregate and over three weeks, our five sinkholes saw 12,353 unique source IPs from all corners of the globe:

[Image: all_map]

The most infected country was the United States followed by India. The top 10 were:

[Image: top10_cc]

In addition, a number of organization types were affected, the top being:

[Image: top_verts]

Conclusion

Pondering the five days' worth of newGOZ sinkhole data above, some thoughts come to mind:

First, will the threat actor continue to use the same DGA configuration that they've been using so far? Empirically, there seem to be more security research sinkholes populating the DGA namespace than actual C2 servers. There is also the second DGA configuration, which hasn't received much use yet. Additionally, as we've seen, the actor is willing to replace the C2 mechanism altogether.

Second, will the botnet continue to grow and at what rate? The sinkhole data for July 25 suggests that the second Cutwail spam campaign was relatively successful. Will future waves continue this trend?

Finally, with the infection numbers at a fraction of what they were in the P2P version of Zeus Gameover, how long will the threat actor focus on rebuilding their botnet before returning to stealing money?

The Citadel and Gameover Campaigns of 5CB682C10440B2EBAF9F28C1FE438468

By: Dennis Schwarz -

As the infosec community waits for the researchers involved to present their Zeus Gameover takedown spoils at the next big conference, ASERT wanted to profile a threat actor that uses both Citadel, “a particularly sophisticated and destructive botnet”, and Gameover, “one of the most sophisticated computer viruses in operation today”, to steal banking credentials.

Citadel Campaign

When a threat actor decides to start a Citadel campaign, they buy the builder software, build the malware, distribute it in the wild, and then, unfortunately, usually profit. A “login key”, in Citadel parlance, identifies a specific copy of the builder. This key is also copied into the generated binaries, forming a link between the malware builder and the malware. Login keys are supposed to be unique, but because some builders have been leaked to the public, some aren't. For all intents and purposes, though, malware researchers use login keys to distinguish between distinct Citadel campaigns.
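
As a toy illustration of that last point, here is a sketch of clustering samples by login key, using two sample MD5s from the C2 table below and the login key discussed in this post (how the key is extracted from a sample is not shown):

from collections import defaultdict

# (sample MD5, extracted login key) pairs; key extraction is out of scope here.
samples = [
    ("280ffd0653d150906a65cd513fcafc27", "5CB682C10440B2EBAF9F28C1FE438468"),
    ("02968192220a94996ac20ae78f8714a2", "5CB682C10440B2EBAF9F28C1FE438468"),
]

campaigns = defaultdict(set)
for md5, login_key in samples:
    campaigns[login_key].add(md5)

for login_key, hashes in campaigns.items():
    print("%s: %d sample(s)" % (login_key, len(hashes)))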

On October 29, 2013, security researcher Xylitol tweeted that login key 5CB682C10440B2EBAF9F28C1FE438468 was not associated with any of the defendants in Microsoft’s Citadel botnet lawsuit:

[Image: tweet]

ASERT has the following command and control (C2) URLs linked with that campaign. Most of these were hosted in the 46.30.41.0/24 netblock—owned by EuroByte:

MD5                              Command and Control URL
280ffd0653d150906a65cd513fcafc27 http://46.30.41.118/QHasdHJsadbnMQWe/file.php
02968192220a94996ac20ae78f8714a2 http://46.30.41.217/street/file.php
f1c8cc93d4e0aabd4713621fe271abc8 http://46.30.41.23/AshjkyuiHKJLuhjka/file.php
80ec7b373282bbaaca52851a46dfcf0b http://46.30.41.51/WBHJSAKJghasjkdJHAGSDAu8/file.php
8c8c69ea9c84c68743368cc66c0962f3 http://46.30.41.98/werqfGADSHAJWe/file.php
8d484829fbbfff9aacf94f7d89949ee7 http://46.30.43.93/WhjyyuqwvbnqwjhERW/file.php
6646b55acb84ad05f57247e7aaa51b86 http://delprizmanet.com/hjkl123678qwe12lkj012/file.php
9c18247e6394f3d07ce9fcc43eb27a35 http://sdspropro.co.ua/1123asdASdqeqwoijlkj/file.php
6646b55acb84ad05f57247e7aaa51b86 http://sdspropro.co.ua/rrrguudness/file.php


Using archived copies of the campaign's configuration files from KernelMode.info and ZeuS Tracker, it can be seen that the threat actor was using 28 webinjects to target 14 financial institutions in the Netherlands and Germany (a short sketch of how these wildcard patterns match URLs follows the list):

set_url: *abnamro.nl/nl/ideal/identification.do*
set_url: *abnamro.nl/nl/logon/identification*html*
set_url: *accessonline.abnamro.com/fss/open/welcome.do*
set_url: *banking.berliner-bank.de/trxm/bb*
set_url: *banking.postbank.de/rai/login*
set_url: *icscards.nl/nlic/portal/ics/login*
set_url: *ideal.ing.nl/internetbankieren/SesamLoginServlet*
set_url: *ideal.snsreaal.nl/secure/sns/Pages/Payment*
set_url: *ideal.snsreaal.nl/secure/srb/Pages/Payment*
set_url: *meine.norisbank.de/trxm/noris*
set_url: *mijn*.ing.nl/internetbankieren/SesamLoginServlet*
set_url: *regiobank.nl/internetbankieren/homepage/secure/homepage/homepage.html
set_url: *regiobank.nl/internetbankieren/secure/login.html
set_url: *regiobank.nl/internetbankieren/secure/login.html*action_prepareStepTwo=Inloggen
set_url: *regiobank.nl/internetbankieren/secure/logout/logoutConfirm.html
set_url: *snsbank.nl/mijnsns/bankieren/secure/betalen/overschrijvenbinnenland.html
set_url: *snsbank.nl/mijnsns/bankieren/secure/verzendlijst/verzendlijst.html*
set_url: *snsbank.nl/mijnsns/homepage/secure/homepage/homepage.html
set_url: *snsbank.nl/mijnsns/secure/login.html
set_url: *snsbank.nl/mijnsns/secure/login.html*action_prepareStepTwo=Inloggen
set_url: *snsbank.nl/mijnsns/secure/logout/logoutConfirm.html
set_url: http://www.rabobank.nl/bedrijven/uitgelogd/*
set_url: http://www.rabobank.nl/particulieren/uitgelogd*
set_url: https*abnamro.nl*
set_url: https*de*portal/portal*
set_url: https*paypal*
set_url: https://bankieren.rabobank.nl/klanten*
set_url: https://betalen.rabobank.nl/ideal-betaling*
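
Citadel's set_url entries use simple * wildcards rather than full regular expressions. A minimal, hypothetical sketch of how such a pattern can be matched against a visited URL (the injection logic itself is omitted):

import fnmatch

def matches_inject(url, set_url_pattern):
    # Citadel-style set_url targets use "*" as a free-form wildcard;
    # fnmatch translates the pattern into an equivalent regex for us.
    return fnmatch.fnmatch(url, set_url_pattern)

# Example, using a hypothetical visited URL:
print(matches_inject(
    "https://www.abnamro.nl/nl/logon/identification.html",
    "*abnamro.nl/nl/logon/identification*html*"))   # True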


As an example and reference for later, here are a few snippets of one of the webinjects:

[Image: webinject1]

Per ZeuS Tracker and VirusTotal passive DNS data, it seems as if this particular campaign started fizzling out around the end of 2013.

Zeus Gameover Campaign

As noted by security researcher Brian Krebs, the “curators of Gameover also have reportedly loaned out sections of their botnet to vetted third-parties who have used them for a variety of purposes.” Analyzing webinject data from the global configuration file that was being distributed on the peer-to-peer network shortly before its takedown on June 2, 2014, it looks as if the threat actor behind Citadel login key 5CB682C10440B2EBAF9F28C1FE438468 had joined the ranks of Gameover's coveted third parties. Checking historical versions of the config shows that this collaboration goes back to at least January 2014.

In the analyzed configuration, there were 1,324 total webinjects targeting many financial institutions. Twelve of these were associated with the profiled actor and are the focus here. First, the banking credentials extracted by this group of injects were being exfiltrated to IP address 46.30.41.23, which had previously hosted a C2 panel for the Citadel campaign above. Second, eight financial institutions were targeted, seven of which were a subset of the previous campaign (a sketch comparing the two pattern styles follows the list):

match: ^https.*?de.*?portal/portal.*?
match: ^https://.*?regiobank.nl/internetbankieren/secure/login.html
match: ^https://.*?regiobank.nl/internetbankieren/homepage/secure/homepage/homepage.html
match: ^https://.*?bankieren.rabobank.nl/klanten.*?
match: ^https://.*?meine.deutsche-bank.de/trxm/db.*?
match: ^https://.*?meine.norisbank.de/trxm/noris.*?
match: ^https://.*?banking.berliner-bank.de/trxm/bb.*?
match: ^https://.*?banking.postbank.de/rai/login.*?
match: ^https://.*?snsbank.nl/mijnsns/secure/login.html
match: ^https://.*?snsbank.nl/mijnsns/homepage/secure/homepage/homepage.html
match: ^https://.*?snsbank.nl/mijnsns/bankieren/secure/betalen/overschrijvenbinnenland.html
match: ^https://.*?snsbank.nl/mijnsns/bankieren/secure/verzendlijst/verzendlijst.html.*?
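
The Gameover injects use anchored regular expressions (the match: lines) where Citadel used wildcards. A small, hypothetical sketch showing that the two syntaxes describe the same target, which is one way to spot the overlap between the campaigns' lists:

import fnmatch, re

# The same Postbank target, expressed in each campaign's syntax:
gameover_match = r"^https://.*?banking.postbank.de/rai/login.*?"
citadel_set_url = "*banking.postbank.de/rai/login*"

url = "https://banking.postbank.de/rai/login"   # hypothetical visited URL

print(re.match(gameover_match, url) is not None)   # True
print(fnmatch.fnmatch(url, citadel_set_url))       # True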


Finally, the coding style, function/variable naming, and formatting of the webinjects themselves were akin to the above and looked as if they had been retrofitted from Citadel to work with Gameover:

[Image: webinject2]

The drop site itself is a Ruby on Rails application that logs and displays the data sent from infected hosts:

[Image: bots]


Each entry can be formatted a bit better by clicking “Show”:

[Image: bot_detail]

Some of the logging text seen in these screenshots—for example: “Wait tan from holder”—can be correlated back to the earlier snippets of the webinjects.

The initial entries in the list are dated from around March and June of 2012, but these entries may be stale or erroneous, as there is a jump to December 2013 followed by consistent logging from there. At the time of this writing, there were approximately 1,089 entries.

In addition, up to five Jabber IDs can be configured in the application and then messaged on receipt of freshly stolen credentials:

[Image: jabber]

At the time of writing, the configured Jabber IDs were:

  • bro2@jabbim.cz
  • airhan@jabbim.cz
  • fapache@jabber.me

However, there wasn't much open-source intelligence available on these.

Conclusion

Pondering the data available: this threat actor ran a fairly targeted Citadel campaign focused on a small set of banks in the Netherlands and Germany. Based on ZeuS Tracker data, most of the Citadel C2s became active after the start of Microsoft's lawsuit on June 5, 2013, which likely explains the exclusion of 5CB682C10440B2EBAF9F28C1FE438468 from the legal notices.

The Citadel campaign looks like it closed up shop at the end of 2013. In December 2013, logging on the out-of-band Gameover drop site started in earnest, so this might be when the threat actor moved to stealing banking credentials via Gameover.

So far, it seems as if this threat actor has escaped the clutches of the great Citadel take-down and, since the drop site is still receiving stolen credentials, has evaded the Zeus Gameover take-down as well. In the spirit of “see something, say something” and with the recency of the legal action, ASERT has provided the data available to our law enforcement contacts.

The MegaUpload Shutdown Effect

By: Jose -

The popular file-sharing site MegaUpload was shut down by the US FBI and Department of Justice on Thursday, January 19, and executives from the company were taken into custody. The story is very well covered by the Wall Street Journal, whose coverage includes a copy of the indictment for your reading.

As you would expect, this was a wildly popular site with users from all over the world, so much so that even notable celebrities appear in a video discussing MegaUpload, almost endorsing it. Previous work by Arbor Networks showed that content providers and hosting sites like MegaUpload are the new “Hyper Giants”. With enough global data, you can actually see the traffic drop when the shutdown occurs. Based strictly on the traffic rates, it appears that the shutdown started just after 19:00 GMT on January 19, with traffic plummeting over the next two hours. The graphic here shows three main client regions – Asia-Pacific, Europe, and the US.

Over the past 24 hours, the top countries (in aggregate) using MegaUpload were the United States, France, Germany, Brazil, Great Britain, Turkey, Italy, and Spain, although dozens more countries are represented.

As for the traffic drop off, we’re not the only ones to notice. As seen on Twitter, South America experienced a dramatic traffic drop at about the same time, presumably due to this MegaUpload shutdown. Furthermore, we’re seeing reports of a fake MegaUpload site that is supposedly a malware infection site.

Friends of mine from elsewhere in the world have been joking that the Internet seems to be running a bit smoother today. That may be, given how much bandwidth appears to have been freed up.

[Image: MegaUpload]

Torrent Sites and The Pirate Bay: DDoS Afoot?

By: Jose -

Around the time of the convictions late this week of the folks behind The Pirate Bay, a well-known BitTorrent tracker distribution site (also see The Pirate Bay Trial: The Official Verdict – Guilty on TorrentFreak), we started seeing reports of DDoS attacks on other torrent tracker sites. Never one to miss an opportunity to look for massive DDoS attacks against important sites, I went looking in our archives to see if this was indeed the case.

It's interesting to note that TPB may indeed be a key hub for illicit torrents and pirated material (although if it's shut down, something will surely replace it). See P2P researchers fear BitTorrent meltdown from the Iconoclast website. According to the piece, TPB is so heavily connected, with over half of the torrents tracked there, that disrupting it may have an impact on all BitTorrent traffic for a while:

Raynor told TorrentFreak that if The Pirate Bay goes down, many of the other trackers might collapse as well. “If The Pirate Bay goes down the load will automatically shift to others. This is because most of the Pirate Bay swarms also include other trackers. When Pirate Bay goes down it would overload others until they fall also. Meaning even more stress and further casualties. This is likely to end in a BitTorrent meltdown.”

It makes sense, then, that someone, perhaps a vigilante or just a “griefer”, is hoping to shut down pirate activities by going after major torrent tracking sites. This bears some analysis.

All in all, except for free-torrents.org getting attacked by a Black Energy botnet run out of China (using the C&C at hack-off.ru), we can't corroborate this spate of attacks. Free-torrents.org has, in fact, been getting pounded by this botnet since mid-March 2009. But none of the other major sites appear to be receiving such packet love.

As such, while this may be happening, we're not seeing it hit the massive scale that we would expect.

Conficker Did Not Melt the Internet

By: Jose -

But it is busy.

Last week's April 1 trigger date for the new routines in Conficker.C/D (depending on the vendor) was misreported by some press agencies as the date many in the CWG said the Internet would melt down. Not quite. The press has been busy with the story, playing the hype-leads-to-fizzle angle. Sure enough, the Internet kept on trucking. Here's a view from one BGP peer during that time, showing no big change in traffic from the days prior:

[Image: internet_1wk_no_conficker_effect.png]

Some folks even said Conficker was sleeping.

Not quite. It's not DDoSing, but it is apparently doing its thing, or “Confickering”, as a friend said. It has dropped a new E variant over P2P. Some reports see it talking to a Waledac domain, but not everyone does.

As for why there is a possible Waledac (aka Storm) connection, we do not know.

So, what happened? It looks like the DNS lockout has been working. I suspect the attackers also noticed the great de-peerings of bad ISPs that have been going on for a while and decided to avoid hosting a C&C in one of those networks, instead going the P2P route. For details on how the bots know how to talk to other bots over P2P, see the LEXSI CERT blog.

We have no additional info to give beyond the links above. If you're following this story, this is a major development.

Other Conficker Stuff

IBM/ISS has interesting stats on Conficker infections based on their P2P models. Some in the press have said these numbers mean 4% of the Internet is infected with Conficker. Not quite. As Vint Cerf pointed out, it helps to know the denominator when you make those sorts of population claims. As I'll point out, it also helps to know what you're measuring. We know Conficker has a big population, but we also know our measurements are only a lower bound (based on IP visibility, etc.).

We're seeing a rise in TCP/445 scanning. Three weeks of ATLAS data for the top 50 scanners is shown below, with a clear rise over this period:

[Image: 445 tcp scans.png]

We think this is related to new Conficker activity in this timeframe.

UPDATE April 10

Oh and Mafiaboy, Conficker was not a ruse.

iWorkServices == P2P iBotnet

By: Jose -

If you wanted iWork '09 and didn't want to pay for it, you may have grabbed a pirated copy. That may not have been all you got: if you wanted your Mac to be part of a P2P botnet, then you're in luck!

It turns out the package you may have downloaded over BitTorrent, a massive 450MB ZIP installer, is really just a huge Trojan horse package that installs a simple P2P bot tool on your box. Running the installer will not install iWork but instead the official-sounding “iWorkServices”. This is not what you think it is. The binary has these characteristics:

MD5 (iWorkServices) = 046af36454af538fa024fbdbaf582a49
SHA1(iWorkServices)= 55d754b95ab9b34bdd848300045c3e11caf67ecf
SHA(iWorkServices)= 6b83df2636a4813ef722f3fad7c65b5419044889
file size: 413568 bytes
iWorkServices: Mach-O universal binary with 2 architectures
iWorkServices (for architecture ppc):   Mach-O executable ppc
iWorkServices (for architecture i386):  Mach-O executable i386

When run as root, it creates a couple of files and directories to get set up:

/System/Library/StartupItems/iWorkServices
/System/Library/StartupItems/iWorkServices/StartupParameters.plist
/usr/bin/iWorkServices

This will now run whenever your box boots. The installer makes sure that the script is runnable:

chmod 755 /System/Library/StartupItems/iWorkServices/iWorkServices

And the script just launches the binary:

#!/bin/sh
/usr/bin/iWorkServices &

Not very sophisticated. On startup it creates a “dot” directory under /tmp:

/tmp/.iWorkServices
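
For readers who want to check a machine, a minimal sketch that looks for the on-disk artifacts listed above (the presence or absence of these paths is only a quick indicator, not a full verdict):

import os

ARTIFACTS = [
    "/System/Library/StartupItems/iWorkServices",
    "/System/Library/StartupItems/iWorkServices/StartupParameters.plist",
    "/System/Library/StartupItems/iWorkServices/iWorkServices",
    "/usr/bin/iWorkServices",
    "/tmp/.iWorkServices",
]

found = [p for p in ARTIFACTS if os.path.exists(p)]
if found:
    print("Possible iWorkServices artifacts present:")
    for path in found:
        print("  " + path)
else:
    print("None of the known iWorkServices paths were found.")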

It fires up some connections:

69.92.177.146:59201
qwfojzlk.freehostia.com:1024

It will keep trying until it connects. It also grabs a list of seed P2P peers from the binary itself by decrypting the running file (thwarting static analysis) and manages the known peers as you would expect. It generates a port to listen on as needed (although it's not quite clear to me how it would handle being behind a NAT device).

The bot software itself appears to use a Kademlia-related P2P protocol, with the expected commands to manage the peer list, but also to provide a remote shell, download and run arbitrary code, and give full access to the box:

socks
system
httpget
httpgeted
rand
sleep
banadd
banclear
p2plock
p2punlock
nodes
leafs
unknowns
p2pport
p2pmode
p2ppeer
p2ppeerport
p2ppeertype
clear
p2pihistsize
p2pihist
platform
script
sendlogs
uptime
shell
rshell

What's more, there is an embedded Lua interpreter, giving an already sophisticated command language some additional structure.

So, what's this botnet been up to? DDoS, it seems, via a downloaded and executed PHP script. Clever.

We're looking to find out if anyone else is monitoring this botnet.

Bear in mind that this is just like all of the other OS X malware: you have to willingly install it. It’s much more of a Trojan Horse than a virus or worm.

Related info:

Edited to fix the name of the product this Trojan package masqueraded as.

Fast Flux and New Domains for Storm

By: Jose -

At last week's FIRST conference in Vancouver, I presented some of our ATLAS fast flux data. The slides aren't yet available, but the ongoing reports in ATLAS continuously update some of the analysis we did. Some of the new reports include the lifetimes for each network and the “distinct networks” section, which identifies related domains through shared botnet membership. ATLAS users can also get the updated blocklist of fast flux domains for use in stopping such attacks.
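
As an aside (and not a description of how ATLAS itself works), one common heuristic for flagging fast flux behavior is to resolve a suspect domain repeatedly and watch for a large, churning set of A records with very short TTLs. A rough sketch, assuming the dnspython package is available:

import time
import dns.resolver   # assumes the third-party dnspython package is installed

def looks_fast_flux(domain, rounds=5, pause=60, ip_threshold=10, ttl_threshold=300):
    ips = set()
    min_ttl = None
    for _ in range(rounds):
        answer = dns.resolver.resolve(domain, "A")
        ttl = answer.rrset.ttl
        min_ttl = ttl if min_ttl is None else min(min_ttl, ttl)
        ips.update(record.address for record in answer)
        time.sleep(pause)
    # Many distinct IPs behind one name, all with very short TTLs, is the
    # classic fast flux signature; the thresholds here are arbitrary.
    return len(ips) >= ip_threshold and min_ttl <= ttl_threshold

# e.g. looks_fast_flux("latinlovesite.com")   # one of the spammed domains below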

Just in time, too, the Storm Worm has begun using new fast flux domains. Messages look like this:

> Date: Sun, 29 Jun 2008 00:56:18 +0700
> From: hp_ejer@levelton.com
> Subject: You make my world special

> My heart belongs to you ht tp:/ /latinlovesite.com/

Here’s a list of all of the domains we’ve identified so far.

theloveparade.com        NS     ns5.lollypopycandy.com
latinlovesite.com        NS     ns5.lollypopycandy.com
youronlinelove.com       NS     ns5.lollypopycandy.com
yourloveletter.com       NS     ns5.lollypopycandy.com
makinglovedirect.com     NS     ns5.lollypopycandy.com
lollypopycandy.com       NS     ns5.lollypopycandy.com

Storm has changed its tactics constantly in the past year and a half, and this “love theme” is nothing new. We’ll see how long this theme lasts.

UPDATE 1 July 2008

Here’s a full list of domains:

superlovelyric.com      NS      ns.verynicebank.com
bestlovelyric.com       NS      ns.verynicebank.com
makingloveworld.com     NS      ns.verynicebank.com
wholoveguide.com        NS      ns.verynicebank.com
gonelovelife.com        NS      ns.verynicebank.com
loveisknowlege.com      NS      ns.verynicebank.com
lovekingonline.com      NS      ns.verynicebank.com
lovemarkonline.com      NS      ns.verynicebank.com
makingadore.com NS      ns.verynicebank.com
greatadore.com  NS      ns.verynicebank.com
loveoursite.com NS      ns.verynicebank.com
musiconelove.com        NS      ns.verynicebank.com
knowholove.com  NS      ns.verynicebank.com
whoisknowlove.com       NS      ns.verynicebank.com
theplaylove.com NS      ns.verynicebank.com
wantcherish.com NS      ns.verynicebank.com
verynicebank.com        NS      ns.verynicebank.com
shelovehimtoo.com       NS      ns.verynicebank.com
makeloveforever.com     NS      ns.verynicebank.com
wholovedirect.com       NS      ns.verynicebank.com
grupogaleria.cn NS      ns.verynicebank.com
activeware.cn   NS      ns.verynicebank.com
nationwide2u.cn NS      ns.verynicebank.com

A Case Study in Transparency

By: Kurt Dobbins -

One of the overriding themes in the Network Neutrality debate, and what triggered much of the recent activity with Comcast and the FCC, has to do with transparency. Or, in the recent words of FCC Chairman Kevin Martin, “Consumers must be completely informed about the exact nature of the service they are purchasing”. When it comes to transparency about service plans, and the business necessity behind them, I can think of many good examples, but one service provider stands above the rest: an ISP in the UK called PlusNet. In the spirit of transparency, PlusNet is an Arbor customer.

PlusNet offers an array of residential broadband services, called “Broadband Your Way” and shown in the following diagram, ranging from a “pay as you go” service for light users – casual users who are typically migrating from dial-up to always-on broadband – to high-end plans for subscribers who make heavy use of gaming, VoIP, and peer-to-peer file sharing. PlusNet also has a specific plan for gaming, called “Broadband Your Way PRO”, that offers quality broadband with low ping times and latency.

[Image: “Broadband Your Way” service plan options]

Each of the service options has some form of traffic management associated with it, so each plan can appeal to a different demographic: from a light user that does not use file sharing to a heavy user that wants file sharing and streaming 24×7. Rather than have a one-plan-fits-all service, PlusNet offers consumers a plan that fits their service and economic requirements.

What makes PlusNet really interesting is that they clearly explain each of the service options, and even go on to explain that there is no such thing as “unlimited” broadband bandwidth; i.e., the network needs to be managed during peak busy hours to ensure fairness and to deliver real-time and interactive applications with a good quality of experience. PlusNet employs three methods of ensuring fairness on their network during peak busy hours:


1) Traffic Management: For certain plans, maximum bandwidth rates for peer-to-peer file sharing and other file-downloading services are managed during peak busy hours. Each service plan comes with a higher or lower degree of traffic management.

2) Prioritization: For all plans, interactive applications like Web browsing and real-time applications such as VoIP are given priority over non-real-time or “background” applications.

3) Changing Human Behavior: For all usage-based plans with a monthly Usage Allowance, subscribers are given an economic incentive to use the network during non-busy off-peak hours. Any usage during off-peak hours is considered “free” and does not count against the monthly allowance. (A toy sketch of this rule follows below.)
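
To make that third policy concrete, here is a toy sketch of an allowance counter that treats off-peak usage as free. The peak window used (4pm to midnight) is purely hypothetical; PlusNet's actual hours and accounting are not described in this post:

from datetime import datetime

PEAK_START, PEAK_END = 16, 24   # hypothetical peak window: 4pm to midnight

def counts_against_allowance(bytes_used, when=None):
    # Return how many bytes should be charged to the monthly usage allowance.
    when = when or datetime.now()
    in_peak = PEAK_START <= when.hour < PEAK_END
    return bytes_used if in_peak else 0   # off-peak usage is "free"

# An overnight 2 GB download at 3am adds nothing to the allowance:
print(counts_against_allowance(2 * 1024**3, datetime(2008, 6, 1, 3, 0)))   # 0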


PlusNet fully discloses maximum downstream and upstream bandwidth rates for specific application types, such as peer-to-peer file sharing, as well as which applications are prioritized, for each service option. Because the need for some form of traffic management is driven by the ISP cost model, PlusNet also discloses how UK ISPs pay for bandwidth, so customers can understand the business drivers behind traffic management during peak hours, as well as future plans for capacity planning and traffic management. The consumer continues to be informed about the services they are purchasing.

But explanatory details in a service plan are not always enough. “Seeing is believing”, as they say, so PlusNet even publishes network traffic graphs depicting how the network is used during peak and off-peak hours, clearly demonstrating the benefits of their traffic management policies. Winning awards is also a nice way to demonstrate better service!

By prioritizing interactive applications like Web and Streaming, PlusNet ensures a great customer experience during peak busy hours, as shown in the graph below for the hours between 8pm and 9pm.

[Image: network traffic graph for peak hours, 8pm-9pm]

Conversely, by managing peer-to-peer file sharing during peak hours, and by encouraging consumers to do their file sharing at night (off-peak) when it is “free” and not counted against any monthly usage allowance, PlusNet gets better utilization of its network bandwidth: file-sharing traffic is time-shifted into the off-peak hours, when there is unused capacity on the network. The effect is dramatic, as shown in the following graph for the hours from 4am to 5am, and it allows PlusNet to keep its costs lower by deferring expensive upgrades to bandwidth capacity.

[Image: network traffic graph for off-peak hours, 4am-5am]

So, regardless of where the Network Neutrality debate ends up, one thing is certain: ISPs will be required to inform consumers about the exact nature of the service they are purchasing. ISPs can learn a valuable lesson in transparency by taking a closer look at the PlusNet model.

Ono and ISP Coziness

By: Danny McPherson -

Some of you may have seen the coverage that Ono picked up today because of its ability to optimize P2P transaction speeds by enabling more topologically optimal distribution – all while requiring no interaction with the ISP. On one hand, I'm happy about this, as the whole P4P thing, with its dependence on topology intelligence from the ISP, doesn't seem like a viable long-term option. However, given where the bottlenecks are in the current system, Ono leaves some room for concern as well.

Specifically, in measurements we've seen, the peak-to-trough bandwidth ratio on the fixed broadband access edge, in both cable and DSL, is around 4x (although the exact multiple isn't particularly relevant to this discussion). So, for example, if there were 1 Gbps of peak utilization, trough utilization would be around 250 Mbps. Given that ISPs need to plan for peak loads during the capacity planning process, they'll typically engineer capacity expansion for peak loads of 1 Gbps, plus some variable that accommodates incremental bandwidth utilization increases based on historical growth, as well as projected new markets, subscriber acquisition, and so on.

Needless to say, much of this peak load is driven by P2P and other protocols. So, when folks come up with solutions that improve P2P transfer rates, for example by a professed 207% as with Ono, that 1 Gbps might now be 2.1 Gbps, and the peak-to-trough ratio may now be 6x or 8x instead of 4x. Arguably, this exacerbates the problem where it's most obvious: in the access network, and in particular in the cable access network, where downstream bandwidth is shared among multiple subscribers. Given these peak burst rates, ensuring fairness among users of the network is even more critical.
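
A back-of-the-envelope sketch of that arithmetic, assuming (as above) a 1 Gbps peak, a 4x peak-to-trough ratio, and a trough that stays roughly where it is while P2P gains inflate the peak:

peak_gbps = 1.0
trough_gbps = peak_gbps / 4           # the ~4x peak-to-trough ratio observed

# If faster P2P transfers roughly double the peak while the trough holds:
new_peak_gbps = 2.1
print(new_peak_gbps / trough_gbps)    # 8.4 -- roughly the 6x-8x range cited above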

Other applications have improved transaction rates considerably as well. For example, most browsers open multiple connections to web servers in order to download multiple images and text in parallel, and iTunes and Google Maps open tens of connections in order to sidestep TCP's inherent per-session (versus per-user) congestion control mechanisms and optimize aggregate transaction rates. When your single SMTP (email) connection is contending for network resources with 300 TCP connections from your neighbor's ‘optimized’ application, it is critical that the ISP ensure fairness among subscribers if contention for resources exists, in particular for access loop capacity.
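
To put numbers on that example: with a hypothetical shared link, fairness applied per TCP connection rather than per user leaves the single-connection subscriber with almost nothing:

capacity_mbps = 100.0          # hypothetical shared access-link capacity
my_connections = 1             # your single SMTP session
neighbor_connections = 300     # the neighbor's "optimized" application

# TCP's congestion control shares capacity (roughly) per connection:
per_connection = capacity_mbps / (my_connections + neighbor_connections)
print("per-connection fairness:", my_connections * per_connection)   # ~0.33 Mbps for you

# Fairness enforced per subscriber instead:
print("per-user fairness:", capacity_mbps / 2)                       # 50.0 Mbps for you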

The implications of this aren't felt just on the network, from an available bandwidth and packet forwarding quality of service perspective, but also by devices like NATs and stateful firewalls that need to track all of these connections. Applications that arguably exploit TCP's per-session, Internet congestion-friendly characteristics in order to optimize the local user's experience are becoming more and more common. More focus on fairness across users, as opposed to fairness across transport connections, is sure to be a critical issue in transport protocol design and network architecture in the coming months.

I believe that if Ono-like solutions enable topologically optimal distribution of content, that’s a good thing. However, there will always be a bottleneck, and ensuring it’s in the place that scales best and is most manageable is critical.

Vuze, TCP RSTs, and Educated Guesswork

By: Danny McPherson -

Triggered by this report (pdf) from Vuze, and Iljitsch’s ars technica article, my friend Eric Rescorla (ekr) posted on his Educated Guesswork blog this morning some bits regarding how many TCP transactions end in RSTs. I’m glad he did this (he saved me the work), as the variances in the data and the methodology employed have been frustrating me since the Vuze report was published. I’ve heard many ISPs taking issue with the report, and several doing so publicly (e.g., AT&T), and while all the appropriate disclaimers are provided by Vuze, a typical consumer might heavily weigh the results in this report when selecting ISPs, or presupposing which ISPs might employ blunt instrumentation in attempts to throttle P2P traffic.

I commend Vuze for attempting to add some actual empirical data points to the P2P throttling discussion, and for making both summarized raw data (ZIP) and their plug-in openly available. I firmly believe empirical evidence is a fine thing, assuming the stated “facts” represented by that evidence are verifiable and, specifically, that the methodology used to collect the evidence is indeed measuring the Right Thing. This is where I take issue with the methodology employed to collect this empirical evidence, as do Eric and Iljitsch, and I believe the “first results” in the report are misleading, at best.

Given that the objective of the Vuze plug-in, as stated in the report, was to “add relevant data to the traffic throttling debate, and encourage that decisions be made based on facts“, I trust they'll be updating both their report and methodology to address any misrepresentations that the data might convey.
