The Heartburn Over Heartbleed: OpenSSL Memory Leak Burns Slowly

By: Arbor Networks -

Marc Eisenbarth, Alison Goodrich, Roland Dobbins, Curt Wilson

Background
A very serious vulnerability in OpenSSL 1.0.1, present for roughly two years, has been disclosed (CVE-2014-0160). This “Heartbleed” vulnerability allows an attacker to read up to 64 KB of memory from a connected client or server per request. Because this buffer over-read can be triggered repeatedly in rapid succession, an attacker can exfiltrate much larger sections of memory, potentially exposing private keys, usernames and passwords, cookies, session tokens, email, or any other data that resides in the affected memory region. The flaw does not affect versions of OpenSSL prior to 1.0.1. This is an extremely serious situation, and it highlights the manual nature of the tasks required to secure critical Internet services such as basic encryption and privacy protection.
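To make the mechanics concrete, here is a minimal Python sketch (deliberately not a working exploit client) of the malformed heartbeat record at the core of the bug, following the RFC 6520 field layout; the claimed payload length and TLS version shown are illustrative assumptions:

import struct

def malformed_heartbeat_record(claimed_len=0x4000):
    # HeartbeatMessage: type 1 (request) plus a 16-bit payload_length,
    # but with no actual payload bytes attached. A patched peer discards
    # the message; a vulnerable OpenSSL build echoes back claimed_len
    # bytes of whatever sits in adjacent heap memory.
    hb_msg = struct.pack("!BH", 1, claimed_len)
    # TLS record header: content type 24 (heartbeat), version TLS 1.1,
    # record length set to the bytes actually sent (3), not the claim.
    return struct.pack("!BHH", 24, 0x0302, len(hb_msg)) + hb_msg

print(malformed_heartbeat_record().hex())   # -> '1803020003014000'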

Because the vulnerability has been present for over two years, many modern operating systems and applications have shipped with vulnerable versions of OpenSSL. OpenSSL is the default cryptographic library for the Apache and nginx Web servers, which together account for an estimated two-thirds of all Web servers. OpenSSL is also used in a variety of operating systems, including BSD variants such as FreeBSD and Linux distributions such as Ubuntu, CentOS and Fedora. Other networking gear such as load balancers, reverse proxies, VPN concentrators, and various types of embedded devices are also potentially vulnerable if they rely on OpenSSL, and many do. Additionally, since the vulnerability’s disclosure, several high-profile sites such as Yahoo Mail, LastPass, and the main FBI site have reportedly leaked information. Others have discussed the impact on underground economy crime forums, which were reportedly vulnerable and were themselves attacked.

A key lesson is that OpenSSL, a vital component of the confidentiality and integrity of countless systems, applications and sites across the Internet, is an underfunded, volunteer-run project desperately in need of major sponsorship and an attendant allocation of resources.

Mitigation
Anyone running OpenSSL 1.0.1 on a server should upgrade to version 1.0.1g. Where an immediate upgrade is not possible, re-compiling OpenSSL with the OPENSSL_NO_HEARTBEATS flag enabled will mitigate this vulnerability. For OpenSSL 1.0.2, the fix will ship in 1.0.2-beta2. In terms of remediation, there is a huge amount of work to be done, not only for servers, but also for load balancers, reverse proxies, VPN concentrators, various types of embedded devices, and so on. Applications that were statically compiled against vulnerable versions of the underlying OpenSSL libraries must be re-compiled; private keys must be invalidated, re-generated, and re-issued; certificates must be invalidated, re-generated, and re-issued; and a whole host of problems and operational challenges accompany these vital procedures. Some systems may be difficult to patch, so network access control restrictions or the deployment of non-vulnerable proxies should be considered where possible to reduce the attack surface.
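Because statically linked applications and appliance firmware also embed OpenSSL, version checks have to go beyond the system package. As a first-pass sketch only, the following checks which OpenSSL build the local Python interpreter is linked against (it says nothing about other copies on the same host):

import ssl

ver = ssl.OPENSSL_VERSION                     # e.g. "OpenSSL 1.0.1e 11 Feb 2013"
print(ver)
# 1.0.1 builds up to and including 1.0.1f are affected; 1.0.1g is fixed.
if ver.startswith("OpenSSL 1.0.1") and ver.split()[1] <= "1.0.1f":
    print("Likely vulnerable: upgrade to 1.0.1g or rebuild with OPENSSL_NO_HEARTBEATS.")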

Exploitation
In most cases, exploitation of this vulnerability leaves no trace in server logs, making it difficult for organizations to know whether they have been compromised. In addition, even after the OpenSSL patch is applied, private keys, passwords, authentication credentials or any other data stored in heap memory used by OpenSSL may already have been compromised by attackers, potentially going as far back as two years. Of particular concern is the compromise of private key material: one security organization reported that it was able to obtain such material during testing, while others reported difficulty obtaining certificate material but were able to recover significant amounts of other sensitive data. Because attackers can trigger this vulnerability repeatedly in very quick succession, the amount of memory disclosed can be quite substantial. Memory contents vary depending on program state, and controlling what is returned, and from which position in memory it is read, is much like a game of roulette.

Risk to Private Key Material
Security researchers in a Twitter exchange beginning on April 8, 2014 indicated that private keys had been extracted in testing scenarios, and other researchers suggested that attacking servers during, or just after, log-rotation and restart scripts run could expose private key material. ASERT has not tested this claim.

For further details, please see the Twitter thread at https://twitter.com/1njected/status/453781230593769472


Incident Response and Attack Tools
While there have been some calls to avoid over-reaction, organizations should strongly consider revoking and re-issuing certificates and private keys; otherwise, attackers can continue to use any private keys they may have obtained to impersonate Web sites and/or launch man-in-the-middle attacks. Users should change usernames and passwords as well, but should not enter login credentials on Web sites with vulnerable OpenSSL deployments; doing so could allow attackers to compromise both the old and new credentials if they are exposed in memory.

Many tools have been released to test for the vulnerability, and these same tools are available to attackers as well. It is also reasonable to expect that password reuse will again cause additional suffering, because the same passwords shared across multiple systems extend the attack surface. A shared password that provides access to a sensitive system, or to an e-mail account used for password resets, can be all an attacker needs to infiltrate an organization’s defenses along multiple fronts.

Multiple proof-of-concept exploits have already been published, including a Metasploit module. Attackers of all shapes and sizes have already started using these tools, or are developing their own, to target vulnerable OpenSSL servers. There have been reports that scanning for vulnerable OpenSSL servers began before the bug was publicly disclosed, although other reports suggest that those scans may not have been specifically targeting the Heartbleed vulnerability.

ATLAS Indicates Scanning Activity
ASERT has observed an increase in scanning activity on TCP/443 from our darknet monitoring infrastructure over the past several days, most notably from Chinese IP addresses (Figure 1, below). Two IP addresses observed scanning TCP/443 (220.181.158.174 and 183.63.86.154) have been blacklisted by Spamhaus for exploit activity. Scans from Chinese sources are coming predominantly from AS4134 (CHINANET-BACKBONE) and AS23724 (CHINANET-IDC-BJ-AP).
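For reference, the kind of reputation check used here can be reproduced with a DNS lookup against the Spamhaus ZEN blocklist: reverse the IPv4 octets and query under zen.spamhaus.org, where any answer in 127.0.0.0/8 means the address is listed. The sketch below is illustrative only; listings change over time, and heavy or commercial use of the public mirrors is not permitted.

import socket

def spamhaus_listed(ip):
    query = ".".join(reversed(ip.split("."))) + ".zen.spamhaus.org"
    try:
        return socket.gethostbyname(query)    # return code such as 127.0.0.2
    except socket.gaierror:
        return None                           # NXDOMAIN: not currently listed

for ip in ("220.181.158.174", "183.63.86.154"):
    print(ip, spamhaus_listed(ip) or "not currently listed")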

Figure 1:       TCP/443 scans, Tuesday – Wednesday (April 8-9)


As of this writing, scanning activity observed by ASERT had decreased by Thursday, though China still accounted for the largest percentage of detected scans:

Figure 2:       TCP/443 scans, Thursday (April 10)


Pravail Security Analytics Detection Capabilities

Arbor’s Pravail Security Analytics system provides detection for this vulnerability using the following rules:

2018375 - ET CURRENT_EVENTS TLS HeartBeat Request (Server Intiated)

2018376 - ET CURRENT_EVENTS TLS HeartBeat Request (Client Intiated)

2018377 - ET CURRENT_EVENTS Possible OpenSSL HeartBleed Large HeartBeat Response (Client Init Vuln Server)

2018378 - ET CURRENT_EVENTS Possible OpenSSL HeartBleed Large HeartBeat Response (Server Init Vuln Client)

Examples of detection capabilities are reproduced below.


[Heartbleed detection tool screenshots]

Analysis of Historical Packet Captures Using New Indicators
For this and other fast-emerging security threats, organizations may wish to implement analysis capabilities over archived packet captures in order to detect the first signs of attack activity. Granular analysis using fresh indicators can help pinpoint where and when a targeted attack (or a commodity malware attack, for that matter) first entered the network, or when attackers may have exfiltrated data using a technique that was not yet being detected on the wire at the time of the initial attack and infiltration. The capabilities of Pravail Security Analytics give organizations the means to accomplish such an analysis. A free account is available at https://www.pravail.com/, and rest assured that this site is running the latest, non-vulnerable OpenSSL version.
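As a rough illustration of what such retrospective analysis can look like, the sketch below walks an archived capture and flags TLS heartbeat records (content type 0x18) whose record length is far larger than a legitimate heartbeat should be, loosely mirroring the “Large HeartBeat Response” signatures listed earlier. The capture file name and threshold are assumptions for the example, and a heuristic like this is a triage aid rather than a substitute for proper IDS coverage.

import struct
from scapy.all import rdpcap, IP, TCP        # pip install scapy

SUSPICIOUS_LEN = 200                         # legitimate heartbeats are tiny

for pkt in rdpcap("archive-2014-04-08.pcap"):          # hypothetical archive file
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        continue
    payload = bytes(pkt[TCP].payload)
    if len(payload) >= 5 and payload[0] == 0x18:        # TLS heartbeat record
        _version, rec_len = struct.unpack("!HH", payload[1:5])
        if rec_len > SUSPICIOUS_LEN:
            print("possible Heartbleed read: %s -> %s, record length %d"
                  % (pkt[IP].src, pkt[IP].dst, rec_len))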

Longer-Term Implications and Lessons Learned
Serious questions have been raised regarding the notification process surrounding this vulnerability. The operational community at large has voiced serious disapproval of the early notification of a single content delivery network (CDN) provider, while operating system vendors and distribution providers, not to mention the governmental and financial sectors, were left in the dark and discovered the issue only after it was publicly disclosed via a marketing-related weblog post by the CDN vendor in question. It has been suggested that the responsible-disclosure best practices developed and broadly adopted by the industry over the last decade were bypassed in this case, and concerns have been voiced regarding the propriety and integrity of the disclosure process in this instance.

Recent indications that a significant number of client applications may also be using vulnerable versions of OpenSSL have broad implications, given the propensity of non-specialist users to ignore software updates and to continue unheedingly running older versions of code.

Furthermore, only ~6% of TLS-enabled Web sites (and an undetermined, but most probably even smaller, percentage of other types of systems) make use of Perfect Forward Secrecy (PFS). This configurable option ensures that when an issue of this nature arises, previously encrypted traffic retained in packet captures is not susceptible to retrospective cryptanalysis.
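For a Python-based TLS service, opting into forward secrecy is largely a matter of restricting key exchange to ephemeral (EC)DHE suites; the minimal sketch below uses illustrative certificate paths and cipher strings, and production Apache/nginx configurations should follow the PFS guidance linked in the references.

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)            # negotiate best available TLS
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3     # drop legacy protocols
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")   # hypothetical paths
# Only ephemeral key-exchange suites, so session keys never depend on the
# long-term private key alone.
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+AES:DHE+AESGCM:DHE+AES:!aNULL:!eNULL:!RC4")
# ctx.wrap_socket(listening_socket, server_side=True) would then offer only PFS suites.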

Without PFS, there are no automated safeguards that can ameliorate these issues once a vulnerability of this nature has been exposed. Many operators and users may not realize that if attackers captured encrypted traffic in the past from vulnerable services or applications that were not configured with PFS (i.e., the overwhelming majority of such systems) and have retained those captures, they now have the opportunity to use analysis tools to replay those packets and decrypt the Internet traffic they contain. This means that attackers can potentially unearth users’ credentials, intellectual property, personal financial information, and more from previously captured packet dumps.

The ability of an attacker to decrypt packet capture archives requires that the attacker has obtained the private keys used to encrypt that traffic. As recent research shows, this is not a theoretical vulnerability: private key material has been compromised in a lab environment, and we must therefore assume that attackers have at least the same, if not more substantial, capabilities.

The ‘Heartbleed’ vulnerability may well result in an underground market in ‘vintage’ packet captures – i.e., packet captures performed after the date this vulnerability was introduced into OpenSSL, and prior to some date in the future after which it is presumed that the most ‘interesting’ servers, services, applications, and devices have been remediated.

This incident has the potential to evolve into a massive 21st-Century, criminalized, Internet-wide version of the Venona Project, targeting the entire population of Internet users who had the ill fortune to unknowingly make use of encrypted applications or services running vulnerable versions of OpenSSL. This highlights the paradox of generalized cryptographic systems in use over the Internet today.

While the level of complexity required to correctly design and implement cryptosystems means that in most situations developers should use well-known cryptographic utilities and libraries such as OpenSSL, the dangers of a cryptographic near-monoculture have been graphically demonstrated by the still-evolving Heartbleed saga. Further complicating the situation is the uncomfortable fact that enterprises, governments, and individuals have been reaping the benefits of the work of the volunteer OpenSSL development team without contributing even minimal amounts of time, effort, and resources to ensure that this vital pillar of integrity and confidentiality receives the investment required to guarantee its continued refinement and validation.

This is an untenable situation, and it is clear that the current development model for OpenSSL is unsustainable in the modern era of widespread eavesdropping and rapid exploitation of vulnerabilities by malefactors of all stripes. Information on how to support the OpenSSL effort can be found here: https://www.openssl.org/support/

Heartbleed and Availability
While Heartbleed is a direct threat to confidentiality, there are also potential implications for availability.

In some cases, attackers seeking exploitable hosts may scan and/or try to exploit this vulnerability so aggressively that they inadvertently DoS the very hosts they’re seeking to compromise. Organizations should be cognizant of this threat and ensure that the appropriate availability protections are in place so that their systems can be defended against both deliberate and inadvertent DDoS attacks.

It should also be noted that initial experimentation seems to indicate that it’s easiest for attackers to extract the private keys from vulnerable OpenSSL-enabled applications and services, using the least amount of exploit traffic, immediately after they have been started.  Accordingly, organizations should be prepared to defend against DDoS attacks intended to cause state exhaustion and service unavailability for SSL-enabled servers, load-balancers, reverse proxies, VPN concentrators, etc.  The purpose of such DDoS attacks would be to force targeted organizations to re-start these services in order to recover from the DDoS attacks, thus providing the attackers with a greater chance of capturing leaked private keys.


References
http://www.openssl.org/news/vulnerabilities.html#2014-0160
http://heartbleed.com
http://blog.existentialize.com/diagnosis-of-the-openssl-heartbleed-bug.html
http://news.netcraft.com/archives/2014/04/02/april-2014-web-server-survey.html
http://threatpost.com/seriousness-of-openssl-heartbleed-bug-sets-in/105309
http://arstechnica.com/security/2014/04/critical-crypto-bug-exposes-yahoo-mail-passwords-russian-roulette-style/
https://www.openssl.org/news/secadv_20140407.txt
https://isc.sans.edu/diary/Heartbleed+vendor+notifications/17929
http://possible.lv/tools/hb/
http://filippo.io/Heartbleed/
https://zmap.io/heartbleed/
https://github.com/rapid7/metasploit-framework/blob/master/modules/auxiliary/scanner/ssl/openssl_heartbleed.rb
https://gist.github.com/sh1n0b1/10100394
http://www.seacat.mobi/blog/heartbleed Note: This event may have been a false positive caused by ErrataSec’s masscan software (http://blog.erratasec.com/2014/04/no-we-werent-scanning-for-hearbleed.html)
http://arstechnica.com/security/2014/04/heartbleed-vulnerability-may-have-been-exploited-months-before-patch/
https://twitter.com/1njected/status/453781230593769472

Venona Project: http://en.wikipedia.org/wiki/Venona
PFS: https://community.qualys.com/blogs/securitylabs/2013/08/05/configuring-apache-nginx-and-openssl-for-forward-secrecy

Conficker Working Group Lessons Learned Document

By: Jose -

On the Conficker Working Group’s website, the Lessons Learned document has finally been made public. Sponsored by the US DHS, with key efforts to get it written from Rick Wesson and David Dagon, the document was prepared in large part by interviewing key people in the CWG. The purpose was to explore all of the issues we encountered in the CWG, which was an unprecedented event. In short, the document helps illuminate the challenges the information security community as a whole faces in the coming years.

As a member of the CWG, I have a number of takeaways. I think they illuminate a path for work in the coming years for many of us, work we will have to take on collaboratively.

First, it should be clear that technology alone isn’t the solution here. One of the focuses of the CWG was to ensure that all of the AV, IDS and related companies had timely access to the samples to write signatures against. These technologies and companies represent the front line of defense for all of us, end users, enterprises, and ISPs. As should be clear from the infection data, the numbers haven’t plummeted, suggesting that gaps in addressing the problem exist. We have to explore how to get defenses and cleanup to more people more efficiently, if not preventing the infection in the first place. As someone in the CWG said, “we can’t patch our way out of these worms.”

Secondly, the world needs even better global coordination for such events, and clear authority to act for certain groups. In the case of the CWG, some organizations – such as ICANN – assumed authority for coordination when no one had a clear mandate. In all cases everyone trod carefully, with the goal of protecting users foremost. You can see how contentious this winds up being by looking at the DNS-CERT discussions at ICANN, where issues like roles and responsibilities raise a lot of objections. Figuring out which groups will choose issues to tackle, and coordinating that globally, remains an open question.

A third – and technical – issue made visible in the experiences of the CWG is that we need tools to quickly tackle complex malware. Our tools are labor and time intensive, things that are in short supply when addressing the volume of threats we face in 2011. There’s a clear set of technical needs and accomplishments that can easily be funded here.

I think the CWG report is worth a study for these and many more reasons. I’m proud to represent Arbor as we battle the worm and protect the global Internet.

Another after-action report came from ICANN, which was instrumental in the response. That report was published in May 2010 and is largely a timeline of events. The two together are very worthwhile reading if you are involved in the operational security community.

Cyber Black Monday Traffic – Non Issue?

By: Jose -

Yesterday was “Cyber Monday”, so named for the kickoff of the online sales binge that so many Americans go through. The thinking is that Friday is for brick-and-mortar sales, and Monday is when folks get back to work and use their high-speed links to browse, shop, and spend. A few folks measure this specifically: the dynamic Net Usage Index for Retail report from Akamai is a very neat visualization of backbone traffic to e-commerce sites.

Looking at HTTP flows from a US provider to the outside world, we can see that there isn’t a big spike Monday relative to the previous Wednesday. We can clearly see the US holiday Thanksgiving (Thursday) and the associated day off work Friday in the traffic depressions. By the time Monday peaks we’re back at pre-holiday levels.

1wk_http_cyber_black_monday.png

Year over year, however, is another story, but I don’t think the increase is all attributable to the growth of Cyber Monday; it looks like ordinary traffic growth. Bloggers and news reports, such as Cyber Monday Internet Traffic Spikes 30 Percent from MarketWatch, are citing data from Integra Telecom about 30% year-over-year growth. Cyber Monday traffic better than last year by ZDNet blogger Sam Diaz also discusses these numbers. If we look at backbone HTTP traffic over a 52-week period, we can see growth even before Cyber Monday. This analysis is just raw Web traffic, not an analysis of specific sites or types of sites.

1yr_http_cyber_black_monday.png

The net is indeed growing, but to draw conclusions it’s important to keep context in mind. If you measure just one day a year, you’ll attribute all of the growth to one phenomenon. I trust Akamai’s data, as it’s targeted at e-commerce sites. So there is growth, but not all of it is specific to Cyber Monday.
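A toy illustration of that point, with made-up numbers: comparing Cyber Monday only against last year’s Cyber Monday folds a whole year of organic growth into the headline figure, while comparing the day against its own recent weekday baseline isolates the actual spike.

baseline_2007 = 100.0        # indexed HTTP traffic, ordinary weekday 2007
cyber_monday_2007 = 104.0
baseline_2008 = 128.0        # ~28% organic growth over the year
cyber_monday_2008 = 134.0

yoy = (cyber_monday_2008 / cyber_monday_2007 - 1) * 100
spike = (cyber_monday_2008 / baseline_2008 - 1) * 100
print("Raw year-over-year Cyber Monday growth: %.1f%%" % yoy)    # ~28.8%
print("Cyber Monday spike over 2008 baseline:  %.1f%%" % spike)  # ~4.7%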

2008 Worldwide Infrastructure Security Report

By: Craig Labovitz -

Growing financial pressures, unforeseen threats, and a volatile and rapidly changing business landscape — apt descriptions for both the world economy and this year’s Worldwide Infrastructure Security Survey.

Arbor Networks once again has completed a survey of the largest ISPs and content providers around the world. Some 70 lead security engineers responded to 90 questions covering a spectrum of Internet backbone security threats and engineering challenges. This fourth annual survey covered the 12-month period from August 2007 through July 2008.

A copy of the full report is available at http://www.arbornetworks.com/report

The most significant findings:

  • ISPs Fight New Battles
    In the last four surveys, ISPs reportedly spent most of their available security resources combating distributed denial of service (DDoS) attacks. For the first time, this year ISPs describe a far more diversified range of threats, including concerns over domain name system (DNS) spoofing, border gateway protocol (BGP) hijacking and spam. Almost half of the surveyed ISPs now consider their DNS services vulnerable. Others expressed concern over related service delivery infrastructure, including voice over IP (VoIP) session border controllers (SBCs) and load balancers.
  • Attacks Now Exceed 40 Gigabits
    From relatively humble megabit beginnings in 2000, the largest DDoS attacks have now grown a hundredfold to break the 40 gigabit barrier this year. The growth in attack size continues to significantly outpace the corresponding increase in underlying transmission speed and ISP infrastructure investment. The below graph shows the yearly reported maximum attack size.
  • Services Under Threat
    Over half of the surveyed providers reported growth in sophisticated service-level attacks at moderate and low bandwidth levels: attacks specifically designed to exploit knowledge of service weaknesses, such as vulnerable and expensive back-end queries and computational resource limitations. Several ISPs reported prolonged (multi-hour) outages of prominent Internet services during the last year due to application-level attacks.
  • Fighting Back
    The majority of ISPs now report that they can detect DDoS attacks using commercial or open source tools. This year also shows significant adoption of inline mitigation infrastructure and a migration away from less discriminate techniques like blocking all customer traffic (including legitimate traffic) via routing announcements. Many ISPs also report deploying walled-garden and quarantine infrastructure to combat botnets.

Overall, the ISP optimism about security issues reported in previous surveys has been replaced by growing concern over new threats and budget pressures. ISPs say they are increasingly deploying more complex distributed VoIP, video and IP services that are often poorly prepared to deal with the new Internet security threats. More than half of the surveyed ISPs believe serious security threats will increase in the next year while their security groups make do with “fewer resources, less management support and increased workload.”

ISPs were also unhappy with their vendors and the security community. Most believe that the DNS cache poisoning flaw disclosed earlier this year was poorly handled and increased the danger of the threat.

Finally, the surveyed ISPs also said their vendor infrastructure equipment continues to lack key security features (like capacity for large ACL lists) and suffers from poor configuration management and a near complete absence of IPv6 security features. While most ISPs now have the infrastructure to detect bandwidth flood attacks, many still lack the ability to rapidly mitigate these attacks. Only a fraction of surveyed ISPs said they have the capability to mitigate DDoS attacks in 10 minutes or less. Even fewer providers have the infrastructure to defend against service-level attacks or this year’s reported peak of a 40 gigabit flood attack.

As always, this work would not be possible without the support and participation of the Internet security community. The 2008-2009 survey will be released next Fall.


P4P Missing the Bandwidth Utilization Boat

By: Kurt Dobbins -

Verizon recently made public research data on reducing the amount of P2P traffic flowing over an ISP’s network, thereby reducing ISPs’ costs and, in theory, Verizon’s own. The basic premise is to add routing and topology awareness to P2P protocols, keeping traffic localized and reducing an ISP’s costs.

The researchers measured the average per-hop count and the average completion time of downloading a 12M file, with a module for simulating BitTorrent traffic. The findings are a result of the P4P Working Group, a partnership among ISPs and P2P networks. The stated goals of P4P are:

  • Improve throughput to P2P users
  • Allow ISPs to manage link utilization
  • Reduce number of links transited by content
  • Push traffic from undesirable (expensive/limited capacity) links to more desirable (inexpensive/available capacity) links

It seems as though the P4P Working Group has fallen victim to the law of unintended consequences. If P2P file sharing is localized, the files being downloaded by one subscriber are being served (uploaded) from other local subscribers. The bottom line is that P4P will force more upstream traffic onto the local access links than “traditional” P2P techniques otherwise would. Sure, in a world of “infinite bandwidth” the P4P concept works wonders. The problem is that bandwidth is not infinite, and in the case of wireless access networks, radio bandwidth is typically the most expensive and the most limited bandwidth resource in the network. Even though the goal of P4P is to push traffic from expensive, limited-capacity links to more inexpensive, available-capacity links, a P4P architecture will actually have the effect of drastically increasing the amount of concurrent upstream bandwidth used for sharing files over the access network, as the toy example below illustrates.
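A back-of-the-envelope sketch of that concern, with purely illustrative numbers: every byte a subscriber downloads from another local peer is also a byte of upstream traffic on the same access network, so raising the localization ratio raises concurrent upstream load on the access links even as backbone hop counts fall.

def access_link_load(downloaders, file_mb, local_fraction):
    downstream_mb = downloaders * file_mb            # the same either way
    upstream_mb = downstream_mb * local_fraction     # served from local peers' uplinks
    return downstream_mb, upstream_mb

for label, local_fraction in (("traditional P2P", 0.10), ("P4P-localized", 0.80)):
    down, up = access_link_load(downloaders=1000, file_mb=12, local_fraction=local_fraction)
    print("%-16s %7.0f MB down, %7.0f MB served upstream from local access links"
          % (label, down, up))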

The current study measured only ISP hop counts and download time as cost factors. When considering the cost-reduction advantages of P4P, the key measurement should be bandwidth utilization on the access network rather than ISP hop count. At Arbor, we’ve had customers that actually had to double their access capacity every six weeks before deploying intelligent bandwidth management for upstream P2P file sharing, and that was with traditional P2P topologies. Consider the extreme service and economic impact of localized file sharing with P4P, where the majority of downloads would be purposely served by the local access links.

The bottom line is that the findings of the P4P Working Group should be considered preliminary, as they need to take into consideration all aspects of today’s network architectures.

The “User Experience” on Mobile Handsets

By: Kurt Dobbins -

There has been a lot of media attention recently on the “user experience” for handsets. Much of this attention, of course, originated with Apple’s iPhone.

More recent excitement was generated at the Mobile World Congress in Barcelona with the launch of Google’s Android open source operating system and software platform for mobile phones, and Sony Ericsson’s XPERIA X1 for the convergence of communication and entertainment. These new entries promise to transform how consumers use data services on 3G and 4G networks, with exceptional user interfaces that drastically improve the usability of data services driven by applications such as web browsing, interactive messaging, mapping and navigation, music, and of course video. Just as graphical user interfaces transformed how personal computers were used, the same type of transformation is now occurring with mobile handsets (see the Android demo at the end of this post).

Case in point: During his keynote at the Mobile World Congress event in Barcelona, AT&T Mobility President and CEO Ralph de la Vega stated that “51 percent of iPhone owners have watched a YouTube video.”

This is one sound bite that clearly demonstrates how changes in the user interface on the handset impact the user experience. Just like YouTube, most of these new applications are “open” IP-based applications which, in fact, are carried by packet data services on the mobile network. Indeed, improved usability will drive usage. However, unlike the early personal computers, where richer applications drove up memory usage and CPU cycles, new applications and interfaces on the handset mean increased usage of the packet data network. Data traffic on that network has already quadrupled over the past year alone. Expect non-linear growth when new handsets supporting a wide array of IP-based applications reach mass-market adoption, especially as bandwidth-hogging applications like BitTorrent become more widely used.

So the near future of traffic will be a variety of real-time and non-real-time applications. Some of these applications will behave fairly; some, just as in wireline networks, will not. We’ve generated reports on cell site traffic that show how a single BitTorrent subscriber can “hog” the radio bandwidth at the service expense of currently active users on the same cell site.

Regardless of how fancy the user interface on the handset is, the underlying network infrastructure has to deliver the applications fairly and with the service quality expected. The mobile packet data network has to become more dynamic and intelligent in order to ensure the user experience these new handsets promise. Service fairness includes dynamically and instantaneously adapting to network traffic conditions and delivering IP-based application services on an individual subscriber basis.

So, in my opinion, whenever there is a discussion about the user experience there must be an accompanying discussion of how the mobile network will deliver these new applications. How the network ensures fairness and service quality of applications and services, on an individual subscriber handset basis, is at the heart of delivering on the demands and consumer expectations of the “user experience” on mobile devices.

[Embedded video: Android demo (YouTube ID 1FJHYqE0RDg)]

Internet Routing Insecurity::Pakistan Nukes YouTube?

By: Danny McPherson -

So, assume you’re an ISP in Pakistan and, for whatever reason, you receive an order such as this (PDF) from the Pakistan Telecommunication Authority (PTA). The letter is from the Deputy Director of Enforcement with the PTA, and is requiring that you immediately block access to a YouTube URL, or more specifically (actually, less specifically, but that’s a different issue), that you block access to 3 specific IP addresses: 208.65.153.238, 208.65.153.253 and 208.65.153.251.

These three IP addresses correspond to the DNS A resource records associated with www.youtube.com:

danny@rover% host -t a www.youtube.com
www.youtube.com has address 208.65.153.238
www.youtube.com has address 208.65.153.251
www.youtube.com has address 208.65.153.253

So, avoiding all discussion about whether or not said censorship is appropriate, and just focusing on how you’d actually go about blocking access to these IPs, or YouTube in general, you have a few options. Realistically, as a network engineer you could either:

  1. deploy access-control lists (ACLs) on all your router interfaces dropping packets to or from these IPs
  2. OR statically route the three IPs, or perhaps the covering prefix (208.65.153.0/24), to a null or discard interface on all the routers in your network
  3. OR employ something akin to a BGP blackhole routing function that results in all packets destined to those three specific IPs, or the covering prefixes, being discarded as a result of null or discard next hop packet forwarding policies, as discussed here

The first option would require that you augment the existing ACL filtering policies on every router interface in your network; the second would require adding static routes to every router in your network; and the last typically requires only that you announce a route for 208.65.153.0/24 to all your routers, tagged with a BGP community that maps to a “blackhole” policy on the routers in your network.

So, assume you pick option 2. However, what you fail to recall is that your routing policies currently redistribute all configured static routes into your set of globally advertised BGP routes. The net result is that you start announcing to the world that you provide destination reachability for YouTube’s 208.65.153.0/24. Or, assume you pick option 3 above, but your policies are broken such that you inadvertently announce reachability for 208.65.153.0/24 to your upstream provider, who happily conveys this to the global Internet. Same effect…

Either way, the net-net is that you’re announcing reachability to your upstream for 208.65.153.0/24, and your upstream provider, who is obviously not validating your prefix announcements against Regional Internet Registry (RIR) allocations or even Internet Routing Registry (IRR) objects, is conveying to the rest of the world, via the Border Gateway Protocol (BGP), that you, AS 17557 (PKTELECOM-AS-AP Pakistan Telecom), provide reachability for Internet address space (a prefix) that actually belongs to YouTube, AS 36561.

To put icing on the cake, assume that YouTube, who owns 208.65.153.0/24, as well as 208.65.152.0/24 and 208.65.154.0/23, announces a single aggregated BGP route for the four /24 prefixes, announced as 208.65.152.0/22. Now recall that routing on the Internet always prefers the most specific route, and that global BGP routing currently knows this:

  • 208.65.152.0/22 via AS 36561 (YouTube)
  • 208.65.153.0/24 via AS 17557 (Pakistan Telecom)

And you want to go to one of the YouTube IPs within 208.65.153.0/24. Well, bad news: YouTube is currently unavailable, because all the BGP-speaking routers on the Internet believe Pakistan Telecom provides the best connectivity to YouTube. The result is that you’ve not only taken YouTube offline within your little piece of the Internet, you’ve single-handedly taken YouTube completely off the Internet.
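A small sketch of why the hijacked route wins: packet forwarding always follows the most specific (longest) matching prefix, regardless of who originates it. Using the prefixes from this example:

import ipaddress

rib = {
    ipaddress.ip_network("208.65.152.0/22"): "AS 36561 (YouTube)",
    ipaddress.ip_network("208.65.153.0/24"): "AS 17557 (Pakistan Telecom)",
}

dst = ipaddress.ip_address("208.65.153.238")          # www.youtube.com
matches = [net for net in rib if dst in net]
best = max(matches, key=lambda net: net.prefixlen)    # longest-prefix match
print("Traffic to %s follows %s via %s" % (dst, best, rib[best]))
# -> the /24 via Pakistan Telecom, not YouTube's own /22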

A complete denial of service (DoS), intentional or not.

Even uglier is that even if the folks at YouTube begin announcing the /24 as well, and the global routing table looks like this:

  • 208.65.152.0/22 via AS 36561 (YouTube)
  • 208.65.153.0/24 via AS 36561 (YouTube)
  • 208.65.153.0/24 via AS 17557 (Pakistan Telecom)

YouTube reachability will still be half-broken, as the prefix length for the route via Pakistan Telecom is the same length as the prefix length for the YouTube announced route, and so BGP will [usually] next consider the shortest BGP path as the optimal route to the destination based solely on number of AS ‘hops’, resulting in a large portion of the Internet still preferring the /24 via Pakistan Telecom. You’re probably asking yourself now, then why doesn’t YouTube announce two /25s for the /24 in question? The reality is that most providers on the Internet don’t accept anything longer than a /24 BGP route announcement, so it’d be filtered and not installed in their routing tables.

So, what’s the root problem here? Let’s see, where to start:

  • no authoritative source for who owns and/or is permitted to provide transit services for what IP address spaces on the Internet
  • little or no explicit BGP customer prefix filters on the Internet
  • little or no inter-provider prefix filtering on the Internet
  • no route authentication and authorization update mechanism (e.g., SBGP, soBGP, etc.) in today’s global routing system

I fully suspect that the announcements from Pakistan Telecom for YouTube address space were the result of a misconfiguration or routing policy oversight, and I seriously doubt the impact to YouTube reachability [beyond Pakistan's Internet borders] was intentional. The route announcements from Pakistan Telecom have long since been withdrawn (or filtered). We had a similar event at an ISP I worked for in 1998 (yes, a decade ago); obviously, nothing has changed regarding this extremely fragile and vulnerable piece of Internet infrastructure since that time.

Some pointers to different discussions regarding prefix filtering on the Internet are available here, and here (search for ‘filter’). Our friends at Renesys blogged in parallel about some of the routing aspects of the event here. The Prefix Hijack Alert System (PHAS), a few features in our very own Arbor Peakflow, and some other products do help detect hijackings of this sort. As for prevention, well, as unbelievable as it may seem, you’re mostly out of luck today, unfortunately.

Interestingly, in the latest edition of the Infrastructure Security Report, BGP route hijacking yet again took a back seat to pretty much everything else in the list (in the world?). I suspect until the next event, it will again….

Scaling the good guys for the new year

By: Jose -

It’s customary – and a good habit – to look at major events like the new year and review your path and direction. 2007 saw “Team Good” (the global group of Internet security pros and the like) swamped with work fighting off a lot of attacks. Anticipating that 2008 won’t be any better, I ask myself this set of questions:

  • What did we do well in 2007?
  • What do we need to improve in 2008?
  • What good things do we need to strengthen and nurture in 2008?

I think there’s more to this than tools, and any answers shouldn’t be based solely on technology.

Looking back at 2007, one of the biggest things we did as a group of people was communicate more, and more quickly. One shining example was the dissection of a Microsoft “patch” Trojan a couple of months ago: a team of people worked together and had a detailed analysis completed in a matter of a few hours. This happens from time to time, but it’s still more the exception than the norm. Overall in 2007, though, bright, highly motivated people came together in various forums from all over the world to share tools and techniques to fight online crime and threats, and I think everyone benefited from it.

The biggest thing we can do to strengthen our practices from 2007 is to continue building human relationships. As the recent challenges in contacting NIC.RU during the Storm worm showed, crossing geographic and operational boundaries is still one of the toughest things to do. The more friends you have, the more you benefit when it comes to problem remediation.

Data is quickly approaching commodity status, and 2007 saw some movement in that direction. Data sharing – systematic, large-volume data sharing – is slowly increasing, and in doing so we improve everyone’s visibility and ability to begin dealing with a problem. The flip side is that gathering data is no longer the challenge; acting on it and managing it is.

In 2007 we had some real weaknesses, though, so for 2008 I think we should focus on some of the following things. First, we saw very uneven coverage of threats, a problem that can be addressed in part through a division of labor. Pushdo, for example, had been seen for months before SecureWorks posted their analysis; very few people, including many AV companies, had the inclination to do the follow-through. In contrast, many people and teams were focused on the Storm worm, often duplicating effort.

With a decentralized group of people under no single umbrella, it’s hard to enforce a division of labor. However, if enough people share knowledge openly (in trusted circles free of bad guys, usually) and accept that there’s enough badness out there to keep us all in business as good guys, then all people have to do is start tackling the wide-open plains instead of the crowded areas.

Another big issue we failed to address well in 2007 – and need to for 2008 – is knowledge capture. There’s a handful of people sufficiently plugged in to everything who can remember it all, but they’re few and far between. Storing all of this accumulated knowledge goes beyond big disks and large inbox spools: you need to have it accessible, cataloged, and cross-referenced, preferably automatically. That’s a tall order, but it’s a technical challenge.

These are the sorts of questions rolling around in my head, and they point to the solutions I am working toward this year for the problems we face on the Internet.

Information Security and NFL Espionage

By: Danny McPherson -

In late January 2007 several NFL-related web sites were hacked, including www.dolphinsstadium.com and www.miamidolphins.com. Considering the Miami Dolphins’ stadium was about to host the NFL’s biggest game of the year, Super Bowl XLI, this seemed a reasonable enough target. The sites were modified to serve malicious JavaScript code that would compromise victims’ computers, providing a good dose of nastiness to vulnerable clients. Some additional details on the incident are available in this Websense alert.

Over the past several weeks, just as the 2007-08 NFL regular season comes into full swing, the contents of email boxes everywhere have shifted from being bombarded with e-card Storm malware spam to yet another NFL-driven social engineering vector, as outlined by our friends at TrendLabs. And, of course, given that this relies on social engineering, a slightly more inviting version of the spammed malware email has been introduced. In the latter edition, the miscreants involved have gotten themselves an actual domain name for the included link rather than an IP address, and have replaced most of the text with some nifty graphics, raising the bar from quite obviously malicious to just obviously malicious. Both messages profess to provide unsuspecting users a free game tracking system.

As if this weren’t enough, now fans are being duped by coaches and players themselves. One of many recent events involves Coach Bill Belichick and his New England Patriots, who last week were punished by the NFL for illegally videotaping the defensive signals of their competitors. Clearly, they’re not the only ones who have done this, but they are the first to get caught. With the Patriots often being touted as the NFL’s model team, it was sure to disappoint.

And, as you might expect, such behavior is typically followed by considerable additional scrutiny. For example, as discussed here, last season the Green Bay Packers “had issues with a man wearing Patriots credentials who was carrying a video camera on their sideline” and “There also are questions regarding the Patriots’ use of radio frequencies during the game”. There were even reports of untimely audio problems experienced by competing teams, problems that just may have been masterminded by the Patriots.

If the Patriots were able to decode the defensive signals real-time and relay match-ups to their offensive squad on the field via helmet communications systems, surely they’d be capable of adjusting to mismatches and afforded a huge competitive advantage. Else, perhaps at half-time they could train Patriots’ quarterback Tom Brady and team to read the signals themselves, detecting blitzes and the like and adjusting by calling audibles to accommodate.

Interestingly enough, radio communications for defensive signal calling has been voted down again, including just last year. Now, one might think that if it were approved, this wouldn’t have happened; i.e., filming of competing teams wouldn’t yield defensive signals. Well, perhaps that is the case. Or perhaps lip readers and body language experts would then be put to use. Or RF interception, or taps or other communications snooping mechanisms, all of which would occur even further behind the scenes.

If I heard the commentators correctly (the television was on in the other room), this evening during the New England/San Diego game the NFL purportedly had scanning gear looking for “stray signals” (whatever those are) and the NY Jets were planning to file something with the league regarding the Patriots having their defensive players miked during earlier games.

The Patriots’ code interception incident got me thinking: If the Denver Broncos are looking for a CISO (or a new field goal kicker), I’m local, so no relocation required. And, well, after today, it’s obvious they’re not spying on anyone.

ddos de da: Internet attacks still considerable

By: Danny McPherson -

Here at Arbor we’re working with many of our service provider partners on trying to qualify and quantify denial of service attacks and other network threats. Here are a few data points relative to DDoS attacks we’ve observed over the past 255 days of data collection:

  • 255 days of data collection
  • 39 ISPs participating on average
  • ~1 Tbps of monitored inter-domain traffic
  • ~143k rate-based attacks (~278k total attacks)
  • 58% of attacks were TCP-based (ports 80, 25, 6667 and 22 were the leading targets)
  • 36% were ICMP
  • 5% were UDP (with fragments accounting for well over a majority)
  • 15% of attacks were TCP SYN (>94% had constant source sets and were likely NOT spoofed)

As for scale and frequency of attacks, of the 255 days of collection the following number of days had at least one attack exceeding the indicated threshold:

  • 6 Mpps – 1
  • 5 Mpps – 12
  • 4 Mpps – 33
  • 3 Mpps – 53
  • 2 Mpps – 91
  • 1 Mpps – 151

Total attacks over 255 days exceeding a given threshold:

  • 6 Mpps – 1
  • 5 Mpps – 17
  • 4 Mpps – 82
  • 3 Mpps – 135
  • 2 Mpps – 352
  • 1 Mpps – 823
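For clarity, the relationship between the two tallies (distinct days with at least one attack over a threshold versus total attacks over that threshold) can be computed from per-attack records as sketched below; the sample records are made up purely to show the bookkeeping.

attacks = [("day001", 1.2), ("day001", 2.4), ("day017", 6.1),
           ("day017", 1.1), ("day090", 3.3), ("day151", 1.0)]   # (day, peak Mpps), illustrative

for threshold in (6, 5, 4, 3, 2, 1):
    over = [(day, mpps) for day, mpps in attacks if mpps >= threshold]
    days_with_hit = len({day for day, _ in over})
    print(">= %d Mpps: %d day(s) with such an attack, %d attack(s) total"
          % (threshold, days_with_hit, len(over)))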

Note that the above million packet per second (Mpps) attacks are seen from the perspective of a single participating ISP, which could be the ingress, transit or edge network for the attack target. As such, it’s extremely likely that cross-ISP correlation of the attack target data sets (which is done but not yet fully analyzed) will push a much larger number of attacks over the one million packets per second mark, and manual inspection already reveals that the aggregate of some of these attacks is far greater than even 10 Mpps!

To put this in perspective, the most crippling of the Estonian attacks had peak rates, averaged over a 24-hour period, of about 4 Mpps. 4 Mpps is a very large attack, and while less than 1% of the attacks we see exceed the 1 Mpps mark, these attacks are nothing to ignore, pretty much regardless of who you are or what’s motivating an attacker.

We hope to release some formal analysis of the attack and traffic statistics we’ve been collecting; look for something here sometime soon. Volume III of the Infrastructure Security Survey is currently being compiled as well. With any luck, between these data sets we’ll be able to provide qualitative information on denial of service and Internet attack trends.
