The Heartburn Over Heartbleed: OpenSSL Memory Leak Burns Slowly

By: Arbor Networks

Marc Eisenbarth, Alison Goodrich, Roland Dobbins, Curt Wilson

A very serious vulnerability in OpenSSL 1.0.1, present for two years before its discovery, has been disclosed (CVE-2014-0160). This “Heartbleed” vulnerability allows an attacker to read up to 64 KB of memory from a connected client or server per request. This buffer over-read can be exploited in rapid succession to exfiltrate larger sections of memory, potentially exposing private keys, usernames and passwords, cookies, session tokens, email, or any other data that resides in the affected memory region. The flaw does not affect versions of OpenSSL prior to 1.0.1. This is an extremely serious situation, which highlights the manual nature of the tasks required to secure critical Internet services such as basic encryption and privacy protection.
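The flaw lies in OpenSSL’s handling of the TLS heartbeat extension (RFC 6520): a heartbeat request declares the length of its payload in a 16-bit field, and a vulnerable peer echoes back the declared number of bytes without checking that claim against the payload actually received. A minimal sketch of how such a record is laid out (the field values are illustrative, not an exploit):

```python
import struct

def heartbeat_record(claimed_len: int, payload: bytes) -> bytes:
    """Build a TLS heartbeat record whose declared payload length may
    exceed the actual payload -- the core of the Heartbleed over-read."""
    # Heartbeat message: type (1 = request), 16-bit payload length, payload
    hb = struct.pack(">BH", 1, claimed_len) + payload
    # TLS record header: content type 24 (heartbeat), version, record length
    return struct.pack(">BHH", 24, 0x0302, len(hb)) + hb

# A benign request declares its true payload length; a malicious one lies,
# asking the peer to echo back 16 KB it never sent.
benign = heartbeat_record(4, b"ping")
malicious = heartbeat_record(0x4000, b"")
```

Because the length field is 16 bits, a single request can claim at most 0xFFFF bytes, which is why each round trip leaks at most 64 KB and why attackers simply repeat the request.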

As the vulnerability has been present for over two years, many modern operating systems and applications have deployed vulnerable versions of OpenSSL. OpenSSL is the default cryptographic library for the Apache and nginx Web server applications, which together account for an estimated two-thirds of all Web servers. OpenSSL is also used in a variety of operating systems, including BSD variants such as FreeBSD, and Linux distributions such as Ubuntu, CentOS, Fedora and more. Other networking gear such as load-balancers, reverse proxies, VPN concentrators, and various types of embedded devices is also potentially vulnerable if it relies on OpenSSL, which much of it does. Additionally, since the vulnerability’s disclosure, several high-profile sites such as Yahoo Mail, LastPass, and the main FBI site have reportedly leaked information. Others have discussed the impact on underground-economy crime forums, which were reportedly vulnerable and have already been attacked.

A key lesson is that OpenSSL, a vital component of the confidentiality and integrity of countless systems, applications and sites across the Internet, is an underfunded, volunteer-run project desperately in need of major sponsorship and an attendant allocation of resources.

Anyone running OpenSSL on a server should upgrade to version 1.0.1g. For earlier versions, re-compiling with the OPENSSL_NO_HEARTBEATS flag enabled will mitigate this vulnerability. For the OpenSSL 1.0.2 branch, the vulnerability will be fixed in 1.0.2-beta2. In terms of remediation, there’s a huge amount of work that must be done, not only for servers but also for load-balancers, reverse proxies, VPN concentrators, various types of embedded devices, etc. Applications that were statically compiled against vulnerable versions of the underlying OpenSSL libraries must be re-compiled; private keys must be invalidated, re-generated, and re-issued; certificates must be invalidated, re-generated, and re-issued – and a whole host of problems and operational challenges accompany these vital procedures. Some systems may be difficult to patch, so network access control restrictions or the deployment of non-vulnerable proxies should be considered where possible to reduce the attack surface.
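As a quick first pass, an administrator can check which OpenSSL a given runtime is linked against; 1.0.1 through 1.0.1f are affected, while 1.0.1g and later are fixed. The sketch below checks only the 1.0.1 branch (it does not cover the vulnerable 1.0.2 betas) and is a convenience, not a substitute for auditing statically compiled applications, which a system-library check will not catch:

```python
import re
import ssl

def openssl_101_vulnerable(version_string: str) -> bool:
    """Return True if the string reports a Heartbleed-affected 1.0.1
    release (1.0.1 through 1.0.1f; 1.0.1g and later are fixed)."""
    match = re.search(r"OpenSSL 1\.0\.1([a-z]?)", version_string)
    if not match:
        return False  # some other branch; this sketch checks only 1.0.1
    return match.group(1) <= "f"  # "" (plain 1.0.1) through "f" are affected

# Check the OpenSSL this Python runtime is linked against:
print(ssl.OPENSSL_VERSION, "->", openssl_101_vulnerable(ssl.OPENSSL_VERSION))
```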

In most cases, exploitation of this vulnerability leaves no trace in server logs, making it difficult for organizations to know whether they have been compromised. In addition, even after applying the OpenSSL patch, private keys, passwords, authentication credentials or any other data that was stored in heap memory used by OpenSSL may already have been compromised by attackers, potentially going as far back as two years. Of significant concern is the compromise of private key material, and one security organization reported that it was able to obtain this material during testing. Others reported difficulty in obtaining certificate material but were able to discover significant amounts of other sensitive data. Considering how easily attackers can hammer this vulnerability over and over in quick succession, the amount of memory disclosed can be quite substantial. Memory contents vary depending on program state, and controlling what is returned, and from what position in memory it is read, is much like a game of roulette.

Risk to Private Key Material
Security researchers, in a Twitter exchange beginning April 8, 2014, indicated that private keys had been extracted in testing scenarios, and other researchers suggested that attacking servers during, or just after, log-rotation and restart scripts run could expose private key material. This claim has not been tested by ASERT.

For further details, please see the Twitter thread at


Incident Response and Attack Tools
While there have been some calls to avoid over-reaction, organizations should strongly consider revoking and reissuing certificates and private keys; otherwise, attackers can continue to use any private keys they may have obtained to impersonate Websites and/or launch man-in-the-middle attacks. Users should change usernames and passwords as well, but should not enter login credentials on Websites with vulnerable OpenSSL deployments. To do so could invite attackers to compromise both the old and new credentials if they are exposed in memory.

Many tools have been made available to test for the vulnerability, and these same tools are available for attackers to use as well. It is also reasonable to expect that the password-reuse problem will again cause additional suffering, because the same password shared across multiple systems extends the attack surface. A shared password that provides access to a sensitive system, or to an e-mail account used for password resets, can be all that an attacker needs to infiltrate an organization’s defenses along multiple fronts.

Multiple proof-of-concept exploits have already been published, and a Metasploit module is available. Attackers of all shapes and sizes have already started using these tools or are developing their own to target vulnerable OpenSSL servers. There have been reports that scanning for vulnerable OpenSSL servers began before the disclosure of the bug was made public, although other reports suggest that those scans may not have been specifically targeting the Heartbleed vulnerability.

ATLAS Indicates Scanning Activity
ASERT has observed an increase in scanning activity on tcp/443 from our darknet monitoring infrastructure over the past several days, most notably from Chinese IP addresses (Figure 1, below). Two of the IP addresses observed scanning tcp/443 have been blacklisted by Spamhaus for exploit activity. Scans from Chinese sources are coming predominantly from AS4134 (CHINANET-BACKBONE) and AS23724 (CHINANET-IDC-BJ-AP).

Figure 1: TCP/443 scans, Tuesday–Wednesday (April 8-9)



As of this writing, scan activity observed by ASERT had decreased by Thursday, although China still accounted for the largest percentage of detected activity:

Figure 2: TCP/443 scans, Thursday (April 10)



Pravail Security Analytics Detection Capabilities

Arbor’s Pravail Security Analytics system provides detection for this vulnerability using the following rules:

2018375 - ET CURRENT_EVENTS TLS HeartBeat Request (Server Intiated)

2018376 - ET CURRENT_EVENTS TLS HeartBeat Request (Client Intiated)

2018377 - ET CURRENT_EVENTS Possible OpenSSL HeartBleed Large HeartBeat Response (Client Init Vuln Server)

2018378 - ET CURRENT_EVENTS Possible OpenSSL HeartBleed Large HeartBeat Response (Server Init Vuln Client)
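Conceptually, the two “Large HeartBeat Response” rules key on an exchange in which the heartbeat response is dramatically larger than the request that elicited it, which is the on-the-wire symptom of a successful over-read. A simplified sketch of that comparison (TLS record parsing elided; the threshold is illustrative):

```python
TLS_HEARTBEAT = 24  # TLS record content-type value for heartbeat messages

def looks_like_heartbleed(request_record: bytes,
                          response_record: bytes,
                          threshold: int = 200) -> bool:
    """Flag a heartbeat exchange whose response is dramatically larger
    than the request -- the on-the-wire symptom of a successful over-read."""
    if not request_record or not response_record:
        return False
    if request_record[0] != TLS_HEARTBEAT or response_record[0] != TLS_HEARTBEAT:
        return False
    return len(response_record) - len(request_record) > threshold
```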

Examples of detection capabilities are reproduced below.


Heartbleed detection tool screenshot



Analysis of Historical Packet Captures Using New Indicators
In the event of this and other highly emergent security threats, organizations may wish to consider implementing analysis capabilities over archived packet captures in order to detect the first signs of attack activity. Granular analysis using fresh indicators can help pinpoint where and when a targeted attack (or a commodity malware attack, for that matter) may have first entered the network, or when such attackers may have exfiltrated data using a technique that was not yet being detected on the wire at the time of the initial attack and infiltration. The capabilities of Pravail Security Analytics give organizations the means to accomplish such an analysis. A free account is available, and rest assured that the site is running the latest, non-vulnerable version of OpenSSL.

Longer-Term Implications and Lessons Learned
Serious questions have been raised regarding the notification process surrounding this vulnerability. The operational community at large has voiced serious disapproval of the early notification of a single content delivery network (CDN) provider, while operating system vendors and distribution providers, not to mention the governmental and financial sectors, were left in the dark and discovered the issue only after it was publicly disclosed via a marketing-related weblog post by the CDN vendor in question. It has been suggested that the responsible-disclosure best practices developed and broadly adopted by the industry over the last decade were in fact bypassed in this case, and concerns have been voiced regarding the propriety and integrity of the disclosure process in this instance.

Recent indications that a significant number of client applications may also be utilizing vulnerable versions of OpenSSL have broad implications, given the propensity of non-specialist users to ignore software updates and to continue unheedingly running older versions of code.

Furthermore, only ~6% of TLS-enabled Websites (and an undetermined, but most probably even smaller, percentage of other types of systems) make use of Perfect Forward Secrecy (PFS). This configurable option ensures that previously encrypted traffic retained in packet captures is not susceptible to retrospective cryptanalysis if the server’s long-term private key is later compromised.
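With PFS, each session derives its keys from an ephemeral (EC)DHE exchange, so a leaked long-term key cannot decrypt recorded sessions. As one concrete illustration, Python’s ssl module can restrict a server context to forward-secret cipher suites (the cipher string is a sketch; exact suite availability depends on the linked OpenSSL):

```python
import ssl

# Restrict a server-side TLS context to forward-secret (ephemeral)
# key-exchange suites; a leaked long-term key then cannot decrypt
# previously captured sessions.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:!aNULL:!eNULL")

negotiable = [suite["name"] for suite in context.get_ciphers()]
print(negotiable)  # only ECDHE/DHE (and inherently forward-secret TLS 1.3) suites
```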

Without PFS, there are no automated safeguards that can ameliorate these issues once a vulnerability of this nature has been exposed. Many operators and users may not realize that if attackers captured encrypted traffic in the past from vulnerable services/applications which weren’t configured with PFS – i.e., the overwhelming majority of such systems – and have retained those captured packets, they now have the opportunity to use analysis tools to replay those packets and decrypt the Internet traffic contained in them. This means that attackers can potentially unearth credentials, intellectual property, personal financial information, etc. from previously captured packet dumps.

The ability of an attacker to decrypt packet-capture archives requires that the attacker has obtained the private keys that protected that traffic. As recent research shows, this is not a theoretical vulnerability: private key material has been compromised in a lab environment, and we must therefore assume that attackers have at least the same, if not more substantial, capabilities.

The ‘Heartbleed’ vulnerability may well result in an underground market in ‘vintage’ packet captures – i.e., packet captures performed after the date this vulnerability was introduced into OpenSSL, and prior to some date in the future after which it is presumed that the most ‘interesting’ servers, services, applications, and devices have been remediated.

This incident has the potential to evolve into a massive 21st-Century, criminalized, Internet-wide version of the Venona Project, targeting the entire population of Internet users who had the ill fortune to unknowingly make use of encrypted applications or services running vulnerable versions of OpenSSL. This highlights the paradox of generalized cryptographic systems in use over the Internet today.

While the level of complexity required to correctly design and implement cryptosystems means that in most situations, developers should utilize well-known cryptographic utilities and libraries such as OpenSSL, the dangers of a cryptographic near-monoculture have been graphically demonstrated by the still-evolving Heartbleed saga.  Further complicating the situation is the uncomfortable fact that enterprises, governments, and individuals have been reaping the benefits of the work of the volunteer OpenSSL development team without contributing the minimal amounts time, effort, and resources to ensure that this vital pillar of integrity and confidentiality receives the necessary investment required to guarantee its continued refinement and validation.

This is an untenable situation, and it is clear that the current development model for OpenSSL is unsustainable in the modern era of widespread eavesdropping and rapid exploitation of vulnerabilities by malefactors of all stripes. Information on how to support the OpenSSL effort can be found here:

Heartbleed and Availability
While Heartbleed is a direct threat to confidentiality, there are also potential implications for availability.

In some cases, attackers seeking exploitable hosts may scan and/or try to exploit this vulnerability so aggressively that they inadvertently DoS the very hosts they’re seeking to compromise. Organizations should be cognizant of this threat and ensure that the appropriate availability protections are in place so that their systems can be defended against both deliberate and inadvertent DDoS attacks.

It should also be noted that initial experimentation seems to indicate that it’s easiest for attackers to extract the private keys from vulnerable OpenSSL-enabled applications and services, using the least amount of exploit traffic, immediately after they have been started.  Accordingly, organizations should be prepared to defend against DDoS attacks intended to cause state exhaustion and service unavailability for SSL-enabled servers, load-balancers, reverse proxies, VPN concentrators, etc.  The purpose of such DDoS attacks would be to force targeted organizations to re-start these services in order to recover from the DDoS attacks, thus providing the attackers with a greater chance of capturing leaked private keys.

References

Note: The pre-disclosure scanning event mentioned above may have been a false positive caused by ErrataSec’s masscan software.

Venona Project:

Putting Your Intelligent DDoS Mitigation System to the Test

By: Gary Sockrider

If you are reading this, it’s likely you have already deployed an Intelligent DDoS Mitigation System or plan to do so soon. It’s a great feeling to know you are seeing and stopping attacks. It’s also important to know that you have the systems properly tuned and your operators are prepared to deal with attacks in the future. As with any type of response activity, practice makes perfect. To that end it’s a good idea to have a DDoS simulator on hand.

By simulating a DDoS attack you can verify that you are catching the real ones. It’s also a great way to fine tune your mitigation for optimal performance. Last and most importantly, you can run drills at regular intervals. In this way your team can practice identifying and dealing with threats in real time without having to put production resources at serious risk. Drills can also be useful for training new employees.

We know that DDoS attacks can be multi-dimensional, and even though you have the right tools in place, you really don’t know if your team is up to the challenge. You need some sort of multi-dimensional tool to simulate real-world attacks. It isn’t enough to simply use a script that sends SYN packets towards a single host. In order to know how to use countermeasures, you need a tool that can send multiple types of attacks. Ideally, that tool will also be easy to use.

Recently, I got to spend some time with the folks at BreakingPoint. What I liked about their solution was the combination of simplicity and power. They have pre-configured denial-of-service attacks to evaluate your defenses. These include application-layer, VoIP and brute-force attacks such as HTTP Fragmentation, SlowLoris, SSL Renegotiation, UDP Flood, VoIP Flood, and IPv6 Extension Header Fragmentation, among others. They can also simulate legitimate traffic combined with multiple types of DDoS for a very realistic test environment.

Last month at Cisco Live! in San Diego, Arbor and BreakingPoint teamed up for a live demonstration at the World of Solutions. Using BreakingPoint’s Firestorm system, Arbor Consulting Engineer Scott Rikimaru was able to easily create a DDoS attack profile simulating SlowLoris. During the live demonstrations, Scott would initiate the attack from the Firestorm against an Arbor Pravail APS appliance. The attack immediately became visible on the Pravail APS, and Scott then began mitigation by switching into “Active Mode”. The audience got to see the mitigation in real time, with attack traffic being dropped and legitimate traffic passed.

We all know the threat landscape is constantly changing. Make sure you test your IDMS deployment regularly so you can keep it running in top condition and get the most out of your investment.


AV, how cam’st thou in this pickle?

By: Danny McPherson

While I’ve seen and heard random smatterings about why AV isn’t effective, or analyst reports from the likes of Yankee declaring “AV is Dead”, there’s been very little qualitative or quantitative study of precisely why. Well, beyond the endless flurry of new malware families and subsequent offspring, that is. As such, I find myself borrowing from Shakespeare’s The Tempest and asking: “AV: how cam’st thou in this pickle?”

That’s why I’m pleased that some of my colleagues at Arbor, with collaborators at the University of Michigan, published Automated Classification and Analysis of Internet Malware (pdf).

The report identifies three main issues with AV:

    • completeness – AV does not provide a complete categorization of the datasets, with AV failing to provide labels for 20 to 62 percent of the malware samples examined in the study
    • consistency – when labels are provided, malware is inconsistently classified across families and variants within a single naming convention, as well as across multiple vendors and conventions
    • conciseness – AV systems provide either too little or far too much information about a specific piece of malware

The authors go on to demonstrate that what something does is more important than what you call it (i.e., behaviors are better than labels). By observing state changes such as files modified, processes created and network connections established, a behavioral fingerprint can be generated for the malware. From there, grouping based on these fingerprints can provide meaningful output and actionable information.
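The grouping step can be caricatured in a few lines: reduce each sample to the set of state changes it produced, hash that set into a fingerprint, and cluster samples by fingerprint (the sample names and event strings below are invented for illustration):

```python
import hashlib
from collections import defaultdict

def behavioral_fingerprint(events):
    """Hash a sample's observed state changes (files modified, processes
    created, network connections) into an order-independent fingerprint."""
    canonical = "\n".join(sorted(events))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def group_by_behavior(samples):
    """Map fingerprint -> list of sample names sharing that behavior."""
    groups = defaultdict(list)
    for name, events in samples.items():
        groups[behavioral_fingerprint(events)].append(name)
    return dict(groups)

samples = {
    "sample_a": {"file:write:svc.exe", "net:connect:irc/6667"},
    "sample_b": {"net:connect:irc/6667", "file:write:svc.exe"},  # same behavior
    "sample_c": {"file:write:tmp.dll"},
}
groups = group_by_behavior(samples)  # sample_a and sample_b share a group
```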

It’s definitely worth the read…

Botconomics: The Monetization of YOUR Digital Assets

By: Danny McPherson

A decade ago, IF your PC was compromised, it was usually just taken for a joy ride. Today, with the monetization of bots, ease of compromise, prevalence of malware, and increasing connectedness of endpoints on the Internet, WHEN your assets are compromised they’re subjected to something more akin to a chop shop.

To follow this vein (purely for amusement):

  • Seat belt == AV; if you’re hit, you’re a whopping 50% less likely to get injured (note that the 50% number is pretty accurate, at least in the case of AV)
  • Overhead and side curtain airbags == Good AV (or HIPS?); might suffocate you or rip your head off, but there to make you safer!
  • Alarm system == IDS; is anyone listening?
  • Anti-lock Braking System == NAC; a parking pass in the console and you’re in the building
  • CD case in the glove box == lift some CD license keys
  • Office Badge/ID == Paypal & ebay account credentials
  • Used in hit & run == DDoS attack
  • LoJack == IP reputation services –> subscription required
  • The Club == HIPS (pita)
  • Turning your car into one of those rolling advertisements.. Or towing one of those billboard trailers? Leaving a cloud of smoke and soot in your wake? == Why Spam, of course… (ok, really weak)
  • Body stuffed in the trunk, used for high-dollar drug or arms deal and dumped in the river == drop site
  • Wallet with some cash or CCs == score!; keylogger streaming PINs, login credentials and secret-question answers, mother’s maiden name, birth date, national ID number, etc. to one of the aforementioned drop sites
  • Garage door opener and vehicle registration w/home address in the car — hrmmm…
  • Car thief picks up your girlfriend == phishing…? :-)

OK, OK, enough of the bad analogies, I suspect you get the point or have stopped reading by now.

Ahh, but folks aren’t driving cars across the country anymore, they’re flying jet planes – Good thing we’ve got seat belts! And for you skeptics – not to worry, we’ve now got floatation devices if things get really ugly…

The point is, if you or anyone you do business with online is compromised, you’re at risk. Further – if anyone you do business with is online, you’re at risk. Need more? Someone that has your personal information does something with a networked system, and as a result, you’re at risk.

Think AV is protecting you? An IDS? Malware today is explicitly engineered around leading AV engines (e.g., 580+ Agobot variants), engines whose auto-update functions are disabled upon compromise via any of a number of techniques, from removing the programs or making them non-executable to adding hosts-file entries that point the Internet address of the AV signature-update server at a local interface.

Entire bot systems exist with load-balanced command and control, real-time dynamic partitioning, multi-mode monetization capabilities based on the bot-services consumer’s needs, etc.

The GOOD News for those bot-services consumers:

[Taken verbatim from a recent spam message I received boasting ‘bullet proof’ (bp) hosting services:]

    • IPs that change every 10 minutes (with different ISPs)
    • Excellent ping and uptime
    • 100% uptime guarantee
    • Easy Control Panel to add or delete domains thru webinterfaces
    • …..

Bot herders have heard the public’s outcry for multi-mode bots, responding with SLAs, intuitive user interfaces, ISP redundancy and even excellent ping times! Heck, several pieces of malware perform speed tests to ‘top Internet sites’, indexing and allocating our resources based on availability and connectedness.

Need a turn-key phishing solution? For a small fee you can get a botnet partitioned to do all these things and more:

  • compromise based on exploit of your choice
  • patch owned hosts for exploit that was used to compromise, and perhaps a few other low-hanging vulnerabilities
  • allocate bot resources (control, drop, lift, host, spam, attack) based on connectedness
  • lift CD keys, install key loggers, lift passwords, account info, email addys, etc
  • setup a couple bots as drop sites
  • setup a couple bots as phishing site web servers
  • setup a couple sites as phishing email relays
  • setup a couple open proxies for access to any of the above
  • want to take it for a test drive, not a problem

and voila, you’re in business!

Ohh, and don’t forget the special-operations bots at the ready in the event that an anti-{spam,bot,phishing} company actually impacts your operations. Don’t believe me? Go ask BlueSecurity (note the link still doesn’t work), or our friends at CastleCops, or… Six months of DoS-attack observation across 30 ISPs here at Arbor yielded well over one hundred days with at least one ISP reporting an attack of one million packets per second or better. Some trivial math (1,000,000 packets per second * 60 bytes per packet * 8 bits per byte == 480 Mbps) shows that’s enough to take 99%+ of the enterprises on the Internet offline today.
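Making that last bit of arithmetic explicit:

```python
packets_per_second = 1_000_000  # one million pps, as observed
bytes_per_packet = 60           # a typical small flood packet
bits_per_byte = 8

attack_bps = packets_per_second * bytes_per_packet * bits_per_byte
print(attack_bps // 1_000_000, "Mbps")  # 480 Mbps
```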

I’m not knocking any of the solutions above; they’re all necessary (well, most of them) and serve some purpose. It’s little more than an arms race today and there is no Silver Bullet; it’s all about layered security and awareness. As good-minded security folk continue to innovate, so too do the miscreants. As they find more ways to pull more money from more compromised assets, the problem will continue to grow. You CAN and WILL be affected, whether directly or implicitly, whether you bank and buy stuff online or not – the merchants you deal with surely have networks of some sort. A good many of those merchants do make concerted efforts to protect their consumers – perhaps others see things like any of the slew of compliance standards as ‘I tried, get out of jail free’ waivers for when they do get compromised.

Being aware that the problem exists is the first step towards making it suck less, or so one would hope. Let’s just hope that the Internet’s open any-to-any connectivity, as molested today as it may be (much in the name of security, mind you), isn’t entirely lost in the process.

Bots and widespread compromise affect every aspect of our economy today, directly or implicitly. Therein enters our amalgamation; botconomics.


By: Sunil James

After many months of arduous labor by various folks here at Arbor, I’m pleased to bring you ATLAS, short for the Active Threat Level Analysis System. It’s been a long road, but our baby can finally walk!

ATLAS is a multi-phase project, the first phase of which includes the release of a public portal. ATLAS is designed to push the state of the art in large-scale, Internet-wide threat monitoring. You all know our pedigree; this stuff has been in our blood for years, and now Arbor is moving forward to really develop the kind of threat-monitoring network that I hope will provide the public and Arbor customers with a truly globally scoped view of the Internet from a threat perspective.

Plenty more to come on ATLAS over the next few days/weeks/months. For now, though, give ATLAS a whirl, and let us know what you think. This site is built for you, so, your feedback is ESSENTIAL in helping us better meet your needs. If you’re at RSA this year, stop by the booth for a live demonstration.

45,187,200 seconds later, ATLAS lives…

A Letter to Larry & Sergey

By: Sunil James

Dear Larry & Sergey,

What up, ‘fellas? Thanks for swinging through my house party last weekend. I’m glad to see you’re still able to hang UMich style :) Oh, and before I forget, Halle says thanks for the wicked “Kid ‘n Play”-style break-dancing you two put on at the party. Good to know you can still get down.

Damn, this custom search engine thing is mighty slick! I started diddling around with it on Monday evening, and created an infosec-specific search engine. However, what I really dig is the ability to have the community refine and expand it to make it more encompassing of sites I missed (and there’s a ton of ‘em!). For the past few years, your little project has been tremendously helpful in allowing me to conduct infosec-specific searches; however, I usually ended up defaulting to manual browsing through a handful of trusted sites to deliver vetted data. With your latest baby, you’ve given my fellow security dweebs and me the power to really develop a usable, community-driven search engine specific to our needs as infosec operators.

Anyway, when you get a chance, check out my engine. When you’re not busy cornering the way we search for all kinds of media, I’d love it if you actively participated and refined the engine by signing up as a contributor. (BTW, damn you for limiting how many people I can actually invite! I guess I’ll just have to blog about this somewhere and have people sign up to volunteer, considering there’s no limit on the number of folks that approach me to help … ;)

Oh, one more note…kudos to whoever thought up Google Marker. It makes things SO helpful…it’s so nice to browse around sites of interest and automatically “bookmark” that site with the engine. It’s working great with Firefox 2.0 on Mac OS X 10.4.8.

Alright, hit me on the cell later today; I still need to mail you those “Beat Ohio State” t-shirts I promised!


Looking For a Few Good Men (or Women)

By: Sunil James

So, a couple of months back, we interviewed Peter Markowsky for a position within the Arbor Security Engineering & Response Team (ASERT). I’m glad to see that he eventually ended up @ Google, which, by the way, is on a security hiring frenzy, it seems. Anyway, Peter briefly blogged about his interview experience with us, thereby creating an opportunity for me to say one thing:


’tis true, my friends. We’re seeking those code slingers, those jacks-of-all-trades, those x86 ninjas who’re looking to truly be challenged again. It’s funny, though. Being a “security” guy isn’t as important as being a “software” guy. At its core, we’re looking for software engineers w/ a bent towards security, not necessarily vice versa… something EXTREMELY hard to find in and of itself, as all other security product vendors can likely attest. If only there weren’t so many darn west coast-based shops exploiting their natural habitat of year-round beautiful weather… I retort by asking, “who wouldn’t want a change of seasons like we have in beautiful Ann Arbor?”

You’ve seen our t-shirts, you’ve seen our blog, and you’ve seen our research…now see our job postings here and here, and come find out why 9 out of 10 dentists agree that working @ Arbor is good for your teeth.

It’s Our Party & We’ll Cry If We Want To…

By: Jeff Nathan

Have you ever taken a moment to realize that the primary reason the information security industry even exists is a noted lack of pedantic people, both in the RFC world of the 1980s and in the software engineering world up until the mid-1990s? Yes, there was actually a time when people did not consider the unexpected consequences of an unbounded strcpy(). Way back, when these people were focused on writing software and designing systems, they were unencumbered by the trappings of secure coding. I wonder if this period allowed people to be more free with their ideas and in turn make the incredible strides that fueled technology development.

The coding styles of the past stand in stark contrast to today, when even the least enlightened organizations have at least some sense that there are consequences for writing bad software. All low blows aside, I have a tough time writing any code without considering the myriad side effects of even a small block of code. Back when I was first learning about programming, I wonder if I would have pursued it had I realized I was going to have to spend as much time being careful as being creative. At this point, I begin to feel insincere, as the very industry that has kept me employed for some seven years exists because people were not coding in incredibly pedantic circles.

This isn’t to say that the software engineering efforts of years past were the best way to be productive and get things done. While we’ve been busy making a case for our own existence by releasing vulnerability advisories and developing new security products, other segments of the software development industry have been coming up with ideas like Extreme Programming. Software is ubiquitous, which is a good thing because we’re no longer viewed as computer nerds when we tell someone what we do for a living. Somewhat unfortunately, by creating a vast unwashed mass that consumes software and doesn’t have to be at least this high to ride the software, we’ve skewed the public’s view of what software development really is.

Software development is both artistic and scientific. I like to refer to programming as craftsmanship. If you take pride in what you’re working on, it shows. From the perspective of the security industry, this also means that the security fitness of a piece of software is part of the craftsmanship. Writing secure code, whether considered ahead of time or as an afterthought, isn’t always the most natural way to write software. And while we’ve spent time reminding everyone how important it is to do the right thing, and taking the moral high ground, we may have done ourselves a disservice. Referring back to the ubiquity of software, the lack of understanding of how software is created is not good for any of us.

I’ve read in horror stories of proposed legislation that would require individual developers to be financially responsible for the fitness of their software. For years I’ve looked forward to the day when two enormous companies would face off in the US courts to settle the debate on software liability. The case would last for years and bring even more attention to the security industry, which would be great. But there’s always the possibility that the courts don’t agree with the plaintiff (in this case a detrimentally affected customer) and side with the software vendor. I’m sure Congress can grasp the idea that open source software development might simply stop if individual developers are held liable for their software. I don’t want individual authors to be held responsible either. But part of me certainly would like there to be fiscal liability for the manufacturers of software.

There’s an unfortunate dichotomy in arguing that point. If an individual author can sell or give away a piece of software under a license stating that the author makes no claim as to the fitness of said software, why shouldn’t a larger commercial entity be able to do the same thing? I have no idea how long it will take before such a question comes before a court. But before it does, I think the security industry had better have a very convincing answer prepared.

Battling the Stupid-Bit

By: Rob Malan -

The Evil Bit! I’ve been thinking about RFC-3514 often over the last few quarters; that, and what should be its cousin: the Stupid-Bit. I know you’re shaking your head now – poor CTO…too much time in the sales/marketing dunk tank. I’m serious, though. Not that you should be able to look at a bit in a packet and know its intent (e.g. malicious, web-surfing, financial transaction…), but rather that by understanding the context of the packet, you should be able to decide its fate. As a thought exercise, put yourself at any point in the network and, given a packet, ask yourself, “What do I need to know about this packet in order to forward it along the way?” The set of things that our current security and networking infrastructure asks is pretty rudimentary: do we know how to get where it wants to go, i.e. have a route, ARP mapping, etc.? Is it allowed to go to that host and service – does it match a firewall ruleset? Does it carry obvious maliciousness – does it match an IPS vulnerability or exploit signature? The industry is pushing this a bit further: is it allowed to talk on this port, and is it coming from an authorized machine (NAC)? However, it’s my guess that you could ask yourself a bunch of other pretty relevant questions — some of these divining an implicit evil-bit or, more likely, a stupid-bit set to one.
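
For readers who haven’t revisited it lately: RFC 3514 (an April Fools’ Day RFC) proposed repurposing the reserved high-order bit of the IPv4 flags/fragment-offset field as a “security flag.” As a minimal sketch, here is what checking that bit in a raw IPv4 header would look like; the header bytes below are hypothetical examples constructed for illustration:

```python
import struct

# Bytes 6-7 of an IPv4 header hold three flag bits followed by the
# 13-bit fragment offset. Bit 0 (mask 0x8000) is reserved and, per the
# April Fools' RFC 3514, would mark malicious packets.
EVIL_BIT_MASK = 0x8000

def evil_bit_set(ipv4_header: bytes) -> bool:
    """Return True if the RFC 3514 'evil bit' (the reserved IPv4 flag) is set."""
    flags_frag = struct.unpack("!H", ipv4_header[6:8])[0]
    return bool(flags_frag & EVIL_BIT_MASK)

# Two illustrative 20-byte headers: one benign (only DF set), one "evil".
benign = bytes(6) + struct.pack("!H", 0x4000) + bytes(12)  # DF set, evil bit clear
evil   = bytes(6) + struct.pack("!H", 0x8000) + bytes(12)  # reserved/"evil" bit set
```

The joke, of course, is that attackers won’t set the bit for you — which is exactly the column’s point: intent has to be inferred from context, not read out of the packet.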

Arbor’s spent the last five years building both provider and enterprise solutions that measure network-wide normal behavior. Give us a point in your network and a packet — we can tell you if you’ve seen it before; how often; to which other hosts; between which users; what other applications were in the mix at the time; etc. We have done this in the past with flow-based inputs, and increasingly with application-specific data generators — such as our Authflow Identity Tracking agents. If you have talked to our sales-force, pm-force, or me in the last nine months, you’ve probably heard about what we’re doing in the area of deep packet data inputs. The idea is to build out our context not just network-wide, but also up and down the stack. I won’t spoil the marketing launches, so enough said about that; but where we’re going is increasingly towards putting this network-wide context into action. If you can answer a lot more questions about a packet, you can make a much better forwarding decision. Networks need to get smarter or at least not work so hard at forwarding traffic with the stupid-bit set to one.
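
The kind of question described above — have we seen this flow before, and how often — can be sketched in toy form with a flow-key counter. This is purely illustrative (not Arbor’s implementation); real behavioral systems layer on time windows, user identity, and application context:

```python
from collections import Counter
from typing import NamedTuple

class FlowKey(NamedTuple):
    """5-tuple identifying a flow, the usual key in flow-based records."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str

class FlowBaseline:
    """Toy network-context tracker: counts how often each flow has been seen."""

    def __init__(self) -> None:
        self.counts: Counter = Counter()

    def observe(self, key: FlowKey) -> int:
        """Record a packet for this flow; return its total observation count."""
        self.counts[key] += 1
        return self.counts[key]

    def seen_before(self, key: FlowKey) -> bool:
        """Has any packet on this flow been observed previously?"""
        return self.counts[key] > 0

# Hypothetical usage: two packets on the same flow.
baseline = FlowBaseline()
k = FlowKey("10.0.0.5", "192.168.1.9", 51234, 443, "tcp")
baseline.observe(k)
baseline.observe(k)
```

A never-before-seen flow (count zero) is exactly the sort of signal that might contribute to an inferred stupid-bit.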

Say NO to RFPs!

By: Carlos Morales -

I don’t know who dislikes RFPs more: vendors who have to answer them or customers who have to create them and then read all the responses. There aren’t too many things that waste more time than RFPs.

I understand the original premise of these: they were a way for a customer to define their needs and do an impartial analysis of which vendor best fits the requirements. This is no longer necessary. With websites, online or print product reviews, tradeshows, peer groups, Webex sessions, and product trials, it’s relatively easy to find out which solution best meets your needs. Because of this, RFPs have morphed into one of three things:

- Documents completely tilted towards a particular vendor’s technology, produced to satisfy a requirement that the company consider multiple technologies before making a decision.

- Documents outlining a wish list of technology that no vendor currently has. This puts vendors in the uncomfortable position of having to over-commit heavily on future technology just to have a chance at winning the deal. Nothing is more responsible for half-implemented, unreliable features than such overcommitments.

- Huge documents filled with more legalese than actual technology/proposal questions. I like to call these “Big Telco RFPs.” These goliaths read more like a contract for the merger of two Fortune 500 companies and usually require 10 or more people, including a couple of lawyers, to get done. It’s due a week from Friday, by the way.

Why bother? A system where people simply research and choose a solution that best matches their needs would be much better. They would make the call on what to use, justify their decision to management, and be held responsible for the success of the solution. Many companies do this today and it’s far more efficient for everyone involved. RFPs should really become a thing of the past.
