The Heartburn Over Heartbleed: OpenSSL Memory Leak Burns Slowly

By: Arbor Networks -

Marc Eisenbarth, Alison Goodrich, Roland Dobbins, Curt Wilson

Background
A very serious vulnerability, present in OpenSSL 1.0.1 for two years, has been disclosed (CVE-2014-0160). This “Heartbleed” vulnerability allows an attacker to reveal up to 64KB of memory to a connected client or server. The buffer over-read can be triggered in rapid succession to exfiltrate larger sections of memory, potentially exposing private keys, usernames and passwords, cookies, session tokens, email, or any other data that resides in the affected memory region. The flaw does not affect versions of OpenSSL prior to 1.0.1. This is an extremely serious situation, and it highlights the manual nature of the tasks required to secure critical Internet services such as basic encryption and privacy protection.
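To make the mechanics concrete, the following is a rough Python sketch (simplified and illustrative, not the code of any particular exploit tool) of the malformed heartbeat record at the core of the issue: the attacker claims a payload length far larger than the payload actually sent, and a vulnerable peer echoes back the claimed number of bytes, disclosing adjacent heap memory.

import struct

# Sketch of a malformed RFC 6520 heartbeat record. The TLS version
# (0x0302, TLS 1.1) and the claimed length are illustrative choices.
def malformed_heartbeat():
    hb_type = 0x01                # heartbeat request
    claimed_len = 0x4000          # claim 16KB of payload...
    payload = b""                 # ...but send none at all
    body = struct.pack(">BH", hb_type, claimed_len) + payload
    # TLS record header: content type 24 (heartbeat), version, length
    return struct.pack(">BHH", 24, 0x0302, len(body)) + body

print(malformed_heartbeat().hex())
# A vulnerable peer copies claimed_len bytes from its heap into the
# response without checking how much payload actually arrived.

Because the heartbeat length field is 16 bits, a single request can leak at most roughly 64KB, which is why attackers issue such requests repeatedly.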

As the vulnerability has been present for over two years, many modern operating systems and applications have deployed vulnerable versions of OpenSSL. OpenSSL is the default cryptographic library for the Apache and nginx Web server applications, which together account for an estimated two-thirds of all Web servers. OpenSSL is also used in a variety of operating systems, including BSD variants such as FreeBSD, and Linux distributions such as Ubuntu, CentOS, and Fedora. Other networking gear such as load-balancers, reverse proxies, VPN concentrators, and various types of embedded devices are also potentially vulnerable if they rely on OpenSSL, which many do. Additionally, since the vulnerability’s disclosure, several high-profile sites such as Yahoo Mail, LastPass, and the main FBI site have reportedly leaked information. Others have discussed the impact on underground economy crime forums, which were reportedly vulnerable and were attacked.

A key lesson is that OpenSSL, a vital component of the confidentiality and integrity of countless systems, applications, and sites across the Internet, is an underfunded, volunteer-run project desperately in need of major sponsorship and an attendant allocation of resources.

Mitigation
Anyone running OpenSSL on a server should upgrade to version 1.0.1g. Where upgrading is not immediately possible, re-compiling with the OPENSSL_NO_HEARTBEATS flag enabled will mitigate the vulnerability. For the OpenSSL 1.0.2 branch, the fix will land in 1.0.2-beta2. In terms of remediation, there is a huge amount of work to be done, not only for servers but also for load-balancers, reverse proxies, VPN concentrators, various types of embedded devices, etc. Applications that were statically compiled against vulnerable versions of the underlying OpenSSL libraries must be re-compiled; private keys must be invalidated, re-generated, and re-issued; certificates must be invalidated, re-generated, and re-issued; and there are a whole host of problems and operational challenges associated with these vital procedures. Some systems may be difficult to patch, so network access control restrictions or the deployment of non-vulnerable proxies should be considered where possible to reduce the attack surface.
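As a quick first-pass check, the snippet below (a sketch only: it reports the OpenSSL build the local Python interpreter is linked against, which is not necessarily the one your server binaries use, and the version-string parsing is deliberately crude) flags the vulnerable range of 1.0.1 through 1.0.1f:

import ssl

# 1.0.1 through 1.0.1f are vulnerable; 1.0.1g is fixed, and releases
# prior to 1.0.1 never contained the flaw.
VULNERABLE = {"1.0.1" + s for s in ("", "a", "b", "c", "d", "e", "f")}

version = ssl.OPENSSL_VERSION      # e.g., "OpenSSL 1.0.1e 11 Feb 2013"
token = version.split()[1]
print(version)
print("VULNERABLE" if token in VULNERABLE else "not in the known-vulnerable range")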

Exploitation
In most cases, exploitation of this vulnerability leaves no trace in server logs, making it difficult for organizations to know whether they have been compromised. In addition, even after applying the OpenSSL patch, private keys, passwords, authentication credentials, or any other data stored in heap memory used by OpenSSL may already have been compromised by attackers, potentially going as far back as two years. Of particular concern is the compromise of private key material: one security organization reported obtaining such material during testing, while others reported difficulty extracting certificate material but were able to recover significant amounts of other sensitive data. Because attackers can hammer this vulnerability over and over in very quick succession, the amount of memory disclosed can be quite substantial. Memory contents vary with program state, and controlling what is returned, and from what position in memory it is read, is much like a game of roulette.

Risk to Private Key Material
Security researchers in a Twitter exchange starting on April 8, 2014, indicated that private keys have been extracted in testing scenarios, and other researchers suggested that attacking a server during, or just after, log rotation and restart scripts run could expose private key material. ASERT has not tested these claims.

For further details, please see the Twitter thread at https://twitter.com/1njected/status/453781230593769472


Incident Response and Attack Tools
While there have been some calls to avoid over-reaction, organizations should strongly consider revoking and re-issuing certificates and private keys; otherwise, attackers can continue to use any private keys they may have obtained to impersonate Websites and/or launch man-in-the-middle attacks. Users should change usernames and passwords as well, but should not enter login credentials on Websites with vulnerable OpenSSL deployments; to do so could expose both the old and new credentials to attackers reading them from memory.

Many tools have been made available to test for the vulnerability, and these same tools are available to attackers as well. It is also reasonable to expect that the password reuse problem will again cause additional suffering, because passwords shared across multiple systems extend the attack surface. A shared password that provides access to a sensitive system, or to an e-mail account used for password resets, can be all an attacker needs to infiltrate an organization’s defenses along multiple fronts.

Multiple proof-of-concept exploits have already been published, including a Metasploit module. Attackers of all shapes and sizes have already started using these tools, or are developing their own, to target vulnerable OpenSSL servers. There have been reports that scanning for vulnerable OpenSSL servers began before the bug was publicly disclosed, although other reports suggest those scans may not have been specifically targeting the Heartbleed vulnerability.

ATLAS Indicates Scanning Activity
Over the past several days, ASERT has observed an increase in scanning activity on tcp/443 in our darknet monitoring infrastructure, most notably from Chinese IP addresses (Figure 1, below). Two IP addresses (220.181.158.174 and 183.63.86.154) observed scanning tcp/443 have been blacklisted by Spamhaus for exploit activity. Scans from Chinese sources are predominantly coming from AS4134 (CHINANET-BACKBONE) and AS23724 (CHINANET-IDC-BJ-AP).

Figure 1: TCP/443 scans, Tuesday-Wednesday (April 8-9)

As of this writing, attack activity observed by ASERT had decreased by Thursday. China still accounted for the largest percentage of detected scan activity:

Figure 2: TCP/443 scans, Thursday (April 10)

Pravail Security Analytics Detection Capabilities

Arbor’s Pravail Security Analytics system provides detection for this vulnerability using the following rules:

2018375 - ET CURRENT_EVENTS TLS HeartBeat Request (Server Initiated)

2018376 - ET CURRENT_EVENTS TLS HeartBeat Request (Client Initiated)

2018377 - ET CURRENT_EVENTS Possible OpenSSL HeartBleed Large HeartBeat Response (Client Init Vuln Server)

2018378 - ET CURRENT_EVENTS Possible OpenSSL HeartBleed Large HeartBeat Response (Server Init Vuln Client)

Examples of detection capabilities are reproduced below.

 

Heartbleed detection tool screenshots

 

Analysis of Historical Packet Captures Using New Indicators
In the case of this and other fast-emerging security threats, organizations may wish to implement analysis capabilities over archived packet captures in order to detect the first signs of attack activity. Granular analysis using fresh indicators can help pinpoint where and when a targeted attack (or a commodity malware attack, for that matter) first entered the network, or when attackers exfiltrated data using a technique that was not yet being detected on the wire at the time of the initial attack and infiltration. The capabilities of Pravail Security Analytics give organizations the means to accomplish such an analysis. A free account is available at https://www.pravail.com/ and rest assured that this site is running the latest, non-vulnerable OpenSSL version.
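As an illustration of what such retrospective analysis can look like, here is a rough heuristic sketch in the spirit of the ET signatures listed above (this is not Pravail’s actual detection logic; the file name and size threshold are placeholders, and per-packet inspection will miss TLS records split across TCP segments):

from scapy.all import rdpcap, TCP, Raw

# Flag TLS heartbeat records (content type 24) with implausibly large
# record lengths in an archived packet capture.
def flag_large_heartbeats(pcap_path, threshold=1000):
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            data = bytes(pkt[Raw].load)
            # TLS record header: type (1 byte), version (2), length (2)
            if len(data) >= 5 and data[0] == 24:
                rec_len = int.from_bytes(data[3:5], "big")
                if rec_len > threshold:
                    print("suspicious heartbeat, len=%d: %s"
                          % (rec_len, pkt.summary()))

flag_large_heartbeats("archive.pcap")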

Longer-Term Implications and Lessons Learned
Serious questions have been raised regarding the notification process surrounding this vulnerability. The operational community at large has voiced serious disapproval of the early notification of a single content delivery network (CDN) provider, while operating system vendors and distribution providers, not to mention the governmental and financial sectors, were left in the dark and discovered the issue only after it was publicly disclosed via a marketing-related weblog post by the CDN vendor in question. It has been suggested that the responsible-disclosure best practices developed and broadly adopted by the industry over the last decade were in fact bypassed in this case, and concerns have been voiced regarding the propriety and integrity of the disclosure process in this instance.

Recent indications that a significant number of client applications may also be using vulnerable versions of OpenSSL have broad implications, given the propensity of non-specialist users to ignore software updates and to continue unheedingly running older versions of code.

Furthermore, only ~6% of TLS-enabled Websites (and an undetermined, but most probably even smaller, percentage of other types of systems) make use of Perfect Forward Secrecy (PFS). This configurable option ensures that if an issue of this nature arises, previously encrypted traffic retained in packet captures isn’t susceptible to retrospective cryptanalysis.
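For a Python-based TLS service, opting into forward secrecy can be as simple as preferring ephemeral (EC)DHE cipher suites; a minimal sketch, with placeholder certificate paths and an illustrative OpenSSL-style cipher string (the Qualys guide in the references covers the analogous Apache and nginx settings):

import ssl

# Prefer ephemeral key exchange so that captured traffic cannot later
# be decrypted with a stolen long-term private key.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # placeholder paths
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")        # forward-secret suites only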

Without PFS, there are no automated safeguards that can ameliorate these issues once a vulnerability of this nature has been exposed. Many operators and users may not realize that if attackers captured encrypted traffic in the past from vulnerable services/applications that weren’t configured with PFS (i.e., the overwhelming majority of such systems), and have retained those packet captures, they now have the opportunity to use analysis tools to replay those packets and decrypt the Internet traffic they contain. This means that attackers can potentially unearth users’ credentials, intellectual property, personal financial information, etc. from previously captured packet dumps.

The ability of an attacker to decrypt packet-capture archives requires that the attacker has obtained the private keys used to encrypt that traffic. As recent research shows, this is not a theoretical vulnerability: private key material has been compromised in a lab environment, and we must therefore assume that attackers have at least the same, if not more substantial, capabilities.

The ‘Heartbleed’ vulnerability may well result in an underground market in ‘vintage’ packet captures – i.e., packet captures performed after the date this vulnerability was introduced into OpenSSL, and prior to some date in the future after which it is presumed that the most ‘interesting’ servers, services, applications, and devices have been remediated.

This incident has the potential to evolve into a massive 21st-Century, criminalized, Internet-wide version of the Venona Project, targeting the entire population of Internet users who had the ill fortune to unknowingly make use of encrypted applications or services running vulnerable versions of OpenSSL. This highlights the paradox of generalized cryptographic systems in use over the Internet today.

While the level of complexity required to correctly design and implement cryptosystems means that, in most situations, developers should utilize well-known cryptographic utilities and libraries such as OpenSSL, the dangers of a cryptographic near-monoculture have been graphically demonstrated by the still-evolving Heartbleed saga. Further complicating the situation is the uncomfortable fact that enterprises, governments, and individuals have been reaping the benefits of the work of the volunteer OpenSSL development team without contributing the minimal amounts of time, effort, and resources needed to ensure that this vital pillar of integrity and confidentiality receives the investment required to guarantee its continued refinement and validation.

This is an untenable situation, and it is clear that the current development model for OpenSSL is unsustainable in the modern era of widespread eavesdropping and rapid exploitation of vulnerabilities by malefactors of all stripes. Information on how to support the OpenSSL effort can be found here: https://www.openssl.org/support/

Heartbleed and Availability
While Heartbleed is a direct threat to confidentiality, there are also potential implications for availability.

In some cases, attackers seeking exploitable hosts may scan and/or try to exploit this vulnerability so aggressively that they inadvertently DoS the very hosts they’re seeking to compromise. Organizations should be cognizant of this threat and ensure that the appropriate availability protections are in place so that their systems can be defended against both deliberate and inadvertent DDoS attacks.

It should also be noted that initial experimentation seems to indicate that it’s easiest for attackers to extract the private keys from vulnerable OpenSSL-enabled applications and services, using the least amount of exploit traffic, immediately after they have been started.  Accordingly, organizations should be prepared to defend against DDoS attacks intended to cause state exhaustion and service unavailability for SSL-enabled servers, load-balancers, reverse proxies, VPN concentrators, etc.  The purpose of such DDoS attacks would be to force targeted organizations to re-start these services in order to recover from the DDoS attacks, thus providing the attackers with a greater chance of capturing leaked private keys.


References
http://www.openssl.org/news/vulnerabilities.html#2014-0160
http://heartbleed.com
http://blog.existentialize.com/diagnosis-of-the-openssl-heartbleed-bug.html
http://news.netcraft.com/archives/2014/04/02/april-2014-web-server-survey.html
http://threatpost.com/seriousness-of-openssl-heartbleed-bug-sets-in/105309
http://arstechnica.com/security/2014/04/critical-crypto-bug-exposes-yahoo-mail-passwords-russian-roulette-style/
https://www.openssl.org/news/secadv_20140407.txt
https://isc.sans.edu/diary/Heartbleed+vendor+notifications/17929
http://possible.lv/tools/hb/
http://filippo.io/Heartbleed/
https://zmap.io/heartbleed/
https://github.com/rapid7/metasploit-framework/blob/master/modules/auxiliary/scanner/ssl/openssl_heartbleed.rb
https://gist.github.com/sh1n0b1/10100394
http://www.seacat.mobi/blog/heartbleed Note: This event may have been a false positive caused by ErrataSec’s masscan software (http://blog.erratasec.com/2014/04/no-we-werent-scanning-for-hearbleed.html)
http://arstechnica.com/security/2014/04/heartbleed-vulnerability-may-have-been-exploited-months-before-patch/
https://twitter.com/1njected/status/453781230593769472

Venona Project: http://en.wikipedia.org/wiki/Venona
PFS: https://community.qualys.com/blogs/securitylabs/2013/08/05/configuring-apache-nginx-and-openssl-for-forward-secrecy

More AS4_PATH Triggered Global Routing Instability

By: Danny McPherson -

For those of you not paying attention, a slew of new instabilities in the global routing system are occurring – again.  These are presumably being tickled by another ugly AS4_PATH tunnel bug where someone [read: broken implementation] erroneously includes AS_CONFED_* segments in an AS4_PATH attribute – a transitive optional BGP attribute that’s essentially ‘tunneled’ between non-4-octet-AS-number speaking autonomous systems.

[Figures: BGP routing instability, update frequencies (GMT -5)]

The problem is that when the attribute is unencapsulated at the receiving end, by a BGP router that could be several networks away, those AS_CONFED_* segments aren’t supposed to be there, and can result in either a reset of the local session with the adjacent external BGP speaker from which the update was received, or may be propagated to an internal BGP peer, which will likely drop the session with the transmitting speaker; neither of these stops the problem at the source. The prefix causing all the fuss appears to be 193.5.68.0/23, a copy of the suspect update [courtesy of ras] available here. Ras’s email provides some insight into what he’s seeing at the moment; it’s available in the juniper-nsp archive linked below, though currently unavailable.
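To make the failure mode concrete, here is a minimal Python sketch of the validation a receiver would need in order to catch the offending attribute rather than resetting sessions. It assumes the bare path-segment wire format of RFC 4893, a repeated sequence of (type, count, then count 4-octet AS numbers); this is an illustrative check, not any vendor’s implementation:

import struct

AS_CONFED_SEQUENCE, AS_CONFED_SET = 3, 4   # segment type codes (RFC 5065)

def confed_segments_in_as4_path(attr):
    # Walk a raw AS4_PATH attribute value and collect any AS_CONFED_*
    # segments, which are not supposed to appear in this attribute.
    offending, i = [], 0
    while i + 2 <= len(attr):
        seg_type, count = attr[i], attr[i + 1]
        end = i + 2 + 4 * count
        if end > len(attr):
            break                          # truncated/garbled attribute
        asns = struct.unpack(">%dI" % count, attr[i + 2:end])
        if seg_type in (AS_CONFED_SEQUENCE, AS_CONFED_SET):
            offending.append((seg_type, asns))
        i = end
    return offending

# Example: one AS_CONFED_SEQUENCE segment containing AS 65001
bad = struct.pack(">BBI", AS_CONFED_SEQUENCE, 1, 65001)
print(confed_segments_in_as4_path(bad))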

The relevant protocol specifications’ error-handling procedures were rather vague in this area until recently. There have been a couple of drafts submitted to the IETF Inter-Domain Routing (IDR) WG that attempt to address the specific case outlined above, as well as, more generically, that of optional transitive attributes and error handling. One of the drafts is an update to the original BGP Support for Four-octet AS Number Space specification, RFC 4893, with more explicit guidance and an expanded error-handling procedures section. Another draft, Error Handling for Optional Transitive BGP Attributes, attempts to be more prescriptive in general and addresses a few specific issues in existing specifications as well.

Some more information on the original problem from Rob Shakir and others on the IDR mailing list can be found here and in nested references.  I first heard about this specific incident through an email from Richard A Steenbergen ‘ras‘ on the Juniper NSP mailing list, about one hour after the incident began.  Coincidentally, the web interface for the Juniper NSP mailing list seems to be having some reachability problems at the moment that may actually be related to this specific issue.  Some earlier text on the previous incident, as well as a related problem, are available in an earlier post on our blog here.

It is worth noting that the amount of instability resulting from this seems to be significant, but not catastrophic at this point, although that is likely very topologically dependent and may well not be the case for a few less fortunate folks. Finally, this incident is still evolving; it’s been going on for about four hours now. Let’s hope it’s squelched soon, and if any noteworthy updates emerge, I’ll be sure to provide them here.

MS08-067: Server Service Vulnerabilities Redux and Wormability

By: Jose -

Yesterday was all abuzz about a new vulnerability patch from Microsoft, released outside their normal Patch Tuesday schedule. MS08-067: Vulnerability in Server Service Could Allow Remote Code Execution (958644) was released at 1pm US Eastern to address a very serious issue. Everyone should review the patch, do some testing, and update ASAP. We’re hearing some reports of WiFi driver issues post-patching, so do your prep work on this one. We know the issue affects all of the major, common versions of Windows:

  • Windows XP
  • Windows 2003
  • Windows Vista
  • Windows 2008 Server

The patch was released out of the normal cycle because malcode was on the loose using the vulnerability to spread. Specifically, the vulnerability is a buffer overflow reachable through an unauthenticated Windows SMB file-sharing session on TCP port 139 or 445, in the Windows API call NetPathCanonicalize(). A malicious client can bind to the service and issue a request with an overly long argument, overflowing a buffer and potentially executing arbitrary code on the vulnerable server. This is how the malcode is getting onto systems.
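For illustration only, the pattern behind this class of bug looks roughly like the following Python analogue (not the actual Windows code: mishandled "..\" sequences let a canonicalizer walk backwards past the start of its output, which in unmanaged code means reading or writing out of bounds rather than raising a tidy exception):

# Sketch of the unsafe canonicalization pattern (Python analogue).
def canonicalize(path):
    out = []
    for part in path.split("\\"):
        if part == "..":
            out.pop()          # no underflow check: IndexError here is
                               # the safe-language analogue of walking
                               # before the start of the buffer
        elif part not in ("", "."):
            out.append(part)
    return "\\".join(out)

try:
    canonicalize("a\\..\\..\\..\\b")       # more ".." than components
except IndexError:
    print("underflow: canonicalizer walked past the buffer start")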

The vulnerability is on TCP ports that see a lot of scanning, but we can baseline the activity to look for spikes. Here’s 30 days of activity for TCP ports 139 and 445 from ATLAS; we’re not seeing a huge scanning spike:

TCP port 139 scanning activity

TCP port 445 scans

While highly wormable — on by default, exploit code is now out, etc — it’s not a Sasser-like situation. Thankfully. This is likely to be mitigated by things like the default firewall in XP SP2 and the like. But we are seeing some malcode on that service.

The Gimmiv family of malware is propagating by exploiting MS08-067. We first received samples of this family on 2008-10-08 via reports from a trusted partner, fully two weeks before the patch release. The samples we have analyzed are NOT packed with any Windows PE packer, which is uncommon these days. Once on the system, the malcode drops the following files:

  • C:\Documents and Settings\LocalService\Local Settings\Temporary Internet Files\macnabi.log
  • C:\WINDOWS\system32\wbem\basesvc.dll
  • C:\WINDOWS\system32\wbem\syicon.dll
  • C:\WINDOWS\system32\wbem\winbase.dll

It then contacts three HTTP servers with GET requests:

  • doradora.atzend.com
  • perlbody.t35.com
  • summertime.1gokurimu.com

The malcode then creates several new files:

  • C:\Documents and Settings\User\Local Settings\Temp\FPMOOWRB.bat
  • C:\WINDOWS\system32\wbem\sysmgr.dll
  • C:\WINDOWS\system32\basesvc.dll
  • C:\WINDOWS\system32\inetproc02x.cab
  • C:\WINDOWS\system32\install.bat
  • C:\WINDOWS\system32\scm.bat
  • C:\WINDOWS\system32\syicon.dll
  • C:\WINDOWS\system32\winbase.dll
  • C:\WINDOWS\system32\winbaseInst.exe

The Windows batch file above is run using cmd.exe. The malcode also sends an ICMP Echo Request packet to multiple IP addresses, using a unique payload: “abcde12345fghij6789”. This is done via the Win32 API call IcmpSendEcho().
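Since that payload is a usable network indicator, defenders can watch for it on the wire. A minimal sketch using scapy (our tool choice for illustration; interface selection and sniffing privileges are left out):

from scapy.all import sniff, ICMP, Raw

MARKER = b"abcde12345fghij6789"    # Gimmiv's unique ICMP payload

def check(pkt):
    # Flag ICMP Echo Requests (type 8) carrying the marker payload.
    if pkt.haslayer(ICMP) and pkt[ICMP].type == 8 and pkt.haslayer(Raw):
        if MARKER in bytes(pkt[Raw].load):
            print("possible Gimmiv beacon:", pkt.summary())

sniff(filter="icmp", prn=check, store=False)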

Finally, it shuts down the System Manager service using a shell command (calling out to cmd.exe).

The malware’s main purpose is to steal information from the infected user’s host.

Static analysis of the malcode binaries reveals additional interesting data. First, the malcode contains multiple IP addresses hardcoded into it: 212.227.93.146, 64.233.189.147, and 202.108.22.44. The malcode references some VBS files, msrclr40.vbs and nkzclear.vbs, and contains some VBS strings:

WScript.Sleep 5
Dim oFS
set oFS = WScript.CreateObject("Scripting.FileSystemObject")
oFS.DeleteFile "%s"
oFS.DeleteFile "%s"
oFS.CreateFolder "%s"
%s%s
WScript.Sleep 5
Dim oFS
set oFS = WScript.CreateObject("Scripting.FileSystemObject")
oFS.DeleteFile "%s"
oFS.DeleteFile "%s"
%s%s
WScript.Sleep 5
Dim oFS
set oFS = WScript.CreateObject("Scripting.FileSystemObject")
oFS.DeleteFile "%s"
oFS.CreateFolder "%s"
oFS.CopyFile "%s", "%s"
oFS.DeleteFile "%s"
oFS.DeleteFile "%s"

It also contains a CLSID for the credentials DLL, consistent with a credential-theft tool. This has been used to grab decrypted passwords from MSN Messenger, for example. The following batch of format strings is also present, suggesting a possible log structure:

===============Outlook Express===============
===============Credential Info================
============Protected Storage Info=============
ID:
Pass:
URL:

In short, not a hugely wormy piece of software, but instead a typical infostealer Trojan.

What’s very interesting about this malcode, even from a quick review, is that it doesn’t make consistent use of the Windows APIs, suggesting it was mashed together quickly. Why use “/c reg delete “HKLM\SYSTEM\CurrentControlSet\Services\%s” /f” when you can use the Windows Registry API directly? This doesn’t jibe with the idea that the author(s) were able to develop a new, functional 0day exploit, even from fuzzing. It leads me to suspect they stole the exploit from someplace else and bolted it, crudely, into this malcode. If that’s true, then someone was using this as a 0day prior to this patch release and all of this attention. Anyone have attack logs that would suggest 0day activity outside of this malcode?

Additional thoughts on this from around the net:

We’ll be keeping an eye on this one in the coming days.

Botconomics: The Monetization of YOUR Digital Assets

By: Danny McPherson -

A decade ago, IF your PC was compromised, it was usually just taken for a joy ride. Today, with the monetization of bots, the ease of compromise, the prevalence of malware, and the increasing connectedness of endpoints on the Internet, WHEN your assets are compromised they’re subjected to something more akin to a chop shop.

To follow this vein (purely for amusement):

  • Seat belt == AV; if you’re hit, you’re a whopping 50% less likely to get injured (and that 50% number is pretty accurate, at least in the case of AV)
  • Overhead and side curtain airbags == Good AV (or HIPS?); might suffocate you or rip your head off, but there to make you safer!
  • Alarm system == IDS; is anyone listening?
  • Anti-lock Braking System == NAC; a parking pass in the console and you’re in the building
  • CD case in the glove box == lift some CD license keys
  • Office Badge/ID == Paypal & ebay account credentials
  • Used in hit & run == DDoS attack
  • LoJack == IP reputation services -> subscription required
  • The Club == HIPS (pita)
  • Turning your car into one of those rolling advertisements.. Or towing one of those billboard trailers? Leaving a cloud of smoke and soot in your wake? == Why Spam, of course… (ok, really weak)
  • Body stuffed in the trunk, used for high-dollar drug or arms deal and dumped in the river == drop site
  • Wallet with some cash or CCs == score!; keylogger streaming PIN numbers, login credentials and secret question answers, mother’s maiden name, birth date, national ID number, etc.. to one of the aforementioned drop sites
  • Garage door opener and vehicle registration w/home address in the car — hrmmm…
  • Car thief picks up your girlfriend == phishing…? :-)

OK, OK, enough of the bad analogies, I suspect you get the point or have stopped reading by now.

Ahh, but folks aren’t driving cars across the country anymore, they’re flying jet planes. Good thing we’ve got seat belts! And for you skeptics, not to worry: we’ve now got flotation devices if things get really ugly…

The point is, if you or anyone you do business with online is compromised, you’re at risk. Further, if anyone you do business with is online, you’re at risk. Need more? If someone who has your personal information does something with a networked system, then, as a result, you’re at risk.

Think AV is protecting you? An IDS? Malware today is explicitly engineered around leading AV engines (e.g., 580+ Agobot variants), whose auto-update functions are disabled upon compromise via any of a number of techniques, from removing the programs or making them non-executable to adding hosts file entries that point the AV signature update server’s hostname at a local interface (e.g., update.youravdude.com -> 127.0.0.1).

Entire bot systems exist with load-balanced command and control, real-time dynamic partitioning, and multi-mode monetization capabilities based on the bot service consumer’s needs.

The GOOD news for those bot service consumers:

[Taken verbatim from a recent spam message I received boasting ‘bullet proof [bp]’ hosting services:]

    • IPs that change every 10 minutes (with different ISPs)
    • Excellent ping and uptime
    • 100 uptime guarantee
    • Easy Control Panel to add or delete domains thru webinterfaces
    • …..

Bot herders have heard the public’s outcry for multi-mode bots, responding with SLAs, intuitive user interfaces, ISP redundancy and even excellent ping times! Heck, several pieces of malware perform speed tests to ‘top Internet sites’, indexing and allocating our resources based on availability and connectedness.

Need a turn-key phishing solution? For a small fee you can get a botnet partitioned to do all these things and more:

  • compromise based on exploit of your choice
  • patch owned hosts for exploit that was used to compromise, and perhaps a few other low-hanging vulnerabilities
  • allocate bot resources (control, drop, lift, host, spam, attack) based on connectedness
  • lift CD keys, install key loggers, lift passwords, account info, email addys, etc
  • setup a couple bots as drop sites
  • setup a couple bots as phishing site web servers
  • setup a couple sites as phishing email relays
  • setup a couple open proxies for access to any of the above
  • want to take it for a test drive, not a problem

and voila, you’re in business!

Ohh, and don’t forget the special-operations bots at the ready in the event that an anti-{spam,bot,phishing} company actually impacts your operations. Don’t believe me? Go ask BlueSecurity (note the link still doesn’t work), or our friends at CastleCops, or… Six months of DoS attack observation across 30 ISPs here at Arbor yielded well over one hundred days with at least one ISP reporting an attack of one million packets per second or better. Some trivial math (1,000,000 packets per second * 60 bytes per packet * 8 bits per byte == 480 Mbps) shows that’s enough to take 99%+ of the enterprises on the Internet offline today.

I’m not knocking any of the solutions above; they’re all necessary (well, most of them) and serve some purpose. It’s little more than an arms race today, and there is no Silver Bullet; it’s all about layered security and awareness. As good-minded security folk continue to innovate, so too do the miscreants. As they find more ways to pull more money from more compromised assets, the problem will continue to grow. You CAN and WILL be affected, whether directly or implicitly, whether you bank and buy stuff online or not; the merchants you deal with surely have networks of some sort. A good many of those merchants do make concerted efforts to protect their consumers; perhaps others see things like the slew of compliance standards as ‘I tried, get out of jail free’ waivers for when they do get compromised.

Being aware that the problem exists is the first step towards making it suck less, or so one would hope.. Let’s just hope that the Internet’s open any-to-any connectivity, as molested today as it may be (much in the name of security, mind you), isn’t entirely lost in the process.

Bots and widespread compromise affect every aspect of our economy today, directly or implicitly. Therein enters our amalgamation; botconomics.

Death by a Thousand Little Cuts

By: Jose -

It is not uncommon for seasoned (or heavily burdened) information security (infosec) professionals to look at the morning’s security alerts and see a flood of the same old same old. A few years ago it was buffer overflows; now, in 2006, it is SQL injection attacks and cross-site scripting (XSS) vulnerabilities.

Typically, the deluged infosec professional will look at those attacks and think, “OK, that’s a lot of attacks I don’t care about.” It may be as simple as saying, “If it can be carried out by a browser, how hard can it be?” Or simpler still: “We’re not running that, it doesn’t affect us, got to move on.” Whatever the reason, many people simply ignore those reports.

However, they do have an impact. One of the things we have been seeing a lot of is Linux botnets built on PHP vulnerabilities (or AWStats vulnerabilities). Typically, someone will build their botnet on an established network like Undernet, using a bot binary like Kaiten; it will be obvious and get taken down quickly.

However, there are many boxes out there that are potentially vulnerable to the attacks carried out by that malware. They are going to be used for additional attacks: DDoS attacks, warez trading, spam, what have you…they get used. Moreover, they get used in ways that you do care about.

Often we will see phishing attacks on web servers that have been compromised through some vulnerability we never thought twice about. A phishing attack will get loaded up on a website, someone will go and peek at the website and notice, “Hey, it’s running Cpanel,” and we’ll immediately know how the attackers got in and “set up shop” (Cpanel has had some relatively recent vulnerabilities, and there are more Cpanel installations than I had ever imagined).

In December 2005 and March 2006 (the WMF and createTextRange vulnerabilities, respectively), we saw websites used to serve malicious web pages that attack clients. In the past few months it has been the WebAttacker framework being thrown onto some of these sites or, even more popularly, onto feeder sites: pages containing redirects or IFRAMEs pointing at the WebAttacker toolkit. With two new Internet Explorer 0-days in the past week (the KeyFrame and VML bugs), we’re seeing this cycle more and more.

These high-profile, once-per-quarter (and now, apparently, once-a-month) bugs that can be leveraged against millions of hosts wind up being successful because attackers get into websites such as “Bob’s Mortgage Site” using any one of a million bugs or a misconfiguration, somehow socially engineer you into visiting, and you wind up being affected. Yeah, it really all comes back to haunt you.

You cannot secure the Internet by acting alone; it is simply too big and dynamic for any one group to tackle. It is this challenge that we have always faced in one form or another. In 1998, it was poorly configured FTP servers and SMTP open relays. In 2001, it was the plethora of buffer overflows in every product and project. In 2006, every website (it seems) is out to get you: WMF, createTextRange(), WebAttacker, or some other browser-based vulnerability. These vulnerabilities just will not go away, and you cannot avoid them by paying attention to every advisory that comes along. The combination of high-profile client vulnerabilities and low-profile website vulnerabilities is simply intractable.

Expect more of the same in one disguise or another. Attackers are researching browser bugs faster than the people who can fix them, and they’re learning how to capitalize on them by loading them onto more and more websites.

It’s Our Party & We’ll Cry If We Want To…

By: Jeff Nathan -

Have you ever taken a moment to realize that the primary reason the information security industry even exists is a notable lack of pedantic people, both in the RFC world of the 1980s and in the software engineering world up until the mid 1990s? Yes, there was actually a time when people did not consider the unexpected consequences of an unbounded strcpy(). Way back, when these people were focused on writing software and designing systems, they were unencumbered by the trappings of secure coding. I wonder if this period allowed people to be more free with their ideas and in turn make the incredible strides that fueled technology development.

The coding styles of the past stand in stark contrast to today, when even the least enlightened organizations have at least some sense that there are consequences for writing bad software. All low blows aside, I have a tough time writing any code without considering the myriad side effects of even a small block of code. Back when I was first learning about programming, I wonder whether I would have pursued it had I realized I would have to spend as much time being careful as being creative. At this point I begin to feel insincere, as the very industry that has kept me employed for some seven years exists because people were not coding in incredibly pedantic circles.

This isn’t to say that the software engineering efforts of years past were the best way to be productive and get things done. While we’ve been busy making a case for our own existence by releasing vulnerability advisories and developing new security products, other segments of the software development industry have been coming up with ideas like Extreme Programming. Software is ubiquitous, which is a good thing, because we’re no longer viewed as computer nerds when we tell someone what we do for a living. Somewhat unfortunately, by creating a vast unwashed mass that consumes software and doesn’t have to be at least this high to ride the software, we’ve skewed the public’s view of what software development really is.

Software development is both artistic and scientific. I like to refer to programming as craftsmanship: if you take pride in what you’re working on, it shows. From the perspective of the security industry, this also means that the security fitness of a piece of software is part of the craftsmanship. Writing secure code, whether considered ahead of time or as an afterthought, isn’t always the most natural way to write software. And while we’ve spent time reminding everyone how important it is to do the right thing, and taking the moral high ground, we may have done ourselves a disservice. Referring back to the ubiquity of software, the lack of understanding of how software is created is not good for any of us.

I’ve looked on in horror at stories of proposed legislation that would require individual developers to be financially responsible for the fitness of their software. For years I’ve looked forward to the day when two enormous companies would face off in the US courts to settle the debate on software liability. The case would last for years and bring even more attention to the security industry, which would be great. But there’s always the possibility that the courts don’t agree with the plaintiff (in this case a detrimentally affected customer) and side with the software vendor. I’m sure Congress can grasp the idea that open source software development might simply stop if individual developers are held liable for their software. I don’t want individual authors to be held responsible either. But part of me certainly would like there to be fiscal liability for the manufacturers of software.

There’s an unfortunate dichotomy in arguing that point. If an individual author can sell or give away a piece of software that comes with a license that states the author makes no claim as to the fitness of said software, why shouldn’t a larger commercial entity be able to do the same thing? I have no idea how long it will take before such a question comes before a court. But before it does, I think that the security industry better have a very convincing answer prepared.

Vulnerability Complexities

By: Mark Zielinski -

Dave Goldsmith had a great post earlier today which I would like to point out to anyone who hasn’t read it yet. With comments like, “I’m quite positive that when this vulnerability reached Sun Microsystems, someone’s head exploded”, I found his commentary very amusing. Even though this vulnerability is now eight years old, it’s a perfect example of design flaws and complicated programming problems capable of creating very interesting results. I don’t want to get too far off topic, but personally, I find classic stack and heap buffer overflows to be very boring. While these kinds of vulnerabilities may have been interesting a few years ago, there really isn’t anything exciting about seeing the same vulnerability repeatedly, unless of course there’s some element of surprise or distinctiveness about it. These days, what I find to be exciting are vulnerabilities that often occur because of poor software design. Like this vulnerability, usually this results in applications exhibiting many complicated behaviors, which can often be influenced and combined to create positive situations for an attacker that wouldn’t have otherwise been possible. Now that’s interesting.

Here’s a little history on this vulnerability: Secure Networks originally developed a module for their vulnerability scanner, Ballista, which could detect the “RPC packet bounce” vulnerability on Solaris operating systems. Oddly enough, even though Secure Networks released a vulnerability module to detect this particular vulnerability, I don’t recall them ever releasing a security advisory for it. Around this time, I read the release notes of the latest Ballista release and developed a proof-of-concept exploit based on that information. The same year, Sun Microsystems released a patch that addressed a separate Solaris vulnerability, which could have been used to locally compromise a server. I researched the patch and concluded that it didn’t properly address the vulnerability; in fact, the system was still vulnerable. After modifying my exploit to utilize both of these vulnerabilities, the once-“patched” local vulnerability became a new, remote zero-day vulnerability. When Sun Microsystems released the next version of Solaris, they included additional functionality, thereby adding further complexity to the situation. As the exploit utilized one vulnerability to attack another, the second vulnerability was receiving data in such a way that it believed it had received an address it needed to look up. What’s interesting about this is that by utilizing DNS spoofing techniques, an attacker could still reliably exploit this problem by first “spoofing” the arbitrary commands they intended to execute.

How’s that for complicated?