ATLAS 2.0: Observing A Rapidly Changing Internet

By: Danny McPherson

It has already been more than two years since we first introduced our Active Threat Level Analysis System (ATLAS), a multiphase project that has been evolving steadily ever since.  The first phase of ATLAS focused on capturing data via a globally distributed network of sensors running a variety of data capture and analysis tools that interact with attackers to discover what they are attempting, capture and classify full payloads, and characterize scan and backscatter traffic (a minimal classification sketch follows the list below).  This information was then correlated with a number of other ATLAS data sources and wrapped into the ATLAS portal, a public resource that delivers a subset of the intelligence derived from the ATLAS sensor network on host/port scanning activity, zero-day exploits and worm propagation, security events, vulnerability disclosures, and dynamic botnet and phishing infrastructures.  It includes:

  • Global Threat Map: Real-time visibility into globally propagating threats
  • Threat Briefs: Summarizing the most significant security events that have taken place over the past 24 hours
  • Top Threat Sources: Multi-dimensional visualization of originating attack activity
  • Threat Index: Summarizing Internet malicious activity by offering detailed threat ratings
  • Top Internet Attacks: 24-hour snapshot of the most prevalent exploits being used to launch attacks globally
  • Vulnerability Risk Index: Determines the most dangerous vulnerabilities being exploited on the Internet today

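To make the sensor-side classification concrete, here is a minimal sketch (not ATLAS code) of the classic darknet heuristic: unsolicited SYNs arriving at unused address space suggest scanning or worm propagation, while SYN-ACK and RST replies are typically backscatter from attacks that spoofed their sources. The packet field names and schema below are illustrative assumptions.

    # Minimal sketch (assumed field names) of darknet traffic classification:
    # scans vs. backscatter, based on TCP flags seen at unused address space.
    SYN, ACK, RST = 0x02, 0x10, 0x04

    def classify(pkt):
        """pkt: dict with 'proto' and 'tcp_flags' (illustrative schema)."""
        if pkt.get("proto") != "tcp":
            return "other"
        flags = pkt["tcp_flags"]
        if flags & SYN and not flags & ACK:
            return "scan"         # unsolicited SYN: scanning or worm propagation
        if flags & (ACK | RST):
            return "backscatter"  # replies to spoofed sources, e.g., DoS victims
        return "other"
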
Today we announced ATLAS 2.0, the next generation of ATLAS.  Many of you who follow the ASERT blog, employ our Active Threat Feed (ATF), or work with Arbor and our ASERT team on operational security issues have seen bits and pieces of ATLAS 2.0 for quite a while now.  In a nutshell, ATLAS 2.0 expands well beyond the initial ATLAS capabilities, incorporating new intelligence sources, including:

  • collaboration with over 100 ISPs across 17 countries
  • expanded Fingerprint Sharing Alliance participation
  • real-time ‘coarse’ Internet traffic levels, protocols, and applications
  • topologically diverse global view of Internet routing system security, stability and intelligence
  • topologically diverse DNS system inputs and analysis (e.g., to identify fast flux and other DNS-related threats; see the sketch below)
  • attack traffic data flows and trajectory information

Figure: ATLAS DNS Fast Flux Bots

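As one illustration of the DNS analysis mentioned above, the sketch below flags candidate fast-flux domains: repeated resolutions that churn across many IP addresses and networks with short TTLs. The thresholds and the crude /16-based notion of network spread are assumptions for illustration, not ATLAS's actual model.

    # Minimal fast-flux heuristic sketch; thresholds are illustrative, not ATLAS's.
    from collections import defaultdict

    def fast_flux_candidates(observations, min_ips=10, min_nets=5, max_ttl=300):
        """observations: iterable of (domain, ipv4_str, ttl) gathered over time."""
        ips, nets, short_ttl = defaultdict(set), defaultdict(set), defaultdict(bool)
        for domain, ip, ttl in observations:
            ips[domain].add(ip)
            nets[domain].add(".".join(ip.split(".")[:2]))  # crude /16 proxy for spread
            short_ttl[domain] |= ttl <= max_ttl
        return {d for d in ips
                if len(ips[d]) >= min_ips and len(nets[d]) >= min_nets and short_ttl[d]}
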
It’s all about visibility and baselining up and down the IP protocol stack, operating at each of the various layers.  The more information we collect and model, from more globally distributed and diverse vantage points, the more likely we are to detect deviations from what is normal or acceptable, particularly given the ever-increasing array of topologically scoped threats.  Once those deviations are detected, they can be analyzed to determine whether they are legitimate or malicious.  Whether the subject is Internet control plane routing system stability [1], global Internet traffic levels [2, 3, 4], exploit activity for a given vulnerability [5], DNS flux activity [6], or botnet command and control log and execution activities [7], establishing broad visibility and understanding what constitutes normal activity enables network operators and engineers to respond most effectively in their operating environments.

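A simple way to picture the baselining described above: maintain a rolling baseline per metric (traffic level, route churn, query volume) and flag samples that deviate sharply from it. The window size and threshold below are illustrative assumptions; production systems model per-link and per-application seasonality rather than a single rolling mean.

    # Minimal baselining sketch: flag samples far from a rolling mean (assumed parameters).
    from collections import deque
    from statistics import mean, stdev

    def detect_deviations(samples, window=60, threshold=3.0):
        """samples: per-interval measurements (e.g., bps). Yields (index, value) anomalies."""
        history = deque(maxlen=window)
        for i, value in enumerate(samples):
            if len(history) == window:
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and abs(value - mu) > threshold * sigma:
                    yield i, value  # deviates sharply from the established baseline
            history.append(value)
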
We hope that the additional intelligence gained through ATLAS 2.0 will permit Arbor to continue to provide a valuable public resource, and enable Arbor customers and non-customers alike to better prepare for the rapidly evolving global Internet threat landscape.

Comments

  1. Do the fingerprints rely on application layer data or just flow behaviour?

    1. Danny McPherson 03/11/2009, 9:55 am

      The Fingerprint Sharing Alliance (FSA) and Active Threat Feed (ATF) only use Network and Transport Layer information at this stage, as they’re enabled and “detectable” with flow-based information via Peakflow (and other) systems – some are single stage, some are “compound temporal signatures”. The darknet sensor network obviously uses application layer data and collected payloads are analyzed (automated and some manual), instrumented, and indexed in our malware laboratory (AML), enabling abstraction and development of the fingerprints. HTH, -danny

  2. ATLAS, AKA, Jose’s bullshit meter. Pretty pictures and charts DO make sales though.