Friday, December 21, 2012

How much is a zero-day exploit worth?

It's a common question from bug hunters and journalists alike - "How much is a zero-day vulnerability worth?"

There's no simple answer, as I discuss in my first blog posting with IOActive. You can find the discussion "Exploits, Curdled Milk and Nukes (Oh my!)" on the IOActive Labs Blog site.

Monday, December 17, 2012

Now at IOActive

For those who haven't seen the exchanges on Twitter or LinkedIn, I'm no longer with Damballa...

The last 3.5 years with Damballa were a wild ride. My first 3 years with the company saw much innovation and cutting-edge technology making its way to the market, but as things slowed down and the business doubled down on the features that make a product more "channel friendly", it was becoming less interesting to me. Don't get me wrong though, the research coming from Damballa Labs still can't be beat, and I hope it makes it into the product sometime soon.

So, with that all said, I wanted to get back into consulting. I love the constant flux of new problems, logistics and cutting-edge technology.

Last week I joined IOActive, Inc., as their CTO.

As some of you may be aware, I've been working with the company for a number of years - including being a member of their Advisory Board. As their CTO, my initial focus will be on helping to develop the long-term service strategy - bringing new boutique and cutting-edge services to market to address the latest onslaught of technology threats and preempt many upcoming security problems for large and sophisticated organizations.

IOActive is a fantastic company. It's at the forefront of advanced security consultancy and has been growing at an amazing rate.

So, with all that said, you can now find me at IOActive, and I'd be pleased to offer you my new business card. I'm sure IOActive will be able to help! :-)

Sunday, November 25, 2012

Exploit Development for Fun & Profit

Last week I pulled together a posting for DarkReading covering the commercial aspects of exploit development - "The Business of Commercial Exploit Development". I hope you find it interesting... it sheds some light into a side of the security business that few understand or operate within, but one that has a huge impact on what the threat landscape looks like in reality.

Persistent Threat Detection (on a Budget)

If there’s one simple – high impact – thing you could do to quickly check whether your network has been taken over by a criminal entity, or uncover whether some nefarious character is rummaging through your organization’s most sensitive intellectual property out of business hours, what would it be? In a nutshell, I’d look to my DNS logs.

It’s staggering to me how few security teams have gotten wise to regularly interrogating the logs from their recursive DNS servers. In many ways DNS logging can be considered sprinkling flour on the floor to track the footsteps of the culprit who’s been raiding the family fridge. Each step leaves a visible impression of where and how the intruder navigated the kitchen, and their shoe size.

Whenever an electronic intruder employs their tools to navigate your network, tries to connect back to their command and control server, or attempts to automatically update the malicious binaries they've installed upon the system they have control over (or wish to control), those victim devices tend to repeatedly resolve the domain names that the attacker is operating from. Therefore, armed with a list of known bad domain names and/or IP addresses, it’s a trivial task to mine the DNS logs and identify any successful intrusions.

Depending upon how authoritative your “blacklist” of criminal domains is, and how picky you are about the IP destinations that the domain names are resolving to, you can rapidly spot those nefarious shoe impressions in the flour.

One word of caution though, this isn’t a comprehensive technique for detecting persistent threats operating within your network – but it is one of the simplest! It also has the highest impact – particularly if you’re operating on a shoestring budget.

An obvious limitation of DNS log mining is the depth and accuracy of the blacklist you’re matching DNS events to – so you’ll want to ensure that the list you’re using covers the types and classes of threats you’re most interested in detecting. While there are plenty of free blacklists out there, the vast majority of them deal with spam, phishing and drive-by hosts… so you’ll want to invest some time shopping around a little.

Here are a few tips to using DNS as a means of detecting persistent threats (advanced or otherwise):

  • Turn DNS logging on! Seriously, do it now… you can read the rest of this blog after you've turned it on.
  • Select a bunch of relevant blacklists that contain malicious domains associated with the threats (and criminal actors) you’re most interested in.
  • Create a list of IP address ranges for countries, companies or regions that computer systems within your organization shouldn’t be communicating with, and use this as a second-pass filter for spotting other unwanted or likely malicious traffic.
  • Scrape your DNS logs frequently – ideally at least once per week.
  • If you’re worried about a handful of specific threats (e.g. a criminal operator or state that is likely targeting your organization), scrape your DNS logs for relevant domain names hourly – and alert upon their discovery.
  • Even if you’re only scraping your DNS logs weekly, don’t throw them away after you’re done. Try to keep your DNS logs for multiple years at least. (Note: DNS logs compress to almost nothing – so they’re not going to take up much space).
  • Consider scraping all the older logs (going back up to 5 years) once a month or so. New domains will be added to the blacklists you’re using over time, and new intelligence can shed new insight into past intrusions. When you do find an intruder, the older logs will also help establish when they first compromised your network.
  • If your DNS server allows it, turn on logging of “failed” lookups – i.e. NX domain requests. While these events won’t help in your blacklist lookups, they will help identify malware families that make use of domain generation algorithms (DGA) as well as “broken” applications within your network that need some tuning in finding their legitimate destinations.
  • DNS log scraping can be conveniently done off-line through simple batch script processing. So the impact on the team responsible for securing the corporate infrastructure is minimal after a nominal development investment.

If you’re not happy with the quality of the blacklist you’ll be able to bring to bear in uncovering the persistent threats likely already operating within your environment, or if it would be helpful to do a “one-off” check of your DNS logs and to help build the internal business case for investing in a more permanent detection solution, let me know.

Point of Sale (POS) and Card Reader Tampering

In the field of consumer retail the most important piece of equipment is the cash register; better known by those in the trade as the Point of Sale (POS) terminal. In essence, if the retailer can’t complete a sale by successfully taking the money from the customer, then there is no business. Which means it’s a critical component of the business and needs to be treated as such.

Over the last two decades POS technology has evolved considerably. Today’s systems are predominantly networked computers capable of not only processing a sale, but also querying inventory, managing customer loyalty programs and even delivering news and mandatory training materials directly to the store employee.

At their heart, these modern POS terminals are often a standard desktop PC adorned with a number of card readers, money drawers and barcode scanners and, as such, are all too often vulnerable to the same threats that affect any other PC around the world. Some all-in-one POS systems incorporate a number of physical safeguards to protect against the everyday insertion or removal of attached peripherals, and to also prevent theft of the equipment – which you rarely see on corporate desktop systems.

In many stores you go to you’ll also encounter a separate card reader (often with a touch-screen and numeric keypad) that’s designed to allow the customer to swipe and complete a credit or debit card transaction by themselves. These card readers are typically owned and managed by the merchant bank that processes the financial transfers for the retailer and, while there are many different types, a handful are more popular than others.

These merchant-supplied card readers typically include any number of logical and physical anti-tampering technologies – most of which are designed to elevate the retailer’s trust in the reader, and to help protect against semi-sophisticated criminals. There are entire books and engineering courses in anti-tampering technology, but an interesting paper I came across a few years ago will likely be a good primer for hinting at the sophistication of the anti-tampering technologies found in the POS card readers, and the techniques available to organized criminals for defeating them.

Check out “Thinking inside the box: system-level failures of tamper proofing” by the University of Cambridge from 2008. It has a few pretty pictures too.

It should be no surprise that the criminals have access to many of the tools and techniques needed to defeat even the most sophisticated anti-tampering technology. It’s interesting to note that there are online tutorials and walkthroughs on many hacking sites and (more importantly) carding forums. Here is just one example:
A carder forum at

If you’re a retailer, what should you be doing to protect yourself from POS (and card reader) tampering? I’m sure there are a number of audit points within the PCI standards that cover this topic but, frankly, it’s so difficult to locate those points and distil them into something immediately actionable that I’d recommend the following as a bare minimum:
  • Maintaining a list of the POS terminals and card readers within the store – one that includes the type, make, model and serial number. This list should be checked against the physical terminals on a daily basis.
  • Checking that serial numbers on the terminals match the serial numbers displayed on the terminal screen.
  • Checking for signs of terminal and component tampering; and making sure that staff are trained in identifying evidence of physical tampering.
  • Checking that stickers and other visual identifiers are unchanged.
  • Prohibiting unauthorised people from accessing terminals and any CCTV equipment.
At the end of the day, modifying the card reader and defeating the anti-tampering technologies within them is not a trivial task for the uninitiated… unlike installing a piece of malware, keylogger or battery-powered card skimmer on the POS computer. However, as we’ve already seen with the growing sophistication and almost commoditization of ATM card skimmers (see Brian Krebs’ excellent series of blogs on the topic: “All about skimmers“), this has become a business and sophisticated fraud can be achieved with a relatively low investment by the criminal.

Thursday, September 13, 2012

Nitol and Action by Microsoft

Reading this morning's blog from Microsoft about "Operation b70" left me wondering a lot of things. Most analysts within the botnet field are more than familiar with - a free dynamic DNS provider based in China known to be unresponsive to abuse notifications and a popular home to domain names used extensively for malicious purposes - and its links to several botnets around the world. So it was a surprise to hear that Microsoft was able to team up with Nominum to usurp control of the domain (zone) and effectively block the known malicious domains, while regular users carry on with their business.

Microsoft presented the need to take control of this cluster of malicious domains as a necessary action against the Nitol botnet and to protect and secure the supply chain. While I don't quite understand all of the logic behind this argument (there's just not enough info public at this point in time), at the end of the day Microsoft have managed to remove a thorn from the community's side.

The Nitol botnet is, in general terms, bothersome but not a wide scale threat. Damballa Labs has been tracking the threat for quite some time and, as botnets go, it is a rather small and tired affair. If you're a victim of Nitol though, yes, it's a pain-in-the-bum DDOS agent.

The angle is much more interesting to me than the Nitol botnet that formed the legal excuse for being able to seize control of the domain.

From a Damballa Labs perspective, we currently track around 70 different botnets that leverage's DNS infrastructure for C&C resiliency - using a little over 400 different third-level domain names of With a bit of luck I'll have some size information about those botnets later today.

Will the usurping of kill these botnets? Unfortunately not. There may be a little disruption, but it's more of an inconvenience for the criminals behind each of them. Most of these botnets make use of multiple C&C domain names distributed over multiple DNS providers. Botnet operators are only too aware of domain takedown orders from law enforcement, so they add a few layers of resilience to their C&C infrastructure to protect against that kind of disruption.

Take Nitol for example - it employs multiple domains from several free dynamic DNS providers, including other four-digit .ORG domain services such as,, and

Interestingly enough, the (former) owners of are providing new support advice to their inconvenienced customers on how to bypass this interruption by Microsoft:
Or, after some Google translation:
Good to know that it's business as usual...

 The story isn't over yet!

Monday, August 20, 2012

Trend Micro - You're doing it wrong!

In my Monday morning blog catch-up I stumbled upon Trend Micro's blog "Big Data Analytics and the Smart Protection Network". I don't normally bother reading or commenting on such self-serving marketing crud, but I worry that Trend Micro may be starting to believe their own marketing fluff.

There are a number of things that are worth commenting upon, but the following is more dire than many...
"Every day, we receive 430,000 files for analysis, of which 200,000 are unique. That results in 60,000 new signatures for detection every day."
Trend Micro - you're doing it wrong! Who in their right mind still pumps out 60,000 new signatures for yesterday's malware? The fact that any vendor is forced to write signatures for each new threat is obviously a depressing aspect of the whole legacy approach to antivirus. There are considerably smarter ways of dealing with this class of threat. From my own past observations those 200,000 unique samples are more than likely serial variants of only a handful of meaningful malware creations. A ratio of 200:1 or 500:1 is pretty common nowadays - and even then more modern "signature" approaches could shrink the ratio down to between 4,000:1 and 10,000:1 by the time you start interpreting the code contained within the malicious binary.

Why is all this important? Firstly, perhaps it's my German heritage, but inefficiencies can be grating. Just because you've been pumping out the same crud the same way for decades doesn't mean you can't learn something from the younger dogs at the park. Secondly, these 1:1 transformations of signature to unique malware sample are redundant against the current state of the threat. The bad guys can generate a unique malware variant for every single visitor, every time they get infected or receive an update. Thirdly, the signature you're pushing out is redundant - it's a marketing number, not a protection number; it wouldn't even serve as an SPF number on a bottle of sunscreen. Finally (and it's only "finally" because I've got a day job and I could go on and on...), "200,000 are unique" - I think you've missed more than a few...
"Thanks to our leadership in the reputation and correlation area, we get many requests from law enforcement to help them identify and jail criminals."
I suppose blind self-promotion works best when the evidence points to the contrary. I'm sorry, but there's a lot more to reputation than blacklisting URLs and whitelisting "good" applications nowadays - and there's an ample list of companies specializing in reputation services that do this particular approach much better. The problem with these (again) legacy blacklist approaches is that the threat has moved on and the criminals have been able to ignore these dilapidated technologies for a half-decade. Server-side domain generation algorithms (DGA), one-time URLs, machine-locked malware, GeoIP restrictions, blacklisting of security vendor probing infrastructure, etc. are just a sampling of the tools and strategies that the bad guys have brought to bear against this legacy framework of reputation blacklists and correlation.

Don't get me wrong, the data you're gathering is useful for law enforcement. It can be helpful in identifying when the criminals screw up or when a newbie comes on to the scene, and it can be useful in showing how much damage has been done in the past by the bad guys - it's just not too effective against preemptively stopping the threat.

Speaking of the data, I'd love to know who's buying that data from Trend Micro? From past experience I know that most governments around the world pay a pretty penny for knowing precisely what foreign citizens are browsing on their computers, what type of Web browser they're using and what's the current patch level of their operating system... it's traditionally useful for all kinds of spying and espionage but, more importantly nowadays, for modeling and optimizing various cyber-warfare campaign scenarios.

Wednesday, July 18, 2012

DGA-based Botnet Detection

There are essentially two types of magic in the world – the kind that deceives an audience into believing something impossible has occurred, and the kind best captured by Arthur C. Clarke: “any sufficiently advanced technology is indistinguishable from magic”.

The antivirus industry is rife with the former – there’s no shortage of smoke-and-mirror explanations when it comes to double-talking “dynamically generated signature-less engines” and “zero false-positive anomaly detection systems”.

I’m going to introduce you to the latter kind of magic. Technological approaches so different from traditional ones that, for many folks out there in Internet-land, they’re indistinguishable from magic. More than that though, I’m going to try to explain how such techniques are reversing the way in which threat discovery has occurred in the past. What I’m not going to do, however, is try to explain even a fraction of the math and analytics that lie behind that magic – at least not in a blog!
Oh where, oh where should we start?

Let’s begin, for argument’s sake, by classifying malware as a tool; a weapon to be more precise. In the physical world it would be easy to associate “malware” with the bullets from a gun, and the gun in turn likened to perhaps a drive-by download site or a phishing email. In response to that particular physical threat, there are a number of technological approaches that have been deployed in order to counter the threat – we have metal detectors and x-ray machines to alert us to the presence of guns, sniffing technologies to identify the presence of explosive materials, CCTV and behavioral analysis systems to identify the suspects who may be hiding the gun.

A fundamental premise of this layered detection approach is that we’ve encountered the threat in the past and already classified it as bad – i.e. as a “weapon”. Gun equals bad, knife equals bad, metal corkscrew equals bad, and so on. Meanwhile everything else is assumed to be good – like an ostrich egg – until it happens to be used as a weapon (such as when “Russell pleaded guilty to assault using an ostrich egg as a weapon, assault, and breaching a protection order“) and inevitably some new detection technique is proposed to detect it.

Traditionally the focus has been on “preventing” the threat. In particular, detecting the presence of a known threat and stopping it from reaching its target. In the physical world, in general, the detection technologies are pretty robust – however (and it’s a big “however”), the assumption is that the technology needed to provide this prevention capability is ubiquitous, deployed everywhere it could potentially be needed, and that it works every time. Sure, at high value targets (such as airports) you’ll find such technology employing its optimal capability, elsewhere though (such as the doorway into your home) it’ll not be encountered. There are obvious parallels with the cyber-world here – except arguably the Internet-equivalent technologies are a little more ubiquitous, but considerably less capable in preventing the malware threat.

For the mob hitman, serial killer, or other kind of mass murderer, the threat of sparsely deployed metal detectors is an easily avoided problem. Subversion or avoidance of the detection systems is pretty easy and, more importantly, an appropriate choice of location negates the problem entirely. Even then, such a detection strategy, operated in isolation, isn’t a serious inhibitor for new murders. If such a technology exists to only detect the guns and bullets, but is not capable of providing attribution (e.g. this gun was used in 16 different murders over the last 2 weeks and the owner of this gun is the murderer), then the criminal only ever loses a tool each time they get caught – since the prevention technology is divorced from association with the victims (past or prospective).

But there’s an entirely different side to dealing with that kind of threat – and that’s the forensics element. While our not-so-friendly murderer can avoid the detection technologies, they’re much less capable of avoiding the evidence of violent past actions. Starting from the first murder, it is possible to build a case that points to a specific criminal by analyzing the components of the crime scene(s).
I know, the argument is that everything should be done to prevent the crime in the first place. That’s clearly very difficult in the physical world, but you’re basically living in a fantasy land of Goblins and Unicorns if you’re expecting it to work better in the cyber-world.

Which basically brings me to the discussion (and subsequent detection) of the latest generation of sophisticated malware – malware that uses domain generation algorithms (DGA) to locate its command and control infrastructure and upload its stolen data. Malware with this capability is designed to evade all those conventional prevention technologies and, once successfully deployed within its victim populace, evade all other methods of traffic filtering and takedown processes. Even if a malware sample is accidentally captured, its DGA capabilities will go undetected.
Detecting DGA-based malware is, as I implied earlier, both “magic” and a reversal of conventional prevention approaches. In order to detect DGA-based threats early on, you start with the victims first…

DGA-based malware uses an algorithm to pick out candidate domain names in order to hunt for its prospective C&C servers. The vast majority of the domain names it’s looking for simply don’t exist. In the world of DNS, attempting to resolve a domain name that doesn’t exist will result in a “no such domain” (i.e. an NX) response from an authoritative DNS server somewhere down the line. So, in essence, DGAs are noisy if you’re watching DNS activity – and lots of NX responses are a key feature of an infected host. Unfortunately, the average Internet-connected device typically tries to look up lots of things that don’t exist, and there’s often a lot of legitimate NX traffic which can disguise the flapping of the malware.

Assuming some kind of algorithmic basis to the domain candidates being created by the malware, you could suppose that it would be possible to develop a unique signature for them. If only it was that easy – the criminals are smarter than that. And you’re also assuming that you’ve already encountered a copy of the malware before in order to create a signature for that particular DGA-based malware.

Instead there’s a much better way – you monitor all your DNS traffic and NX responses, and you identify clusters of devices (or IP addresses) that are generating roughly similar domain name requests. This first pass will provide a hint that those devices share some kind of common ailment (without ever needing to know what the malware is or have ever encountered a sample before). In a second pass you focus upon identifying just the domain names that are structurally very similar across all the afflicted assets and classify that DGA based upon a number of features. Classifying the DGA makes it easier for attribution and uniquely tracking a new threat.

At this point you’re pretty much sure that you have a new malware outbreak and you know who all the victims are, and you can easily track the growth (or demise) of the botnet within your network. Unfortunately you’re also dealing with a brand new threat and you probably don’t have a malware sample… and that’s where an additional layer of analytics comes into play and more “magic” happens as you automatically begin to identify the domain names that are tried by the DGA-based malware that actually succeed and engage with the criminals server.

Let’s work through a simple example uncovered last week by Damballa Labs. Over the weekend one of our NX monitoring tools identified a few thousand IP addresses within our customer base that were generating clusters of NX DNS traffic very similar to the following:,,,,,,,,,,,,,,,,,,,,,,,,,
While you’d be hard pressed to write a “signature” for the domain names that wouldn’t cause false positives out the wazoo, the features of the combined victims’ traffic (frequency, distribution, commonality, etc.) work fine as a way of associating them with a shared new threat.

Armed with this knowledge, it is then possible to identify similarly structured domain names that were successfully resolved by the victims that also shared timing elements or appeared to alter the flow of NX domain responses. For example, if the DGA-based malware is designed to locate a “live” C&C server, once it’s found a server it probably doesn’t need to keep on looking for more and will likely stop generating NX domain traffic for a period of time.
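That timing heuristic can be sketched as follows. The timestamps, gap threshold and domain names are all invented for illustration; a real implementation would also check that the flagged lookup is structurally similar to the NX cluster:

```python
# Toy timeline for one infected host: timestamps (seconds) of its NXDOMAIN
# responses, and its successful (NOERROR) lookups. All values are invented.
nx_times = [0, 5, 10, 15, 100, 105]        # NX chatter goes quiet at t=15
resolved = [(14, ""), (60, "www.example.org")]

GAP = 30      # a pause in NX traffic longer than this is interesting
WINDOW = 10   # look for successful lookups just before the pause began

# Find pauses in the NX stream, then flag lookups that succeeded right
# before each pause -- likely the DGA candidate that found a live C&C.
gaps = [(a, b) for a, b in zip(nx_times, nx_times[1:]) if b - a > GAP]
candidates = [d for start, _ in gaps
              for t, d in resolved if start - WINDOW <= t <= start]
print(candidates)
```

Here the lookup at t=14 immediately precedes the lull in NX traffic, which is exactly the behavior you’d expect from malware that stops hunting once it finds a live server.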

Based upon our observations of this particular botnet outbreak, it was possible to identify the following C&C servers being operated by the criminals:
  • ###.133.###.247 –
  • ###.133.###.247 –
  • ###.133.###.247 –
  • ###.133.###.247 –
  • ###.133.###.247 –
  • ###.133.###.75 –
  • ###.133.###.75 –
  • ###.133.###.191 –
  • ###.133.###.191 –
  • ###.133.###.191 –
  • ###.133.###.191 –
  • ###.133.###.191 –
  • ###.133.###.191 –
[NOTE: We've temporarily obfuscated some of this data while we continue to investigate and enumerate the global pool of victims. We'll release the technical details of this particular DGA-based botnet soon…]

So, by this point we know who the victims are, how many C&C servers the criminals are operating, what their IP addresses are and, subsequently, which hosting facilities they are operating from:
  • AS13237 LAMBDANET-AS Lambdanet Communications Deutschland GmbH

What about the malware piece? While we know who the victims are, it would be nice to know more about the tool that the criminals behind this botnet prefer and ideally to get to know them more “personally” – if you know what I mean…

As it happens, there are some malware samples that have been discovered in the very recent past that also like to contact C&C’s running upon the same hosts at this facility. For example:
  • d977ebff137fa97424740554595b9###
Fortunately, while the malware sample wasn’t detected by any antivirus products, it had previously been automatically executed within our dynamic analysis environment and we’d already extracted all of its observable network features, including an additional (successful) C&C engagement:
  • using ###.133.###.75, ###.133.###.191, and ###.23.###.139
This, in turn, helped identify the following additional hosting facility based in the Netherlands:
  • AS49981 WORLDSTREAM WorldStream
There’s obviously much more to this particular threat, and if you’d like to get involved digging into it and helping with the attribution please let me know…

But, getting back on track, this approach to identifying brand new threats – while sounding like magic to many – works really well! We’ve found it to be immensely scalable and fantastically accurate.

However, there’s one fly in the ointment (as it were)… the approach identifies new threats long before the malware component is typically uncovered by the security community, and it is independent of the vector used to infect the victims, the malware tool that was deployed and, ultimately, the actual endpoint device type.

Think of it this way. Based upon the forensics of the blood-spattered bodies and the evidence the murderer left behind in the room, we know that the victims were bludgeoned to death with an ovoid object approximately 6 inches in diameter, weighing 3 pounds and composed of a calcium shell. We also know that the murderer was 5 foot 11 inches, weighed 240 pounds and wears size 11 Reeboks.

You can keep your metal detectors for all the good that’ll do in this case…

Sunday, July 15, 2012

Electronic Accountability in Syria Civil War

In today's story by the BBC covering Syria they note that the conflict has now officially evolved into a civil war -

By being legally categorized as a civil war, all participants are now subject to the articles of war - such as the Geneva Convention. It also means that the persons behind any crimes and atrocities committed during this war can be prosecuted as international war criminals even after the conflict ends.

With the trials currently underway in The Hague against the leaders of the Bosnian war, I was thinking about how prosecutions of war crimes in Syria will likely be different - given a considerably more networked world and advances in electronic monitoring.

When I read about the most recent murders of 100 souls, it is inevitable that there will be a kind of electronic trail that did not exist for wars of even a decade ago.

The instructions and target coordinates of the artillery will have been communicated and authorized electronically - not just as written communications, but also as digital voice and CB radio. The point though is that there will be a recoverable record somewhere. Given the high level of electronic eavesdropping by the combatants and other observers (e.g. NATO forces and local non-combatants), even those localized communications between regional commands and tank drivers can be intercepted, stored, and shuttled to appropriate authorities relatively easily.

Those issuing criminal commands can expect to not only be held accountable, but can expect those crimes and attribution to be documented to an excruciating level of detail - leaving little ambiguity to future courts.

Some may argue that encryption will be their savior. I doubt it. The tools they're using to generate and decipher those communications will become available to investigators post-conflict. And, regardless of access, as we're observing with the prosecutions relating to a conflict that occurred practically two decades ago, technology advances. How sure would you be that even your 128-bit encrypted digital radio messages will hold up to decryption techniques and capabilities in 20 years?

No, leaders and those issuing commands will be held accountable with evidence that has never been so rich and attributable.

Sunday, July 1, 2012

One Billion Credit Cards Stolen

"The details of one billion stolen credit cards were posted yesterday on hundreds of Web sites around the world." What would we do if that actually happened? (And how do you know it hasn't happened today?)

Practically every day there's some kind of public disclosure about some company-or-other having been infiltrated and the credit card details of a bunch of their customers stolen. Despite several years of increased disclosures and ever-higher volumes of cards being stolen, I'm not actually sure what the impact is. Granted, every so often you'll see some follow-up story about how XYZ Corp is being sued over third-party losses stemming from the data breach; but really, what would happen if there were more data losses... much more...

I don't know how many credit and debit cards there are in circulation around the world, but I'm pretty sure it's going to be measured in the multiple billions. So what could happen to the world if one billion (i.e. 1,000,000,000) credit cards and all the appropriate card owners details were intercepted and dumped on the Internet for all to see (and use?) at midnight tonight?

You might question the logistics of such an interception and accumulation of that many cards. Here are just some of the ways in which it could happen:
  • A number of popular underground carder forums (used to match buyers with sellers of stolen credit cards) get hacked, and all the accounts of the carders that sell their stolen wares through the forum in turn have their accounts hacked into. A few dominoes fall and, before you know it, the hacker has breached the credit card repositories of a few dozen prolific sellers and stolen their stolen data. To undermine those hacker carders and their illegal businesses, the hacker dumps copies of all the data on a few hundred pastebin and anonymous file-hosting sites (making it impractical for law enforcement to take down the data after the fact).
  • A small number of disgruntled IT employees at one of the major payments processing companies backdoor a number of critical servers and data repositories - continually running batch jobs that store the relevant metadata in an encrypted archive, that is updated with any new card details. 24 hours after they resign (or are laid off due to restructuring) they extract the data dump they had been preparing for months and dump it on the Internet because they hated the company and what it did to them.
  • A foreign power has spent 2 years infiltrating Visa International and a few dozen of the largest merchant banks using digital and human intrusion techniques, and has managed to accumulate the details of all their customers. The attackers filter the stolen credit card data for US and EU cardholders only and anonymously release the data in order to undermine those economies.
I don't know how far-fetched the last couple of scenarios are (and I know that plenty of safe-guards have been installed to counter various scenarios) but, at the end of the day, it doesn't really matter. The data exists somewhere in digital form and, given the right skills, circumstances, and motivations, it would be possible to accumulate and dump the details of one billion stolen credit cards.

So, the data is stolen and made publicly available for all and sundry to access and potentially use - what happens now? Does our financial system collapse? Do organizations begin to sue one another over the overestimated (potential) losses they've incurred? Do the owners of those stolen credit cards lose everything? Does anyone who has their own credit card stop using it - losing faith in that aspect of the banking system?

I think this is a discussion that we really need to have. To be frank, getting hold of the data related to a (few) billion credit cards is getting easier every day. I believe it is inevitable that truly colossal dumps of stolen data will occur sometime soon.

The impact will be huge.

Let's ignore all of the behind-the-scenes shenanigans the lawyers and bankers will perform and, for once, focus on just one person... and maybe that happens to be you. What happens if you wake up tomorrow morning, head on in to work, stop by the Starbucks on the corner to grab your morning coffee, and your card is denied? So you try another card, and it too is denied. You get on the phone to your bank to try to find out what's happening, and you're greeted with a robo-message that hundreds of millions of the bank-issued credit cards have been stolen and that they've taken action to ensure that no fraudulent charges will be made to your cards. The downside? None of your cards work in the meantime, and it'll be at least a couple of weeks before the bank can issue and post out the replacements (and that's being damned optimistic - given the scale of the problem). I hope you have enough cash for gas to get home that evening.

Thursday, June 28, 2012

The Pilots of Cyberwar

As a bit of a history buff I can’t avoid a slight tingling of déjà vu every time I read some new story commenting upon the ethics, morality and legality of cyber-warfare/cyber-espionage/cyberwar/cyber-attack/cyber-whatever. All this rhetoric about Stuxnet, Flame, and other nation-state cyber-attack tools, combined with the parade of newly acknowledged cyber-warfare capabilities and units within the armed services of countries around the globe, brings to the fore so many parallels with the discussions about the (then) new-fangled use of flying-machines within the military in the run-up to WWI.

Call me a cynic if you will, but when the parallels in history are so evident, we’d be crazy to ignore them.

The media light that has been cast upon the (successful) deployment of cyber-weapons recently has many people in a tail-spin – reflecting incredulity and disbelief that such weapons exist, let alone have already been employed by military forces. Now, as people begin to understand that such tools and tactics have been fielded by nation-states for many years prior to these most recent public exposures, reactions run from calls for regulation through to global moratoriums on their use. Roll the clock back 100 years and you’ll have encountered pretty much the same reaction to the unsporting use of flying-machines as weapons of war.

That said, military minds have always sought new technologies to gain the upper hand on and off the battlefield. Take for example Captain Bertram Dickson's statement to the 1911 Technical Sub-Committee for Imperial Defence (TSID), which was charged with considering the role of aeroplanes in future military operations:

“In case of a European war, between two countries, both sides would be equipped with large corps of aeroplanes, each trying to obtain information on the other… the efforts which each would exert in order to hinder or prevent the enemy from obtaining information… would lead to the inevitable result of a war in the air, for the supremacy of the air, by armed aeroplanes against each other. This fight for the supremacy of the air in future wars will be of the greatest importance…”

A century later, substitute “cyber-warriors” for aeroplanes and “Internet” for air, and you’d be hard-pressed to tell the difference from what you’re seeing in the news today.

Just as the prospect of a bomb falling from the hands of an aviator hanging out the cockpit of a zeppelin or biplane fundamentally changed the design of walled fortifications and led to the development of anti-aircraft weaponry, new approaches to securing the cyber-frontier are needed and underway. Then, as now, it wasn’t until civilians were alerted to (or encountered first-hand) the reality of the new machines of war, did an appreciation of these fundamental changes become apparent.

But there are a number of other parallels to WWI (and the birth of aerial warfare) and where cyber-warfare is today that I think are interesting too.

Take for example how the aviators of the day thought of themselves as being different and completely apart from the other war-fighters around them. The camaraderie of the pilots who, after spending their day trying to shoot down their counterparts, were only too happy to have breakfast and exchange stories over a few stiff drinks with the downed pilots of the other side is legendary. I'm not sure if it was mutual respect, or the sharing of a common heritage that others around them couldn't understand, but the net result was that that first breed of military aviator found more in common with their counterparts than with their own side.

Today, I think you'll likely encounter the equivalent social scene among the introverted computer geeks who, by way of their day-job, develop the tools that target and infiltrate foreign installations for their country, yet attend the same security conferences and reveal their latest evasion tactic or privilege escalation technique over a cold beer with one another. Whether it's because the skill-sets are so specialized, or because the path each cyber-warrior had to take in order to acquire those skills was so influential upon their world outlook, many of the people I've encountered that I would identify as being capable of truly conducting warfare within the cyber-realm share more in common with their counterparts than they do with those tasking them.

When it comes to protecting a nation, cries of "that's unfair" or "unsporting" should be relegated to the "whatever" bucket. Any nation's military, counter-intelligence organization, or other agency tasked with protecting its citizens would be catastrophically failing in its obligations if it weren't already actively pursuing new tools and tactics for the cyber-realm. Granted, just as the military use of aircraft in WWI opened a Pandora's box of armed conflict that changed the world forever, we've been building towards the state we're in ever since the first bytes traversed the first network.

The fact that a small handful of clandestine, weaponized cyber-arms have materialized within the public realm doesn't necessarily represent a newly opened Pandora's box - instead it merely reflects one of the evils from a box that was opened the day the Internet was born.

Monday, June 18, 2012

Botnet Metrics: Learning from Meteorology

As ISPs continue to spin up their anti-botnet defenses and begin taking a more active role in dealing with the botnet menace, more and more interested parties are looking for statistics that help define both the scale of the threat and the success of the various tactics being deployed. But, as I discussed earlier in the year (see "Household Botnet Infections"), it's not quite so easy to come up with accurate infection (and subsequent remediation) rates across multiple ISPs.

There are currently several initiatives trying to grapple with this problem - and a number of perspectives, observations and opinions have been offered. Obviously, if every ISP were using the same detection technology, in the same way, at the same time, it wouldn't be such a difficult task. Unfortunately, that's not the case.

One of the methods I'm particularly keen on is leveraging DNS observations to enumerate the start-up of conversations between a victim's infected device and the bad guys' command and control (C&C) servers. There are of course a number of pros and cons to the method - such as:
  • DNS monitoring is passive, scalable, and doesn’t require deep-packet inspection (DPI) to work [Positive],
  • Can differentiate between and monitor multiple botnets simultaneously from one location without alerting the bad guys [Positive],
  • Is limited to botnets whose malware makes use of domain names for locating and communicating with the bad guys' C&C [Negative],
  • Not all DNS lookups for a C&C domain are for C&C purposes [Negative].
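
The DNS-based approach in the list above reduces to counting the distinct client IPs that resolve known C&C domain names over some observation window. A minimal sketch follows - the domain names and records are entirely hypothetical, and, as the negatives above note, the result counts querying subscriber networks, not infected devices:

```python
from collections import defaultdict

# Hypothetical blacklist of known C&C domains (in practice this would come
# from a reputation system or malware analysis, not a hard-coded set).
CNC_DOMAINS = {"evil-cnc.example.com", "update.badguys.example.net"}

def estimate_victims(dns_records, cnc_domains=CNC_DOMAINS):
    """Count unique client IPs that looked up a known C&C domain.

    This is an upper bound on infected subscriber *networks*, not devices:
    NAT hides multiple hosts behind one IP, and not every lookup of a C&C
    domain is made by a bot (researchers and crawlers resolve them too).
    """
    victims = defaultdict(set)  # domain -> set of querying client IPs
    for _ts, client_ip, domain in dns_records:
        if domain in cnc_domains:
            victims[domain].add(client_ip)
    return {d: len(ips) for d, ips in victims.items()}

# Mock passive-DNS records: (timestamp, client_ip, queried_domain)
records = [
    (1, "10.0.0.5", "evil-cnc.example.com"),
    (2, "10.0.0.5", "evil-cnc.example.com"),  # repeat lookup, same subscriber
    (3, "10.0.0.9", "evil-cnc.example.com"),
    (4, "10.0.1.2", "www.example.org"),       # benign traffic, ignored
]
print(estimate_victims(records))  # {'evil-cnc.example.com': 2}
```

Note how the repeated lookup from 10.0.0.5 is collapsed into a single victim - which is exactly the passive, non-DPI counting the first bullet describes.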
On top of all this lies the added complexity that such observations are conducted at the IP address level (and things like DHCP churn can be troublesome). This isn't really a problem for the ISP of course - since they can uniquely tie the IP address to a particular subscriber's network at any time.

One problem that persists though is that a "subscriber's network" is increasingly different from "a subscriber's infected device". For example, a subscriber may have a dozen IP-enabled devices operating behind their cable modem - and it's practically impossible for an external observer to separate one infected device from another operating within the same small network without analyzing traffic with intrusive DPI-based systems.

Does that effectively mean that remote monitoring and enumeration of bot-infected devices isn’t going to yield the accurate statistics everyone wants? Without being omnipresent, then the answer will have to be yes – but that shouldn’t stop us. What it means is that we need to use a combination of observation techniques to arrive at a “best estimate” of what’s going on.

In reality we have a similarly complex monitoring (and prediction) system that everyone is happy with – one that parallels the measurement problems faced with botnets – even if they don’t understand it. When it comes to monitoring the botnet threat the security industry could learn a great deal from the atmospheric physicists and professional meteorologists. Let me explain…

When you look up the weather for yesterday, last week, or the 4th of July last year, you'll be presented with numerous statistics - hours of sunshine, inches of rainfall, wind velocities, pollen counts, etc. - for a particular geographic region of interest. The numbers being presented to you are composite values of sparse measurements.

To arrive at the conclusion that 0.55 inches of rainfall fell in Atlanta yesterday and 0.38 inches fell in Washington DC over the same period, it’s important to note that there wasn’t any measurement device sitting between the sky and land that accurately measured that rainfall throughout those areas. Instead, a number of sparsely distributed land-based point observations specific to the liquid volume of the rain were made (e.g. rain gauges), combined with a number of indirect methods (e.g. flow meter gauges within major storm drain systems), and broad “water effect” methods (e.g. radar) were used in concert to determine an average for the rainfall. This process was also conducted throughout the country, using similar (but not necessarily identical) techniques and an “average” was derived for the event.

That all sounds interesting, but what are the lessons we can take away from the last 50 years of modern meteorology? First and foremost, the use of accurate point measurements as a calibration tool for broad, indirect monitoring techniques.

For example, one of the most valuable and accurate tools modern meteorology uses for monitoring rainfall doesn't even monitor rain, or even the liquid component of the droplets - instead it monitors electromagnetic reflectivity. Yes, you guessed it - it's the radar of course! I could get all technical on the topic, but essentially a meteorological radar measures the reflection of electromagnetic energy from (partially) reflective objects in the sky. By picking the right wavelength for the radar pulse, it gets better at detecting different sized objects in the sky (e.g. aircraft, water droplets, pollutant particulates, etc.). So, when it comes to measuring rain (well, lots of individual raindrops simultaneously to be more precise), the radar system measures how much energy of a radar pulse was returned and at what time (the time component helps to determine distance).

Now radar is a fantastic tool - but by itself it doesn't measure rainfall. Without getting all mathematical on you, the reflected energy grows dramatically with the size of an individual raindrop (roughly with the sixth power of its diameter) - which means that a few slightly larger raindrops in the sky will completely skew the energy measurements of the radar - meanwhile, the physical state of the "raindrop" also affects reflectivity. For example, a wet hailstone reflects much more energy than an all-liquid drop. There are a whole bunch of non-trivial artifacts of rainfall (you should check out things like "hail spikes" for example) that have to be accounted for before the radar observations can be used to derive the rainfall at ground level.
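
To make the calibration idea concrete: radar doesn't report rainfall, it reports reflectivity (in dBZ), and a separate empirical relation - the classic Marshall-Palmer Z-R relation, whose coefficients are exactly what gauge networks are used to tune - converts it to a rain rate. A minimal sketch:

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert the empirical Marshall-Palmer Z-R relation, Z = a * R**b.

    Z is linear reflectivity (mm^6/m^3) - dominated by drop diameter to the
    sixth power, which is why a few big drops skew the measurement - and R
    is rain rate in mm/hour. The coefficients a and b are what ground-truth
    gauges are used to calibrate, per region and per storm type.
    """
    z = 10 ** (dbz / 10.0)       # dBZ is 10*log10(Z): convert back to linear
    return (z / a) ** (1.0 / b)

# A 30 dBZ radar echo corresponds to moderate rain of roughly 2.7 mm/hour.
print(round(rain_rate_from_dbz(30.0), 2))
```

Swap in locally calibrated values of a and b and the same indirect measurement yields very different rainfall totals - which is precisely why the sparse point observations matter.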

In order to overcome much of this, point measurements at ground level are required to calibrate the overall radar observations. In the meteorological world there are two key technologies – rain gauges and disdrometers. Rain gauges measure the volume of water observed at a single point, while disdrometers measure the size and shape of the raindrops (or hail, or snow) that are falling. Disdrometers are pretty cool inventions really – and the last 15 years have seen some amazing advancements, but I digress…

How does this apply to Internet security and botnet metrics? From my perspective DNS observations are very similar to radar systems - they cover a lot of ground, to a high resolution, but they measure artifacts of the threat. However those artifacts can be measured to a high precision and, when calibrated with sparse ground truths, they become the basis of a highly economical and accurate measurement system.

In order to "calibrate" the system we need to use a number of point observations. By analogy, C&C sinkholes could be considered rain gauges. Sinkholes provide accurate measurements of the victims of a specific botnet (albeit, only a botnet whose C&C has already been "defeated" or replaced) - and can be used to calibrate the DNS observations across multiple ISPs. ISPs that each observe DNS slightly differently (e.g. using different static reputation systems, outdated blacklists, or advanced dynamic reputation systems) could use third-party sinkhole data for a specific botnet that they're already capable of detecting via DNS as a calibration point - scaling and deriving victim populations for all the other botnets within their networks.
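
The sinkhole-as-rain-gauge step can be sketched numerically. Assume DNS monitoring miscounts real victims by some unknown but roughly consistent factor; a sinkholed botnet, whose victim count is known exactly, exposes that factor, which can then be applied to botnets observed only via DNS. All numbers below are hypothetical:

```python
def calibration_factor(dns_observed, sinkhole_ground_truth):
    """Ratio between the true victim count (from a sinkholed C&C) and the
    count that DNS observations produced for the same botnet over the same
    window. Analogous to calibrating a radar against a rain gauge."""
    return sinkhole_ground_truth / dns_observed

def estimate_population(dns_observed, factor):
    """Scale a DNS-only observation of another botnet by the factor."""
    return round(dns_observed * factor)

# Hypothetical: the sinkhole for "BotnetA" logged 5,000 distinct victims,
# while DNS monitoring in the same window saw only 4,000 querying IPs
# (NAT and caching resolvers hide some victims from the DNS vantage point).
factor = calibration_factor(4000, 5000)    # 1.25
print(estimate_population(2000, factor))   # "BotnetB", observed via DNS only
```

A real deployment would compute per-botnet, per-network factors rather than a single global one, but the principle - accurate point measurements calibrating broad indirect ones - is the same.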

Within their own networks, ISPs could also employ limited-scale and highly targeted DPI systems to gauge a specific threat within a specific set of circumstances. This is a little analogous to the disdrometer within meteorology - determining the average size and shape of events at a specific point, without measuring the liquid content of the rainfall directly either. Limited DPI techniques could target a specific botnet's traffic - concluding, for example, that the bot agent installs 5 additional malware packages upon installation, each of which in turn attempts to resolve 25 different domain names, yet all are part of the same botnet infection.

Going forward, as ISPs face increased pressure not only to alert but also to protect their subscribers from botnets, there will be increased pressure to disclose metrics relating to their infection and remediation rates. Given the consumer choice of three local ISPs offering the same bandwidth for the same price per month, the tendency is to go with the provider that offers the most online protection. In the past that may have been measured by how many dollars of free security software they bundled in. Already people are looking for proof that one ISP is better than another at securing them - and this is where botnet metrics will become not only important, but also public.

Unfortunately it's still early days for accurately measuring the botnet threat across multiple ISPs - but that will change. Meteorology is a considerably more complex problem, but meteorologists and atmospheric physicists have developed a number of systems and methods to derive numbers that the majority of us are more than happy with. There is a lot to be learned from the calibration techniques used and perfected in the meteorological field when deriving accurate and useful botnet metrics.

Saturday, June 2, 2012

Computer Herpes

The other day I came across a rather nice dissection of the HerpesNet malware agent (sometimes referred to as Mal/HerpBot-B). Apart from the rather interesting name given to the malware and its associated remote C&C panel, there's nothing particularly special about the functionality of the bot agent - it offers all the malicious features you'd expect the criminals to want.

What makes the dissection so interesting is the enumeration of remotely exploitable vulnerabilities within the C&C tasked with controlling all the botnet victims. This in itself isn't unexpected - since the majority of malware authors are pretty poor coders, priding themselves on the features they include rather than the integrity and security of their coding practices. In fact, bug hunting in malware and botnet C&C software has practically become its own commercial business - as many boutique security firms now reverse engineer the bad guys' tools and sell the uncovered remotely exploitable flaws they find to various law enforcement and government intelligence agencies.

The 80KB crimeware agent for this small botnet (7,000-8,000 victims) attempts some level of obfuscation, encoding its control strings (the decoding routine sits at 00406FC0h) - which, once decoded, reveal the bot's command-related domains and URLs.
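
String obfuscation of this sort is usually trivial to undo once the decoding routine has been located in the disassembly. As a purely hypothetical illustration (the real agent's scheme and key are not reproduced here), a single-byte XOR decoder - one of the most common string-hiding tricks in commodity malware - looks like this:

```python
def xor_decode(blob: bytes, key: int) -> str:
    """Recover an obfuscated control string via single-byte XOR.

    XOR is self-inverse, so the same operation both hides and reveals the
    string; analysts simply re-implement the routine found in the binary.
    """
    return bytes(b ^ key for b in blob).decode("ascii", errors="replace")

KEY = 0x42  # hypothetical key; the real one comes from the decoding routine
# Round-trip demo: obfuscate a stand-in C&C URL, then recover it.
encoded = bytes(b ^ KEY for b in b"http://cnc.example.com/gate.php")
print(xor_decode(encoded, KEY))  # http://cnc.example.com/gate.php
```

It's exactly this kind of recovered string list that hands an analyst the C&C server location and communication method.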
Armed with information about the location of the C&C server and the method of communication (HTTP POST commands), the crew performed a free security assessment of the server and uncovered a number of remotely exploitable SQL injection vulnerabilities which not only allowed them to enumerate the entire contents of the botnet's data storage area (e.g. victim data), but also to uncover the criminal's passwords for the server. Armed with that information, they proceeded to gain full interactive control of the host - including the C&C management console for the botnet.

As is so typical for small "starter" botnets such as this, their criminal overlords tend to make a number of critical mistakes - such as using the server for other non-botnet-related tasks, infecting themselves with their own crimeware agent, and forgetting to remove their own stolen data from the C&C database. Easily half of the botnet C&C servers encountered by Damballa Labs contain key identifying information about the server's criminal overlord, because they tested their malware agents on themselves and forgot to remove that data from the database. As you've already guessed, this Herpes botnet mastermind was no different… Say hello to "frk7", aka "Francesco Pompo".


I’m guessing life has suddenly become much more complicated for Francesco. His botnet has been hijacked, all of his aliases and online identities have been enumerated, both he and his girlfriend have had their personal photos accessed and plastered over the Internet, and his passwords to his accounts have been disclosed. I think his Twitter account has now been suspended too.

As someone who's come from a penetration testing and vulnerability discovery background, it's amusing to me how the hack proceeded. There's nothing groundbreaking in what they did - they followed a standard methodology that dates back a decade to the early editions of the Hacking Exposed books - tactics and methods many professionals use on a routine basis, with one exception… somehow I doubt that poor Francesco gave his permission for this unscheduled evaluation of his server. I'm hoping that the countries in which the crew members live are a little more flexible in their anti-hacking laws than any of the countries I've lived in over the years. I suspect that while I'd get a pat on the shoulder with one hand if I had done this, I'd also be getting adorned with some unflattering steel bracelets and whisked off to a cold room with little in the way of scenery or comfort.

Wednesday, May 30, 2012

Analysis of the Flame, Flamer, sKyWIper Malware

The world is abuzz this week with some flaming malware - well, "Flame" is the family name if you want to be precise. The malware package itself is considerably larger than what you'll typically bump into on average, but the interest it is garnering with the media and antivirus vendors has more to do with the kinds of victims that have sprung up - victims mostly in the Middle East, including Iran - and a couple of vendors claiming the malware is related to Stuxnet and Duqu.

A technical report on sKyWIper was released by the Laboratory of Cryptography and Systems Security (CrySys Lab) over at the Budapest University of Technology and Economics yesterday covering their analysis of the malware – discovered earlier in May 2012 – and they also drew the conclusion that this threat is related (if not identical) to the malware described by the Iran National CERT (MAHER) – referred to as Flamer. Meanwhile, Kaspersky released some of their own analysis of “Flame” on Monday and created a FAQ based upon their interpretation of the malware’s functionality and motivations.

There is of course some debate starting about the first detection of Flamer. Given the malware's size and number of constituent components it shouldn't be surprising to hear that some pieces of it may have been detected as far back as March 1st 2010 - such as the file "~ZFF042.TMP" (also seen as MSSECMGR.OCX and 07568402.TMP) - analyzed by Webroot and attributed to a system in Iran.

While it's practically a certainty that the malware was created and infected a number of victims before it was "detected" in May, I'd caution against some of the jumps people are making related to the attribution of the threat.

Firstly, this behemoth of a malware pack is constructed from a lot of different files - many of which are not malicious; the package includes common library files (such as those necessary for handling compression and video capture) as well as the Lua virtual machine. Secondly, when you're limited to an 8.3 file naming convention, even malicious files are likely to have name collisions - resulting in many spurious associations with past, unrelated threats if you're googling for relationships. And finally, why build everything from scratch? It's not like malware authors feel honor-bound to adhere to copyright restrictions or to refrain from stealing code from other malware authors - nowadays we see an awful lot of code recycling and simple theft as criminals hijack the best features from one another.

As you’d expect from a bloated malware package developed by even a marginally capable hacker, there are a lot of useful features included within. It’s rare to see so many features inside a single malware sample (or family), but not exceptional. As Vitaly Kamluk of Kaspersky stated – “Once a system is infected, Flame begins a complex set of operations, including sniffing the network traffic, taking screenshots, recording audio conversations, intercepting the keyboard, and so on,” – which is more typical of an attack kit rather than a piece of malware. What do I mean by “attack kit”? Basically a collection of favorite tools and scripts used by hackers to navigate a compromised host or network. In the commercial pentesting game, the consultant will normally have a compressed file (i.e. the “attack kit”) that he can shuttle across the network and drop on any hosts he gains access to. That file contains all of the tools they’re going to need to unravel the security of the (newly) compromised host and harvest the additional information they’ll need to navigate onto the next targeted device. It’s not rocket science, but it works just fine.

I'm sure some people will be asking whether the malware does anything unique. From what I can tell (without having performed an exhaustive blow-by-blow analysis of the 20MB malware file), the collection of files doesn't point to anything not already seen in most common banking Trojans or everyday hacking tools. That doesn't make it less dangerous - it merely reflects the state of malware development, where "advanced" features are standard components that can be incorporated through check-box-like selection options at compile time.

For malware of this ilk, automated propagation of infections (and infectious material) is important. Flame includes a number of propagation vectors - including the commonly encountered USB-based autorun and .lnk vulnerabilities observed in malware families like Stuxnet (and just about every other piece of malware since the disclosure of the successful .lnk infection vector), and that odd print spooler vulnerability - which helps date the malware package. By that I mean it helps date the samples that have been recovered - as there is currently no evidence of what the malware package employed prior to these recent disclosures, or of what other variants are circulating in the wild (and not being detected by antivirus products today).

Are these exploits being used for propagation evidence that Stuxnet, Duqu and Flame were created and operated by the same organization? Honestly, there's nothing particularly tangible here to reach that conclusion. Like I said before, criminals are only too happy to steal and recycle others' code - and this is incredibly common when it comes to the use of exploits. More importantly, these kinds of exploits are incorporated as updates into distributable libraries, which are then consumed by malware and penetration tool kits alike. Attack kits similar to Flame are constantly being updated with new and better tool components - which is why it will be difficult to draw out a timeline for the specific phases of the threat.

That all said, if the malware isn’t so special – and it’s a hodgepodge of various public (known) malicious components – why has it eluded antivirus products in the victim regions for so long? It would be simple to argue that these regions aren’t known for employing cutting-edge antimalware defenses and aren’t well served with local-language versions of the most capable desktop antivirus suites, but I think the answer is a little simpler than that – the actors behind this threat have successfully managed their targets and victims – keeping a low profile and not going for the masses or complex setups.

This management aspect is clearly reflected in the kill module of the malware package. For example, there seems to be a module named “browse32″ that’s designed to search for all evidence of compromise (e.g. malware components, screenshots, stolen data, breadcrumbs, etc.) and carefully remove them. While many malware families employ a cleanup capability to hide the initial infection, few include the capability of removing all evidence on the host (beyond trashing the entire computer). This, to my mind, is more reflective of a tool set designed for human interactive control – i.e. for targeted attacks.

Detecting Malware is Only One Step of Many

Dealing with the malware threat isn’t a Boolean problem anymore. By that I mean being able to detect (and block) a malicious binary isn’t the conclusion to the threat, but rather it’s a perspective on the status of the threat – a piece of evidence tied to the lifecycle of a breach.

Following on from yesterday’s blog covering the Antivirus Uncertainty Principle, I believe it’s important to differentiate between the ability to detect the malware binary from the actions and status of the malware in operation. Antivirus technologies are effective tools for detecting malicious binaries – either at the network layer, or through host-based file inspection – but their presence is just one indicator of a bigger problem.

For example, let's consider the scenario of discovering a used syringe lying on the pavement outside your office entryway. It is relatively easy to identify a syringe from several yards away, and the closer you get to it, the easier it is to determine whether it has been used before - but it'll take some effort and a degree of specialization to determine whether the syringe harbors an infectious disease.

That’s basically the role of commercial antivirus products – detecting and classifying malware samples. However, what you’re not going to be able to determine is whether anyone was accidentally stuck by the needle, or whether anyone is showing symptoms of the infectious disease it may have harbored. To answer those questions you’ll need a different, complementary, approach.

In the complex ballet of defense-in-depth protection deployment, it is critical that organizations be able to qualify and prioritize actions in the face of the barrage of alerts they receive daily. When it comes to the lifecycle of a breach and construction of an incident response plan, how do you differentiate between threats? Surely a malware detection is a malware detection, is a malware detection?

Right off the bat, the detection of malware isn’t the same as the detection of an infection. The significance of a malware detection alert coming from your signature-based SMTP gateway is different from one coming from your ICAP-driven proxy malware dynamic analysis appliance, which is different again from the alert coming from the desktop antivirus solution. The ability to qualify whether the malware sample made it to the target is significant. If the malware was detected at the network level and never made it to the host, that’s a “gold star” for you. If you detected it at the network level and it still made it to the host, but the host-based antivirus product detected it, that’s a “bronze star”. Meanwhile, if you detected it at the network level and didn’t get an alert from the host-based antivirus, that’s a… well, it’s not going to be a star, I guess.

Regardless of what detection alerts you may have received, it’s even more important to differentiate between observing a malware binary and the identification of a subsequent infection. If the malware was unable to infect the host device, how much of a threat does it represent?

In the real world where alerts are plentiful, correlation between vendor alerts is difficult, and incident response teams are stretched to the breaking point, malware detections are merely a statistical device for executives to justify the continued spend on a particular protection technology. What really matters is how you differentiate and prioritize between all the different alerts – and move from malware detection to infection response.

Take for example a large organization that receives alerts that 100 devices within its network have encountered Zeus malware in a single day. First of all, “Zeus” is a name for several divergent families of botnet malware used by hundreds of different criminal operators around the world – it comes in a whole bunch of different flavors and capabilities, and is used by criminals in all sorts of ways. “Zeus” is a malware label – not a threat qualification. But I digress…

Let’s say that your network-based antivirus appliance detected and blocked 40 of those alertable instances (statistically, signature-based antivirus solutions would probably have caught 2 of the 40, while dynamic malware analysis solutions would catch 38). From an incident responder’s perspective there was no threat and no call to action from these 40 alerts.

That leaves 60 Zeus malware that made it to the end device. Now let’s say that 5 of those were detected by the local-host antivirus product and “removed”. Again, from an incident responder’s perspective, no harm – no foul.

Now the interesting part – what if you could differentiate between the other 55 Zeus malware installations? How does that affect things?

If we assume you’ve deployed an advanced threat detection system that manages to combine the features of malware binary detection and real-time network traffic analysis with counterintelligence on the criminals behind the threat, you could also identify the following communications:
  1. 5 of the infected devices are attempting to locate the command and control (C&C) infrastructure of a botnet that was shut down ten months ago. While the Zeus malware may be caching stolen credentials and data on the victim’s device, it cannot ever pass them to the criminals.
  2. 20 of the infected devices are attempting to reach old and well-known criminal C&C infrastructure; however, your content filtering and IPS technologies operate with blacklists that are now blocking these particular domains.
  3. 8 of the Zeus installations are old and not “proxy-aware”, and are incapable of reaching the bad guys’ C&C while those devices are within your network.
  4. 6 of the Zeus infected devices are communicating with an external C&C that has been “sinkholed” and is under the control of a security vendor somewhere. While the original criminal operators no longer have control of the botnet, the infected devices are still managing to navigate your network defenses and upload stolen data somewhere – and there’s no guarantee that the remote security vendor isn’t selling that intelligence on to someone else.
  5. Of the remaining 16 Zeus infected devices that are successfully navigating your network defenses and are engaging with the remote criminals, 3 belong to a botnet operated by a criminal organization specializing in banking Trojans based in the Ukraine, 6 belong to a botnet operated by criminals that focus upon click-fraud and DNS manipulation based in the Netherlands, and 7 belong to a botnet operator that targets US-based financial sector organizations based in China.
Armed with that kind of insight, any incident responder worth his (or her) salt can easily figure out how to prioritize the next course of action. While detecting malware is necessary, detecting infections is more important. Moreover, being able to rapidly enumerate the risk posed by an infection and prioritize a response is critical in today’s diverse and agile threat landscape.
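The prioritization logic described above can be sketched in a few lines of code. This is purely an illustration of the idea – the category names, priority scores and hostnames are my own invention, not taken from any particular product:

```python
# Hypothetical triage sketch: rank infected hosts by the status of their
# command-and-control (C&C) channel, as in the Zeus scenario above.
# Category names and priority scores are illustrative assumptions.

C2_PRIORITY = {
    "defunct": 0,          # C&C infrastructure shut down; stolen data can never leave
    "blacklisted": 1,      # C&C reachable in theory, but blocked by egress filtering
    "not_proxy_aware": 1,  # malware can't traverse the corporate proxy
    "sinkholed": 2,        # data leaves the network, but to a security vendor
    "live_criminal": 3,    # active criminal control; respond to these first
}

def triage(infections):
    """Return infections sorted most-urgent-first.

    `infections` is an iterable of (hostname, c2_status) pairs.
    """
    return sorted(infections, key=lambda i: C2_PRIORITY[i[1]], reverse=True)

alerts = [
    ("hr-laptop-07", "sinkholed"),
    ("dev-ws-12", "defunct"),
    ("finance-pc-03", "live_criminal"),
    ("sales-pc-09", "blacklisted"),
]

for host, status in triage(alerts):
    print(f"priority {C2_PRIORITY[status]}: {host} ({status})")
```

The point of the sketch is simply that an “infection” alert only becomes actionable once it carries a C&C status – the same Zeus label yields four very different responses.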

Spotting a used syringe is one thing. It’s quite another to identify and support someone who’s been infected with the disease it contained.

Malware Uncertainty & False Positives

The antivirus industry has been trying to deal with false-positive detection issues for a long, long time – and it’s not going to be fixed anytime soon. To better understand why, the physicist in me draws an analogy with Heisenberg’s Uncertainty Principle – where, in its simplest distillation, the better you know where an atom is, the less likely you are to know its momentum (and vice versa) – aka the “observer effect”. In the malware detection world, the more positive you are that something is malware, the less likely you’ll catch other malware. And the reverse of that: the better you are at detecting a broad spectrum of malware, the less positive you can be that any given detection really is malware.

If that particular geek-flash doesn’t make sense to you, let me offer an alternative insight. The highest-fidelity malware detection system is going to be signature based. The more exacting the signature (which optimally would be a unique hash value for a particular file), the greater the precision in detecting a particular malicious file – however, that precision means that other malicious files that don’t meet the exacting rule of the signature will slip by. On the other hand, a set of behaviors that together could label a binary file as malicious is less exacting, but able to detect a broader spectrum of malware. The price of that flexibility and increased capability to detect bad stuff is an increased probability of false-positive detections.
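The trade-off can be made concrete with a toy sketch. Everything here is invented for illustration – the sample bytes, the “behaviors” and the threshold are assumptions, not any vendor’s actual logic – but it shows why an exact-hash signature and a behavioral score sit at opposite ends of the seesaw:

```python
import hashlib

# Toy illustration of the precision/coverage trade-off described above.
# Sample contents, behavior names and the threshold are invented.

KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious-sample-v1").hexdigest(),
}

SUSPICIOUS_BEHAVIORS = {"writes_autorun_key", "injects_into_process", "disables_av"}

def signature_detect(file_bytes):
    """Exact-hash match: maximum precision, zero tolerance for variants."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

def behavioral_detect(observed_behaviors, threshold=2):
    """Score observed behaviors: broader coverage, more false positives."""
    return len(observed_behaviors & SUSPICIOUS_BEHAVIORS) >= threshold

# The original sample is caught by its hash...
assert signature_detect(b"malicious-sample-v1")
# ...but a one-byte variant slips past the signature entirely,
assert not signature_detect(b"malicious-sample-v2")
# while the looser behavioral rule still catches the variant,
assert behavioral_detect({"writes_autorun_key", "injects_into_process"})
# at the risk of flagging an aggressive-but-benign installer too (a false positive).
assert behavioral_detect({"writes_autorun_key", "disables_av"})
```

Tightening the threshold pushes the system back toward the signature end of the seesaw; loosening it widens coverage and the false-positive rate together.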

In physics there’s a constant, ℏ – the reduced Planck constant – that acts a bit like the fulcrum of a teeter-totter (“seesaw” for the non-American rest of the world); it’s also a fundamental constant of our universe, like the speed of light. In the antivirus world’s Uncertainty Principle the fulcrum isn’t a universal constant; instead, you could probably argue that it’s a function of cash. The more money you throw at the uncertainty problem, the more gravity-defying the teeter-totter appears to become.

That may all sound a little discomforting. Yes, the more capable your antivirus detection technologies are in detecting malware, the more frequently false positives will crop up. But you should also bear in mind that, in general, the overall percentage of false positives tends to go down (if everyone is doing things properly). What does that mean in reality? If you’re rarely encountering false positives with your existing antivirus defenses, you’re almost certainly missing a whole lot of maliciousness. It would be nice to say that if you’re getting a whole lot of false positives you must, by corollary, be detecting (and stopping) a shed-load of malware — but I don’t think that’s always the case; it may be because you’re just doing it wrong. Or, as the French would say – C’est la vie.

Saturday, April 21, 2012

Crimeware Immunity via Cloud Virtualization

There's a growing thought recently that perhaps remote terminal emulators and fully virtualized cloud-based desktops are the way to go if we're ever to overcome the crimeware menace.

In essence, what people are saying is that because their normal systems can be compromised so easily, and criminals can install malicious software capable of monitoring and manipulating everything done on the victim's computer, perhaps we'd be better off if the computer/laptop/iPad/whatever were more akin to a dumb terminal that simply connected to a remote desktop instance - i.e. all the vulnerable applications and data are kept in the cloud, rather than on the user's computer itself.

It's not a particularly novel innovation - with various vendors having promoted this or related approaches for a couple of decades now - but it is being vocalized more frequently than ever.

Personally, I think it is a useful approach for mitigating much of today's bog-standard malware, and certainly some of the more popular DIY crimeware packs.

Some of the advantages to this approach include:
  1. The user's personal data isn't kept on their local machine. This means that should the device be compromised for whatever reason, this information couldn't be copied because it doesn't exist on the user's personal device.
  2. So many infection vectors target the Web browser. If the Web browser exists in the cloud, then the user's device will be safe - hopefully implying that whoever's hosting the cloud-based browser software is better at patch management than the average Joe.
  3. Security can be centralized in the cloud. All of the host-based and network-based defenses can be run by the cloud provider - meaning that they'll be better managed and offer a more extensive array of cutting-edge protection technologies.
  4. Any files downloaded, opened or executed are handled within the cloud - not on the user's local device. This means that malicious content never makes its way down to the user's device, so it could never get infected.
That sounds pretty good, and it would successfully counter the most common flaws that criminals exploit today to target and compromise their victims. However, like all proposed security strategies, it's not a silver bullet to the threat. If anything, it alters the threat landscape in a way that may be more advantageous for the more sophisticated criminals. For example, here are a couple of likely weaknesses with this approach:
  1. The end device is still going to need an operating system and network access. As such it will remain exposed to network-level attacks. While much of the existing cybercrime ecosystem has adopted "come-to-me" infection vectors (e.g. spear phishing, drive-by-download, etc.), the "old" network-based intrusion and automated worm vectors haven't gone away and would likely rear their ugly heads as the criminals make the switch back in response to cloud-based terminal hosting.
    As such, the device would still be compromised, and it would be reasonable to expect that the criminals would promote and advance their KVM capabilities (i.e. remote keyboard, video and mouse monitoring). This would allow them to not only observe, but also inject commands as if they were the real user. The net result for the user and the online bank or retailer is that fraud is just as likely and probably quite a bit harder to spot (since they'd lose visibility of what the end device actually is - with everything looking like the amorphous cloud provider).
  2. The bad guys go where the money is. If the data is where they make the money, then they'll go after the data. If the data exists within the systems of the cloud provider, then that's what the bad guys will target. Cloud providers aren't going to be running any more magical application software than the regular home user, so they'll still be vulnerable to new software flaws and 0-day exploitation. This time though, the bad guys would likely be able to access a lot more data from a lot more people in a much shorter period of time.
    Yes, I'd expect the cloud providers to take more care in securing that data and have more robust systems for detecting things that go astray, but I also expect the bad guys to up their game too. And, based upon observing the last 20 years of cybercrime tactics and attack history, I think it's reasonable to assume that the bad guys will retain the upper-hand and be more innovative in their attacks than the defenders will.
I do think that, on average, more people would be more secure if they utilized cloud-based virtual systems. In the short term, that security improvement would be quite good. However, as more people adopted the same approach and shifted to the cloud, more bad guys would be forced to alter their attack tools and vectors.

I suspect that the bad guys would quickly be able to game the cloud systems and eventually obtain a greater advantage than they do today (mostly because of the centralized control of the data and homogeneity of the environment). "United we stand, divided we fall" would inevitably become "united we stand, united we fall."

Wednesday, April 11, 2012

IP's and the APT

Most of the good thrillers I seem to have watched in recent years have spies and assassins in them for some diabolical reason. In those movies you’ll often find their target, the Archduke of Villainess, holed up in some remote locale, and the spy has to fake an identity in order to penetrate the layers of defense. Almost without exception the spy enters the country using a fake passport; relying upon a passport from any country other than their own.

Like any good story, there’s enough truth to the fiction to make it believable. Take the real-life example of the hit squad that carried out the assassination of a Hamas official in Dubai early 2010. That squad (supposedly Israeli) used forged passports from the United Kingdom, Ireland, France and Germany.

So, with that bit of non-fiction in mind, why do so many people automatically assume that cyber-attacks sourced from IP addresses within China are targeted, state-sponsored attacks? Are people missing the plot? Has the Chinese APT leapfrogged fact and splatted into the realm of mythology already?

If you’re manning a firewall or inspecting IPS log files, you can’t have missed noticing that there’s a whole bunch of attacks being launched against your organization from devices hosted in China on a near continuous basis. A sizable fraction of those attacks would be deemed “advanced”; meaning that as long as they’re more advanced than the detection technology you happen to be reliant upon, they’re as advanced as they need to be to get the job done.

Are these the APTs of lore? Are these the same things that government defense departments and contractors alike quake in their boots over? There’s a simple way to tell. If what you’re observing in your own logs shows the source as being a Chinese IP address, it almost certainly isn’t.

Yes, there’s a tremendous amount of attack traffic coming from China, but this should really be categorized as the background hum of the modern Internet nowadays. China, as the most populous country on the planet, isn’t exempt from having more than its fair share of Internet scoundrels, wastrels, hackers and cyber-criminals — spanning the full spectrum of technical capability and motivations. Even then, the traffic originating from China may not be wholly from criminals based there — it may also contain attack traffic tunneled through open proxies and bot-infected hosts within China by other international cyber-criminals.

Mind you, when we’re talking about cyber-warfare and state-sponsored espionage, we’re not talking about a bunch of under-graduate hackers.

Just about every country I can think of with a full-time professional military force has been investing in their cyber capabilities – both defense and attack. While they’re not employing the crème de la crème of professional hacking talent, they are professional and have tremendous resources behind them, and they follow a pretty strict and well thought-out doctrine. If you’re in the Chinese Army and have been tasked with facilitating a particular espionage campaign or to aid a spy mission, the last thing on earth you’re going to do is to launch or control your assets from an IP address that can be easily traced back to China. Anywhere else in the world is good, and an IP address in a country that your foe is already suspicious of (or fully trusting of) is way better.

Don’t get me wrong though, I’m not singling out the Chinese for any particular reason other than that most readers will be familiar with the hoopla and epic proportions of Chinese APTs in the media. Any marginally competent adversary is going to similarly launch their attacks from a foreign source if they’re planning on maintaining deniability should the attack ever be noticed – just like the spy tactic of using foreign passports.

So, if you’re so inclined, how are you going to get access to foreign resources that can proxy and mask your attacks? Elementary, my dear Watson – there’s a market for that. First of all, there’s a whole bunch of free and commercial anonymizing proxies, routers and VPNs out there – but they may not be stable enough for conducting a prolonged campaign (and besides, they’re probably already penetrated by a number of government entities). Alternatively, you could buy access to already-compromised systems and hijack them for your own use.

Over the last five years a bunch of boutique threat-monitoring and threat-feed companies have sprung up, catering almost exclusively to the needs of various national defense departments. While they may offer 0-day vulnerabilities, reliable weaponized exploits and stealthy remote access Trojans, their most valuable offering in the world of state-sponsored espionage is arguably the feed of intelligence harvested from the sinkholes they control. Depending upon the type of sinkhole they’re fortunate enough to be operating, and which botnet or malware campaign happened to utilize the hijacked domain, they’ll have access to a real-time feed of known victim devices from around the world, copies of all the data leeched from the victims by the malware and, in some cases, the ability to remotely control the victim device. Everything a cyber-warfare unit needs to hijack and usurp control of a foreign host, and launch its stealthy attack from.

Now, if I were, say, working within the cyber-warfare team of the French Foreign Legion or perhaps the DGSE (General Directorate for External Security) and interested in gathering secret intelligence about the investments Chinese companies are making in sub-Saharan mineral resources, I’d probably launch my attack from a collection of bot-infected hosts located within US or Australian universities. The security analysts and incident response folks working at those Chinese companies are probably already seeing attack traffic from these sources off and on, so my more specialized and targeted attack would be unlikely to raise suspicion. Should the targeted attack eventually be discovered, the Chinese would simply blame the US and Australian governments – rather than the French.

Having said all that, you’ve probably seen movies with double agents in them too. And it’s entirely possible that someone hare-brained enough would argue that China launches attacks from its own IP space precisely because everyone knows that you shouldn’t, so an assumption would be made that attacks launched from China are clearly not from the Chinese government – when in fact they are. How very cunning. Now there’s a twist for the next spy movie.

Friday, April 6, 2012

Practical Malware Analysis - A Review

Off and on over the last few weeks I've been reading Michael Sikorski & Andrew Honig's latest book "Practical Malware Analysis".

As you'd expect given the title, the book covers the art of malware reverse engineering and analysis from a malware investigator's perspective - providing extensive coverage of the techniques that need to be mastered by folks who intend to make a career of such technical work. The tome of some 766 pages reads more like a textbook (complete with practical labs) than the reference-book format that many other similarly themed malware analysis books adopt.

A question I have when reading books such as this is "who's going to benefit from the book?". My first impression is that this book, while covering the spectrum of analysis techniques for an increasingly diverse array of threats, is probably most applicable to those folks just starting out in their IT security careers who are still exploring what they want to be when they grow up. I think this book would be an ideal text for a 200-level computer science course at college or university - and the included labs would sufficiently reinforce the learned material. It's likely that folks who have some working familiarity with the malware threat and have tinkered with incident response or basic malware forensics could use the book as a concise reference for malware analysis, but they would quickly move on to more specialized/focused books that target specific classes of threat (e.g. rootkits, packers, etc.).

Having employed and managed many malware analysts in the past for organizations such as X-Force, IBM and Damballa, my expectation is that the corpus of knowledge contained within Practical Malware Analysis represents the first year of their career - that is, by mastering the content of this book, the reader would be roughly equivalent to a junior analyst who has learned the basics "on the job" at a typical anti-virus company (identifying relevant features of the malware under study and developing signatures and clean-up scripts). Anyone beyond that level will need more specific books and material.

I like the fact that there's a broad spectrum of material covered in the book and that there are labs to reinforce the concepts. That said, I'd have preferred that the authors dove a little deeper into some of the automated techniques for handling armored malware, at the sacrifice of the helicopter-view chapters on shellcode analysis and IDA Pro.