Thursday, June 28, 2012

The Pilots of Cyberwar

As a bit of a history buff I can’t avoid a slight tingling of déjà vu every time I read some new story commenting upon the ethics, morality and legality of cyber-warfare/cyber-espionage/cyberwar/cyber-attack/cyber-whatever. All this rhetoric about Stuxnet, Flame, and other nation-state cyber-attack tools, combined with the parade of newly acknowledged cyber-warfare capabilities and units within the armed services of countries around the globe, brings to the fore so many parallels with the discussions about the (then) new-fangled use of flying-machines within the military in the run-up to WWI.

Call me a cynic if you will, but when the parallels in history are so evident, we’d be crazy to ignore them.

The media light that has been cast upon the (successful) deployment of cyber-weapons recently has many people in a tail-spin – reflecting incredulity and disbelief that such weapons exist, let alone have already been employed by military forces. Now, as people begin to understand that such tools and tactics have been fielded by nation-states for many years prior to these most recent public exposures, reactions run from calls for regulation through to global moratoriums on their use. Roll the clock back 100 years and you’ll have encountered pretty much the same reaction to the unsporting use of flying-machines as weapons of war.

That said, military minds have always sought new technologies to gain the upper hand on and off the battlefield. Take for example Captain Bertram Dickenson’s statement to the 1911 Technical Sub-Committee for Imperial Defence (TSID), which was charged with considering the role of aeroplanes in future military operations:

“In case of a European war, between two countries, both sides would be equipped with large corps of aeroplanes, each trying to obtain information on the other… the efforts which each would exert in order to hinder or prevent the enemy from obtaining information… would lead to the inevitable result of a war in the air, for the supremacy of the air, by armed aeroplanes against each other. This fight for the supremacy of the air in future wars will be of the greatest importance…”

A century later, substitute “cyber-warriors” for aeroplanes and “Internet” for air, and you’d be hard-pressed to tell the difference from what you’re seeing in the news today.

Just as the prospect of a bomb falling from the hands of an aviator hanging out the cockpit of a zeppelin or biplane fundamentally changed the design of walled fortifications and led to the development of anti-aircraft weaponry, new approaches to securing the cyber-frontier are needed and underway. Then, as now, it wasn’t until civilians were alerted to (or encountered first-hand) the reality of the new machines of war that an appreciation of these fundamental changes took hold.

But there are a number of other parallels between WWI (and the birth of aerial warfare) and where cyber-warfare is today that I think are interesting too.

Take for example how the aviators of the day thought of themselves as being different and completely apart from the other war-fighters around them. The camaraderie of the pilots who, after spending their day trying to shoot down their counterparts, were only too happy to have breakfast and exchange stories over a few stiff drinks with the downed pilots of the other side is legendary. I’m not sure if it was mutual respect, or the sharing of a common heritage that others around them couldn’t understand, but the net result was that that first breed of military aviator found more in common with their counterparts than with their own side.


Today, I think you’ll likely encounter the equivalent social scene: introverted computer geeks who, by way of day-job, develop the tools that target and infiltrate foreign installations for their country, yet attend the same security conferences and reveal their latest evasion tactic or privilege escalation technique over a cold beer with one another. Whether it’s because the skill-sets are so specialized, or because the path each cyber-warrior had to take in order to acquire those skills was so influential upon their world outlook, many of the people I’ve encountered that I would identify as being capable of truly conducting warfare within the cyber-realm share more in common with their counterparts than they do with those tasking them.

When it comes to protecting a nation, cries of “that’s unfair” or “un-sporting” should be relegated to the “whatever” bucket. Any nation’s military, counter-intelligence organization, or other agency tasked with protecting its citizens would be catastrophically failing in its obligations if it weren’t already actively pursuing new tools and tactics for the cyber-realm. Granted, just as the military use of aircraft in WWI opened a Pandora’s box of armed conflict that changed the world forever, we’ve been building towards the state we’re in ever since the first bytes traversed the first network.

The fact that a small handful of clandestine, weaponized cyber-arms have materialized within the public realm doesn’t necessarily represent a newly opened Pandora’s box – instead it merely reflects one of the evils from a box that was opened the moment the Internet was born.

Monday, June 18, 2012

Botnet Metrics: Learning from Meteorology

As ISPs continue to spin up their anti-botnet defenses and begin taking a more active role in dealing with the botnet menace, more and more interested parties are looking for statistics that help define both the scale of the threat and the success of the various tactics being deployed. But, as I discussed earlier in the year (see “Household Botnet Infections“), it’s not quite so easy to come up with accurate infection (and subsequent remediation) rates across multiple ISPs.

Several initiatives are grappling with this problem at the moment – and a number of perspectives, observations and opinions have been offered. Obviously, if every ISP were using the same detection technology, in the same way, at the same time, it wouldn’t be such a difficult task. Unfortunately, that’s not the case.

One of the methods I’m particularly keen on is leveraging DNS observations to enumerate the start-up of conversations between the victim’s infected device and the bad guys’ command-and-control (C&C) servers. There are of course a number of pros and cons to the method – such as:
  • DNS monitoring is passive, scalable, and doesn’t require deep-packet inspection (DPI) to work [Positive],
  • Can differentiate between and monitor multiple botnets simultaneously from one location without alerting the bad guys [Positive],
  • Is limited to botnets whose malware makes use of domain names for locating and communicating with the bad guys’ C&C [Negative],
  • Not all DNS lookups for a C&C domain are for C&C purposes [Negative].
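The core of the technique can be sketched in a few lines. This is a hypothetical illustration only – the domain names, log format, and field layout are invented, not taken from any real ISP deployment:

```python
# Hypothetical sketch: enumerating likely C&C conversations from passive DNS
# query logs by matching lookups against a list of known C&C domains.
# Domain names and the log format below are illustrative assumptions.

from collections import defaultdict

KNOWN_CC_DOMAINS = {"evil-cc.example.net", "botpanel.example.org"}

def count_cc_lookups(dns_log_lines):
    """Return {c&c domain: set of client IPs that looked it up}."""
    victims = defaultdict(set)
    for line in dns_log_lines:
        # Assumed log format: "<timestamp> <client_ip> <qname>"
        ts, client_ip, qname = line.split()
        domain = qname.rstrip(".").lower()
        if domain in KNOWN_CC_DOMAINS:
            victims[domain].add(client_ip)
    return victims

log = [
    "1340000000 10.0.0.5 evil-cc.example.net.",
    "1340000001 10.0.0.5 evil-cc.example.net.",
    "1340000002 10.0.0.9 botpanel.example.org.",
]
print({d: len(ips) for d, ips in count_cc_lookups(log).items()})
```

Note that the unit being counted here is an IP address, not a device – which is exactly the limitation discussed next.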
On top of all this lies the added complexity that such observations are conducted at the IP address level (and things like DHCP churn can be troublesome). This isn’t really a problem for the ISP of course – since they can uniquely tie the IP address to a particular subscriber’s network at any time.
One problem that persists though is that a “subscriber’s network” is increasingly different from “a subscriber’s infected device”. For example, a subscriber may have a dozen IP-enabled devices operating behind their cable modem – and it’s practically impossible for an external observer to separate one infected device from another operating within the same small network without analyzing traffic with intrusive DPI-based systems.

Does that effectively mean that remote monitoring and enumeration of bot-infected devices isn’t going to yield the accurate statistics everyone wants? Short of being omnipresent, the answer will have to be yes – but that shouldn’t stop us. What it means is that we need to use a combination of observation techniques to arrive at a “best estimate” of what’s going on.

In reality we already have a similarly complex monitoring (and prediction) system that everyone is happy with – one that parallels the measurement problems faced with botnets – even if most people don’t understand it. When it comes to monitoring the botnet threat, the security industry could learn a great deal from atmospheric physicists and professional meteorologists. Let me explain…

When you look up the weather for yesterday, last week, or the 4th of July last year, you’ll be presented with numerous statistics – hours of sunshine, inches of rainfall, wind velocities, pollen counts, etc. – for a particular geographic region of interest. The numbers being presented to you are composite values of sparse measurements.

To arrive at the conclusion that 0.55 inches of rainfall fell in Atlanta yesterday and 0.38 inches fell in Washington DC over the same period, it’s important to note that there wasn’t any measurement device sitting between the sky and land that accurately measured that rainfall throughout those areas. Instead, a number of sparsely distributed land-based point observations of the liquid volume of the rain (e.g. rain gauges) were combined with a number of indirect methods (e.g. flow meters within major storm drain systems) and broad “water effect” methods (e.g. radar) to determine an average for the rainfall. This process was repeated throughout the country, using similar (but not necessarily identical) techniques, and an “average” was derived for the event.

That all sounds interesting, but what are the lessons we can take away from the last 50 years of modern meteorology? First and foremost, the use of accurate point measurements as a calibration tool for broad, indirect monitoring techniques.

For example, one of the most valuable and accurate tools modern meteorology uses for monitoring rainfall doesn’t monitor rain, or even the liquid component of the droplets – instead it monitors electromagnetic reflectivity. Yes, you guessed it – it’s the radar of course! I could get all technical on the topic, but essentially meteorological radars measure the reflection of energy waves from (partially) reflective objects in the sky. By picking the right wavelength, a radar gets better at detecting different sized objects (e.g. planets, aircraft, water droplets, pollutant particulates, etc.). So, when it comes to measuring rain (well, lots of individual raindrops simultaneously to be more precise), the radar system measures how much energy of a radar pulse was returned and at what time (the time component helps to determine distance).

Now radar is a fantastic tool – but by itself it doesn’t measure rainfall. Without getting all mathematical on you, the reflected energy grows steeply with the size of an individual raindrop (with the sixth power of its diameter, in fact) – which means that a few slightly larger raindrops in the sky will completely skew the energy measurements of the radar – meanwhile, the physical state of the “raindrop” also affects reflectivity. For example, a wet hailstone reflects much more energy than an all-liquid drop. There are a whole bunch of non-trivial artifacts of rainfall (you should check out things like “hail spikes” for example) that have to be accounted for before the radar observations can be used to derive the rainfall at ground level.
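To make the reflectivity-to-rainfall step concrete, here’s the classic Marshall–Palmer Z–R relation, Z = 200·R^1.6, which converts a radar reflectivity factor into an estimated rain rate. The constants 200 and 1.6 are the textbook values; operational radars calibrate their own constants against gauge networks:

```python
# Illustrative only: invert the Marshall-Palmer Z-R relation, Z = a * R^b,
# to estimate rain rate R (mm/h) from radar reflectivity measured in dBZ.
# a=200, b=1.6 are the classic textbook constants, not site-calibrated values.

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Convert reflectivity in dBZ to an estimated rain rate in mm/h."""
    z = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity factor Z
    return (z / a) ** (1.0 / b)   # invert Z = a * R^b for R

for dbz in (20, 40, 55):
    print(f"{dbz} dBZ -> {rain_rate_from_dbz(dbz):.1f} mm/h")
```

Running this shows why calibration matters: a drizzle-like 20 dBZ maps to well under 1 mm/h, while 55 dBZ (often hail-contaminated) maps to around 100 mm/h – a huge dynamic range riding on two tunable constants.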

In order to overcome much of this, point measurements at ground level are required to calibrate the overall radar observations. In the meteorological world there are two key technologies – rain gauges and disdrometers. Rain gauges measure the volume of water observed at a single point, while disdrometers measure the size and shape of the raindrops (or hail, or snow) that are falling. Disdrometers are pretty cool inventions really – and the last 15 years have seen some amazing advancements, but I digress…

How does this apply to Internet security and botnet metrics? From my perspective DNS observations are very similar to radar systems – they cover a lot of ground, to a high resolution, but they measure artifacts of the threat. However, those artifacts can be measured to a high precision and, when calibrated with sparse ground truths, become part of a highly economical and accurate system.

In order to “calibrate” the system we need to use a number of point observations. By analogy, C&C sinkholes could be considered rain gauges. Sinkholes provide accurate measurements of the victims of a specific botnet (albeit, only a botnet whose C&C has already been “defeated” or replaced) – and can be used to calibrate DNS observations across multiple ISPs. ISPs that each observe DNS slightly differently (e.g. using different static reputation systems, outdated blacklists, or advanced dynamic reputation systems) could use third-party sinkhole data for a specific botnet that they’re already capable of detecting via DNS as a calibration point – scaling and deriving victim populations for all the other botnets within their networks.
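The calibration arithmetic itself is simple. A hypothetical sketch – the botnet names and every number below are invented for illustration:

```python
# Hypothetical calibration sketch: use a sinkholed botnet as "ground truth"
# (the rain gauge) to scale DNS-derived victim counts for other botnets
# (the radar). All botnet names and counts are invented for illustration.

def calibration_factor(sinkhole_victims, dns_observed_victims):
    """How many real victims each DNS-observed victim represents."""
    return sinkhole_victims / dns_observed_victims

def estimate_populations(dns_counts, factor):
    """Scale raw DNS-observed counts into estimated victim populations."""
    return {botnet: round(count * factor) for botnet, count in dns_counts.items()}

# The sinkhole for "botnet-a" logged 12,000 unique victims inside this ISP,
# while the ISP's own DNS vantage point only observed 9,600 of them.
factor = calibration_factor(12_000, 9_600)  # each DNS sighting ~ 1.25 victims

# Apply that factor to botnets we can only see via DNS.
dns_counts = {"botnet-b": 4_000, "botnet-c": 700}
print(estimate_populations(dns_counts, factor))
```

The single scaling factor is obviously a simplification – in practice you’d want per-botnet-family factors, since lookup behaviour differs between malware families.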

Within their own networks, ISPs could also employ limited-scale and highly targeted DPI systems to gauge a specific threat within a specific set of circumstances. This is a little analogous to the disdrometer within meteorology – determining the average size and shape of events at a specific point, but not measuring the liquid content of the rainfall directly either. Limited DPI techniques could target a specific botnet’s traffic – concluding, for example, that the bot agent installs 5 additional malware packages upon installation, each of which in turn attempts to resolve 25 different domain names, yet all are part of the same botnet infection.

Going forward, as ISPs face increased pressure not only to alert but to protect their subscribers from botnets, there will be increased pressure to disclose metrics relating to their infection and remediation rates. Given the consumer choice of three local ISPs offering the same bandwidth for the same price per month, the tendency will be to go for the provider that offers the most online protection. In the past that may have meant how many dollars of free security software they bundled in. Already people are looking for proof that one ISP is better than another at securing them – and this is where botnet metrics will become not only important, but also public.

Unfortunately it’s still early days for accurately measuring the botnet threat across multiple ISPs – but that will change. Meteorology is a considerably more complex problem, but meteorologists and atmospheric physicists have developed a number of systems and methods to derive numbers that the majority of us are more than happy with. There is a lot to be learned from the calibration techniques used and perfected in the meteorological field for deriving accurate and useful botnet metrics.

Saturday, June 2, 2012

Computer Herpes

The other day I came across a rather nice dissection of the HerpesNet malware agent (sometimes referred to as Mal/HerpBot-B) – carried out by the crew at malware.lu. Apart from the rather interesting name given to the malware and its associated remote C&C panel, there’s nothing particularly special about the functionality of the bot agent – it offers all the malicious features you’d expect the criminals to want.

What makes the malware.lu dissection so interesting is the enumeration of remotely exploitable vulnerabilities within the C&C tasked with controlling all the botnet victims. This in itself isn’t unexpected – since the majority of malware authors are pretty poor coders – priding themselves on the features they include rather than the integrity and security of their coding practices. In fact, bug hunting in malware and botnet C&C software has practically become its own commercial business – as many boutique security firms now reverse engineer the bad guys’ tools and sell the uncovered remotely exploitable flaws they find to various law enforcement and government intelligence agencies.

The 80KB crimeware agent for this small botnet (7,000-8,000 victims) attempts some level of obfuscation by encoding its control strings with 00406FC0h – revealing the following command-related domains and URLs:
  • http://dd.zeroxcode.net/herpnet/
  • http://www.zeroxcode.net/herpnet/
  • http://frk7.mine.nu/herpnet/
  • ftp.zeroxcode.net
  • upload@zeroxcode.net
Armed with information about the location of the C&C server and the method of communication (HTTP POST commands), the crew at malware.lu performed a free security assessment of the www.zeroxcode.net server and uncovered a number of remotely exploitable SQL injection vulnerabilities which not only allowed them to enumerate the entire content of the botnet’s data storage area (e.g. victim data), but also to uncover the criminals’ passwords for the server. Armed with that information, malware.lu proceeded to gain full interactive control of the host – including the C&C management console for the botnet.

As is so typical of small “starter” botnets such as this, their criminal overlords tend to make a number of critical mistakes – such as using the server for other non-botnet-related tasks, infecting themselves with their own crimeware agent, and forgetting to remove their own stolen data from the C&C database. Easily half of the botnet C&C servers encountered by Damballa Labs contain key identifying information about the server’s criminal overlord, thanks to them testing their malware agents on themselves and forgetting to remove that data from the database. As you’ve already guessed, this Herpes botnet mastermind was no different… Say hello to “frk7″, aka “Francesco Pompo”.


Image courtesy of malware.lu.


I’m guessing life has suddenly become much more complicated for Francesco. His botnet has been hijacked, all of his aliases and online identities have been enumerated, both he and his girlfriend have had their personal photos accessed and plastered over the Internet, and his passwords to his accounts have been disclosed. I think his Twitter account has now been suspended too.

As someone who’s come from a penetration testing and vulnerability discovery background, it’s amusing to me how the malware.lu hack proceeded. There’s nothing groundbreaking in what they did – they followed a standard methodology that dates back a decade to the early editions of the Hacking Exposed books – tactics and methods many professionals use on a routine basis, with one exception… somehow I doubt that poor Francesco gave his permission for this unscheduled evaluation of his server. I’m hoping that the countries in which the malware.lu crew members live are a little more flexible with their anti-hacking laws than any of the countries I’ve lived in over the years. I suspect that while I’d get a pat on the shoulder with one hand had I done this, I’d also be getting adorned with some unflattering steel bracelets and whisked off to a cold room with little in the way of scenery or comfort.