Saturday, December 19, 2009

The Botnet Helpdesk

So, you're planning on building your own botnet and, despite all the how-to videos on YouTube, you're still having problems building your botnet malware agent and getting your command & control to work like the videos said it would. What do you do? Well, if you purchased your DIY botnet creation kit from one of several "commercial" botnet providers, you'd contact their help-desk.

I kid you not. Several crimeware service providers go beyond 24x7 IRC and email support - now offering full online help-desks; complete with ticketing systems for tracking your "incident" and live virtual advisers.

For a full analysis of one of these botnet service providers - check out my latest blog entry over on the Damballa site - The Botnet Distribution and Helpdesk Services.

Thursday, December 17, 2009

Anti-antivirus Testing Services


If you're a professional botnet operator, the malware agents you use are critical. To guarantee successful operation of the botnet agent and avoid detection on the victim's computer, it needs to be tested. Today there is a growing service industry focused on providing anti-antivirus detection and malware QA to cybercriminals.

I've been playing around with anti-antivirus testing services and posted a new blog entry covering Virtest.com over on the Damballa site - Malware QA and Exploit Testing Services

Tuesday, December 8, 2009

Extracting CnC from Malware

I've been asked quite a bit about the risks and value of automatic malware analysis within the enterprise over the last few months. There are of course a lot of technologies that enterprises can purchase and deploy within their networks to take in suspicious samples and classify them as benign or malicious.

Most of these technologies use a mix of signature and behavioral engines, although there's been a greater push recently to use virtual/sandboxing technologies as well (or as a replacement). I'm not convinced this is such a smart idea. The tools being used to create new families and serial variants of malware tend to be more sophisticated nowadays than what's being used to thwart them at the perimeter network. In fact practically anyone with the ability to use Google and permissions to install software on a computer can download many of the DIY malware construction kits and start generating crimeware that's guaranteed to defeat most of these commercial VM/sandboxing technologies - some will even enable the would-be cybercriminal to use exploits to break out of the sandbox.

Anyhow, I've pulled together a whitepaper discussing the use of such technologies in obtaining botnet command and control information - and the limitations of such technologies within the enterprise.

"Extracting CnC from Malware" is now available on the Damballa web site.

Saturday, December 5, 2009

Couple of NASA.Gov Sites Hacked

I was just browsing a few blogs this evening and saw that NASA's Instrument Systems and Technology Division and their Software Engineering Division web sites were hacked and found to be vulnerable to what looks like SQL Injection as well as poor access controls. There may be a few other things going on there, but the details were pretty sparse, and I wasn't really looking to start probing the sites myself to find out what they're precisely vulnerable to.

The screenshot to the left shows access to the page editing functions of the site. NASA needs to get these sites secure as soon as possible. Any script-kiddie could walk in there and start adding their favorite drive-by download exploits as it stands.

The admin credentials (35 of them) were lifted off both Web servers by "c0de.breaker"

Original posting is over at TinKode.

Note: I've been advised that these vulnerabilities have been remediated.

Wednesday, November 25, 2009

Enterprise Botnets - Targeted or What?

What's the difference between these massive botnets gobbling up sizable chunks of the Internet and those found inside the enterprise? Quite a bit actually.

Over the last couple of months I’ve been talking at a number of conferences and speaking with customers about the kinds of botnets we observe within enterprise networks as opposed to what's generally seen propagating across the Internet at large. As you’d expect, there are a number of differences – partly because of the types of bad actors targeting businesses, and partly because enterprise perimeter security is considerably more advanced than that found at the end of the average DSL Internet connection.

From a cross-network visibility perspective, the types of botnets regularly encountered operating within enterprises in 2009 can best be divided (and described) as follows:

Internet Targeted – or “broad-spectrum” attacks, for want of a better description – account for approximately half of all botnets regularly encountered inside enterprise networks. These botnets aren’t targeted at any particular network – just at the average Internet user – but they typically manage to infiltrate enterprise networks due to lax security policies and as bleed-over from the other networks (and devices) employees may connect to. I discussed some of this in an earlier blog – Botnet bleed-over in to the enterprise – in which botnets designed to steal online gaming authentication credentials often appear within the enterprise. Just about all of these broad-spectrum botnets can self-propagate using an assortment of built-in worming capabilities. Fortunately, just about every one of these botnets is easily detected with standard host-based antivirus products.

What this means in practice however is that hosts “not quite” adhering to the corporate security policy, or which are a little behind in applying the latest patches (including not running the very latest signatures for their antivirus package), are the first to fall victim – and no organization I’ve observed in the last 20 years has ever managed to implement their security uniformly throughout the entire enterprise.

I foresee that these “broad-spectrum” botnets will continue to appear within enterprises and be a nuisance to enterprise security teams. That said though, just because they aren’t targeted and fixes are available, it doesn’t mean that there’s no threat. If a particular botnet agent doesn’t yield value to its original botnet master (e.g. a botnet focused on obtaining passwords for social networking sites), it is quickly passed on to other operators that can make money from it – repurposing the compromised host and installing new malware agents that will yield value to the new owner.

Enterprise Targeted botnets are botnets that are hardly ever found circulating the Internet, and are designed to both penetrate and propagate within enterprise networks alone. Around 35% of botnets encountered within enterprise networks are this type. They are typically based upon sophisticated multi-purpose Remote Access Trojans (RAT); often blended with worming functions capable of using exploits against standard network services (services that are typically blocked by perimeter firewall technologies). Perhaps the most visible identifier of a botnet targeted at enterprises is the native support for network proxies – i.e. they’re proxy-aware – and capable of leveraging the user's credentials for navigating command and control (CnC) out of the network.

In general, these “targeted” botnets aren’t targeted at a specific organization, but at a particular industry (e.g. online retail companies) or category of personnel within the organization (e.g. the CFO). The botnet agents tend to be more advanced (on average) than most botnet malware encountered within enterprise networks – offering greater flexibility for the botnet masters to navigate the network and compromise key assets, and to extract any valuable information they manage to obtain.

Deep Knowledge botnets are a completely different beast. Accounting for 10% of the botnets encountered within typical enterprise networks, these botnets tend to rely upon off-the-shelf malware components (more often than not, being built from commercial DIY malware creator kits). Depending upon the investment made by the botnet master, the features of the botnet agent can be very sophisticated or run-of-the-mill. What makes them so dangerous though is that the creator (who is often the botnet master) has a high degree of knowledge about the infiltrated enterprise – and already knows where to find all the valuable information. In some cases specific people or systems are targeted as beachheads in to the organization, while in others key organization-specific credentials are used to navigate the network.

Where this “deep knowledge” comes from can vary considerably. Each botnet within this category tends to be unique. I’ve come to associate these botnets with past or present employees (rather than industrial espionage) – as it’s not uncommon to be able to associate the CnC server of the botnet to a DSL or cable Internet IP address in the same city as the office or building that has been breached. In some cases I wouldn’t be surprised if the installation of these botnet agents were conducted by hand as a means of (semi)legitimate remote administration (think back to the problem in the mid-1990’s when people were installing modems in to their work computers so they could access them remotely). The problem though is that most of these commercial DIY malware construction kits have been backdoored by their creators (or “partners” in their distribution channel) – which means that any corporate assets infected with the botnet agent will find themselves under the control of multiple remote users.

“Other” represents the catch-all for the remaining 5% of botnets encountered within enterprise networks. These botnets (and the malware they rely upon) vary considerably in both sophistication and functionality, and don’t fit neatly in to any of the previous three categories. They include the small botnets targeted at an organization for competitive advantage, through to what can only be guessed at as being state-sponsored tools targeting specific industries and technologies.

It’ll be interesting to see how the distribution of these four categories of botnets change in 2010. I suspect that the proportions will remain roughly the same – with the “other” category decreasing over time, and being largely absorbed in to the “Enterprise Targeted” category rather than “Deep Knowledge”.

==> Reposted from http://blog.damballa.com/

Monday, November 23, 2009

Symantec Site Vulnerable to Blind SQL Injection

It looks as if Symantec has a bit of a problem with Blind SQL Injection. Unu has uncovered the vulnerability lying in one of Symantec's public Internet portals.

Using a couple of off-the-shelf tools - Pangolin and sqlmap - it's possible to enumerate the back-end databases supporting the public Internet web site - and this is what Unu appears to have done.

Blind SQLi isn't a particularly sophisticated vulnerability, but it is often a labor intensive type of attack - not to mention rather noisy (due to the repeated requests and incremental guessing of characters that make up the database objects). That said, there are a bundle of tools out there that'll do all this work for you - so you don't need to be particularly security-savvy to do this. In fact you probably don't even need to know what SQL is since the tools take care of everything for you.
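For the curious, here's roughly what that "incremental guessing" looks like under the hood - a minimal Python sketch of boolean-based blind SQLi extraction, with the request/response plumbing deliberately stubbed out (the oracle() function below is a placeholder, not a working exploit; tools like sqlmap and Pangolin automate this entire loop against a real parameter):

```python
# Minimal sketch of boolean-based blind SQL injection extraction (illustration only).
# oracle() stands in for "send a request whose injected condition makes the page
# respond differently for TRUE vs FALSE" - the part the off-the-shelf tools automate.

def oracle(condition: str) -> bool:
    """Return True if the injected boolean condition holds on the target.
    Stubbed out here; a real tool would issue an HTTP request and compare
    the response against known TRUE/FALSE baselines."""
    raise NotImplementedError

def extract_string(expression: str, max_len: int = 64) -> str:
    """Recover the value of a SQL expression one character at a time,
    binary-searching each character code (roughly 7 requests per character)."""
    recovered = ""
    for pos in range(1, max_len + 1):
        lo, hi = 0, 127
        while lo < hi:
            mid = (lo + hi) // 2
            if oracle(f"ASCII(SUBSTRING(({expression}),{pos},1)) > {mid}"):
                lo = mid + 1
            else:
                hi = mid
        if lo == 0:          # no character at this position - end of the string
            break
        recovered += chr(lo)
    return recovered
```

At roughly seven requests per character, even a short column value costs a few hundred round-trips - which is exactly why this class of attack is so noisy, and why spreading those requests across many hosts changes the economics completely.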

I discussed some of this the other week at the OWASP conference. Today these kinds of tools and features are becoming standard within botnets - which means that exploitation of these vulnerabilities and enumeration of the database's data can be conducted in a few minutes - way before a security team can actively respond to the attack and close down the breach and loss of confidential data.

After enumerating the Symantec Web server, it would seem that there is data covering a number of Symantec products - Oasis, Northwind, OneCare - as well as a couple of very interesting storage points relating to Norton and SymantecStore.

Based upon what's visible on Unu's site, the Symantec store contains over 70,000 rows - which appear to be customer records, complete with clear-text passwords - that's bad and dumb! (Symantec should know better).
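As an aside, the fix for the clear-text problem has been understood for decades: store only a salted, deliberately slow hash of each password. Here's a minimal sketch of the pattern using Python's standard library (purely illustrative - PBKDF2 chosen as an example of the approach, not a statement about what Symantec's store should or does run):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000) -> str:
    """Derive a salted PBKDF2-HMAC-SHA256 hash; only this string gets stored."""
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${dk.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash using the stored salt/iteration count and compare."""
    _, iterations, salt_hex, dk_hex = stored.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             bytes.fromhex(salt_hex), int(iterations))
    return hmac.compare_digest(dk.hex(), dk_hex)
```

A SQLi-driven dump of a table protected this way still hurts, but it doesn't hand the attacker every customer's password on a plate.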

Oh, and there appears to be something like 122k records associated with product serial numbers.

I'm hoping that Symantec are dealing with this vulnerability and closing it down (as it's not clear whether Unu provided Symantec with prior knowledge of this vulnerability). In the meantime, they may want to start looking for a new security vendor to do some WebApp pentests.

Tuesday, November 17, 2009

IBM, OWASP's O2 and Dinis

Last week I was in Washington DC speaking at the annual OWASP AppSec conference. While there, an acquaintance of mine - Dinis Cruz - posted a series of blogs concerning IBM, Ounce Labs, OWASP's O2 project and his mix in the equation - as well as presenting on the status of O2. The crux of the blog series covers Dinis' analysis of why the recent purchase and integration of Ounce Labs in to IBM could work (but isn't), and where O2 might find a home.

A few people have commented on the blog series - most notably RSnake - in particular as it relates to the O2 project.

To be perfectly honest I'm not that familiar with the O2 project - having never gotten my hands dirty playing with it - but I know from experience how valuable similar tool integration frameworks are. From a pure-play consulting perspective, the ability to automate the dissection of results from multiple static analysis tools is money in the bank, and as such most security consulting practices offering code analysis services have typically invested their own time and money building similar tools. But custom integration paths are a substantial cost to consulting companies - so an Open Source framework has a lot of appeal (if it's good enough).

That said, Open Source projects like O2 typically have little to no appeal for any but the smallest MSSP and SaaS providers. Such service providers - seeking to build managed offerings around the integration and consolidated output of commercial (and freeware) tools - suffer from intense pressure by investors (and potential acquisition/merger partners) to not include Open Source code due to licensing and intellectual property disclosure concerns. Taking O2 down a commercial route eventually (or offering a separate route like SNORT/SourceFire) would however have an appeal in these cases.

Shifting focus back to IBM and the acquisition and integration of Ounce Labs technology in to the Rational software portfolio - I share several of Dinis' concerns. From what I understand (and overheard at the OWASP conference), the Ounce Labs technologies are rolling under the Watchfire product team and being integrated together - which I would see as a sensible course of action, but would effectively mean the end of the "Ounce Labs" brand/product label. Not that that really matters to the market, but it does tend to turn off many of the employees that transitioned to IBM as part of the acquisition. Having said all that though, the Watchfire team are a bunch of very smart people and they were already well on the way to having developed their own static analysis tools that would have directly competed with Ounce Labs (at least in the Web-based language frameworks) - so this current integration is largely a technology-path accelerator rather than a purchase of new technology.

Dinis proposes a story - well, more of a "plot" - in which IBM can fulfil the requirements of a fictitious customer with an end-to-end solution. His conclusion is that IBM has all the necessary components and is more than capable of building the ultimate solution - but it's going to be a hard path and may never happen in practice.

I can understand the motivations behind his posts - particularly after personally passing through the IBM acquisition and integration of ISS. IBM has so much potential. It has some of the brightest researchers I have ever encountered in or out of academia and some of the best trained business executives in the world - however, it's a monster of a company and internal conflict over ownership (of strategy, the customer, and key concepts such as "security") between divisions and "brands" appears all too often to sink even the best laid plans or intentions.

My advice to Dinis in making up his mind whether to stay with IBM or to move on would be this... if you enjoy working on exciting problems, inventing new technologies and changing focus completely every 2-4 years, but aren't overly concerned whether your research and technology will actually make it to a commercial product - then IBM is great (you can even start planning your retirement). However, if you're like me and the enjoyment lies in researching new technologies and solving problems that customers will actually use and that will be commercially available in the same year (or decade?) you worked on them, then it's unlikely you'd find IBM as fulfilling. IBM's solution momentum is unstoppable once it gets going - but it takes a long time to get things rolling and it's pretty hard to change course once it's rolling.

Sunday, November 15, 2009

"Responsible Disclosure" - Friend or Foe

It's been an interesting weekend on the "responsible disclosure" front. Reactions and tweet threads from several noted vulnerability researchers in response to K8em0's blog post (Behind the ISO Curtain) - most notably those of Halvar Flake via his post (Why are most researchers not a fan of standards on "responsible disclosure"?) - have been fast and (semi)furious.

On one hand it seems like a typical, dare I say it "annual", flareup on the topic. But then again, the specter of some ill-informed ISO standard being developed as a guide for defining and handling responsible disclosure was sure to escalate things.

To my mind, Halvar makes a pretty good argument for the cause that any kind of "standard" isn't going to be worth the paper it's printed on. I particularly liked the metaphor...
"if I can actually go and surf, why would I discuss with a bunch of people sitting in an office about the right way to come back to the beach ?"
But the discussion isn't going away...

While I haven't seen anything on this ISO project (ISO/IEC NP 29147 Information technology - Security techniques - Responsible Vulnerability Disclosure), I suspect strongly that it has very little to do with the independent vulnerability researchers themselves - and seems more focused on how vendors should aim to disclose (and dare I say "coordinate" disclosures) publicly. Most vendor-initiated vulnerability disclosures have been reasonably responsible - but in cases where multiple vendors are involved, coordination often breaks down and slivers of 'ir' appear in front of 'responsible'. The bigger and more important a multi-vendor security vulnerability is, the more likely its disclosure will be screwed up.

Maybe this ISO work could help guide software vendors in dealing with security researchers and better handling disclosure coordination. It would be nice to think so.

Regardless, I think the work of ICASI is probably more useful - in particular the "Common Frameworks for Vulnerability Disclosure and Response (CVRF)" - and would probably bleed over in to some ISO work eventually. There are only a handful of vendors participating in the consortium (Cisco, Microsoft, IBM, Intel, Juniper and Nokia), but at least they're getting their acts together and working out a solution for themselves. I may be a little biased though since I was briefly involved with ICASI when I was with IBM. Coordination and responsible disclosure amongst these vendors is pretty important - eat your own dog-food and all that lark.

At the end of the day, trying to impose standards for vulnerability disclosure upon independent researchers hasn't and isn't going to work - even if these "standards" were ever to be enshrined in to law.

Monday, November 9, 2009

Clubbing WebApps with a Botnet - OWASP AppSec 2009

Back from vacation, fully refreshed, and back to the blog (and conference speaking)...

This week I'll be in Washington DC for the annual OWASP US conference - AppSec USA 2009. I'm speaking Thursday morning (10:45am-11:30am) on the topic of "Clubbing Web Applications with a Botnet", where I'll be covering the threat to Web applications from botnets - in particular the way they can be (and are) used as force multipliers in brute-forcing and SQL Injection attacks.

A quick abstract for the talk is as follows:
The lonely hacker taking pot-shots at a Web application – seeking out an exploitable flaw - is quickly going the way of the dinosaur. Why try to hack an application from a solitary host using a single suite of tools when you can distribute and load-balance the attack amongst a global collection of anonymous bots and even ramp up the pace of attack by several orders of magnitude? If you’re going to _really_ hack a Web application for commercial gain, the every-day botnet is now core equipment in an attacker’s arsenal. Sure, DDoS and other saturation attacks are possible – but the real benefits of employing botnets to hack Web applications come from their sophisticated scripting engines and command & control which allow even onerous blind-SQL-injection attacks to be conducted in minutes rather than days. If someone’s clubbing your Web application with a botnet, where are your weaknesses and how much time have you really got?
I spoke briefly on the topic earlier this year at the OWASP Europe conference, but will be covering some new research in to techniques and trends - in particular the growing viability of Blind SQL Injection techniques.

If you happen to be in DC Thursday/Friday, drop by the conference. If you're already planning on attending the OWASP conference, make sure you attend my talk in the morning.

Saturday, October 17, 2009

"Add-ons may be causing problems" Says Firefox

So, it looks like the Mozilla folks have taken the initiative to block a couple of (pretty much) now default Microsoft Windows plug-ins that open up a few additional vectors for the bad guys to conduct drive-by-download attacks.

The two Firefox add-ons are the Microsoft .NET Framework Assistant and the Windows Presentation Foundation (as depicted in the screenshot of my system this evening).

Brian Krebs over at the Washington Post has a blog entry up (Mozilla Disables Microsoft's Insecure Firefox Add-on) covering more of the background on the topic and what led up to this latest Firefox response.

So, thumbs up to the Firefox team for taking the initiative here and working to protect their users. Keep up the good work.

Oh, and thanks also for the work with the new Plugin Check page. It's a great start to something that's been missing for quite some time (for mainstream users). There's still a lot of work to be done in figuring out which versions are installed (if my screenshot below is anything to go by) and helping to manage the update process. It's something I've been calling for for quite some time now (see the whitepaper - Understanding the Web Browser Threat) - but this is real progress.

Software Piracy and Host Compromise

This last week has seen quite a bit of public discussion concerning the effect of software piracy on compromise rates, based upon Monday's release of a report titled "Software Piracy on the Internet: A Threat To Your Security" by the Business Software Alliance (BSA) - pages 6-12 are definitely worth a read (the rest is a little too self-serving of the BSA).

I don't believe the report actually holds any surprises for most security professionals, but it's always handy to have some independent (and current) validation.

I can remember back to the old 1980's BBS days where piracy was just as rampant with online games and even the base BBS software being backdoored by folks looking to make a quick buck through their leeched warez. The only thing that has changed has been the channels for distribution.

In the past I've conducted a number of studies related to pirate distribution channels - looking at both the exploits and malware being embedded in the content. For example, back in 2001-2002 when image file exploits were all the rage (e.g. JPEG/PNG/GIF/etc. file parsing vulnerabilities) I set up an experiment to analyze the content of several popular binary newsgroup channels (ranging from some of the heavily trafficked porn groups through to celebrity and Disney image groups) and found that upwards of 5% of the copyrighted images being distributed contained exploit material (one popular vector was for the bad actors behind the attacks to respond to Repost Requests and Fills for missing images of popular collections).

A couple of years ago I repeated part of the experiment - but instead focusing on binary files (mostly games, Windows applications and keygens) - and found almost two-thirds of the newsgroup content was backdoored with malware. I'm pretty sure that if I was to run the experiment again today I'd find the malicious file percentage to be higher. And that's just the newsgroup distribution channel. The P2P networks tend to be worse because it's so much easier for others (potential victims) to stumble upon a malicious version of the pirated software - largely because it's a more efficient channel for criminals to operate under and they have a greater chance of enticing their victims (i.e. using faster P2P servers, constantly monitoring what's hot in file sharing, exploiting their own reputation systems, using botnets to saturate/influence, etc.).

What does this all mean? Well, it can probably be best summed up as "you get what you pay for" in most instances. While the motivations behind the BSA releasing this specific report are pretty obvious, so too is the fact that software piracy has, and always will be, a viable vector for criminals to make money both directly and indirectly through their pirated warez - i.e. selling "discounted" software, and through the use of the botnet infected hosts of their victims.

Dancho Danchev over at ZDNet has an interesting view on the problem by taking a look at the patching perspective - which I wholeheartedly agree with too. I covered the angle of patching (specifically Web browsers) in a whitepaper mid-2008 - Understanding the Web Browser Threat - that still applies today.

Wednesday, October 7, 2009

Serial Variant Evasion Tactics Whitepaper Released

Finally, today saw the public release of my latest technical whitepaper. This new whitepaper focuses on the business and techniques of generating unlimited quantities of undetected malware.

Cybercriminals have been building serial variant production systems for several years and have been increasingly successful in using their spawned malware to bypass antivirus detection systems. The concept is simple - produce and release new malware faster than the antivirus companies can release new signatures to detect it. This idea lies at the very heart of the explosion (and exponential growth) in the number of new malware samples being discovered.

My latest whitepaper explains the components used by cybercriminals to construct "undetectable" malware - breaking down the tools they rely upon and the production tactics they use.

The paper's goal is to enlighten those responsible for maintaining enterprise antivirus defenses about the tools cybercriminals and botnet masters have at their disposal - and help them better understand the root causes for the exponential growth in malware on the Internet.

New paper is here - Serial Variant Evasion Tactics.

Tuesday, September 29, 2009

Ethical Malware Creation Courses

My attention was drawn to a storm brewing up concerning the teaching of how to create malware. Apparently McAfee Avert Labs is advertising its Focus ’09 conference next month in Washington, D.C. and including a session titled: "Avert Labs — Malware Experience"
"Join experts from McAfee Avert Labs and have a chance to create a Trojan horse, commandeer a botnet, install a rootkit and experience first hand how easy it is to modify websites to serve up malware. Of course this will all be done in the safe and closed environment, ensuring that what you create doesn't actually go out onto the Internet."
This has already gotten a few malware experts a little hot under the collar. For example Michael St. Neitzel (VP of Threat Research and Technologies over at Sunbelt) decrees...
"This is unethical. And it’s the wrong approach to teaching awareness and understanding of malware. This would be like your local police giving a crash-course on how to plan and execute the perfect robbery -- yet to avoid public criticism, they teach it in a ‘safe environment’: your local police station."
Now, personally, I can't help but feel a sense of deja vu in all this banter. This argument about teaching how modern malware is built and hands-on training in its development has been going on for quite some time.

I remember having almost identical "discussions" back in 2000 when I helped create the ISS "Ethical Hacking" training course delivered in the UK (which was later renamed to "Network intrusion and prevention" around 2004 because some folks in marketing didn't like the term hacking) and later rolled out globally. Back then - practically a decade ago - there were claims that I was helping to teach a new generation of hackers... showing them the tools and techniques to break in to enterprise networks and servers. Within 3 years, such ethical hacking or penetration testing courses were a commodity - with just about every trade booth at a major security conference providing live demonstrations of hacking techniques.

Irrespective of the comparison with Ethical Hacking, training in the art of malware creation has been going on for ages. Just about any security company that does malware research has had to develop an internal training system for bringing new recruits up to pace with the threat - and of course they have to know how to use the tools the criminals are using to create their crimeware. So, for practically the entire lifetime of the antivirus business, people have been trained in malware development.

What's all the waffle about "unethical" anyway? Is there a worry that trade secrets are going to be lost, or that a new batch of uber cyber-criminals is suddenly going to materialize? It doesn't make much sense to me. The bad guys already know all this stuff - after all, the antivirus companies follow their criminal counterparts' advances; it's not the other way around.

Looking back at the development of commercial Ethical Hacking courses and all the airtime nay-sayers got about training a new generation of hackers, I'm adamant that the availability of these courses dramatically improved the awareness of the threat for those that needed to do something against it and enabled them to understand and better fortify their organizations. I only wish such courses had existed several years before 2000 - so we'd all be in a more advanced defensive state.

I honestly can't understand why the anti-malware fraternity has been so against educating their customers, and security professionals in general, about the state of the art in malware creation and design. Hands-on training and education really works.

Good on McAfee - I'm backing the course, and want to see this type of education as easily available as that for penetration testing.

In fact you'll probably remember me mentioning that I'm also a proponent of making sure penetration testers and internal security teams use their own malware creations in pentests to check their defense in depth status. My, didn't that raise a ruckus too.

Smaller botnets dominate the enterprise network

I've been a little quiet on the blog these last couple of weeks - having spent quite a bit of time either writing or delivering new threat presentations (3 last week alone). Last week while I was in Miami speaking at Hacker Halted, a colleague (Erik Wu) was in Geneva for VB2009 presenting our latest findings of a study of some 600 different botnets encountered within enterprise networks.

I finally got around to pulling a quick blog together for the Damballa site covering one of the findings - related to the size of botnets. You can find a copy of the posting Botnet Size within the Enterprise on the Damballa blog and cross-posted below.

One additional thing I'd like to point out though... the number of compromised hosts that are members of small botnets is still only a fraction of the total number of botnet members found within the enterprise - i.e. we're talking about botnets operated by 600 botnet masters, rather than the 1m+ compromised hosts we studied.

Cross-posting begins...

Last week at the VB2009 conference in Geneva, Erik Wu of Damballa presented some of our latest research findings. There’s been quite a bit of interest in these botnet findings – largely because very few people have had the opportunity to examine enterprise-focused botnets, rather than the noisy mainstream Internet botnets – in particular the differences between the two types of networks. So, with that in mind, I wanted to take some time here to provide more information about the key findings (I’ll try to cover other aspects in later blogs).

While we often observe plenty of stats pertaining to just how big some of the largest Internet-based botnets are (reaching in to the tens-of-millions), the spectrum of Enterprise-botnets appears to be different – at least from Damballa’s observations across our enterprise customers.

Based upon Damballa’s observations of some 600 different botnets encountered and examined within global enterprise businesses over three months, we found that small (sub 100 member) botnets account for 57 percent of all botnets.


Fig 1. Biggest Botnets within Enterprise

As you can see in the pie chart above, Huge botnets (10,001+ members) accounted for 5 percent, Big botnets (501-10,000) accounted for 17 percent, Average botnets (101-500) accounted for 21 percent and Small botnets (1-100) reached 57 percent of the 600 different botnets found successfully operating within enterprise environments.

The average size of the 600 botnets we examined hovered in the 101-500 range on a daily basis. Why do I use the term “on a daily basis”? Because the number of active members within each botnet tends to change daily – based upon factors such as whether the compromised hosts were turned on or part of the enterprise network (e.g. laptops), whether or not they had been remediated, and whether or not the remote botnet master was interactively controlling them.
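To make "on a daily basis" concrete, here's a hypothetical sketch of how that kind of count can be derived (an illustration of the idea rather than our production methodology): take the CnC sightings observed on the network and count the distinct internal hosts per botnet per day.

```python
from collections import defaultdict
from datetime import date

# Hypothetical CnC sighting records: (day, internal_host_id, botnet_id)
sightings = [
    (date(2009, 9, 21), "host-17", "botnet-A"),
    (date(2009, 9, 21), "host-42", "botnet-A"),
    (date(2009, 9, 22), "host-17", "botnet-A"),   # host-42 powered off or remediated
    (date(2009, 9, 22), "host-99", "botnet-A"),   # a laptop rejoining the network
]

def daily_active_members(records):
    """Count distinct hosts seen per botnet per day - the 'size on a daily basis'."""
    active = defaultdict(set)
    for day, host, botnet in records:
        active[(botnet, day)].add(host)
    return {key: len(hosts) for key, hosts in sorted(active.items())}

print(daily_active_members(sightings))
# e.g. botnet-A counts 2 active members on each day, but not the same 2 hosts
```

The same botnet can therefore drift between the "Small" and "Average" buckets from one day to the next, purely because of laptops leaving the network or hosts being cleaned up.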

While many people focus on the biggest botnets circulating around the Internet, it appears that the smaller botnets are not only more prevalent within real-life enterprise environments, but that they’re also doing different things. And, in most cases, those “different things” are more dangerous since they’re more specific to the enterprise environment they’re operating within.

Taking a closer look at all these small botnets (sub 100 victim counts), we noticed that the vast majority of them are utilizing many of the popular DIY malware construction kits out there on the Internet. These DIY kits (such as Zeus, Poison Ivy, etc.) normally retail for a few hundred dollars – but can often be downloaded for free from popular hacking forums, pirate torrent feeds and newsgroups – and are usable by anyone who knows how to use an Internet search engine and has ever installed software on a PC before.

It looks to me as though these small botnets are highly targeted at particular enterprises (or enterprise vertical sectors), typically requiring a sizable degree of familiarity with the breached enterprise itself. I suspect that in some cases we’re probably seeing the handiwork of employees effectively backdooring critical systems so that they can “remotely manage” the compromised assets and avoid antivirus detection – similar to the problems enterprise organizations used to have with people placing modems in machines for out-of-hours support. The problem though is that the majority of these “freely available” DIY malware construction kits are similarly backdoored. Therefore any employee using these free kits to remotely manage their network is also providing a parallel path for the DIY kit providers to access those very same systems – as evidenced by these small botnets often having multiple functional command and control channels.

As for the other small botnets, it looks like these are more professionally managed – with botnet masters specifically targeting corporate systems and data within the victim enterprise. These small botnets aren’t being used for noisy attacks (such as those seen throughout the Internet concerning spam, DDoS and click-fraud) – but rather they’re often passively monitoring the enterprise network to identify key assets or users and then going for high value items that can be either used directly (e.g. financial controller authentication details for large money transfers) or high value salable data (e.g. extracting copies of customer databases and source code to applications). Unfortunately for their enterprise victims, the egress traffic is almost always encrypted – so the only way of finding out specifically what information has been leeched away is going to rely upon detailed forensics and log analysis of the compromised hosts and the systems they interacted with.

The net result is that these smallest botnets efficiently evade detection and closure by staying below the security radar and relying upon botnet masters that have a good understanding of how the enterprise functions internally. As such, they’re probably the most damaging to the enterprise in the long term.

– Gunter Ollmann, VP Research

Friday, September 18, 2009

Drive-by Malware Detection Rates

My attention was drawn today to a new threat report issued by Cyveillance covering their H1 2009 Cyber Intelligence Report. It's a nice report that focuses extensively on Web-based fraud and infection tactics - offering yet another view of the threat landscape.

While much of the report is fairly standard stuff (my, haven't things changed over the last 3 years now that every security company is putting out similar reports!), there's one particular nugget I found especially interesting. It would seem that Cyveillance conducted a solid study of the malicious Web sites they were periodically navigating, retrieving the malware from each drive-by attempt, and then subjecting the sample to a battery of standard AV detection products. The net result is an analysis of the effectiveness of traditional (mainstream) AV products at identifying the malware as malicious.

By way of illustration:

The findings of their study reveal that AV detection of "0-day" malware is poor. In fact, you could summarize it as falling victim to drive-by malware at every second site you visit - despite having "protection". Some AV products fared much, much worse.
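To put a rough number on that "every second site" summary (my own back-of-the-envelope arithmetic, not a figure from the Cyveillance report): if host-based AV misses roughly half of freshly minted drive-by samples, the chance of staying clean drops off very quickly as more malicious pages are encountered.

```python
def p_compromised(encounters: int, miss_rate: float = 0.5) -> float:
    """Probability that at least one drive-by payload slips past AV, assuming
    each encounter is independent and missed with probability `miss_rate`."""
    return 1.0 - (1.0 - miss_rate) ** encounters

for n in (1, 2, 5, 10):
    print(f"{n:>2} malicious pages encountered -> "
          f"{p_compromised(n):.0%} chance of infection")
# 1 -> 50%, 2 -> 75%, 5 -> 97%, 10 -> ~100%
```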

It's a valuable proof-point for the consumer that host-based AV isn't really cut out for protecting home computers any more.

In addition, I think it's further backing for something I've been saying for a couple of years now - corporations that conduct business over the Internet need to assume that (in many cases) their customers' computers are already compromised and they may not be able to trust anything that comes from them. Therefore, corporations need to develop alternative security and validation technologies situated in the backend - operating in environments they can control (and trust) - rather than trying to force the security emphasis upon their own customers. Basically, in order to continue to do business with Internet customers, they have to assume that a sizable percentage of their customers and transactions are compromised. The whitepaper on the topic is "Continuing Business with Malware Infected Customers".
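What might that backend validation look like in practice? Here's a deliberately simplistic, hypothetical sketch (all names and thresholds are invented for illustration): score each transaction server-side against the customer's own history, and escalate to out-of-band verification whenever the session can't be trusted.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    customer_id: str
    amount: float
    payee: str
    source_country: str

@dataclass
class CustomerProfile:
    avg_amount: float
    known_payees: set
    usual_countries: set

def risk_score(tx: Transaction, profile: CustomerProfile) -> int:
    """Crude server-side risk score; a real system would use far more signals."""
    score = 0
    if tx.amount > 3 * profile.avg_amount:
        score += 2                      # unusually large transfer
    if tx.payee not in profile.known_payees:
        score += 2                      # never-seen-before recipient
    if tx.source_country not in profile.usual_countries:
        score += 1                      # session origin doesn't match history
    return score

def handle(tx: Transaction, profile: CustomerProfile) -> str:
    score = risk_score(tx, profile)
    if score >= 4:
        return "hold for manual review"
    if score >= 2:
        return "step-up verification (e.g. out-of-band confirmation)"
    return "approve"
```

The point isn't the specific rules - it's that the checks live on infrastructure the business controls, and keep working even when the customer's browser is actively lying.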

Getting back to the findings from Cyveillance... I wrote about the tactics being adopted by drive-by-download cyber-criminals and the advancement of their automated delivery systems (X-Morphic Exploitation) back in 2007 and they've been improving their techniques in the meantime. With a bit of luck I'll be releasing a new whitepaper soon covering the latest techniques and tools being used by cyber-criminals to develop undetectable serial variant malware - so watch out for it.

Actually, I'll be covering this topic a little next week at Hacker Halted 2009 in Miami - so drop on by if you want to see the real deal in undetectable malware production.

Thursday, September 17, 2009

Ollmann speaking at the ISSA CISO Executive Event

It looks like I'll be in Los Angeles this coming weekend for the ISSA CISO Executive Event in Anaheim.

The theme for this year's event is "Cyber Crime", and I'll be speaking on the topic "The Silent Breach: Botnet CnC Participation in the Enterprise".

I've constructed a brand new presentation for this executive event, and I'll be covering the dynamics of botnet command and control practices, and the implications for enterprise security - in particular the transition from "infection" to "breach". There's a lot of new analysis content based upon observations within real-life enterprise environments - and that's an important distinction. Practically all past analyses of botnets have focused upon the Internet at large but - guess what - the dynamics within the enterprise are quite a bit different!

I'm looking forward to the event and the discussions that follow.

Ollmann speaking at Hacker Halted USA 2009

Next Wednesday I'll be speaking at Hacker Halted 2009 down in Miami. I've never been to a Hacker Halted conference, so I'm looking forward to seeing what it's all like. So far the event has been really well organized by the Hacker Halted team - which always bodes well for a successful conference.

There's an outstanding line up of speakers for the event - in fact I'd go as far as saying that the line up is considerably stronger than recent BlackHat events. It's going to be a great event.

I'll be covering the topic: Factoring Criminal Malware in to Web Application Design

Here's a brief abstract for the talk...
With C&C driven malware near ubiquitous and over one-third of home PCs infected with malware capable of hijacking live browser sessions, what attacks are _really_ possible? How can the criminals controlling the malware make real money from a "secure" e-commerce site? How are Web application developers meant to detect, stop or prevent an attack by their own customers?
If you're at the event or just happen to be in Miami Wednesday/Thursday, drop me an email if you care to grab a beer and discuss the evolving threat landscape.

Thursday, September 10, 2009

TippingPoint IPS Fails Critical Tests

I was reading a very interesting article today concerning the latest IPS testing results from NSS Labs. John Dunn over at TechWorld magazine has a story titled "Tippingpoint IPS struggles in new security tests".

Based upon the NSS Labs testing regime, TippingPoint's IPS (TippingPoint 10) detected/prevented less than 40 percent of the canned exploit tests. Let's be clear, that's bad! Just as important is the drop over the last five years in TippingPoint's threat prevention coverage.

Some readers may think that I'm a little biased since I used to work for a competitor in this space - Internet Security Systems - and was responsible for their core threat detection technologies. And while I'm not a great fan of TippingPoint, that's almost exclusively due to their commercial decision to purchase vulnerabilities from hackers, rather than their capability to protect organizations from Internet threats (despite the efforts of their marketing team).

TippingPoint's failure in these tests perhaps provides a degree of validation that commercial vulnerability purchase schemes do not increase protection. So the argument that such purchase programs allow security vendors to develop better protection, faster, is mostly marketing fluff.

That said, I suspect that TippingPoint's poor performance in these latest tests is more likely due to two factors:
  1. The testing has changed. It's long been said that some security vendors develop protection designed to pass testing and review systems rather than real-life threats. NSS have improved their testing systems to better represent real-life networks and their mix of traffic, and that probably had a negative effect on TippingPoint's solution.
  2. They're suffering mojo drain. For the last few years 3Com have been messing about with what they're planning to do with TippingPoint - sell the division, subsume the division, spin it off, etc. The net result is that the 3Com business unit has suffered from an uncertain future which has resulted in a mix of brain-drain and mojo evaporation - with the consequence being that threat research and development has languished.
Can TippingPoint recover? Technically yes, just re-tune their detection engines for the new testing environment that NSS Labs use. But professionally I don't think that's the way to go (that sort of thing never occurred under my watch at ISS). TippingPoint's recent protection coverage failures run a lot deeper than that - their R&D teams need better executive support, a plan for the future and to recover their research mojo.

Monday, September 7, 2009

Ollmann speaking at the ZISC Workshop

This week I'll be in Zurich speaking at the ETH ZISC workshop on Security in Virtualized Environments and Cloud Computing.

The title of my talk is "Not Every Cloud has a Silver Lining" - and it's meant to be a fun (but insightful) look at the biggest and baddest cloud computing environments currently in existence - the botnets.

If you happen to be in Zurich on Thursday morning, by all means, please drop by for the talk. The workshop runs Thursday to Friday.

Need more details on what I'm covering? Below is the abstract...

What’s the largest cloud computing infrastructure in existence today? I’ll give you a hint. It consists of arguably 20 million hosts distributed over more than 100 countries and your computer may actually already be part of it whether you like it or not. It’s not under any single entity’s control, its sphere of influence is unregulated, and its operators have no qualms about sharing or selling your deepest cyber secrets.

The answer is botnets. They’re the largest cloud computing infrastructure out there and they’re only getting bigger and more invasive. Their criminal operators have had well over a decade to perfect their cloud management capabilities, and there’s a lot to learn from their mastery.

This session will look at the evolution of globe-spanning botnets. How does their command and control hierarchy really work? How are malicious activities coordinated? How are botnets seeded and nurtured? And how do they make their cloud invulnerable to shutdown?

Thursday, September 3, 2009

HSBC Bank France Hacked

Looks like Unu has gone and uncovered another major organization vulnerable to SQL Injection - this time it's HSBC Bank in France (previous escapades of Unu include Kaspersky and GameSpot to name but a few).

It's a little hard to verify whether this particular HSBC hack is completely real because there's not enough evidence beyond some screenshots. That said though, Unu has been pretty reliable in the past at identifying SQL Injection vulnerable sites - so it looks probable.

In the case of HSBC France's system being compromised through SQL Injection, it looks like the backend SQL server was vulnerable - which has resulted in full access to the host. For example, one of the screenshots shows a listing of the drives and directories on the system.

Even though it appears that extensive access to the database server files are possible, there's something much worse... Unu has presented a screen shot of user credentials along with their login passwords.

It also looks like HSBC France has failed Security-101 best practices and stored passwords in clear-text. That's a massive no no! They should know better. This would get Web application developers fired in many organizations.

Oh, and a cursory inspection of the (poorly) obfuscated screenshot from Unu also indicates that there's no rigor on password selection or enforcement.

What more could go wrong?

Let's hope that Unu alerted HSBC in advance of his posting and that the SQL Injection vulnerability has been fixed. It'll probably take a little longer to fix the password problems though.

Unu's blog of his most recent HSBC Bank France finding is here.

Friday, August 28, 2009

Rent a DDoS botnet

Over recent weeks there has been a lot of interest in DDoS botnets – that is to say, rentable botnets that provide DDoS as a managed service. I’ve spoken to a number of people about how easy this is to do, and how practically anyone who happens to know how to use a popular Internet search engine can probably locate the sellers or the hacking message boards they hang around. Perhaps one of the finer points missing from the discussion of renting DDoS botnets pertains to size.

A fairly typical rate for DDoS botnet rental hovers around $200 for 10,000 bot agents per day. The rate per day is fairly flexible, and is influenced by the actual size of the botnet that the bot master is trying to section off for DDoS services and where those hosts are physically situated. For example, some DDoS providers make a virtue of allocating bots that are located within a particular country and of their average Internet bandwidth. Meanwhile, you’ll find providers at the other end of the spectrum offering DDoS services at substantially lower rates. For example, here’s a DDoS botnet for rent at the moment over at Ghost Market.

[Screenshot: 80k DDoS botnet rental offer]

As you can see from above, this particular operator is offering a botnet of between 80k and 120k hosts capable of launching DDoS attacks of 10-100Gbps – which is more than enough to take out practically any popular site on the Internet. The price for this service? $200 per 24 hours – oh, and there’s a 3 minute try-before-you-buy.

By way of another example, the following screenshot is from another botnet master offering a 12k botnet for rent – for the price of $500 per month. Screenshots like this appear to be popular as a means of validating the sellers claims of the size of their botnets – despite the fact that all of this information can be trivially forged. Notice that only a handful of bots appear to be online and currently accessible? (this ties in to what I was saying the other day about counting botnets).

[Screenshot: 12k Zeus botnet for rent at $500 per month]

There are of course plenty of other operators that work this way – offering DDoS managed services – and there’s lots of competition amongst them. What’s perhaps most amusing about this botnet market to me is the fact that so few sellers have “good” reputations – and the message boards are rife with competitors throwing mud about the quality of the service or that the “seller” is actually just running a scam on newbie buyers.

I’d encourage readers to keep an eye on these kinds of hacking portals – just make sure you only access the sites from VM/sandboxed disposable hosts since many of the sites attempt to hack your Web browser. You’ll uncover lots of information about the mainstream botnet seller/renter market and, just as importantly, details about many of the newer or popular DIY botnet creation kits out there.

--Repost from blog.damballa.com.

Wednesday, August 26, 2009

Opt-in Botnets and hacking from the office

An area of personal interest for me over the last couple of years has been the evolution of cyber-protesting - in particular the development of what could be best called "opt-in botnets".

While the last 12 months have seen numerous stories covering politically motivated DDoS attacks targeting government institutions and country-specific brand name multi-nationals, several aspects to the evolution of this threat have been lost in the noise.

I'm planning on writing a handful of papers and articles covering both the emergence and evolution of cyber-protesting (from a security practitioners view), and how social networking sites are a game changer for the nature and breadth of attacks we can expect over the coming years.

That said, an important aspect of this cyber-protesting threat I believe lies with the increasing acceptance of opt-in botnets. In particular, the capability of a social group to create/access customized attack tools that can be harnessed for collaborative attacks against a shared target - where the software agent is intelligently linked to a centralized command and control infrastructure - and the distributed agents can be coordinated as a single weapon. All this with the consent of their cyber-protesting supporters.

Some aspects to this botnet-based cyber-protesting have already manifested themselves - in particular the way social networking sites like Facebook were used to incentivize supporters to visit external sites and download tools that would target Hamas or Israeli government sites at the beginning of this year.

That said, and why I bring up this topic now, there was an interesting column piece on SecurityFocus yesterday by Mark Rasch - Lazy Workers May Be Deemed Hackers. Mark examines the very important issue that many corporate entities may have unintentionally exposed their employees to some pretty severe legal ramifications - i.e. potentially exposing them to criminal prosecution if they misuse their work machines. This is important in the context of opt-in botnets.

If an employee decides to install any opt-in cyber-protesting software on to their work machine and allows it to launch an attack against some target, then while it may be a fire-able offense (i.e. inappropriate use of corporate systems) it could also lead to criminal hacking charges. Which, as Mark's column points out, is a pretty harsh outcome for the employee - but it also means considerable work (and distraction) for the employer in having to be involved with law enforcement and their prosecution process, whether they want to or not.

Tuesday, August 18, 2009

r00t-y0u.org Sting Backfires

OK, so this is quite amusing. It appears that some Ozzie cops had their cyber-sting backfire on them. After taking over the hacking forum r00t-y0u.org by basically busting an administrator of the site at their home address and posting a "warning" on the site's front page...

"This underground form has been monitored by law enforcement - every post, private message and all registration information has been captured. All member IP addresses and have been logged and identification processes are now underway.

The creation and distribution of malware, denial of service attacks and accessing stolen information are serious crimes.

Every movement on this forum has been tracked and where there is information to suggest a person has committed a criminal act, referrals will be forwarded to the relevant authority in each jurisdiction. There have already been a number of arrests as a result of current investigations. This message should serve as a warning not to engage in criminal activity."
... it seems that a sympathetic soul has in turn hacked the Australian federal police system.

It's odd that the Ozzie police would have decided to alert patrons of the r00t-y0u.org site that they were now being monitored - instead of running with it for longer and perhaps building cases against the site's users/subscribers. Oh well, lessons learned I guess... the painful way.

It's also odd that they didn't take down the affiliated Black Hacking site at the same time. Perhaps they did and they're just watching it now ;-)

Monday, August 17, 2009

Dumpster Diving - XCrypt by Kazuya

For the last week or so I've been repeatedly asked "how do you find these crime-ware tools?" The answer is pretty simple really: I often just use a search engine and focus in on the hacking forums if I'm curious or after some low-hanging fruit.

For example, let's take a look at a new(ish) crypter - XCrypt.

I stumbled across this particular crime-ware tool while perusing a popular Spanish hacking site - LatinoHack.com - which I originally came across when I was looking to see if there were any new (or related) updates to the DIY Octopus Keylogger tool.

Since my Spanish is pretty much non-existent, I need to rely upon one of those online Web translators for these kinds of sites - but then again, it seems that most of the "better" underground malware and hacking sites tend not to be in English anyway. These translators are good enough for my purposes though.

XCrypt caught my eye for a handful of reasons:
  1. It was a 1.0 cryptor (and I wasn't familiar with it)...
  2. It was hosted on a Spanish site but had German instructions...
  3. It was high up on the first page of the forum.
If you're wondering what a crypter does - well, generally, you point the tool at a malicious file (e.g. a piece of malware that you've already created - say the output from the DIY Octopus Keylogger), click start, and out pops an auto-unpacking, self-encrypted version of the original malware that's (probably) going to bypass any anti-virus detection tools out there.

I was curious about XCrypt though, so I did a little more searching - this time using the keywords "xCrypt Public Kazuya" - and came across yet another hacking forum site - PortalHacker.net - which had a whole bundle of other Trojans and keyloggers for download (along with satisfied customer reviews).



PortalHacker had a bit of a discussion going on about XCrypt, including the latest anti-virus coverage (which was that nothing currently detected it)...

... which isn't exactly unexpected. It's new(ish).

And, to help things along, the site (and review) also included a convenient option to download the tool from one of the free file-hosting providers out there (which is a popular way of distributing these kinds of crime-ware tools). The file was also password protected - to prevent any perimeter or host-based security products from intercepting the file and potentially flagging it as malware (the tool itself - not the output from the tool).

As for the specifics of this particular crime-ware creator tool - I'll leave that to a full-time threat analyst to do his/her stuff and provide the juicy biopsy of XCrypt - even though there were a bundle of postings on the forum congratulating the author of the tool for their skills and eliteness... as well as repeated AV evasion test results.

So, what was next? How about examining the German heritage of this particular tool? That search led to (yet another) hacking forum site - 1337-crew.to - with a thread covering the XCrypt tool, but this time the thread was started by someone called Kazuya (the author?).



And what do you know, pay dirt, there's an even newer version of the tool available...



... along with new AV test results (only one AV product detects its crypted crime-ware output), and 140 satisfied downloaders.

Most of these kinds of hacking and malware discussion forums have rating systems for contributors (and sellers), and it looks like the last stop in my search is a site where the author of this particular tool likes to hang out - 440 posts and a 5-star site reputation.

And so concludes a brief demonstration of how easy it is to uncover new crime-ware creator kits and tools, and how to get hold of samples to "play" with. This isn't exactly rocket science.

You may be asking yourself "isn't this all kind of illegal?" and "why aren't these kinds of sites shut down?". Well, the answer is that different laws apply in different countries. In most countries it is not illegal to create these kinds of tools, nor is it illegal to discuss their use. In some countries it may be illegal to buy/sell these tools, and in many countries it may be illegal to use them against computers you're not authorized to access - but the net result is that this kind of information, and the crime-ware toolsets themselves, are out on the Internet for anyone to access (subject to Web filtering policies :-)

Thursday, August 13, 2009

Malware of the Day

It seems that most malware served up by cyber-criminals has a shelf-life of only 24 hours. PandaLabs said that 52% of the 37,000 virus samples they get each day will never be seen again on any other day.

I'm not surprised. Serial variant production lines have been pumping out new malware samples in industrial quantities. Back in early 2007 I released a whitepaper for IBM covering the mechanisms many of the drive-by-download sites were using to create and deploy "unique" malware samples on a per-victim-visit basis. I'm just glad that one of the anti-virus companies has "confessed" to the problem.

Unfortunately the problem is only going to get worse, and these "cloud-based" service proposals are probably going to provide as much protection against the real botnet threat as a real fluffy-white cloud does against a bullet.

I blogged in more detail on the topic over at the Damballa site. Half of New Viruses Only Useful to Cyber-criminals For A Single Day.

Sunday, August 2, 2009

Blackhat & Defcon - Las Vegas '09

It’s always great to catch up with former colleagues and security peers from around the world, but if there’s a t-shirt I need to add to my collection, it’ll be “I survived another Blackhat/Defcon”. With back-to-back “let’s grab a beer and chat” meetings, the days (and evenings) quickly blur into a litany of bar hops and, with only 24 hours in the day, “sleep” becomes the sacrificial goat on the altar of security knowledge exchange.

Irrespective of the sleep deprivation, the annual pilgrimage to Las Vegas for the paired conferences is generally a vital part of most security professionals’ year – particularly those of us who tend to focus on attack vectors and vulnerabilities.

I found this year’s Blackhat to be less claustrophobic than previous years – largely due to the better layout of the stands and spread of conference rooms, though I’m sure the number of attendees was down quite a bit (the figure thrown around the corridors was “40% down”). The average quality of the talks tended to be fairly high, although the amount of genuinely new security content was down from previous years. This has been an ongoing trend with Blackhat, which I’d attribute to the increasing popularity of regional/international security conferences and fiercer competition. That said, there was no shortage of terribly boring sessions – particularly those with novice speakers who had rediscovered an old vulnerability and obscured the parallels with their own unique naming conventions.

Of all the talks I attended, the ones I tended to like the most had very little to do with the types of security I do now, or have done in the past – with my favorite being the SSN talk delivered by Alessandro Acquisti. Alessandro delivered an excellent presentation backed by rigorous research, and I enjoyed the anecdotes pertaining to the challenges in dealing with government offices.

One thing I noted was that just about every presentation at Blackhat made references to botnets, which is great to hear since that’s what I’m focused on – although it was pretty clear that most of the presenters don’t really understand the motivations behind them or their criminal operations particularly well. Often their references to botnets were more along the lines of “…and at the extreme end of damage, it could be used by a botnet to destroy the planet.”

Apart from that, Blackhat/Defcon was its usual self. Lots of geeks traveling in migratory packs, lurching from one bar to another after a day of presentations – lured to vendor parties by the prospect of free alcohol – and trying to fit in with the overall party atmosphere of Vegas. Which, needless to say, tends to go wrong pretty quickly. Geeks + Alcohol + Parties + Vegas Nightlife = Dread (for both those participating and those watching). But hey, I'll probably be doing it all again next year ;-)

Sunday, July 19, 2009

Pentest Evolution: Malware Under Control

When I look back at the history of commercial consultancy-based pentesting I see two distinct forks in the road. The first happened around 2000, and the second happened around 2003. But I think another fork is about to crop up.

Prior to 2000, commercial pentesting was almost exclusively focused on the external hacking of an organization's Internet-visible assets. Basically, professional full-time consulting teams (which can probably be tracked back to 1994 if you push hard enough) were following a loose pentest methodology (still mostly portrayed as a dark art and only "learnable" via an authoritative mentor) - plugging away with vulnerability scanners and exploiting anything that came up - where the goal was to break in, plant a few flags, and then tell the client what patches and system hardening they needed to catch up on. This core area of pentesting (which is still a distinct suite of offerings and consulting skills today) focuses upon OS and network-level vulnerability discovery and careful exploitation.

The first fork
By 2000 though, simply hacking an IIS or Apache server through an unpatched vulnerability or permissions flaw and throwing up a command script to "root" the server wasn't really cutting it anymore for all these new Web applications. So, the first real "specialist" services started to appear - focused upon assessing the custom Web application itself, independent of the hosting platform. To my mind, that was the first forking of the pentest track. Sure, there were still (and are) security code reviews (dissecting lines of code and hunting for bugs and vulnerabilities) - but I don't class that as "pentesting" as such; that's either auditing or security assessment.

That first fork led to entirely new pentesting methodologies, training regimes and certifications. But, more importantly, it also led to distinct consulting teams - rather than a specialized subset of network skills learned as part of being a pentester. Today, there's so much to learn in the field of Web application pentesting that, to stay at the top of the game, you'll never realistically have time to deep-dive into the more classical OS and network-based pentesting.

The second fork
The next fork that altered the fundamentals of pentesting occurred around 2003, with the advent of (and requirement for) specialized reverse engineering skills to "black-box" hack a brand-new commercial software product. Around this time major software vendors were struggling in their battles against blackhat hackers and the full disclosure movement - even the news media was keeping count of the vulnerabilities - and customers were scared.

The solution came from specialist pentesting consulting organizations that had established a name (and reputation) based upon their ability to discover/disclose new vulnerabilities. It was a simple business model - find new bugs in all the software that prospective customers use, tell the media you found some bugs, get recognized by prospective customers as being "elite" pentesters, and turn the "prospective" into "loyal" customers.

I identify 2003 as the year that specialized bug-hunting and security reverse engineering services started to appear as commercial consulting offerings, and gained their first real widespread traction as software vendors began to procure this specialized consulting.

The skill-sets are (again) quite distinct from any other arm of pentesting. While knowledge of the other two pentesting regimes is valuable (i.e. Network/OS pentesting and Web Application pentesting), it takes a different mind and different training to excel in the area of security reverse engineering. While you could argue that some of the best "classical" pentesters had many of the skills needed to find and exploit any new bugs they stumbled across during a client engagement, it wasn't until 2003 that these services really became commercial offerings and sales teams started to sell them.

The impending fork?
All of which leads me to point out a probable new fork in the pentesting path - specialist malware and its employment in pentesting. Why?

It seems to me that we've reached a time where formalized methodologies and compliance mandates have pretty much defined the practical bounds of commercial pentesting (Network/OS, Web application and Reverse Engineering), and yet there is a sizable security gap. And that gap firmly lies within the "prove it" camp of pentesting.

What I mean by that is, as any savvy pentester will tell their customer, the pentest is only as good as the consultant and the tools they used, and is only valid for the configuration tested and the date/time of testing. No guarantees or warranties are implied, and it's a point-in-time test. And, on top of all that, the scope of the pentest has typically been narrowly defined - which means that you end up with phrases like "system was out of scope...", "...not all patches were applied", "...not allowed to install tools on the compromised host", etc., appearing in the final reports handed to the customer.

But, with the greater adoption/deployment (and availability) of technologies such as IPS, firewalls, ADS, Web filtering, mail gateways, host-based protection, DLP, NAC, etc., and the growing strictness (and relevance) of compliance regulations, those classic limitations of pentesting methodologies leave the "prove it" question unanswered - prove that those technologies are really working and that the formal emergency response processes really do work.

This is where I think a new skill set, mindset and pentesting methodology is emerging - and it's an area I expect to develop into commercial offerings this year.

Pentesting with malware
What I envision is the requirement for specialized security pentesting offerings that focus upon developing new "malware" and "delivery systems" designed not only to test the perimeter defenses of an enterprise, but every layer of their security system in one go.

I don't think it's enough to say "drive-by-downloads are a fact of life, and all it takes is one unpatched host browsing a dangerous site to infect our network - but that's OK because we have anomaly detection systems and DLP, and we'll stop them that way". Prove it!
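
To make that "prove it" concrete, here's about the simplest check imaginable - a rough Python sketch of pulling the industry-standard (and harmless) EICAR anti-virus test file through the corporate web proxy and reporting whether the gateway intervened. The proxy address and download URL are placeholders of my own invention, so substitute your environment's values; it only exercises one layer of the stack, but it's the kind of repeatable evidence I'm talking about.

    # Minimal "prove it" check: can a known-bad (but harmless) test file be fetched
    # through the corporate web proxy? Uses the industry-standard EICAR anti-virus
    # test file. The proxy address and download URL below are illustrative placeholders.
    import urllib.request
    import urllib.error

    EICAR_URL = "http://www.eicar.org/download/eicar.com"  # assumed download location
    PROXY = "http://proxy.corp.example.com:8080"           # hypothetical corporate proxy

    def gateway_blocks_eicar() -> bool:
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
        )
        try:
            body = opener.open(EICAR_URL, timeout=15).read()
        except (urllib.error.URLError, OSError) as err:
            print(f"Request blocked or failed at the gateway: {err}")
            return True
        if b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE" in body:
            print("EICAR file arrived intact - the web filtering/AV gateway did NOT intervene")
            return False
        print("Response received, but the test content was stripped - the gateway intervened")
        return True

    if __name__ == "__main__":
        gateway_blocks_eicar()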

Given the widespread availability of DIY malware creation kits, and the staggering array of tools that can pack, crypt, armour, obfuscate and bind a custom malware sample - and make it completely invisible to any anti-virus technology deployed within an enterprise - I expect that there will be a demand for pentesting to evolve and encompass the use of "live" malware as a core pentest consultancy offering.

For example, does the customer's enterprise prevent users from browsing key-munged web sites (e.g. www.gooogle.com, intranet.enterpriise.com, etc.)? Which browser plugins are installed and not fully patched? Can malicious URLs and zipped malware make it through the mail gateways? Can the host-based security package detect keyloggers and network sniffers? If a malware package starts to scan and enumerate the local network from an "infected" host, is it detected, and how fast? What types of data can be exported from an infected host? Does compression and encryption of exported data get detected by the DLP solution? Does the malware have to be "proxy-aware" and require user authentication? Is out-of-hours activity detected from an "infected" host? Is it possible to "worm" through the enterprise network and "infect" or enumerate shared file systems and servers?
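
As a taste of how some of those checks could be scripted, here's a rough sketch for the first question - generating naive "key-munged" variants of a domain (doubled letters and adjacent-character swaps) and reporting which ones actually resolve, and therefore ought to be covered by the Web filtering policy. The domain name is a placeholder and the permutation logic is deliberately simplistic; purpose-built typosquatting tools go a lot further.

    # Rough sketch: generate naive "key-munged" variants of a domain and report
    # which ones currently resolve in DNS. "example.com" is a placeholder.
    import socket

    def munge(domain: str):
        """Yield simple typo variants of the left-most label of a domain."""
        label, _, rest = domain.partition(".")
        seen = set()
        for i in range(len(label)):
            variants = [label[:i] + label[i] + label[i:]]  # doubled letter, e.g. gooogle
            if i + 1 < len(label):
                # adjacent-character swap, e.g. googel
                variants.append(label[:i] + label[i + 1] + label[i] + label[i + 2:])
            for v in variants:
                candidate = v + "." + rest
                if candidate != domain and candidate not in seen:
                    seen.add(candidate)
                    yield candidate

    def resolving_variants(domain: str):
        """Return the munged variants that currently resolve in DNS."""
        live = []
        for candidate in munge(domain):
            try:
                socket.gethostbyname(candidate)
                live.append(candidate)
            except socket.gaierror:
                pass
        return live

    if __name__ == "__main__":
        for host in resolving_variants("example.com"):
            print("resolves:", host)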

All of these questions, and many more, can be answered through the deployment of specialized malware creations and focused delivery techniques. The problem, though, is that this is an untapped fork in the pentesting road, requiring new mindsets - particularly within enterprise security teams.

The bad guys are already exploiting enterprises with custom malware, yet it's generally taboo for consultancies to test security using similar methods. To my mind, that means a new pentesting specialization is now required to deliver the expertise enterprise businesses need to really test their security against today's threat spectrum.

Malware pentest anyone?