Tuesday, September 29, 2009

Ethical Malware Creation Courses

My attention was drawn to a storm brewing over the teaching of how to create malware. Apparently McAfee Avert Labs is advertising its Focus ’09 conference next month in Washington, D.C., and including a session titled "Avert Labs — Malware Experience":
"Join experts from McAfee Avert Labs and have a chance to create a Trojan horse, commandeer a botnet, install a rootkit and experience first hand how easy it is to modify websites to serve up malware. Of course this will all be done in the safe and closed environment, ensuring that what you create doesn't actually go out onto the Internet."
This has already gotten a few malware experts a little hot under the collar. For example, Michael St. Neitzel (VP of Threat Research and Technologies over at Sunbelt) decrees...
"This is unethical. And it’s the wrong approach to teaching awareness and understanding of malware. This would be like your local police giving a crash-course on how to plan and execute the perfect robbery -- yet to avoid public criticism, they teach it in a ‘safe environment’: your local police station."
Now, personally, I can't help but feel a sense of déjà vu in all this banter. This argument about teaching how modern malware is built, and about hands-on training in its development, has been going on for quite some time.

I remember having almost identical "discussions" back in 2000 when I helped create the ISS "Ethical Hacking" training course delivered in the UK (which was later renamed to "Network intrusion and prevention" around 2004 because some folks in marketing didn't like the term hacking) and later rolled out globally. Back then - practically a decade ago - there were claims that I was helping to teach a new generation of hackers... showing them the tools and techniques to break into enterprise networks and servers. Within three years, such ethical hacking or penetration testing courses were a commodity - with just about every trade booth at a major security conference providing live demonstrations of hacking techniques.

Irrespective of the comparison with Ethical Hacking, training in the art of malware creation has been going on for ages. Just about any security company that does malware research has had to develop an internal training system for bringing new recruits up to speed with the threat - and of course they have to know how to use the tools the criminals are using to create their crimeware. So, for practically the entire lifetime of the antivirus business, people have been trained in malware development.

What's all the waffle about "unethical" anyway? Is there a worry that trade secrets are going to be lost, or that a new batch of uber cyber-criminals is suddenly going to materialize? It doesn't make much sense to me. The bad guys already know all this stuff - after all, the antivirus companies follow their criminal counterparts' advances; it's not the other way around.

Looking back at the development of commercial Ethical Hacking courses and all the airtime the nay-sayers got about training a new generation of hackers, I'm adamant that the availability of these courses dramatically improved awareness of the threat for those that needed to do something about it, and enabled them to understand and better fortify their organizations. I only wish such courses had existed several years before 2000 - then we'd all be in a more advanced defensive state.

I honestly can't understand why the anti-malware fraternity has been so against educating their customers, and security professionals in general, about the state of the art in malware creation and design. Hands-on training and education really works.

Good on McAfee - I'm backing the course, and want to see this type of education as easily available as that for penetration testing.

In fact you'll probably remember me mentioning that I'm also a proponent of making sure penetration testers and internal security teams use their own malware creations in pentests to check their defense-in-depth status. My, didn't that raise a ruckus too.

Smaller botnets dominate the enterprise network

I've been a little quiet on the blog these last couple of weeks - having spent quite a bit of time either writing or delivering new threat presentations (3 last week alone). Last week while I was in Miami speaking at Hacker Halted, a colleague (Erik Wu) was in Geneva for VB2009 presenting our latest findings of a study of some 600 different botnets encountered within enterprise networks.

I finally got around to pulling a quick blog post together for the Damballa site covering one of the findings - related to the size of botnets. You can find the posting, Botnet Size within the Enterprise, on the Damballa blog; it's cross-posted below.

One additional thing I'd like to point out though... the number of compromised hosts that are members of small botnets is still only a fraction of the total number of botnet members found within the enterprise - i.e. we're talking about botnets operated by 600 botnet masters, rather than the 1m+ compromised hosts we studied.

Cross-posting begins...

Last week at the VB2009 conference in Geneva, Erik Wu of Damballa presented some of our latest research findings. There’s been quite a bit of interest in these botnet findings – largely because very few people have had the opportunity to examine enterprise-focused botnets, rather than the noisy mainstream Internet botnets – in particular the differences between the two types of networks. So, with that in mind, I wanted to take some time here to provide more information about the key findings (I’ll try to cover other aspects in later blogs).

While we often observe plenty of stats pertaining to just how big some of the largest Internet-based botnets are (reaching into the tens of millions), the spectrum of enterprise botnets appears to be different - at least from Damballa's observations across our enterprise customers.

Based upon Damballa’s observations of some 600 different botnets encountered and examined within global enterprise businesses over three months, we found that small (sub 100 member) botnets account for 57 percent of all botnets.

Fig 1. Biggest Botnets within Enterprise

As you can see in the pie chart above, Huge botnets (10,001+ members) accounted for 5 percent, Big botnets (501-10,000 members) for 17 percent, Average botnets (101-500 members) for 21 percent, and Small botnets (1-100 members) for 51 percent of the 600 different botnets found successfully operating within enterprise environments.
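For the record, the size categories above are straightforward to apply to raw observations. Here's a minimal Python sketch using the bucket boundaries from the study; the helper names and the sample data are mine for illustration, not Damballa's code or figures.

```python
# Illustrative sketch: bucket botnets by daily active member count using
# the size categories described above. Toy data only - not study numbers.
from collections import Counter

def size_category(members: int) -> str:
    """Map a botnet's active member count to its size category."""
    if members <= 100:
        return "Small (1-100)"
    if members <= 500:
        return "Average (101-500)"
    if members <= 10_000:
        return "Big (501-10,000)"
    return "Huge (10,001+)"

def size_distribution(botnet_sizes) -> dict:
    """Percentage of botnets falling into each size category."""
    counts = Counter(size_category(n) for n in botnet_sizes)
    total = len(botnet_sizes)
    return {cat: round(100 * c / total, 1) for cat, c in counts.items()}

# Hypothetical daily member counts for a handful of observed botnets:
sample = [12, 45, 90, 250, 320, 4000, 15000, 60, 7, 480]
print(size_distribution(sample))
```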

The average size of the 600 botnets we examined hovered in the 101-500 range on a daily basis. Why do I use the term “on a daily basis”? Because the number of active members within each botnet tends to change daily - based upon factors such as whether the compromised hosts were powered on and connected to the enterprise network (e.g. laptops), whether or not they had been remediated, and whether or not the remote botnet master was interactively controlling them.

While many people focus on the biggest botnets circulating around the Internet, it appears that the smaller botnets are not only more prevalent within real-life enterprise environments, but that they’re also doing different things. And, in most cases, those “different things” are more dangerous since they’re more specific to the enterprise environment they’re operating within.

Taking a closer look at all these small botnets (sub 100 victim counts), we noticed that the vast majority of them are utilizing many of the popular DIY malware construction kits out there on the Internet. These DIY kits (such as Zeus, Poison Ivy, etc.) normally retail for a few hundred dollars – but can often be downloaded for free from popular hacking forums, pirate torrent feeds and newsgroups – and are usable by anyone who knows how to use an Internet search engine and has ever installed software on a PC before.

It looks to me as though these small botnets are highly targeted at particular enterprises (or enterprise vertical sectors), typically requiring a sizable degree of familiarity with the breached enterprise itself. I suspect that in some cases we're probably seeing the handiwork of employees effectively backdooring critical systems so that they can “remotely manage” the compromised assets and avoid antivirus detection - similar to the problems enterprise organizations used to have with people placing modems in machines for out-of-hours support. The problem, though, is that the majority of these “freely available” DIY malware construction kits are themselves backdoored. Therefore any employee using these free kits to remotely manage their network is also providing a parallel path for the DIY kit providers to access those very same systems - as evidenced by these small botnets often having multiple functional command-and-control channels.

As for the other small botnets, it looks like these are more professionally managed - with botnet masters specifically targeting corporate systems and data within the victim enterprise. These small botnets aren't being used for noisy attacks (such as those seen throughout the Internet concerning spam, DDoS and click-fraud) - rather, they're often passively monitoring the enterprise network to identify key assets or users, and then going after high-value items that can either be used directly (e.g. a financial controller's authentication details for large money transfers) or sold on (e.g. extracted copies of customer databases and application source code). Unfortunately for their enterprise victims, the egress traffic is almost always encrypted - so finding out specifically what information has been leeched away will rely upon detailed forensics and log analysis of the compromised hosts and the systems they interacted with.

The net result is that these smallest botnets efficiently evade detection and closure by staying below the security radar and relying upon botnet masters that have a good understanding of how the enterprise functions internally. As such, they're probably the most damaging to the enterprise in the long term.

– Gunter Ollmann, VP Research

Friday, September 18, 2009

Drive-by Malware Detection Rates

My attention was drawn today to a new threat report issued by Cyveillance - their H1 2009 Cyber Intelligence Report. It's a nice report that focuses extensively on Web-based fraud and infection tactics, offering yet another view of the threat landscape.

While much of the report is fairly standard stuff (my, haven't things changed over the last 3 years now that every security company is putting out similar reports!), there's one particular nugget I found especially interesting. It would seem that Cyveillance conducted a solid study of the malicious Web sites they were periodically navigating to, retrieving the malware from each drive-by attempt and then subjecting the sample to a battery of standard AV detection products. The net result is an analysis of the effectiveness of traditional (mainstream) AV products at identifying the malware as malicious.

The findings of their study reveal that AV detection of "0-day" malware is poor. In fact, you could summarize it as becoming a victim of drive-by malware at every second site you visit - despite having "protection". Some AV products fared much, much worse.

It's a valuable proof-point for the consumer that host-based AV isn't really cut out for protecting home computers any more.

In addition, I think it's further backing to something I've been saying for a couple of years now - corporations that conduct business over the Internet need to assume that (in many cases) their customers' computers are already compromised, and that they may not be able to trust anything that comes from them. Therefore, corporations need to develop alternative security and validation technologies situated in the backend - operating in environments they can control (and trust) - rather than trying to force the security emphasis upon their own customers. Basically, in order to continue doing business with Internet customers, they have to assume that a sizable percentage of their customers and transactions are compromised. The whitepaper on the topic is "Continuing Business with Malware Infected Customers".
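By way of illustration only, here's a minimal sketch of what such backend-side validation might look like when every client is treated as potentially compromised. All field names, thresholds and the scoring scheme are hypothetical - they're not taken from the whitepaper.

```python
# Illustrative sketch (hypothetical fields and thresholds): score a
# transaction server-side using only data the business controls, rather
# than trusting anything originating from the customer's machine.
from dataclasses import dataclass

@dataclass
class Transaction:
    customer_id: str
    amount: float
    payee: str

def risk_score(txn: Transaction, history: dict) -> int:
    """Higher score = more suspicious; computed entirely in the backend."""
    score = 0
    profile = history.get(txn.customer_id, {})
    # Unusually large transfer relative to this customer's past behaviour.
    if txn.amount > 3 * profile.get("avg_amount", 0.0):
        score += 2
    # Payee never seen before for this customer.
    if txn.payee not in profile.get("known_payees", set()):
        score += 1
    return score

history = {"cust42": {"avg_amount": 100.0, "known_payees": {"utility-co"}}}
suspect = Transaction("cust42", 5000.0, "mule-account")
benign = Transaction("cust42", 90.0, "utility-co")
print(risk_score(suspect, history))  # high score -> hold for manual review
print(risk_score(benign, history))   # low score -> process normally
```

The point of the sketch is architectural: the check runs entirely in an environment the corporation controls, so a hijacked browser session can't tamper with it.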

Getting back to the findings from Cyveillance... I wrote about the tactics being adopted by drive-by-download cyber-criminals and the advancement of their automated delivery systems (X-Morphic Exploitation) back in 2007 and they've been improving their techniques in the meantime. With a bit of luck I'll be releasing a new whitepaper soon covering the latest techniques and tools being used by cyber-criminals to develop undetectable serial variant malware - so watch out for it.

Actually, I'll be covering this topic a little next week at Hacker Halted 2009 in Miami - so drop on by if you want to see the real deal in undetectable malware production.

Thursday, September 17, 2009

Ollmann speaking at the ISSA CISO Executive Event

It looks like I'll be in Los Angeles this coming weekend for the ISSA CISO Executive Event in Anaheim.

The theme for this year's event is "Cyber Crime", and I'll be speaking on the topic "The Silent Breach: Botnet CnC Participation in the Enterprise".

I've constructed a brand new presentation for this executive event, and I'll be covering the dynamics of botnet command and control practices, and the implications for enterprise security - in particular the transition from "infection" to "breach". There's a lot of new analysis content based upon observations within real-life enterprise environments - and that's an important distinction. Practically all past analysis of botnets has focused upon the Internet at large but - guess what - the dynamics within the enterprise are quite a bit different!

I'm looking forward to the event and the discussions that follow.

Ollmann speaking at Hacker Halted USA 2009

Next Wednesday I'll be speaking at Hacker Halted 2009 down in Miami. I've never been to a Hacker Halted conference, so I'm looking forward to seeing what it's all like. So far the event has been really well organized by the Hacker Halted team - which always bodes well for a successful conference.

There's an outstanding line up of speakers for the event - in fact I'd go as far as saying that the line up is considerably stronger than recent BlackHat events. It's going to be a great event.

I'll be covering the topic: Factoring Criminal Malware into Web Application Design

Here's a brief abstract for the talk...
With C&C-driven malware near ubiquitous, and over one-third of home PCs infected with malware capable of hijacking live browser sessions, what attacks are _really_ possible? How can the criminals controlling the malware make real money from a "secure" e-commerce site? How are Web application developers meant to detect, stop or prevent an attack by their own customers?
If you're at the event or just happen to be in Miami Wednesday/Thursday, drop me an email if you care to grab a beer and discuss the evolving threat landscape.

Thursday, September 10, 2009

TippingPoint IPS Fails Critical Tests

I was reading a very interesting article today concerning the latest IPS testing results from NSS Labs. John Dunn over at TechWorld magazine has a story titled "Tippingpoint IPS struggles in new security tests".

Based upon the NSS Labs testing regime, TippingPoint's IPS (TippingPoint 10) detected/prevented less than 40 percent of the canned exploit tests. Let's be clear: that's bad! Just as important is the drop over the last five years in TippingPoint's threat prevention coverage.

Some readers may think that I'm a little biased, since I used to work for a competitor in this space - Internet Security Systems - and was responsible for their core threat detection technologies. While I'm not a great fan of TippingPoint, that's almost exclusively due to their commercial decision to purchase vulnerabilities from hackers, rather than their capability to protect organizations from Internet threats (despite the efforts of their marketing team).

TippingPoint's failure in these tests perhaps provides a degree of validation that commercial vulnerability purchase schemes do not increase protection. So the argument that such purchase programs allow security vendors to develop better protection, faster, is mostly marketing fluff.

That said, I suspect that TippingPoint's poor performance in these latest tests is more likely due to two factors:
  1. The testing has changed. It's long been said that some security vendors develop protection designed to pass testing and review systems rather than to counter real-life threats. NSS have improved their testing systems to better represent real-life networks and their mix of traffic, and that probably had a negative effect on TippingPoint's solution.
  2. They're suffering mojo drain. For the last few years 3Com has been messing about with what to do with TippingPoint - sell the division, subsume the division, spin it off, etc. The net result is that the business unit has suffered from an uncertain future, resulting in a mix of brain drain and mojo evaporation - with the consequence being that threat research and development has languished.
Can TippingPoint recover? Technically yes - just re-tune their detection engines for the new testing environment that NSS Labs uses. But professionally I don't think that's the way to go (that sort of thing never occurred under my watch at ISS). TippingPoint's recent protection coverage failures run a lot deeper than that - their R&D teams need better executive support, a plan for the future, and to recover their research mojo.

Monday, September 7, 2009

Ollmann speaking at the ZISC Workshop

This week I'll be in Zurich speaking at the ETH ZISC workshop on Security in Virtualized Environments and Cloud Computing.

The title of my talk is "Not Every Cloud has a Silver Lining" - and it's meant to be a fun (but insightful) look at the biggest and baddest cloud computing environments currently in existence - the botnets.

If you happen to be in Zurich on Thursday morning, by all means, please drop by for the talk. The workshop runs Thursday to Friday.

Need more details on what I'm covering? Below is the abstract...

What’s the largest cloud computing infrastructure in existence today? I’ll give you a hint. It consists of arguably 20 million hosts distributed over more than 100 countries, and your computer may actually already be part of it whether you like it or not. It’s not under any single entity's control, its sphere of influence is unregulated, and its operators have no qualms about sharing or selling your deepest cyber secrets.

The answer is botnets. They’re the largest cloud computing infrastructure out there and they’re only getting bigger and more invasive. Their criminal operators have had well over a decade to perfect their cloud management capabilities, and there’s a lot to learn from their mastery.

This session will look at the evolution of globe-spanning botnets. How does their command and control hierarchy really work? How are malicious activities coordinated? How are botnets seeded and nurtured? And how do they make their cloud invulnerable to shutdown?

Thursday, September 3, 2009

HSBC Bank France Hacked

Looks like Unu has gone and uncovered another major organization vulnerable to SQL Injection - this time it's HSBC Bank in France (previous escapades of Unu include Kaspersky and GameSpot to name but a few).

It's a little hard to verify whether this particular HSBC hack is completely real, because there's not enough evidence beyond some screenshots. That said, Unu has been pretty reliable in the past at identifying SQL Injection vulnerable sites - so it looks probable.

In the case of HSBC France's system being compromised through SQL Injection, it looks like the backend SQL server was vulnerable - which has resulted in full access to the host. For example, Unu was able to obtain a listing of the drives and directories on the system.
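For readers unfamiliar with the bug class: SQL Injection arises when user input is concatenated straight into a query string. A minimal illustration follows - the schema is hypothetical and I'm using Python's sqlite3 purely for brevity (HSBC's actual stack is unknown); the same principle applies to any backend SQL server.

```python
# Illustration of the SQL Injection bug class (hypothetical schema;
# sqlite3 used for brevity - not HSBC's actual code or stack).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (login TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: user input concatenated straight into the SQL string,
# turning the WHERE clause into "login = '' OR '1'='1'" (always true).
vulnerable = "SELECT * FROM users WHERE login = '%s'" % user_input
print(conn.execute(vulnerable).fetchall())  # leaks every row

# Safe: a parameterized query treats the payload as a literal value.
safe = "SELECT * FROM users WHERE login = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # no rows match
```

The parameterized form is the standard fix: the database driver keeps data and query structure separate, so attacker-supplied text can never rewrite the query.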

Even though it appears that extensive access to the database server's files is possible, there's something much worse... Unu has presented a screenshot of user credentials along with their login passwords.

It also looks like HSBC France has failed Security-101 best practices and stored passwords in clear text. That's a massive no-no! They should know better. This would get Web application developers fired in many organizations.
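For comparison, storing only a salted hash instead of the clear-text password costs just a few lines. Here's a minimal sketch using Python's standard library - the iteration count and salt size are illustrative, and a real deployment would follow current guidance (or use a purpose-built scheme such as bcrypt or scrypt):

```python
# Minimal sketch of salted password hashing with the standard library.
# Parameters are illustrative, not a production recommendation.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, digest); only these are stored, never the password."""
    salt = salt or os.urandom(16)  # random per-user salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```

With this scheme a leaked database exposes only salts and digests, so a screenshot of the credentials table wouldn't hand out working passwords.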

Oh, and a cursory inspection of the (poorly) obfuscated screenshot from Unu also indicates that there's no rigor on password selection or enforcement.

What more could go wrong?

Let's hope that Unu alerted HSBC in advance of his posting and that the SQL Injection vulnerability has been fixed. It'll probably take a little longer to fix the password problems though.

Unu's blog of his most recent HSBC Bank France finding is here.