
Thursday, March 15, 2012

Security: The Art of Compromise

“Security is all about protecting the user.” That’s the comment that came up the other week in the twittersphere, kicking off a not-unexpected trail of pro and con tweets.

Being limited to 140 characters makes it rather difficult to have a deep and meaningful discussion on the topic, and the micro-blogging apparatus isn’t particularly conducive to communicating the nuances of a more detailed thought. So I thought I’d address the topic here in blog format instead.

I suppose my first thought is that Internet security isn’t necessarily about protecting the user; in fact, I’d go as far as saying that modern security approaches increasingly assume that the user themselves is the threat. In the most optimistic case, security is about protecting assets (digital or physical) from harm or theft. Failing that, Internet security is about detecting change to assets that were deemed to merit protection.

As a community we tend to overuse safety analogies when we’re trying to communicate the significance of a threat and the value of protection – which is why I believe there’s a prevailing assumption that Internet security is about protecting the user. For example, I’ve often heard (and abused it myself) the car analogy for putting defense in depth into perspective – i.e. airbags, safety belts, crumple zones, etc. being metaphors for anti-virus, IDS and firewalls.

I think a more appropriate analogy for modern Internet security practices is that of protecting a bicycle. The cyclist is the user, and by protecting the bike itself we’re not actually doing much for the safety of the rider. In fact I’ll argue that over-protecting the bike may end up decreasing the safety of the cyclist – as we too often see in the cyber world (e.g. the lock’s so big & heavy that it affects the cyclist’s ability to actually ride the bike). By way of problem statement, we can consider the cyclist as a consumer of the technology (i.e. the bicycle) and, for him to be able to ride, he needs to ensure that his bike hasn’t been stolen or damaged.

When it comes to “protecting” the bike, there are a number of factors the would-be cyclist needs to take into account. Likely the most important concern is going to be how to lock up the bike when not in use – especially when away from home. The most obvious solution is to purchase a dedicated bicycle lock.

Now this is where I think the analogy works better than most for the Internet security world… what are some of the deliberations the cyclist must make in selecting an appropriate protection solution?

  • How big a lock do I need? A small lock can be trivially overcome, but is light and easy to carry on longer rides. A big heavy lock will likely be much harder to overcome, but is going to be troublesome to carry.
  • How long a chain? A short chain is easier to carry. Meanwhile a longer chain offers me the flexibility to also lock up the wheels and wrap around bigger objects.
  • How much do I want to spend? Some top-quality locks are almost as expensive as the bicycle they’re destined to protect. Meanwhile a more expensive lock may be lighter and more effective at keeping thieves away.

Deciding upon a protection solution is an exercise in compromise – weighing the likelihood of the risk against the drawbacks of the proposed solution. There’s also awareness that no matter how big and badass the lock may be, there’ll always be someone out there with more powerful bolt-cutters or a more imaginative way of subverting the lock. It may be a compromise, but hopefully it is an informed decision. The cyclist opts for a solution, forks out the money, and lives with the decision. If all goes to plan, their bicycle will be present the next time they go to use it.

The same applies to the Internet security world. You can’t protect against all the threats and, even if you could, you’d likely end up making the system you’re trying to protect unusable for the folks that need to use it.

But “protection” is only one side of the security coin. “Detection” is a critical element of modern security. Some may argue that detection is something you aim to do if you can’t protect. I’d have to politely disagree.

Locking up your bike is a realistic security solution if you’re looking to protect it – but ensuring that your bike is locked up somewhere highly visible (good lighting, etc.) and located where a potential thief is likely to be noticed by the cyclist and other passersby is a critical “detection” component. The threat of detection becomes part of the security strategy. Even if that deterrent fails, and the protection was also insufficient, the sooner the cyclist knows whether their bicycle has been stolen or tampered with, the quicker they can respond to the threat and take the corresponding actions.

Detection within the Internet security realm is as important as protection. For the last decade or so the emphasis in security has been on protection – despite acknowledging the limits of compromise situations and product vulnerabilities. Knowing precisely when an asset has become the focus of a would-be thief, or has eventually succumbed to the threat, is critical to how an organization must respond to the incident.
As anyone who has had a bike stolen will tell you, the quicker you notice it’s gone, the higher the probability you have of getting it back.

Thursday, March 18, 2010

Protecting Your Malware IP Investment

Competition between malware authors and botnet operators can be fierce at times. Opponents are constantly squaring up and trying to build bigger, better and more "advanced" everything. As such, they're keen to make sure that their latest advances and IP aren't ripped off by a competitor or, heaven forbid, some pesky malware analyst working at an antivirus company.

Earlier this week, a customer asked me what was the smartest and most sophisticated thing I’d seen malware authors doing recently. He was probably expecting me to mention some new toolset feature such as auto-cracking CAPTCHAs for webmail spamming or the custom advertiser routines for redirecting in-browser advertising… instead, I discussed the new host-locked malware versions that are being experimented with by a number of professional botnet operators.

Three years ago I wrote a paper covering the one-of-a-kind exploitation techniques that were being adopted by drive-by-download distributors and exploit delivery systems. The paper – X-Morphic Exploitation – covers the generation of one-off “custom” exploits and malware that are created for each potential victim visiting the attacker’s malicious Web site. One of the techniques covered related to the creation and delivery of serial variant malware and how each unique sample was only ever served to a single victim – all as a means of defeating signature-based protection technologies (and, to a smaller extent, bulk analysis of malware samples).

Well, as you’d expect, the threat has moved on. While the X-Morphic exploit delivery platforms have grown more and more popular over the last three years, it would seem that the botnet builders have adopted an additional new (and rather powerful) technique that makes it even more difficult for malware analysts and bulk analysis tools to deal with their malicious bot agents – and it’s taken right out of the commercial anti-piracy cookbook.

To explain what’s going on, it’s probably easiest to step through a botnet infection that makes use of the new technique:

  1. The would-be victim/user is browsing the Internet and stumbles upon a drive-by-download Web page. The page cycles through a number of Web browser vulnerabilities – locates an exploit that will work against the user’s browser – exploits the vulnerability – inserts a shellcode payload and causes the newly introduced (and hidden) process(es) to execute.
  2. A hidden process downloads a “dropper” file on to the victim’s computer, and causes it to execute. The dropper may be a custom package created just for this victim (i.e. X-Morphic generated) or one that is being served to all potential victims for that day/week.
  3. The dropper unpacks itself – unraveling all of the tools, scripts and malware agents it needs on to the victim’s computer – and then proceeds to hide the malicious payload components (e.g. disabling the host’s antivirus protection, turning off auto-updates, modifying startup processes, root-kitting the botnet agent), cleans itself up by removing all redundant files and evidence of the installation activities, and finally starts up the actual botnet agent.
  4. The first time the botnet agent starts up, it does a number of checks to see whether or not it has Internet access (e.g. deciding whether a corporate proxy is in use) and whether or not it’s running on a “real” victim’s computer (i.e. that it’s not running in a sandbox or virtualized environment – which would indicate that someone is trying to analyze and study the malware itself). If everything looks good and the coast is clear (so to speak), the botnet agent does a quick system-level inventory of the victim’s computer (e.g. CPU ID, HDD serial number, network card MAC, BIOS version, etc.) and then makes its first connection to the botnet’s Command and Control (CnC) – registering the victim’s computer as a member of the botnet, and sending through the unique system inventory data.
  5. In response, the botnet CnC immediately sends an updated bot agent to the victim’s computer – uninstalling the old agent, and installing the new agent. However, this new agent is specifically created and “locked” to the victim’s computer – i.e. it is unique to this particular victim and will not run on any other computer.
  6. Once the new “locked” bot agent is installed, it connects to a different CnC server – and the victim’s computer is now fully incorporated into the criminals’ botnet, and under their remote control.

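The host-locking check in steps 4 and 5 can be sketched in miniature. This is purely an illustrative Python sketch under my own assumptions – the function names and inventory fields are invented for this example, not taken from any real bot agent: the agent hashes a hardware inventory into a fingerprint, and the "locked" binary served by the CnC only proceeds if the machine it finds itself on still produces the embedded fingerprint.

```python
import hashlib

def collect_inventory():
    # In a real agent these values would be read from the system
    # (CPU ID, HDD serial number, NIC MAC, BIOS version, etc.).
    # Hard-coded here purely for illustration.
    return {
        "cpu_id": "GenuineIntel-06-2A",
        "hdd_serial": "WD-WCAV12345678",
        "mac": "00:1a:2b:3c:4d:5e",
        "bios": "AMIBIOS-0803",
    }

def fingerprint(inventory):
    # Canonicalize the inventory (sorted keys) and hash it into
    # a stable host fingerprint.
    canonical = "|".join(f"{k}={inventory[k]}" for k in sorted(inventory))
    return hashlib.sha256(canonical.encode()).hexdigest()

def is_locked_to_this_host(embedded_fp, inventory):
    # The CnC embeds the victim's fingerprint in the rebuilt agent;
    # on startup the agent runs only if the host still matches.
    return fingerprint(inventory) == embedded_fp
```

On first contact the agent would send the inventory up to the CnC (step 4); the CnC bakes the resulting fingerprint into the replacement agent (step 5), so the same binary is inert on any analyst's machine whose hardware hashes differently.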
Those last three steps are what’s new and innovative, and what’s going to spell ruin for many of the most important malware analysis tools and techniques antivirus vendors use to combat the malware plague.

By infecting their victim’s computer with a unique and “locked” version of the bot agent (or malware), and ensuring that it will only ever run on that particular victim’s computer, any samples that may eventually be acquired by the antivirus vendor(s) won’t actually be useful to them. Automated analysis systems that take in malware samples from spam traps, web crawlers, etc. and execute them in virtual environments or sandboxes will not yield the real botnet agent for study, nor details of the true botnet CnC. Meanwhile, malware samples obtained from forensic retrieval processes or submitted by antivirus customers will not work (e.g. they will either not function maliciously or not execute at all in an analysis environment) – because they are encoded and locked specifically to the victim’s machine.

This “locking” process isn’t new in itself. Many commercial software vendors use this technique – for example, Microsoft uses the same technique for detecting pirated versions of their operating system and enforcing their licensing policy. In fact, many manufacturers of DIY malware construction kits use the same techniques to protect their toolkits from being pirated and from falling into the hands of security vendors. However, in this case the botnet operators are using it as a technique to ensure that samples of their malicious bot agents are useless to antivirus vendors.
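Beyond a simple pass/fail startup check, the host fingerprint can also serve as key material – which is what makes a locked sample genuinely inert off the victim’s machine rather than merely unwilling to run. A minimal sketch, assuming nothing about real bot internals (the SHA-256-based XOR keystream here is a stand-in for whatever cipher an operator might actually use):

```python
import hashlib

def keystream(key, length):
    # Derive a pseudo-random byte stream by hashing the key with a
    # running counter (stdlib only; illustrative, not a real cipher).
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def lock_payload(payload, host_fingerprint):
    # XOR the payload with a stream keyed to the host fingerprint,
    # so only a machine producing that fingerprint can recover it.
    key = hashlib.sha256(host_fingerprint.encode()).digest()
    ks = keystream(key, len(payload))
    return bytes(a ^ b for a, b in zip(payload, ks))

# XOR is symmetric, so unlocking is the same operation.
unlock_payload = lock_payload
```

Decoding with any other machine’s fingerprint yields garbage, so a sample pulled into a sandbox or submitted by a customer never decodes into the true agent, and never reveals the real CnC.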

Sure, a skilled malware reverse engineer could manually work around this kind of software locking mechanism, but it’s a slow and tedious process even for the most experienced folks – and manual analysis done in this way doesn’t remotely scale to counter this threat. That said, if the (real) botnet agent also sends an updated system inventory to the botnet CnC server each time it connects, and the “signature” no longer matches the one that the botnet operator originally associated with that particular bot agent, then the botnet operator will know that someone is tampering with their software and will disconnect the victim from the botnet (or perhaps launch an attack at the investigator’s/analyst’s computer).

As botnet operators (and general malware authors) further adopt this kind of victim-specific locking practice to protect their malware investment, and as the sophistication of the locking increases (as it inevitably will), the antivirus industry is going to have to rethink many of the techniques it currently relies upon for sample analysis and signature generation. There is no easy option for countering this new criminal practice.

Saturday, May 23, 2009

If you can't protect it, you'd better be able to detect it!

The security trend over the last half-decade has been towards "protection", and we've seen technologies such as IDS morph into IPS and network sniffing evolve into DLP.

What I find amusing/worrying is that this laser focus on protection means that organizations have increasingly dropped the ball when it comes to threats that currently have no protection solution on the market. Basically, an attitude of "if I can't protect against it, then I don't want to know about it" has become prevalent within the security industry.

So, on that note, I found it refreshing to read the brief story over at Dark Reading, How To Protect Your Organization From Malicious Insiders by Michael Davis. It's been a long-standing mantra of mine that "If you can't protect it, you'd better be able to detect it!"

The 'Insider Threat' is one of the more insidious threats facing corporates today (especially in economic turmoil) and there really are so many ways for a knowledgeable employee to screw things up if they wanted to. I've had to do a mix of forensics and internal pentests within these areas in the past and it's always a potential playground of carnage.

But it's a little distressing to me that with the global sales push on DLP solutions many organizations have essentially thrown away their common sense. What I've observed is that enterprises that were initially deeply concerned about the potential of insider threat jumped heavily on to the DLP bandwagon and came to see this class of security technology as a way of overcoming the threat. Then, once they've deployed the DLP solution, it's as if a mental box is ticked – "insider threat = solved" – and they move on to their next priority.

The problem is that DLP sucks as a protection system against the real insider threat, and its rollout within an enterprise can be a substantial distraction to the security & audit teams responsible for tracking the threat. Add to that the fact that executive support for further insider threat protection strategies quickly wanes after DLP has been rolled out – "DLP = job done".

DLP will help identify (and block) many clear-text data leakage routes from an enterprise; however, it'll do nothing against an insider who backdoors a server or Easter-eggs a DB to self-destruct in a couple of weeks' time. Yet the mindset is that an investment has been made in DLP, and since these kinds of insider threats can't be "protected" against, it's a problem too tough to solve (even though it may have been "solved" prior to the DLP solution – but that budget has now been used up – and DLP is supposed to reduce costs).

Whatever happened to "detection"? As far as the insider threat goes, if you can't protect against it, you'd damn-well better ensure you can detect it. Failing that, I hope you're budgeting enough for post-attack disaster recovery and forensics.

Think of it this way. Say you're running a public library. You can bag check everyone that leaves the library to make sure they aren't stealing your books - and that's a wise precaution. But that doesn't mean you should skimp on the smoke detectors. The threat is "book loss" but there are clear differences between protection and detection strategies.