Monday, December 27, 2010
Depending upon the goals of the penetration test, past engagements have included things like installing physical keyloggers on the receptionist's computer (done surreptitiously while engaged in conversation with the receptionist - hands dangling down the back of the machine...) in order to capture emails and physical door entry codes; dropping a little wireless Compaq/HP iPaq in the plant-pot for a day of wireless sniffing; scattering "malware"-infected USB keys in the office car park in the morning (and waiting for the finders to check them out on their office computers by lunchtime); and pretending to be official fire extinguisher inspectors to get access (and a little alone-time) in the server farm.
Anyhow, today I spotted an interesting gadget that would have been pretty helpful on many of these physical engagements - The PlugBot. It's a wireless PC inside what looks like a plug adapter.
If you're not a penetration tester - perhaps you should read about it anyway. Something to "keep an eye on" within your own organization then.
Friday, December 10, 2010
Well, it appears that Google Maps has been (and is being) used by some for command and control of the various protest actions - tracking where the police, barricades, ambulances and so on are.
It's an interesting use of the mapping technology.
Student protesters use Google Maps to outwit police on the Metro.co.uk
Wednesday, December 8, 2010
Cynicism in the run up to Christmas? Bah-humbug :-)
Anyway, despite all that, "predictions" can be pretty useful - but only if they're (mostly) correct and actionable. So, with that in mind, I've posted some "expectations" (rather than predictions) for 2011. I think it's important to understand the trends behind certain predictions. A prediction that comes from nowhere, with no context and no qualification, is about as helpful as a TSA officer.
Here are the 2011 predictions (aka expectations) I posted on the Damballa blog:
- The cyber-crime ecosystem will continue to add new specialist niches that straddle the traditional black and white markets for both the tools they produce and information they harvest. The resulting gray-markets will broaden the laundering services they already offer for identities and reputation.
- Commercial developers of malware will continue to diversify their business models and there will be a steady increase in the number of authors that transition from “just building” the malware construction kits to running and operating their own commercial botnet services.
- The production of “proof-of-concept” malware, hitherto limited to boutique penetration testing companies, will become more mainstream as businesses that produce mechanical and industrial goods find a greater need to account for threats that target their physical products or production facilities.
- Reputation will be an increasingly important factor in why an organization (or the resources of that organization) will be targeted for exploitation. As IP and DNS reputation systems mature and are more widely adopted, organized cyber-criminals will be more cognizant of the reputation of the systems they compromise and seek to leverage that reputation in their evasion strategies.
- The pace at which botnet operators update and reissue the malware agents on their victims’ computers will continue to increase. In an effort to avoid dynamic analysis and detection technologies deployed at the perimeter of enterprise networks or operating within the clouds of anti-virus service providers, criminal operators will find themselves rolling out new updates every few hours (which isn’t a problem for them).
- Malware authors will continue to tinker with new methods of botnet control that abuse commercial web services such as social networks sites, micro-blogging sites, free file hosting services and paste bins – but will find them increasingly ineffective as a reliable method of command and control as the pace in which takedown operations by security vendors increases.
- The requirement for malware to operate for longer periods of time in a stealthy manner upon the victim’s computer will become ever more important for cyber-criminals. As such, more flexible command and control discovery techniques – such as dynamic domain generation algorithms – will become more popular in an effort to thwart blacklisting technologies. As the criminals mature their information laundering processes, the advantage of long-term host compromises will be evident in their monetary gains.
- The rapidity in which compromised systems are bought, sold and traded amongst cyber-criminals will increase. As more criminals conduct their business within the federated ecosystem, there will be more opportunity for exchanging access to victim computers and greater degrees of specialization.
- Botnet operators who employ web-based command and control portals will enhance their security of both the portal application and the data stolen from their botnet victims. Encryption of the data uploaded to the data drop sites will increase and utilize asymmetric cryptography in order to evade security researchers who reverse engineer the malware samples.
- The requirement for “live” and dynamic control of victims will increase as botnet operators hone new ways of automatically controlling or scripting repeated fraud actions. Older botnets will continue their batch-oriented commands for noisy attacks, but the malware agents and their command and control systems will grow more flexible even if they aren’t used.
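One of the techniques in the list above - dynamic domain generation algorithms - can be sketched in a few lines. This is a deliberately simplified illustration (the seed, the MD5-based scheme and the single TLD are invented here; real malware families use their own, often more elaborate, schemes):

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Derive a deterministic list of candidate CnC domains for a given day.

    Both the botnet agent and its operator run the same algorithm, so the
    operator only needs to register one of today's candidates in advance -
    defenders can't pre-emptively blacklist a domain that doesn't exist yet.
    """
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(material).hexdigest()
        # Use the first 12 hex characters as a pseudo-random label.
        domains.append(digest[:12] + ".com")
    return domains

# The agent tries each candidate in turn until one resolves to a live CnC server.
print(generate_domains("examplebotseed", date(2010, 12, 8)))
```

The evasion value comes from the fact that yesterday's blacklist says nothing about tomorrow's candidate domains.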
Sunday, December 5, 2010
You'll find some comments on the other blog, but I wanted to add some more thoughts here - based upon some thoughts shared by others on the topic.
I guess the issue lying at the heart of the question is whether, by implementing a blocking (or filtering) policy based upon the findings/classification of a dynamic reputation system, you'd be gaining better protection than having implemented a stand alone IPS.
Two issues come into play in the decision: how "complete" is the dynamic reputation system, and how "reliable" is the IPS?
As I said in the original posting - advanced dynamic reputation systems have been coming along in leaps and bounds. We're not talking about some static blacklist here and neither are we limiting things to classic IP reputation systems that deal with one threat category at a time. Instead we're talking about systems that take as inputs dozens of vetted threat detection and classification lists, realtime feeds of streaming DNS/Domain/Netflow/Registration/SpamTrap/Sinkhole/etc. data and advanced machine learning algorithms.
From experience (and empirical evidence), blocking the things that a dynamic reputation system flags as bad or very suspicious at the network perimeter appears to outperform IPS - if the count of victim machines is anything to go by.
One of the key failings of IPS is that its reputation is better than its performance. What I mean by that is that an IPS is limited to its signatures/algorithms for detecting known threat profiles and exploit techniques. These are not all-encompassing - you'll normally only find the first "in-the-wild" exploit for a vulnerability covered (or exploits that get used by popular commercial hacking tools and IPS testing agencies), rather than all the obfuscation and evasion techniques. You may remember the blog I did a little while back about the commercial exploit testing services used by the bad guys - such as Virtest.com.
So, here's my thinking. It's better to block known-bad and provably dangerous/suspicious servers (independent of, or restricted to, a particular protocol - depending upon your tolerance for pain) than to hope that your IPS is going to stop some (hopefully) previously-seen permutation of a particular exploit being served by the attacking server.
Some may argue that you're still at risk from servers that are unknown to a dynamic reputation system. Are you though? Think of it this way. You have a dynamic reputation system that is taking live data feeds etc. (as described above) for the entire Internet. If a server (or service) has never been seen and doesn't have a reputational score, then it's already suspicious and could probably be blocked for the time being.
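To make the policy concrete, here's a minimal sketch of the blocking logic described above. The scores, entries and threshold are all invented for illustration - a real dynamic reputation system derives its scores from the dozens of live feeds and machine learning inputs mentioned earlier:

```python
# Hypothetical reputation table: 0.0 = certainly bad, 1.0 = certainly good.
# In practice these scores would be computed from live feeds, not hard-coded.
reputation = {
    "known-good.example.com": 0.95,    # long benign history
    "suspect-host.example.net": 0.30,  # appears on two vetted threat lists
}

BLOCK_THRESHOLD = 0.5
UNKNOWN_SCORE = 0.2  # never-before-seen servers start out suspicious

def should_block(domain: str) -> bool:
    """Perimeter policy: block anything scoring below the threshold.

    Crucially, a domain with no history at all defaults to a suspicious
    score - so "unknown to the reputation system" is not a free pass.
    """
    score = reputation.get(domain, UNKNOWN_SCORE)
    return score < BLOCK_THRESHOLD

print(should_block("known-good.example.com"))       # allowed through
print(should_block("suspect-host.example.net"))     # blocked
print(should_block("never-seen-before.example.org"))  # blocked by default
```

The design choice worth noting is the default score for unknowns: it encodes the "never been seen means already suspicious" argument directly in the policy.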
Defense in depth is still a good option though!
Friday, November 5, 2010
The complexity of protecting these computers is well beyond the average user - so why does the industry proceed with this sham? Maybe there's an air of addiction to the legacy solution. In general though, if a security technology is dependent upon the successful operation and maintenance of the software by the end user, then it's predestined to fail.
What could a future end-user security ecosystem look like? I let my mind wander a little and posted something up on the Damballa site... "A Future Security Ecosystem".
Cross-posting the blog below...
Earlier this week, while attending a conference in Germany, I was asked to reflect on what would be the “next big thing” for combating organized Internet crime… something that could be achievable 5 years from now. I’ve always been a proponent of doing as much as possible to remove the consumer from being responsible for securing themselves. By that, what I mean is all too often corporations assume that their primary security defense is for their own customers to be secure, and the corporation’s security is conceptually a backup defense – kind of like mopping up the exceptions. The problem here though is that consumers can’t defend themselves, and those “exceptions” are all too rapidly becoming the norm. I once wrote a paper covering the concepts of continuing to do business with malware-infected customers – and much of that has been applied successfully to online banking systems. But is there something new we (as an industry) could be doing?

Getting back to a 5-year framework, one future threat response ecosystem could revolve around a shared platform of “who’s infected and with what.” The concepts are rather simple. At the network layer, it is increasingly possible to identify computers that have been infected with botnet malware – particularly the criminal tools used to conduct real-time fraud on the victims’ computers. What if it was possible to share that information (live) with the organization that the victim is currently trying to do online transactions with?

For example, let’s say that I know that John Doe’s PC is currently infected with a Zeus malware variant under the control of the LonelySharks crime syndicate based in Chile and – in the last 10 minutes – that computer has been in contact with the command-and-control (CnC) servers the criminals are using. As John Doe opens his Web browser and connects to XYZ Bank Inc., the bank’s web application can query a live database of whether John Doe’s computer has been noted as being infected recently.
In this case, XYZ Bank Inc. finds out that the computer John is using is infected and that the criminal operators behind the malware typically conduct banking fraud. XYZ Bank Inc. can now undertake a number of additional transaction monitoring processes and change the way that new banking transactions from John Doe’s computer are handled (e.g. he’s never done an online transfer to ABC Electrical supplier before – so perhaps the bank may want to do some homework about this ABC Electrical supplier account now too). They may also want to alert John that they’re doing this and provide advice on how best to remove the threat from his own computer. The net result is that the business can continue to do business with their infected customer – as they know when (and how) to be more vigilant to fraud attempts.

Perhaps this doesn’t sound like much of an advance – but you should try speaking with anyone in the financial services field. A little bit of alerting can go a long way in protecting the customer (and organization) from fraud – and can help close down the operations of the criminals much faster. The key to this is being able to identify which computers are infected (in real time), being able to associate the computer to a particular threat, and being able to share this information in a legal and private way.

Obviously ISPs are in a perfect position to help. They are already beginning to implement network-wide passive botnet detection systems and could (if allowed to) make the association between computer and user (or subscriber in this instance). At the moment I doubt they’d be legally allowed to share this information with anyone beyond the victims themselves.
But, what if… what if it was possible for an ISP customer to subscribe to a service where they allow the ISP to identify the threats targeted at them (and the threats that they have become victim to), and to share that information with a list of authorized companies that the user does business with regularly? Assuming that the “check” done by the business is only done at the time the user’s computer is in operation, the prospect of privacy invasion is moot. The technologies to do all this largely exist today.

Would the prospect of additional privacy loss (to organizations I’m already dealing with and authenticating myself to) concern me? I don’t believe so. Would I be prepared to pay for this? Sure, if the price is right…

But perhaps the model could be even more beneficial for all concerned. If I’m a subscriber to this service, since it’s the banks or businesses that I’m doing transactions with that benefit the most from all this data sharing, perhaps I don’t need to pay for my subscription? Would those organizations pay my ISP to know whether I’m infected (or whether any of their other customers at the same ISP are infected)? Hell yeah. They’re hunting for companies that can supply them with this data. So, if they’re already looking to buy this info, perhaps my ISP doesn’t need to charge me for this service (and all the other great anti-threat stuff they can do for me in the cloud) – instead they can get it directly from the businesses I regularly do online transactions with? If that’s not so palatable to the ISPs, perhaps the organizations I do online business with will offer me discounts or better rates directly if I opt in and allow my ISP to share the information? Would it be economically viable for my online share-trading platform provider to reduce my transaction fees a little – since they have more confidence in their fraud detection processes now that they know whether my computer is tainted or not? I suspect they probably would.
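A toy sketch of the lookup at the heart of this ecosystem might look like the following. Every identifier, field name and time window here is hypothetical - the real service would need all the legal, privacy and subscription machinery discussed above, plus an actual detection feed behind it:

```python
import time

# Invented data: a shared feed mapping subscriber identifiers to the most
# recent infection sighting (threat name, last observed CnC contact time).
infection_feed = {
    "subscriber-1234": {
        "threat": "Zeus variant",
        "last_cnc_contact": time.time() - 300,  # seen five minutes ago
    },
}

RECENT_WINDOW = 3600  # treat sightings within the last hour as "live"

def transaction_risk(subscriber_id: str) -> str:
    """Return a coarse risk label a bank's web application might act on."""
    sighting = infection_feed.get(subscriber_id)
    if sighting is None:
        return "normal"
    if time.time() - sighting["last_cnc_contact"] < RECENT_WINDOW:
        # Recently active infection: step up fraud monitoring, alert the
        # customer, scrutinize never-before-seen payees on this session.
        return "elevated"
    return "review"

print(transaction_risk("subscriber-1234"))  # recently infected -> elevated
print(transaction_risk("subscriber-9999"))  # no sighting -> normal
```

The important property is that the check is made at transaction time, against live data, rather than relying on the customer's machine to report its own health.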
There is of course a long way to go – but this is one of the things I thought would be a valuable security ecosystem for combating much of the fraud now evident. And I think a 5-year goal could be achievable.
Monday, October 25, 2010
Where were the bad guys hosting their CnC servers for the first half of the year? Damballa has just released a blog covering the top-10 worst offender service providers as well as a breakdown by country. Guess who's at the top of the lists...
Botnet Hosting (H1 2010) Blog
Tuesday, September 28, 2010
There's a problem though. These technologies not only tend to be more smoke and mirrors than usual, but are increasingly being evaded by the malware authors and expose the corporate enterprise to a new range of threats.
Earlier this week I released a new whitepaper on the topic - exposing the techniques being used by malware authors and botnet operators to enumerate and subvert these technologies. The paper is titled "Automated In-Network Malware Analysis".
I also blogged on the topic yesterday over on the Damballa site - here.
Automated In-Network Malware Analysis
Someone once told me that the secret to a good security posture lies in the art of managing compromise. Unfortunately, given the way in which the threat landscape is developing, that “compromise” is constantly shifting further to the attacker’s advantage.
By now most security professionals are aware that the automated analysis of malware – using heavily instrumented investigation platforms, virtualized instances of operating systems or honeypot infrastructures – is of rapidly diminishing value. Access to the tools that add sophisticated evasion capabilities to an everyday piece of malware and turn it into a finely honed, one-of-a-kind infiltration package is simply a few hyperlinks away.
Embedding anti-detection functionality can be achieved through a couple of check-boxes, no longer requiring the attacker to have any technical understanding of the underlying evasion techniques.
Figures 1 & 2: Anti-detection evasion check-boxes found in a common Crypter tool for crafting malware (circa late 2008).
Throughout 2010 these “hacker assist” tools have been getting more sophisticated and adding considerably more functionality. Many of the tools available today don’t even bother to list all of their anti-detection capabilities because they have so many – and simply present the user with a single “enable anti’s” checkbox. In addition, new versions of their subscriber-funded tools come out at regular intervals – constantly tuning, modifying and guaranteeing their evasion capabilities.
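To give a flavor of what sits behind an “enable anti’s” checkbox, here is a deliberately simplified sketch of the sort of virtualization fingerprinting these tools perform before the payload will run. The specific MAC prefixes and process names below are just well-known illustrative artifacts; real crypters check many more (registry keys, driver names, timing skews, and so on):

```python
# Illustrative virtualization fingerprints only - real anti-VM checks are
# far more extensive and platform-specific than this sketch.
VM_MAC_PREFIXES = ("00:05:69", "00:0c:29", "08:00:27")  # VMware / VirtualBox OUIs
VM_PROCESS_NAMES = {"vmtoolsd.exe", "vboxservice.exe"}  # guest-tools daemons

def looks_like_analysis_environment(mac: str, running_processes: set[str]) -> bool:
    """Return True if common virtualization artifacts are present.

    Malware carrying this check simply stays dormant (or behaves benignly)
    when it returns True, so the instrumented sandbox sees nothing malicious.
    """
    if mac.lower().startswith(VM_MAC_PREFIXES):
        return True
    return bool(VM_PROCESS_NAMES & {p.lower() for p in running_processes})

print(looks_like_analysis_environment("08:00:27:12:34:56", {"explorer.exe"}))  # True
print(looks_like_analysis_environment("00:1a:2b:3c:4d:5e", {"explorer.exe"}))  # False
```

The point of the checkbox UI is that the tool's customer never needs to understand any of this - the evasion logic ships pre-packaged and pre-tested.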
Figure 3: Blackout AIO auto-spreader for adding worm capabilities and evasion technologies to any malware payload. Recommended retail price of $59 (circa August 2010).
Pressure for AV++
In response to the explosive growth in malware volumes and the onslaught of unique, one-of-a-kind targeted malware that's been "QA Tested" by its criminal authors prior to use in order to guarantee that there's no desktop anti-virus detection, many organizations have embarked upon a quest for what can best be described as "AV++".
AV++ is the concept of some almost magical array of technologies that will capture and identify all the malware that slips past all the other existing layers of defense. Surprisingly, many organizations are now investing in heavily instrumented investigation platforms, virtualized instances of operating systems or honeypot infrastructures – all things that are already known to have evasion and bypassing tools in circulation – despite the evidence. Has fear overcome common sense?
An area of more recent concern lies within the newest malware creator tool kits and detection methodologies. While many of the anti-detection technologies found in circulation over the last 3-4 years have matured at a steady pace, the recent investments in deploying automated malware analysis technologies within a targeted enterprise’s network have resulted in new innovations and opportunities for detection and evasion.
Just as the tactic of adding account lockout functionality to email accounts in order to prevent password bruteforcing created an entirely new threat (the ability to DoS the mail system by locking out everyone’s email account) so we see the development of new classes of threats in response to organizations that attempt to execute and analyze malware within their own organizations.
In a “damned if you do, and damned if you don’t” context, the addition of magical AV++ technologies being deployed within the borders of an enterprise network has opened the doors to new and enhanced evasion tactics.
To best understand the implications and dynamics of the new detection and evasion techniques being used by the criminals targeting businesses I’ve created a detailed white paper on the topic.
Sunday, September 19, 2010
So what's all this about? Apparently, the new processor can be "upgraded" by purchasing what amounts to a license key for turning on the embedded functionality of the chip. Or, to put it another way, you've purchased a PC with a downgraded Pentium processor with disabled features - but can "enable" those features at a later date by simply purchasing the aforementioned "upgrade card".
There's a lot of fervor concerning this particular innovation from Intel. Granted, the concepts aren't particularly new and other technology companies have tried similar tactics in the past (e.g. I was once told that the IBM Z-Series mainframes ship with everything installed but, depending upon the license you purchased, not all the capacity/features of the system are enabled), but it's not something I'm a particular fan of. Then again, it would seem that I'm probably not the type of consumer that Intel would be marketing this product strategy to either.
The Intel site describing the upgrade technology/processes/etc. can be found at http://retailupgrades.intel.com/ - although it does appear to still be in a state of "under construction" as evidenced by the following response to the FAQ question of "Which PCs will this upgrade work on?"
Good luck with this one Intel. It's not like I'll be buying any product (Intel or other) knowing that it had been intentionally disabled and subject to an additional fee for activation.
The exception would be if I felt like doing a bit of RE to get the full functionality without buying in to the whole marketing "vision" (subject to license agreements, yadda, yadda, yadda...).
Saturday, September 18, 2010
It's always fun to watch HD Moore as he covers the latest roadmap for Metasploit - explaining the progress of various evasion techniques as they're integrated in to the tool and deriding the progress of various "protection" technologies.
A couple of things he said at the time stuck in my mind and I've been musing over them throughout last week. One comment - in response to a question that had been raised - was that IDS/IPS evasion is already sufficient within Metasploit and that further techniques would be "like kicking a cripple kid". Granted, not very PC - but that's the purpose of such statements.
I agree to a certain extent that IDS/IPS technologies can be evaded - but there's a pretty broad spectrum of IDS/IPS technologies and 'one size doesn't fit all'. For example, HD Moore mentioned that simply using HTTP compression (i.e. GZIP) is enough to evade the technology. Not so. For IDS/IPS technologies with full protocol parsing modules (rather than packet-based signature matching) such techniques won't work. But that's by the by. Depending upon the sophistication of the attacker and their knowledge of the strengths and weaknesses of the IDS/IPS technology, evasions can often be found in short order (depending upon the type of vulnerability being exploited). While it's obviously to HD Moore's advantage to talk a good game on behalf of Metasploit and novel evasion techniques, it doesn't hurt to be reminded that there is an agenda behind making such broad claims.
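The GZIP point can be demonstrated in a few lines. This toy example (the "signature" is invented, and real inspection engines are vastly more sophisticated) shows why a packet-level byte matcher misses a compressed payload while an engine that decodes the Content-Encoding before matching does not:

```python
import gzip

SIGNATURE = b"/etc/passwd"  # toy payload signature for illustration only

def packet_level_match(body: bytes) -> bool:
    """Naive signature match on the raw bytes on the wire."""
    return SIGNATURE in body

def protocol_aware_match(body: bytes, content_encoding: str) -> bool:
    """Decode the body the way a full HTTP parser would, then match."""
    if content_encoding == "gzip":
        body = gzip.decompress(body)
    return SIGNATURE in body

# A repetitive request body, gzip-compressed as a browser/server might send it.
payload = b"GET /etc/passwd HTTP/1.0\r\n" * 20
compressed = gzip.compress(payload)

print(packet_level_match(compressed))            # signature bytes aren't on the wire
print(protocol_aware_match(compressed, "gzip"))  # decoding first recovers them
```

In other words, the evasion only works against engines that match raw packets; an engine that normalizes the protocol layers first sees the same bytes the endpoint will.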
The other comment he made related to the progress of adding more advanced payloads and exploit techniques. While I can't remember precisely the terms he used, the way he was discussing the topic - how much fun everyone was having inventing and developing the new techniques - I couldn't help but feel a little ashamed that things within the professional (attack-based) security field had reached this level.
What do I mean? Well, the way in which HD Moore was describing things to the audience, I couldn't help but think in terms of physical weapons research. The description of the nested exploit and evasion modules, and how the developers/researchers were going about developing better, faster and more efficient techniques, made me visualize a game of one-upmanship between bullet designers. Something like the following...
Researcher 1: I think we should make a bullet that's Teflon coated but acts like a dum-dum bullet that expands to make a bigger hole in the target.
Researcher 2: No, I've got a better idea. Instead of using the dum-dum style of bullet, I've come up with a way of making it fragment quicker and completely eviscerate the target internally.
Researcher 1: How about we add that new flaming compound so that as the target gets eviscerated he'll combust at the same time.
Researcher 2: That's cool! I bet there'll be crimson smoke coming out of the target too. Ha ha. Cool! Let's build it and test it against those homeless people across the road.
Granted, "good enough" protection can be defeated by using a "good enough" evasion technique. But I wonder when (or if) we'll ever need people to be more responsible for their actions developing what are effectively the cyber-equivalent of weapons? I strongly doubt that there'll ever be the cyber-equivalent of the Hague Convention though.
Saturday, September 4, 2010
Nevertheless, their perspective of infinite malware is quite correct. Given that malware can be dynamically generated (check out the paper on x-morphic attack engines), exhibits polymorphic capabilities and is generally created faster than it can be counted, captured and cataloged, then for all intents and purposes it is infinite.
Which means I have to chuckle when I hear or read any media coverage about the number of malware samples a particular vendor has captured and written detection signatures for. It's like saying "look, I tripped over 2,543,234 pieces of malware around the world last year and developed protection for each of them". Then, with my mathematician's hat on... infinite threats minus 2,543,234 discovered threats still leaves an infinite number of threats. Or, expressing detection coverage as a percentage of the scale of the threat: zero percent.
Obviously that's not precisely true. Anti-virus technologies are generally OK at detecting the stuff they've seen before, and with generic catch-all signatures they can often capture or label related families of malware as being malicious - or at the very least "suspicious". The problem tends to grow into frustration when practically every binary file downloaded from the Internet gets marked as "suspicious" - and hence the label becomes meaningless.
Despite all this, Sophos is spot on - there's an infinite number of malware out there, and there'll be more tomorrow. Welcome to the day after yesterday.
Friday, August 20, 2010
Add to that a complementary technology - one offering more advanced features in the realm of preemptive threat detection (and perhaps "protection"), used to aid and extend blacklists: clustering.
To help explain these technological terms (and what's happening in this field of preemptive technology) I wrote a couple of technical blogs that were published in SC Magazine this week. With a bit of luck you'll find them educational and a bit of fun.
Part One: Blacklists, clustering and The Matrix
Part Two: Blacklists, clustering and The Matrix
Monday, July 12, 2010
Thoughts on the topic went up on the Damballa blog site earlier today and are mirrored below...
Last month I gave a couple of presentations covering the current state of cellular mobile botnets – i.e. malware installed on mobile phones, smartphones and cellular devices, designed to provide remote access to the handset and everything on it. While malware attacks against dumb and smart phones are nothing new, the last 3 years of TCP/IP default functionality, compulsory data plans, and access to and provisioning of more sophisticated development APIs have all made it much easier for malware developers to incorporate remote control channels into their malicious software. The net effect is the growing “experimentation” with cellular botnets.
I purposefully use the term “cellular” so as to focus attention on the botnet agents’ use of the mobile Telco’s cellular network for Internet access – rather than more localized WiFi and Bluetooth services. Worms such as Commwarrior back in 2005 made use of Bluetooth and MMS to propagate between handsets – but centralized command and control (CnC) was elusive at the time (thereby greatly limiting the damage that could be caused, and effectively neutering any criminal monetization aspirations). More recently though, as access to the TCP/IP stack within the handsets has become more accessible to software developers through better API functionality from the OS vendors, the tried and tested CnC topologies for managing (common) Internet botnets are being successfully applied and bridged to cover cellular botnet control.
Discussions about Smartphone botnets are making it to the media more frequently – albeit mostly the IT and security press – for example, “Botnet Viruses Target Symbian Smartphones“. Based upon the last couple of presentations I’ve given on the topic, lots of people are worried about cellular botnet advances – no more so than the Telco providers themselves.
Sure, there are plenty of ways of infecting a Smartphone – successful vectors to date have been through Trojaned applications, fraudulent app store applications, USB infections, desktop synchronization software, MMS attachments, Bluetooth packages, unlocking platform application downloads/updates, etc. – but relatively little has been publicly discussed about the use of exploit material. As we all unfortunately know, one of the key methods of infecting desktop computers is through the exploitation of software vulnerabilities. Are we about to see the same thing for Smartphones? Will cellular botnets similarly find that handset exploitation will be the way to propagate and install botnet agents?
In all likelihood, vulnerability exploitation is likely to be a lesser problem for Smartphones – at least in the near future. Given the diversity in hardware platforms, operating systems and chip architectures, it’s not as easy to create reliable exploits that can affect more than one manufacturer’s line of products. That said, some product lines number in the tens of millions of devices, and the OS’s are becoming increasingly better at making the underlying hardware transparent to malicious software and exploitation. I’ll also add that there are plenty of vulnerabilities, “reliable” exploits up for sale and interested researchers bug hunting away – but at the moment there’s little financial gain for professional botnet operators compared to the well-established (and much softer) desktop market of exploitable systems. But we have to be careful not to marginalize the threat: it’s worth understanding that exploits are already being developed and (in very limited and targeted distribution) are being used for installing botnet agents on vulnerable handsets.
This is of course causing increasing heartburn for the mobile Telco providers – since their subscription models essentially mean that they’re responsible for cleaning up infected handsets and removing the malicious traffic, much more so than traditional ISPs are. If a handset is infected, their customer will likely incur a huge bill and (as typically happens) the Telco will not be able to recover the losses from the customer. Attempts to recover the cost from the customer will increasingly yield two results – 1) they won’t be a customer any longer and 2) the negative PR will have them rolling in pain.
Fortunately, as the cellular botnets become more common and sophisticated in their on-device functionality, they’re also going to become more mainstream and closely related to classic Internet botnets. What this means is that their CnC channels and infrastructure will increasingly be close to (or the same as) “standard” botnets. Which in turn means that cellular botnets can be thwarted at the network layer within the mobile Telco operator’s own networks (similar to what some major ISPs are trialing with their residential customers) – thereby turning the threat into something that they can protect against. How is that possible? Well, a quick browse of the Damballa website should provide a fair bit of insight into that – and perhaps I’ll post a follow-up blog on key techniques sometime soon.
One particular "best practice" that I think needs to be re-thunk... "don't write your password down."
I blogged a little more on the topic over on the Damballa site... It's safer to write your password down...
Common wisdom over the last couple of decades has been to never write down the passwords you use for accessing networked services. But is now the time to begin writing them down? Threats are constantly evolving and perhaps it’s time to revisit one of the longest standing idioms of security – “never write a password down”.
Back in the day, a password was a critical part of the corporate identity system. You supplied your user ID and password pair in order to get online and to access key corporate resources. Access controls then extended the authentication model to enable greater control of what users could see, do and change. As new systems came online, and as business extended beyond the in-house corporate networks, additional (i.e. separate) authentication systems came into play. Despite multiple attempts at developing and deploying single sign-on (SSO), most employees still need to juggle a dozen passwords in order to do their work. If they have external Internet accounts as well, then they'll be juggling several dozen additional passwords. Once you throw in their personal Internet accounts (webmail, Twitter, Facebook, LinkedIn, PayPal, Amazon, etc.) you're quickly neck-deep in password soup.
What's traditionally been the problem with writing down passwords anyway? Well, since passwords are the critical ingredient for access control, corporate security teams have long "educated" employees into never writing them down. To do so would potentially expose you to impersonation – and you'd ultimately be responsible for whatever (damage) the impersonator did in your name.
In the meantime, Internet guides, popular PC magazines, and practically every website that forces you to create a login account all extol the virtues of never writing your passwords down. They also give you lots of additional advice – such as "use a strong password", "use a unique password", "never use the same password on a different site", etc. All of which makes it incredibly difficult for any practically minded human to keep track of which password belongs to which website. The net result is that the "password rules" are being repeatedly broken.
Now, to ease some of this burden, there has been a spurt of software tools that'll remember passwords on your behalf. For example, the popular web browsers all provide some capability in this area. The problem, though, is that the bad guys have better tools. Practically all of today's malware (along with all those botnets you hear about each day) has the built-in capability of grabbing/stealing both the passwords you've memorized and type in each time you visit a favorite website, and the passwords being conveniently "remembered" by the software on your computer.
Why would writing down a password be good? Well, it's not a question of being good – just better. Granted, anything you type on your computer can (and will) be grabbed by the malware it's been compromised with – but the lowest hanging fruit for the bad guys lies with all the stuff you've already asked your computer to remember on your behalf. After 3 months of use, web browser "remember" functions may have captured 50+ sets of authentication details. Within a few seconds of computer compromise, all three months' worth of stored credentials will have been copied and stolen (oh, and they're neatly formatted and sorted) – so the malware doesn't need to do any work, and it doesn't matter if your anti-virus software gets an update tomorrow capable of detecting the malware and removing it. The damage is already done.
Staying hidden on a victim's computer is not a trivial task for much malware – particularly widespread Internet malware (anything with a name you may have read about). There are lots of things that can go wrong. AV updates may detect the infection, dropper websites may be taken down, uploading sites may be sinkholed, CnC domains may be hijacked, etc. So it's become important for modern malware to steal as much information as possible within the shortest possible time. Factors such as conveniently storing all your authentication details on your computer and recycling popular (i.e. memorable) passwords reduce the time the malware needs to be operating in order to steal critical data.
What about a few high-level odds?
- 1:3 – home PC being infected with malware with password stealing capabilities in a given year.
- 1:4 – home PC being infected with a botnet agent in a given year
- 1:8 – corporate PC being infected with malware with password stealing capabilities in a given year
- 1:12 – corporate PC being infected with a botnet agent in a given year
- 1:160 – your car being stolen in a given year
- 1:700 – your home being burgled
- 1:600,000 – being struck by lightning
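For comparison's sake, those "1 in N" odds convert directly to annual probabilities – a quick sketch (the odds themselves are the figures quoted above, reproduced here only for illustration):

```python
def probability(n):
    """Annual probability corresponding to '1 in n' odds."""
    return 1.0 / n

# The '1 in N' figures quoted above, for side-by-side comparison.
odds = {
    "home PC / password-stealing malware": 3,
    "home PC / botnet agent": 4,
    "corporate PC / password-stealing malware": 8,
    "corporate PC / botnet agent": 12,
    "car stolen": 160,
    "home burgled": 700,
    "struck by lightning": 600_000,
}

for event, n in odds.items():
    print(f"{event}: {probability(n):.4%} per year")
```

The point of the comparison: a home PC is roughly 50 times more likely to pick up password-stealing malware in a year than your car is to be stolen.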
- Don’t use the same password on multiple websites
- Don’t let your computer “remember” your password!
- Use a “strong” password – preferably something with 12+ mixed characters
- Don’t use a predictable algorithm – e.g. abc
- Change your passwords regularly. For sites with lots of personal information and associated monies, change every 2-3 months. For other sites, try every 6-12 months.
- Don’t reuse past passwords – even if you think it’s a cool password.
- Don’t write your password down.
The first 6 password recommendations would trump the 7th in most cases – provided you take care in how and where you write your passwords down. Be smart about it… but don’t underestimate the risks posed by modern malware either.
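The "strong, non-predictable password" recommendations above are easy to automate. A minimal sketch using Python's standard `secrets` module – the 12-character minimum mirrors the advice above; everything else (alphabet, retry loop) is just one reasonable choice:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password with mixed character classes."""
    if length < 12:
        raise ValueError("use 12+ characters, per the advice above")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # Retry until lower, upper and digit classes are all present.
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pwd) and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)):
            return pwd

print(generate_password())
```

`secrets` draws from the OS's cryptographic random source, unlike the `random` module – which matters when the output is a credential.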
Tuesday, June 22, 2010
If the migratory cluster of bar stools and hotel chairs encircling the obligatory way-too-small table contains more than a pair of reformed hackers or pentesters, by listening in you'll end up gaining quite a bit of insight into why the better hackers are so often successful (and you'll probably also pick up a few tells for future reference).
While there's much literature and many tutorials to be found that explain the technical aspects of how to successfully compromise corporate defenses, exploit systems and ultimately extract data, there's actually very little "guidance" on which systems should be targeted and why, once you've breached the network. Sure, there's plenty of discussions covering the technical aspects of how to raise privileges (e.g. locating and exploiting the Active Directory server in order to acquire corporate user/admin credentials etc.), but which systems really provide the treasure trove?
Quite a few folks I've been speaking with will initially (and specifically) target the systems used by the corporate security teams. These systems are important for a couple of reasons; 1) internal security folks often have good access to a wide range of other systems that may be valuable and 2) By keeping an eye on the "watchers" you'll know when you're close to being caught and can stay a couple steps ahead. Personally, I think it's a ballsy move if you can pull it off - but it's not something I'd throw in as a priority. There are a lot of inherent risks in trying to tackle systems maintained and watched by the professionally paranoid - so it may be more prudent to gather better intel first.
Another primary target for some folks is to go after the obvious corporate data repositories - the backend databases, business intelligence systems and storage facilities. This mode of attack I'd associate much more with the quick "get in and get out of dodge as fast as you can" - maximizing the potential reward by sacrificing (IMHO) a fair degree of stealthiness and persistence. It typically works very well - and is an ideal tactic for "compelling result" penetration testing or hackers looking for rapidly monetizable data.
A tactic that I've always preferred (dependent upon the specific objectives of the pentest of course) is to initially locate and target the QA systems. For the folks that target the corporate security systems or go after the official data repositories, going after the QA systems sounds not only unexciting but also like a complete and utter waste of time. But hear me out first. QA systems really are a veritable treasure trove of corporate data. Consider the following:
- Like a smelly hobo camped outside a high-street McDonalds, both security analysts and helpdesk alike tend to keep their distance from (what are typically) "unmanaged" QA systems.
- QA systems often contain complete copies of the high-value corporate data so that development teams and QA/Testing personnel can actually test the applications correctly. You'll often also note that the more "valuable" a particular suite of data, application or business process is, the higher the probability that the QA copies of the data will in fact be real-time mirror images of live data.
- Nobody ever "owns" the QA systems. They're always the last systems to get patched (if ever) and access controls typically hover between poor and non-existent.
- When was the last time anyone bothered to look at the audit logs? With so much ad-hoc system use, trialing and testing, it's a nightmare from both a detection and forensics perspective. QA systems are an ideal place from which to recon an enterprise network and retain a persistent toe-hold within the organization.
- QA systems typically have "temporary" access to all the core business systems and data repositories within a corporate network. By "temporary" I mean in theory, if you listen to the server administrators - in practice they can be considered permanent gateways.
- Testing systems are typically littered with copies of entire development source code trees - making it a piece of cake to acquire the latest business logic, intellectual property or hard-coded/embedded passwords to other critical systems within the corporate entity.
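That last point is easy to demonstrate: a crude scan for hard-coded credentials across a source tree takes only a few lines. The patterns below are illustrative only - real secret scanners (and real attackers) use far richer rule sets:

```python
import re

# Crude patterns for hard-coded credentials. Real tooling covers
# key formats, connection strings, certificates and much more.
CRED_PATTERNS = [
    re.compile(r'password\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
    re.compile(r'(api[_-]?key|secret)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
]

def find_credentials(source_text):
    """Return (line number, line) pairs that look like embedded secrets."""
    hits = []
    for lineno, line in enumerate(source_text.splitlines(), 1):
        if any(p.search(line) for p in CRED_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# Hypothetical snippet of the kind of config littering a QA tree.
sample = 'db_host = "qa-db01"\npassword = "s3cr3t!"\napi_key = "AKIA-EXAMPLE"\n'
for lineno, line in find_credentials(sample):
    print(f"line {lineno}: {line}")
```

Point this at an entire checked-out source tree and the "embedded passwords to other critical systems" problem becomes very tangible, very quickly.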
The primary objectives and "styles" of the hackers/pentesters remind me a little of those old Western gold-rush films. Rounding up the Sheriff and his deputies and locking them up in their own jail before robbing the bank is a little analogous to going after the security folks/systems. Meanwhile the priority targeting of the corporate data repositories reminds me of a stagecoach robbery - the pounding of hooves and guns blazing. Yet going after the QA systems reminds me of a movie in which the villains dig up the ground under the saloon and casino - hoovering up all the gold dust that patrons had lost over the years through the cracks in the floorboards.
Grab a beer with a friendly hacker or pentester and ask them how they'd earn their gold.
Saturday, May 29, 2010
At the moment it looks like we're entering the Waxing Gibbous stage of the anti-FUD (Fear, Uncertainty and Doubt) movement. In recent weeks the proliferation of calls to deal with FUD within the security industry has picked up. Depending upon the particular sector, you'll encounter discussions about overcoming the fears associated with shifting data into the cloud, why "advanced" threats aren't so important if the bulk of attacks don't need to be, etc.
As you'd expect, there are quite a few security folks who make their dime by being vocal about a particular topic, and it's that time of the cycle that the anti-FUD speeches get dusted off and replayed. That's not to say that the anti-FUD folks are unique. There's a biannual waxing and waning to the Full Disclosure movement too, along with annual revisits to the topic of Vulnerability Purchasing Programs, etc.
The anti-FUD movement consequently promotes their own kind of "FUD" - speculating that the world would be a better place if FUD ceased to exist in the security world, and that organizations would be better able to prepare their defenses without the distractions of the next biggest threat.
Some aspects of the anti-FUD cause I might just agree with, but in general I'm less inclined to follow much of the rhetoric from die-hard security aficionados. Why? Well, for the most part, many of their statements are naive in that they obviously fail to understand the world they live in. Listening to them you'd think this is an IT security problem - but in reality "FUD" is a critical element of the sales cycle - regardless of whether you're selling car tires or anti-zit cream.
Every second car advertisement on TV extols the virtue of the car's safety features; even drunk-driving and "wear your seat-belt" literature distributed by state authorities covers the gruesome consequences of not following the rules and taking appropriate actions. FUD gains the attention of the viewer/reader, educates them in some capacity and makes them think more about the consequences of their actions (or inaction).
FUD is everywhere - just watch the ads covering zit cream and tampons on TV, and you'll get the idea. FUD is a critical element of the sales cycle, eliciting a reaction to the message (generally, a buying reaction).
Folks that jump on their anti-FUD high horses, from my own experience, tend to struggle with commercial sales because they fail to understand what FUD is all about - education, compulsion and sales.
Having said all that, let's not go to the other extreme though. In order to make their FUD more compelling and elicit a greater compulsion from listeners, some sales folks will stretch the truth into the realm of fiction. These folks need to be quickly reined in by the company paying their paycheck. To do otherwise would inevitably result in pissed-off customers and a loss of business.
Final thoughts? The security industry is no different from any other industry with innovative products aimed at solving the problems of today and the future. FUD is a way of life, get used to it.
Monday, May 10, 2010
If so, did you know you can actually see the data encoded on your card?
Over the weekend I stumbled upon a very interesting blog titled "Another Science Experiment" covering the use of finely ground rust dust to see how the data is encoded on to standard credit card magnetic tracks.
I'll let the photos below do the talking...
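For context on what the photos reveal: track 2 of a magnetic stripe follows the ISO/IEC 7813 layout - a ';' start sentinel, the PAN (card number), a '=' separator, a four-digit YYMM expiry, a three-digit service code, issuer discretionary data, and a '?' end sentinel. A toy parser as a sketch (the PAN below is the well-known Visa test number, not a real card):

```python
import re

# Track 2 layout per ISO/IEC 7813:
#   ;PAN=YYMM<service code><discretionary data>?
TRACK2_RE = re.compile(
    r";(?P<pan>\d{1,19})="        # primary account number
    r"(?P<expiry>\d{4})"           # expiry, YYMM
    r"(?P<service>\d{3})"          # service code
    r"(?P<discretionary>\d*)\?"    # issuer discretionary data
)

def parse_track2(raw):
    """Parse a track 2 string into its named fields."""
    m = TRACK2_RE.fullmatch(raw)
    if not m:
        raise ValueError("not a valid track 2 string")
    return m.groupdict()

# 4111111111111111 is the standard Visa test PAN.
fields = parse_track2(";4111111111111111=1012101000000000?")
print(fields["pan"], fields["expiry"])
```

In other words, the rust-dust bars visible in the photos encode exactly these fields - there's no secrecy in the format itself.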
Sunday, May 9, 2010
In a lot of cases, these particularly sophisticated malware samples only manage to get caught up in the wash of general malware samples because of some circuitous and "unlucky" compromise path - or because they're several months old and the "discoverers" have finished reaping the reward of having investigated them. Most of the really interesting bespoke malware samples rarely come via mainstream discovery and sample-sharing systems though - in fact the majority of them rarely go beyond the virtual walls of the organization or government department that was targeted or victimized by them.
Given all the discussions about Advanced Persistent Threats (APT), Advanced Malware and Next Generation Malware (NG Malware), I thought it was about time to disclose some of the techniques being used within the commercial world in the production of such sophisticated malware... hence this blog entry being the first in a series covering "Military Grade Malware".
Military Grade Malware
I use the term "Military Grade Malware" to encompass the following key concepts:
- A legal contractual agreement exists between the professional software development team and the purchasing organization.
- The expectation is that the "product" will be used for purposes beyond financial and criminal fraud.
- The intended distribution of the malware will be limited in scope and typically only be deployed in very specific environments.
- The malware is designed to be stealthy and continue to operate for extended periods of time - typically against a sophisticated adversary.
In the past I've used the term "weaponized" to encompass malware that makes use of exploit material as part of its critical operations - but this term only extends so far.
There are plenty of boutique security consulting organizations out there that offer "weaponization" services. They will typically review and study the latest vulnerability disclosures, develop reliable exploits for use against specific operating systems (e.g. an exploit for a popular Vietnamese instant messaging client running on Microsoft Windows XP SP3 with the Vietnamese language pack installed), and pass the final QA-checked exploit on to their client.
Most of the organizations I've come across that provide this kind of service have strong affiliations with their local government. That said though, a handful of them are more mercenary and will provide their weaponized exploits to other "friendly" governments. I'll point out at this stage though that this is a wholly different arrangement compared to vulnerability research teams working within companies that develop commercial vulnerability scanning and exploitation tools.
The provisioning of (reliable) weaponized exploits will generally be governed by formal legal contracts. It's not easy work though. Many people see the plethora of public vulnerability disclosures and hear about the odd zero-day exploit doing the rounds, but the development of reliable exploits that meet the contractual demands of the client is not a simple task. A company that can deliver a half-dozen ruggedly reliable weaponized exploits each year is doing very well - and will be compensated accordingly.
The weaponization of malware in my opinion generally only encompasses the binding of a "standard" malware component to a particularly good/reliable/weaponized exploit.
For example, a client may have a preferred Remote Access Trojan (RAT). This RAT is consequently bound to the latest weaponized exploit - i.e. the RAT is merely the payload of the successful exploitation.
In another example, a versatile malware agent may support a library of exploits that it can use to worm and propagate around a targeted network. In this case, the weaponized exploit is constructed to be compatible with the malware agent and is added as an "update".
Both examples would fulfill the generic term "weaponized malware", but there is a difference between this type of malware and what I'd tend to term "Military Grade" malware, since military grade malware may or may not actually make use of weaponized exploit materials.
What are the features and techniques of military grade malware? I'll begin to cover those details in subsequent blog posts...
Earlier this week I discussed the topic over on Damballa's blog site in the entry titled: A Treasury of Dumps. The blog provides a few samples of what's available and how the criminals are using them to augment their search for potential sellers.
Tuesday, May 4, 2010
With that in mind, you're also probably not going to want to build your botnet in a way that its growth is reliant upon a single infection vector or content distribution vehicle. The solution nowadays lies with the strategy of running multiple campaigns against your targets.
Just as political contenders running for office unleash a barrage of sophisticated and targeted campaigns to draw in supporters, professional botnet builders similarly unleash their own barrage of targeted campaigns - looking to sucker their victims en masse.
To understand botnet building campaigns a little better, I've thrown up a blog on the topic over at the Damballa site - Botnet Building Campaigns.
Monday, April 26, 2010
The last few years have shown a steady increase in the sophistication of the tools and tactics the disaffected use online. Social networking applications, Web 2.0 technologies and the general availability of what can best be described as “military grade” cyber attack tools make it a trivial task for protestors to launch crippling attacks from anywhere around the world.
The massive adoption of social networking portals and micro-blogging services in turn created a new generation of centralized Command-and-Control (CnC) capabilities that quickly and easily organize protests for international participants from all walks of life. The simplicity with which these technologies can be leveraged for attack coordination against governments and commercial organizations should not be underestimated.
A second generation of cyber-protesting tools has emerged, encompassing a disturbing blend of criminal technology and activist enthusiasm. A growing number of movements are asking their members to deliberately install botnets on their hosts and within their networks in order to participate in more sophisticated and effective cyber-protests.
Botnets have always been considered a severe threat that removes PCs and servers from IT control. However, botnet compromises have always come from the accidental and unknowing installation of bot malware. The purposeful and intentional acceptance of bot malware, however laudable the cause, presents a dangerous challenge to any organization concerned about maintaining control over network assets and demonstrating proper fiduciary responsibility.
In short, the introduction of social networking CnC and an increasingly diverse range of motivations and common-cause group memberships is opening the doors to new cyber-protesting possibilities – and to criminal misappropriation of hacktivist botnets. This whitepaper examines the evolutionary path of opt-in botnets, including how tactics have changed, why anyone would willingly choose to join a botnet, and what activist botnets mean to organizations that find themselves both victims and enablers of a botnet-driven attack.
Monday, March 29, 2010
If you think you know your bots from your APTs, and your script-kiddies from your cyber criminals, then it's time to take the plunge and join the coolest threat research team out there and make a real difference to Internet security.
Drop me an email if you're interested in the role...
Job Position: Threat Analyst
Job Area: Research
Internet security is evolving at an increasingly rapid pace. As the thrust and parry of attack vectors and defensive tactics force technologies to advance, the biggest security threat now facing enterprise organizations lies with botnets. The Damballa Research team spearheads global threat research and botnet detection innovation.
Damballa’s dedicated research team is responsible for botnet threat analysis and detection innovation. From our Internet observation portals, and using the latest investigative technologies to intercept and capture samples, the research team studies the techniques employed by criminal botnet operators to command and control their zombie hordes – mapping their spread and evolution – and developing new technologies to both detect and thwart the threat.
As a Threat Analyst you would be part of the team responsible for providing the threat intelligence that powers the core technologies of Damballa’s products – working with massive threat intelligence collections and cutting-edge botnet detection technologies.
The rapid evolution of the threat means that, as a Threat Analyst, you will also need to be able to deep-dive into the botnet master's lair – turning over the rocks they hide under and visiting the online portals where they do their business – and be capable of analyzing the evidence of their passing. A key to being successful in this role is the ability to provide internal departments and customers with comprehensive intelligence on newly uncovered botnets and other targeted threats – and to be able to communicate the threat in a clear and concise manner.
Collaborating with the marketing and engineering teams, the Threat Analyst will often need to craft scripts to automate the extraction of botnet intelligence and make it available to the company’s other technologies and its knowledgebase as well as responding to ad-hoc requests for malware analysis driven by business and client needs to determine characteristics, functionality, and/or recommend countermeasures.
The position may entail interaction with the media following the successful outcome of directed research or response activities.
• Intelligence gathering and updating of Damballa threat knowledgebases
• Responding to customer queries for deep-dive information on particular botnets and malware
• Independent threat analysis and data mining of new botnet instances
• Investigation of new botnet command and control tactics and subsequent enumeration of botnet operators
• Focused analysis of botnet outbreaks within enterprise and ISP networks
• Contribution to research and commercial papers describing the evolving botnet threat
Skills & Experience:
• Experience as a cyber-threat analyst, or similar technical consulting role
• Good understanding of TCP/IP networking and security
• Strong script building and automation skills
• Database query formulation and stored procedure manipulation
• Ability to troll underground Internet forums and criminal sites/portals for new botnet intelligence
• BS or MS in Computer Science, Engineering or Physical Sciences
• 3+ years of IT industry experience with 2+ years of Internet security experience
• Proficient in multiple compiled and scripting languages (Perl, Python, Ruby, Java, C, etc.)
• Proficient query design in relational databases (Postgres/pgsql preferred)
• Excellent formal communication and presentation skills
• Ability to read and translate multiple international languages a bonus
Friday, March 26, 2010
This is from CNNMoney and their story on how to "Speed up your sluggish computer".
Granted there are many sucky protection suites out there (and many more fake-antivirus products that criminals are peddling), but this particular advice entry is unhelpful and funny at the same time.
Firstly, this particular advice is ill-informed. Sure, there are some overlaps in protection capabilities, like pop-up blockers and firewalls, but only on paper. They're complementary overlaps, as their capabilities to perform (and be managed) as pop-up blockers and firewalls tend to be quite different, and overall protection increases. Defense in depth, etc. Sure - like I said earlier - desktop protection is a dog on system resources.
Secondly, while I have nothing against ESET's Nod32 Antivirus product (I even use it on a couple of my computers at home - along with a handful of other AV products), its mention in this "guide" to speeding up sluggish computers smacks of a paid-for advertisement - further devaluing the advice.
Third and final? "The Mac Fix" funnily enough is true - Mac users tend to not use security software. Like motorcycle riders swerving amongst rush hour traffic on the highway without a helmet, I'd class these Mac users as "temporary citizens" of the Internet.
Sunday, March 21, 2010
As you probably already know, I spent a fair amount of time developing and improving Intrusion Prevention System (IPS) technologies in my tenure with ISS (and then later, under IBM). During that time there were a number of market dynamics that required me to spend quite a bit of time reviewing, analyzing and evaluating the various DLP technologies - both at the host and then network levels. In general though, I was not impressed with the technology - and still am not. From my perspective, DLP is a bit of a white elephant and is probably going to go down in the annals of security history in the chapter next to NAC. Don't get me wrong, as a concept DLP has its place, but in practice it fails to provide any compelling features that aren't (or can't be) delivered using other more common (and existing) enterprise security technologies.
Now, being a networking kind of guy, the thing I find most interesting about network-based DLP is the show and dance the various DLP vendors make about Deep Packet Inspection (DPI) - you'd almost think that they invented the technology and that it applies only to DLP. Let's get this straight from the beginning: DPI existed within IPS (and IDS) for 5+ years before the first DLP companies were even incorporated and, what's more, products like ISS' Proventia fully parse many hundreds of networking and content-level protocols - many times more than even the most mature dedicated DLP product out there.
So, if you're thinking DLP is a new and vital technology to roll out in your enterprise (particularly at the network layer), my advice would be to look to a top-tier IPS appliance instead because you'll find better protocol and content inspection coverage, and higher capabilities in inspecting traffic for critical data leakage. One day I'd love to see a head-to-head appliance review of the various vendors products detecting and defending against all the most common data leakage techniques/tactics.
DLP and Botnets
So, how useful is DLP in combating botnets? First of all, we obviously need some degree of clarification about "combating botnets". Let's break this down into three separate botnet attack phases:
- Preventing hosts from becoming botnet victims,
- Detecting and stopping the leakage of confidential corporate information,
- Cleanup and remediation of bot infected victims.
Preventing hosts from becoming botnet victims
In order to understand DLP's capabilities in preventing hosts from becoming botnet victims (from a network perspective), we need to bear in mind the limitations of DPI and the most common mechanisms by which hosts become compromised and join a botnet.
- Criminals leverage a broad spectrum of attack vectors to compromise their targets - the most common being spam/phishing emails that trick users into infecting themselves, malicious drive-by-download sites that exploit vulnerabilities in the web browser, and removable-media worming (e.g. USB devices). Unless the DLP solution is configured to watch inbound network traffic and scrutinize URLs (perhaps checking against a URL blacklist), the probability of detecting the malicious payloads is remote - and anti-spam and perimeter web gateway technologies would be a much more effective solution here. IPS technologies would also excel at dealing with the exploits being used to compromise web browser vulnerabilities.
- Inspection of HTTP/FTP/etc. downloads or email attachments is of course possible - but identifying the malicious intent of binary files is a struggle, and is best dealt with using anti-virus technologies - particularly products with good behavioral analysis engines and, in a pinch, virtual/sandboxed dynamic analysis of malware.
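One cheap static layer for that binary-inspection problem is magic-byte identification - flagging content that is really an executable regardless of the filename extension it arrives with. A minimal sketch; the signature table here is deliberately tiny, while real tools carry hundreds of signatures and pair them with behavioral analysis:

```python
# Identify payloads by their leading "magic" bytes rather than
# by filename extension, which an attacker controls.
MAGIC = {
    b"MZ": "Windows PE executable",
    b"\x7fELF": "ELF executable",
    b"PK\x03\x04": "ZIP archive (possible packed payload)",
}

def identify(payload):
    """Return a label for recognized file types, or None."""
    for magic, label in MAGIC.items():
        if payload.startswith(magic):
            return label
    return None

print(identify(b"MZ\x90\x00..."))  # Windows PE executable
print(identify(b"%PDF-1.4 ..."))   # None
```

This catches the "invoice.pdf.exe" class of tricks, but says nothing about intent - which is exactly why the behavioral engines mentioned above are still needed.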
Detecting and stopping the leakage of confidential corporate information
Detecting the information leakage from bot infected hosts should be an easy task - after all, that's supposed to be DLP's bread and butter. Unfortunately it's not quite as easy as it sounds.
- The signatures (or "fingerprints") DLP devices use are generally tuned to specific forms of structured data. For example, SSNs, credit card details and addresses have a specific structure. As such, DLP solutions are generally good at spotting this kind of data being transmitted across a network and leaking from the enterprise (just as IPSes can), and DLP appliances can easily detect the "clear text" transport of these kinds of data.
- Unfortunately, botnet operators tend not to transport/extract confidential data past perimeter inspection/detection technologies in "clear text". Obviously, if the bot agent chooses to transport the data to a remote server over HTTPS, then all the traffic will be encrypted. But botnet operators don't even need to do that...
- Purchasing, managing and configuring web server certificates for HTTPS can be tedious and can often result in "invalid certificate" alerts - which would in turn alert the user or anyone inspecting the system logs. As such, many botnet operators have decided not to use HTTPS - instead they extract their stolen data over unencrypted HTTP, but compress and encrypt the data they're stealing from the victim's machine before sending it. I.e. the transport is unencrypted, but the file being transferred is itself encrypted and cannot be inspected by DLP (or any other DPI technology).
- Armed with a blacklist of known botnet Command and Control (CnC) channels or file drop-boxes, the DLP solution could keep watch over who the victim system is communicating with and block those - but there are already plenty of IP/domain/URL blocking technologies out there that are more efficient.
- It's important to understand that many professional botnet operators have moved away from stealing classic datasets (e.g. credit card details, SSNs, etc.), and towards more valuable datasets (e.g. software source code, CFO banking credentials, prototype designs) - which happen to be considerably more difficult to detect with DLP technologies (especially if the data is encrypted, of course).
- DLP is limited to specific protocols and specific file/attachment types for inspection. To evade detection, the criminal botnet operator just needs to use an "unsupported" protocol/format.
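Since DLP can't read the encrypted payloads themselves, one pragmatic counter is to flag outbound files whose byte distribution looks near-random - compressed or encrypted blobs have very high entropy. Here's a minimal sketch of such a heuristic in Python; the 7.5 bits-per-byte threshold is an illustrative assumption, not a tuned production value:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte, ranging from 0.0 to 8.0."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    """Flag payloads whose byte distribution is near-random, as
    compressed or encrypted data typically is. Plain text and most
    structured documents score far lower."""
    return shannon_entropy(payload) >= threshold
```

It's a blunt instrument - legitimate ZIP attachments trigger it too - but unlike signature matching it at least notices that *something* opaque is leaving the network.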
Well, I can't think of anything that DLP offers in this realm.
Clobbering Botnets with DLP
In general, DLP makes for a very poor anti-botnet technology. DLP is adequate at detecting the simple stuff - e.g. a user sending an email with 10,000 credit card details - but is ill positioned to detect an automated bot agent obfuscating or encrypting a compressed file of corporate secrets.
In fact, as far as I'm concerned, I can't really see a reason for DLP existing as a separate security technology anyway. Existing IPS technologies and signatures already include just about all of the data leakage detection features.
That all said, DLP is probably adequate for detecting stupid user mistakes, but useless for combating professional criminals - whether they're botnet operators or insider threats.
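To make that "simple stuff" concrete, here's a toy sketch of the structured-data matching a DLP engine performs, pairing SSN and card-number patterns with a Luhn checksum to cut false positives on card-like digit runs. The regexes are simplified illustrations, not any product's actual signatures:

```python
import re

# Simplified patterns: SSNs (ddd-dd-dddd) and 13-16 digit card numbers,
# optionally separated by spaces or dashes.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum - every real payment card number passes it."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_cleartext(text: str) -> dict:
    """Count SSN-shaped and Luhn-valid card-shaped strings in a
    cleartext stream - which is exactly all DLP can see."""
    cards = [m.group() for m in CARD_RE.finditer(text) if luhn_valid(m.group())]
    return {"ssn": len(SSN_RE.findall(text)), "card": len(cards)}
```

Note that this only works on cleartext input - feed it the encrypted blob described above and it matches nothing, which is the whole point.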
Friday, March 19, 2010
Here's some of this morning's email apology:
I am mortified, as is everyone in our marketing team, that this has happened.
The messages were not posted on that guy's blog by an employee of Sophos, but by a worker at an external company hired by our marketing department.
We have called the company concerned in for a meeting today, and will be reading the riot act to them. Furthermore, we will be ensuring that this kind of activity stops immediately, as it runs counter to everything we believe in as a computer security company.
There's enough junk on the internet already - we don't need firms representing computer security companies adding to the problem with such inane and unprofessional posts.
We strive to be much much better than this, and on this occasion things went badly wrong. I'm genuinely sorry.
Just so you know, we are going to put better processes in place so that third party agencies understand what Sophos does and doesn't find acceptable in promoting our brand.
Thanks for the quick response Sophos. Apology accepted.
Thursday, March 18, 2010
Earlier this week, a customer asked me what was the smartest and most sophisticated thing I’d seen malware authors doing recently. He was probably expecting me to mention some new toolset feature such as auto-cracking CAPTCHAs for webmail spamming or the custom advertiser routines for redirecting in-browser advertising… instead, I discussed the new host-locked malware versions that a number of professional botnet operators are experimenting with.
Three years ago I wrote a paper covering the one-of-a-kind exploitation techniques that were being adopted by drive-by-download distributors and exploit delivery systems. The paper – X-Morphic Exploitation – covers the generation of one-off “custom” exploits and malware created for each potential victim visiting the attacker’s malicious Web site. One of the techniques covered related to the creation and delivery of serial variant malware, where each unique sample was only ever served to a single victim – all as a means of defeating signature-based protection technologies (and, to a lesser extent, bulk analysis of malware samples).
Well, as you’d expect, the threat has moved on. While X-Morphic exploit delivery platforms have grown more and more popular over the last three years, it would seem that botnet builders have adopted an additional new (and rather powerful) technique that makes it even more difficult for malware analysts and bulk analysis tools to deal with their malicious bot agents – and it’s taken right out of the commercial anti-piracy cookbook.
To explain what’s going on, it’s probably easiest to step through a botnet infection that makes use of the new technique:
- The would-be victim/user is browsing the Internet and stumbles upon a drive-by-download Web page. The page cycles through a number of Web browser vulnerabilities – locates an exploit that will work against the user’s browser – exploits the vulnerability – inserts a shellcode payload and causes the newly introduced (and hidden) process(es) to execute.
- A hidden process downloads a “dropper” file on to the victim’s computer, and causes it to execute. The dropper may be a custom package created just for this victim (i.e. X-Morphic generated) or one that is being served to all potential victims for that day/week.
- The dropper unpacks itself – unraveling all of the tools, scripts and malware agents it needs on to the victim’s computer – then proceeds to hide the malicious payload components (e.g. disabling the host’s antivirus protection, turning off auto-updates, modifying startup processes, root-kitting the botnet agent), cleans itself up by removing all redundant files and evidence of the installation activities, and finally starts up the actual botnet agent.
- The first time the botnet agent starts up, it does a number of checks to see whether or not it has Internet access (e.g. deciding whether a corporate proxy is in use) and whether or not it’s running on a “real” victim’s computer (i.e. that it’s not running in a sandbox or virtualized environment – which would indicate that someone is trying to analyze and study the malware itself). If everything looks good and the coast is clear (so to speak), the botnet agent does a quick system-level inventory of the victim’s computer (e.g. CPU ID, HDD serial number, network card MAC, BIOS version, etc.) and then makes its first connection to the botnet’s Command and Control (CnC) – registering the victim’s computer as a member of the botnet, and sending through the unique system inventory data.
- In response, the botnet CnC immediately sends an updated bot agent to the victim’s computer – uninstalling the old agent, and installing the new one. However, this new agent is specifically created and “locked” to the victim’s computer – i.e. it is unique to this particular victim and will not run on any other computer.
- Once the new “locked” bot agent is installed, it connects to a different CnC server – and the victim’s computer is now fully incorporated into the criminal’s botnet, and under their remote control.
Those last three steps are what’s new and innovative, and what’s going to spell ruin for many of the most important malware analysis tools and techniques antivirus vendors use to combat the malware plague.
By infecting their victim’s computer with a unique and “locked” version of the bot agent (or malware), and ensuring that it will only ever run on that particular victim’s computer, any samples that may eventually be acquired by the antivirus vendor(s) won’t actually be useful to them. Automated analysis systems that take in malware samples from spam traps, web crawlers, etc. and execute them in virtual environments or sandboxes will not yield the real botnet agent for study, nor details of the true botnet CnC. Meanwhile, malware samples obtained from forensic retrieval processes or submitted by antivirus customers will not work (e.g. they will either not function maliciously or not execute at all in an analysis environment) – because they are encoded and locked specifically to the victim’s machine.
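Conceptually, the host-locking check is simple. Here's a rough Python sketch of the idea, where the inventory fields and the SHA-256 fingerprint are my own illustrative assumptions; real bot agents implement this in obfuscated native code:

```python
import hashlib

def host_fingerprint(cpu_id: str, hdd_serial: str, mac: str,
                     bios_version: str) -> str:
    # A stable hash over the hardware inventory collected on first run.
    inventory = "|".join([cpu_id, hdd_serial, mac, bios_version])
    return hashlib.sha256(inventory.encode("utf-8")).hexdigest()

def agent_may_run(embedded_fingerprint: str, cpu_id: str, hdd_serial: str,
                  mac: str, bios_version: str) -> bool:
    # A host-locked agent re-derives the fingerprint at every startup and
    # refuses to execute on any machine other than the one it was built for.
    current = host_fingerprint(cpu_id, hdd_serial, mac, bios_version)
    return current == embedded_fingerprint
```

Run the same sample inside a sandbox and the fingerprint no longer matches, so the agent simply does nothing - which is exactly why bulk automated analysis of these samples yields nothing useful.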
This “locking” process isn’t new in itself. Many commercial software vendors use this technique – for example, Microsoft uses it for detecting pirated versions of their operating system and enforcing their licensing policy. In fact, many manufacturers of DIY malware construction kits use the same techniques to protect their toolkits both from being pirated and from falling into the hands of security vendors. However, in this case the botnet operators are using it as a technique to ensure that samples of their malicious bot agents are useless to antivirus vendors.
Sure, a skilled malware reverse engineer could manually work around this kind of software locking mechanism, but it’s a slow and tedious process even for the most experienced folks – and manual analysis done in this way doesn’t remotely scale to counter this threat. That said, if the (real) botnet agent also sends an updated system inventory to the botnet CnC server each time it connects, and the “signature” no longer matches the one that the botnet operator originally associated with that particular botnet agent, then the botnet operator will know that someone is tampering with their software and can disconnect the victim from the botnet (or perhaps launch an attack against the investigator’s/analyst’s computer).
As botnet operators (and general malware authors) further adopt this kind of victim-specific locking practice to protect their malware investment, and as the sophistication of the locking increases (as it inevitably will), the antivirus industry is going to have to rethink many of the techniques it currently relies upon for sample analysis and signature generation. There is no easy option for countering this new criminal practice.