In what sometimes feels like a past life, after a heavy day dealing with botnets, I fondly remember many of the covert and physical penetration tests I've worked on or had teams engaged in.
Depending upon the goals of the penetration test, the work included things like installing physical keyloggers on the receptionist's computer (doing this surreptitiously while engaged in conversation with the receptionist - hands dangling down the back of the computer...) in order to capture emails and physical door entry codes; dropping a little wireless Compaq/HP iPAQ in the plant pot for a day of wireless sniffing; dropping "malware"-infected USB keys in the office car park in the morning (and waiting for the "finders" to check them out on their office computers by lunchtime); and pretending to be official fire extinguisher inspectors to get access (and a little alone-time) to their server farm.
Anyhow, today I spotted an interesting gadget that would have been pretty helpful on many of these physical engagements - The PlugBot. It's a wireless PC inside what looks like a plug adapter.
If you're not a penetration tester, perhaps you should read about it anyway - it's something to "keep an eye on" within your own organization.
Monday, December 27, 2010
Friday, December 10, 2010
Google Maps for Command & Control
You've probably heard about the protests going on in London concerning the proposed uptick in university fees and the way in which some of the actions (from both sides) have gotten out of hand.
Well, it appears that Google Maps has been (and is being) used by some for command and control of the various protest actions - tracking where the police, barricades, ambulances, etc. are.
It's an interesting use of the mapping technology.
Student protesters use Google Maps to outwit police on the Metro.co.uk
Wednesday, December 8, 2010
Threat Landscape in 2011
OK, so it's that time of the year again and all the security folks are out making predictions. And, as usual, I have a number of inbound calls asking me to pump out the same. Not necessarily "the same" predictions though - since why would marketing and PR teams want to pimp "the same" predictions as everyone else... that'll never get mentioned in the press... ideally a few predictions about how the world will come to an end, and preferably in a way that no one has thought of before. You know the sort of prediction I mean - "By the end of 2011, cyber criminals will have full control of the electronic systems that control sewer pipes in the US and will be extorting cities for millions of dollars - or else they flood the city and cause massive deaths from typhoid and plague."
Cynicism in the run up to Christmas? Bah-humbug :-)
Anyway, despite all that, "predictions" can be pretty useful - but only if they're (mostly) correct and actionable. So, with that in mind, I've posted some "expectations" (rather than predictions) for 2011. I think it's important to understand the trends behind certain predictions. A prediction that comes from nowhere, with no context and no qualification, is about as helpful as a TSA officer.
Here are the 2011 predictions (aka expectations) I posted on the Damballa blog:
- The cyber-crime ecosystem will continue to add new specialist niches that straddle the traditional black and white markets for both the tools they produce and information they harvest. The resulting gray-markets will broaden the laundering services they already offer for identities and reputation.
- Commercial developers of malware will continue to diversify their business models and there will be a steady increase in the number of authors that transition from “just building” the malware construction kits to running and operating their own commercial botnet services.
- The production of “proof-of-concept” malware, hitherto limited to boutique penetration testing companies, will become more mainstream as businesses that produce mechanical and industrial goods find a greater need to account for threats that target their physical products or production facilities.
- Reputation will be an increasingly important factor in why an organization (or the resources of that organization) will be targeted for exploitation. As IP and DNS reputation systems mature and are more widely adopted, organized cyber-criminals will be more cognizant of the reputation of the systems they compromise and seek to leverage that reputation in their evasion strategies.
- The pace at which botnet operators update and reissue the malware agents on their victims’ computers will continue to increase. In an effort to avoid dynamic analysis and detection technologies deployed at the perimeter of enterprise networks or operating within the clouds of anti-virus service providers, criminal operators will find themselves rolling out new updates every few hours (which isn’t a problem for them).
- Malware authors will continue to tinker with new methods of botnet control that abuse commercial web services such as social networks sites, micro-blogging sites, free file hosting services and paste bins – but will find them increasingly ineffective as a reliable method of command and control as the pace in which takedown operations by security vendors increases.
- The requirement for malware to operate for longer periods of time in a stealthy manner upon the victim’s computer will become ever more important for cyber-criminals. As such, more flexible command and control discovery techniques – such as dynamic domain generation algorithms (a minimal sketch follows this list) – will become more popular in an effort to thwart blacklisting technologies. As the criminals mature their information laundering processes, the advantage of long-term host compromises will be evident in their monetary gains.
- The rapidity in which compromised systems are bought, sold and traded amongst cyber-criminals will increase. As more criminals conduct their business within the federated ecosystem, there will be more opportunity for exchanging access to victim computers and greater degrees of specialization.
- Botnet operators who employ web-based command and control portals will enhance the security of both the portal application and the data stolen from their botnet victims. Encryption of the data uploaded to the data drop sites will increase and utilize asymmetric cryptography in order to evade security researchers who reverse engineer the malware samples (a short sketch of this pattern also follows this list).
- The requirement for “live” and dynamic control of victims will increase as botnet operators hone new ways of automatically controlling or scripting repeated fraud actions. Older botnets will continue their batch-oriented commands for noisy attacks, but the malware agents and their command and control systems will grow more flexible even if they aren’t used.
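Since a couple of the points above are fairly technical, here's a quick illustration of the first of them - dynamic domain generation. The sketch below is a minimal, made-up example of a date-seeded DGA: the seed string, hash choice, label length, TLD rotation and daily count are my own illustrative assumptions, not taken from any real malware family.

```python
# Minimal, illustrative date-seeded domain generation algorithm (DGA).
# The seed format, hash, TLDs and count are hypothetical - real malware
# families each use their own (often obfuscated) variations.
import hashlib
from datetime import date

def generate_domains(day: date, count: int = 20) -> list[str]:
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        label = digest[:12]                       # 12 hex chars as the domain label
        tld = ["com", "net", "info", "org"][i % 4]  # rotate through a few TLDs
        domains.append(f"{label}.{tld}")
    return domains

if __name__ == "__main__":
    # Both the infected host and the operator can compute today's candidates.
    for d in generate_domains(date.today())[:5]:
        print(d)
```

The defender's problem should be obvious: the operator only needs to register one of the day's candidates ahead of time, while a static blacklist would have to anticipate all of them - which is one reason reputation systems lean on registration and DNS-behaviour features rather than fixed lists.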
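And on the asymmetric-cryptography point above: the attraction for the criminal is that the malware only needs to carry the operator's public key, so reverse engineering the sample (or even seizing the drop site) doesn't expose the harvested data. Below is a hedged sketch of the common hybrid pattern - a fresh symmetric key per upload, wrapped with RSA - using the Python `cryptography` package purely for illustration; it isn't a description of any specific botnet's implementation.

```python
# Hybrid encryption sketch: data is encrypted with a fresh symmetric key,
# and only that key is RSA-encrypted to the operator's public key.  A
# researcher holding the malware sample (which embeds only the public key)
# cannot recover the uploaded data.  Illustrative only.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Operator side: key pair generated once; only the public key ships in the malware.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# "Victim" side: encrypt harvested data before uploading to the drop site.
session_key = Fernet.generate_key()
blob = Fernet(session_key).encrypt(b"stolen credentials ...")
wrapped_key = public_key.encrypt(session_key, OAEP)

# Operator side: only the holder of the private key can unwrap and read the data.
recovered = Fernet(private_key.decrypt(wrapped_key, OAEP)).decrypt(blob)
assert recovered == b"stolen credentials ..."
```

Only the operator, keeping the private key offline, can unwrap the session key - which is exactly what makes the captured drop-site data useless to an investigator.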
Sunday, December 5, 2010
Reputation or Exploit?
The other day I was blogging on the Damballa site about the principles behind dynamic reputation systems - Building A Dynamic Reputation - and trying to answer a question that came up over whether dynamic reputation systems can replace IPS.
You'll find some comments on the other blog, but I wanted to add some more thoughts here - based upon some thoughts shared by others on the topic.
I guess the issue lying at the heart of the question is whether, by implementing a blocking (or filtering) policy based upon the findings/classification of a dynamic reputation system, you'd be gaining better protection than having implemented a stand alone IPS.
Two issues come into play in the decision: how "complete" is the dynamic reputation system, and how "reliable" is the IPS?
As I said in the original posting - advanced dynamic reputation systems have been coming along in leaps and bounds. We're not talking about some static blacklist here and neither are we limiting things to classic IP reputation systems that deal with one threat category at a time. Instead we're talking about systems that take as inputs dozens of vetted threat detection and classification lists, realtime feeds of streaming DNS/Domain/Netflow/Registration/SpamTrap/Sinkhole/etc. data and advanced machine learning algorithms.
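To make that a little more concrete, here's a toy sketch of how such a system might fold several of those evidence sources into a single score. The feature names, weights and bias are invented for illustration only - a production system would learn them from labelled data at a much larger scale.

```python
# Toy dynamic reputation score: weighted evidence from several feeds/features
# combined into a single probability-like "badness" score.  The features and
# weights below are invented for illustration.
import math

WEIGHTS = {
    "on_vetted_blacklist": 4.0,
    "resolves_near_known_c2": 2.5,   # shares IP space/name servers with known C&C
    "domain_age_days_lt_7": 1.5,     # newly registered domains are riskier
    "seen_in_spamtrap": 2.0,
    "sinkhole_contact": 3.0,
}
BIAS = -5.0  # prior: most domains are benign

def reputation_score(features: dict[str, bool]) -> float:
    """Return a 0..1 badness score from boolean evidence flags."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if features.get(name))
    return 1.0 / (1.0 + math.exp(-z))   # logistic squashing

print(reputation_score({"domain_age_days_lt_7": True, "sinkhole_contact": True}))
# ~0.38 - suspicious, perhaps not yet block-worthy
print(reputation_score({"on_vetted_blacklist": True, "seen_in_spamtrap": True}))
# ~0.73 - likely block at the perimeter
```

The point isn't the arithmetic - it's that evidence from many independent feeds accumulates into a score you can act on, rather than a binary listed/not-listed verdict.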
From experience (and empirical evidence), blocking the things that a dynamic reputation system says are bad or very suspicious at the network perimeter appears to outperform IPS - if the count of victim machines is anything to go by.
One of the key failings of IPS is that its reputation is better than its performance. What I mean by that is that an IPS is limited to its signatures/algorithms for detecting known threat profiles and exploit techniques. These are not all-encompassing - and you'll normally only find the first "in-the-wild" exploit for a vulnerability covered (or exploits that get used by popular commercial hacking tools and IPS testing agencies) - rather than all the obfuscation and evasion techniques. You may remember the blog I did a little while back about the commercial exploit testing services used by the bad guys - such as Virtest.com.
So, here's my thinking. It's better to block known-bad and provably dangerous/suspicious servers (independent of, or restricted to, a particular protocol - depending upon your tolerance for pain) than to hope that your IPS is going to stop some (hopefully) previously-seen permutation of a particular exploit being served by the attacking server.
Some may argue that you're still at risk from servers that are unknown to a dynamic reputation system. Are you though? Think of it this way. You have a dynamic reputation system that is taking live data feeds etc. (as described above) for the entire Internet. If a server (or service) has never been seen and doesn't have a reputation score, then it's already suspicious and could probably be blocked for the time being.
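In other words, the perimeter policy I'm describing looks something like the sketch below - block confirmed-bad scores, and treat never-before-seen destinations as suspect by default. The lookup function, example scores and threshold are hypothetical placeholders, not any vendor's API.

```python
# Illustrative perimeter policy: consult the reputation store for every
# outbound destination; block high scores, and treat destinations the
# system has never seen as suspect by default.
from typing import Optional

BLOCK_THRESHOLD = 0.7  # illustrative cut-off

def lookup_score(domain: str) -> Optional[float]:
    """Placeholder for a query against a dynamic reputation system."""
    known = {"example-bank.com": 0.02, "a1b2c3d4e5f6.info": 0.91}
    return known.get(domain)

def perimeter_decision(domain: str) -> str:
    score = lookup_score(domain)
    if score is None:
        # Never observed in any of the Internet-wide feeds: suspicious by default.
        return "block (no reputation yet - revisit once the system has seen it)"
    if score >= BLOCK_THRESHOLD:
        return "block (known bad / highly suspicious)"
    return "allow"

for d in ("example-bank.com", "a1b2c3d4e5f6.info", "never-seen-before.biz"):
    print(d, "->", perimeter_decision(d))
```

Whether you hard-block or merely alert on the "never seen" case is, again, a tolerance-for-pain decision.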
Defense in depth is still a good option though!