The world is abuzz this week with some flaming malware – well, “Flame” is the family name if you want to be precise. The malware package itself is considerably larger than what you’ll typically bump into, but the interest it is garnering from the media and antivirus vendors has more to do with the kinds of victims that have sprung up – victims mostly in the Middle East, including Iran – and a couple of vendors claiming the malware is related to Stuxnet and Duqu.
A technical report on sKyWIper was released by the Laboratory of Cryptography and Systems Security (CrySys Lab) over at the Budapest University of Technology and Economics
yesterday covering their analysis of the malware – discovered earlier
in May 2012 – and they also drew the conclusion that this threat is
related (if not identical) to the malware described by the Iran National
CERT (MAHER) – referred to as Flamer. Meanwhile, Kaspersky released some of their own analysis of “Flame” on Monday and created a FAQ based upon their interpretation of the malware’s functionality and motivations.
There is, of course, some debate starting about the first detection of Flamer. Given the malware’s size and number of constituent components, it shouldn’t be surprising to hear that some pieces of it may have been detected as far back as March 1st, 2010 – such as the file “~ZFF042.TMP” (also seen as MSSECMGR.OCX and 07568402.TMP) – analyzed by Webroot and attributed to a system in Iran.
While
it’s practically a certainty that the malware was created and infected a
number of victims before it was “detected” in May, I’d caution against
some of the jumps people are making related to the attribution of the
threat.
Firstly, this behemoth of a malware pack is constructed of a lot of
different files – many of which are not malicious; with the package
including common library files (such as those necessary for handling
compression and video capture) as well as the Lua
virtual machine. Secondly, when you’re limited to an 8.3 file naming
convention, even malicious files are likely to have name collisions –
resulting in many spurious associations with past, unrelated, threats if
you’re googling for relationships. And finally, why build everything from scratch? It’s not like malware authors feel honor-bound to adhere to copyright restrictions or refrain from stealing code from other malware authors – nowadays we see an awful lot of code recycling and outright theft as criminals hijack the best features from one another.
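To put a rough number on that collision point, here’s a back-of-the-envelope sketch (the filename pattern is invented for illustration – it isn’t Flame’s actual name generator): with so few characters to play with, randomly generated temp names from unrelated families collide surprisingly often – the classic birthday problem.

```python
import math

# Back-of-the-envelope birthday-problem estimate: probability that at least
# two of n independently generated 8.3-style temp names collide, given a
# pattern with k equally likely names. The pattern below is illustrative.
def collision_probability(n: int, k: int) -> float:
    if n > k:
        return 1.0
    # P(all unique) = k/k * (k-1)/k * ... * (k-n+1)/k, computed in log space
    log_p_unique = sum(math.log((k - i) / k) for i in range(n))
    return 1.0 - math.exp(log_p_unique)

# Suppose a generator emits "~" + 3 letters + 3 digits + ".TMP"
# (shaped like "~ZFF042.TMP"): 26**3 * 1000 = 17,576,000 possible names.
k = 26**3 * 1000
for n in (100, 1_000, 10_000):
    p = collision_probability(n, k)
    print(f"{n:>6} random names -> collision probability {p:.2%}")
```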
As you’d expect from a bloated malware package developed by even a
marginally capable hacker, there are a lot of useful features included
within. It’s rare to see so many features inside a single malware sample
(or family), but not exceptional. As Vitaly Kamluk of Kaspersky stated –
“Once a system is infected, Flame begins a complex set of operations,
including sniffing the network traffic, taking screenshots, recording
audio conversations, intercepting the keyboard, and so on,” – which is
more typical of an attack kit than of a single piece of malware. What do I mean by “attack kit”? Basically, a collection of favorite tools and scripts used by hackers to navigate a compromised host or network. In the commercial pentesting game, the consultant will normally have a compressed file (i.e. the “attack kit”) that they can shuttle across the network and drop on any host they gain access to. That file contains all of the tools they’ll need to unravel the security of the (newly) compromised host and harvest the additional information required to navigate onto the next targeted device. It’s not rocket science, but it works just fine.
I’m sure some people will be asking whether the malware does anything
unique. From what I can tell (without having performed an exhaustive
blow-by-blow analysis of the 20MB malware file), the collection of files
doesn’t point to anything not already seen in most common banking
Trojans or everyday hacking tools. That doesn’t make it less dangerous –
it merely reflects the state of malware development, where “advanced”
features are standard components and can be incorporated through
check-box-like selection options at compile time.
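As a purely hypothetical sketch of that check-box model (the module names below are invented, not taken from Flame or any real builder kit), feature selection at build time can be as mundane as a set of boolean flags:

```python
from dataclasses import dataclass

# Hypothetical builder-kit feature menu -- illustrative only; it just shows
# how "advanced" capabilities become check-box options assembled at build time.
@dataclass
class BuildConfig:
    keylogger: bool = False
    screenshots: bool = False
    audio_capture: bool = False
    network_sniffer: bool = False
    usb_autorun_spread: bool = False

    def selected_modules(self) -> list[str]:
        # Whatever is ticked gets stitched into the payload.
        return [name for name, enabled in vars(self).items() if enabled]

config = BuildConfig(keylogger=True, screenshots=True, network_sniffer=True)
print("Modules baked into this build:", config.selected_modules())
```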
For malware of this ilk, automated propagation of infections (and infectious material) is important. Flame includes a number of propagation mechanisms – including the commonly encountered USB-based autorun and .lnk vulnerabilities observed in malware families like Stuxnet (and just about every other piece of malware since the disclosure of the successful .lnk infection vector), and that odd print spooler vulnerability – which helps date the malware package. By that I mean it helps date the samples that have been recovered – as there is currently no evidence of what the malware package employed prior to these recent disclosures, or what other variants are circulating in the wild (and have not been detected by antivirus products to date).
Is the use of these exploits for propagation evidence that Stuxnet, Duqu and Flame were created and operated by the same organization? Honestly, there’s nothing particularly tangible here to support that conclusion. Like I said before, criminals are only too happy to steal and recycle others’ code – and this is incredibly common when it comes to the use of exploits. More importantly, these kinds of exploits are
incorporated as updates into distributable libraries, which are then
consumed by malware and penetration tool kits alike. Attack kits similar
to Flame are constantly being updated with new and better tool
components – which is why it will be difficult to draw out a timeline
for the specific phases of the threat.
That all said, if the malware isn’t so special – and it’s a
hodgepodge of various public (known) malicious components – why has it
eluded antivirus products in the victim regions for so long? It would be
simple to argue that these regions aren’t known for employing
cutting-edge antimalware defenses and aren’t well served with
local-language versions of the most capable desktop antivirus suites,
but I think the answer is a little simpler than that – the actors behind this threat have successfully managed their targets and victims, keeping a low profile and not going after the masses or complex setups.
This management aspect is clearly reflected in the kill module of the
malware package. For example, there seems to be a module named “browse32” that’s designed to search for all evidence of compromise (e.g. malware components, screenshots, stolen data, breadcrumbs, etc.) and carefully remove it. While many malware families employ a cleanup
capability to hide the initial infection, few include the capability of
removing all evidence on the host (beyond trashing the entire computer).
This, to my mind, is more reflective of a tool set designed for human
interactive control – i.e. for targeted attacks.
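To make “carefully remove” concrete, here’s a generic secure-delete sketch – an assumption about the general technique, not reverse-engineered browse32 logic: overwrite a file’s contents before unlinking it, so the bytes can’t be trivially carved back off the disk.

```python
import os

# Generic secure-delete sketch -- an illustration of the technique, not
# browse32's actual routine. (Journaling filesystems and SSD wear-leveling
# blunt this in practice.)
def wipe_file(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())       # force the overwrite onto disk
    os.remove(path)

# A cleanup module would walk a manifest of its own artifacts
# (components, screenshots, stolen data) and wipe each in turn.
```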
Wednesday, May 30, 2012
Detecting Malware is Only One Step of Many
Dealing with the malware threat isn’t a Boolean problem anymore. By
that I mean being able to detect (and block) a malicious binary isn’t
the conclusion to the threat, but rather it’s a perspective on the
status of the threat – a piece of evidence tied to the lifecycle of a
breach.
Following on from yesterday’s blog covering the Antivirus Uncertainty Principle, I believe it’s important to differentiate the ability to detect a malware binary from the actions and status of the malware in operation. Antivirus technologies are effective tools for detecting malicious binaries – either at the network layer or through host-based file inspection – but their presence is just one indicator of a bigger problem.
For example, consider the discovery of a used syringe lying on the pavement outside your office entryway. It is relatively easy to identify a syringe from several yards away, and the closer you get to it, the easier it is to determine whether it has been used – but it’ll take some effort and a degree of specialization to determine whether the syringe harbors an infectious disease.
That’s basically the role of commercial antivirus products – detecting and classifying malware samples. However, what you’re not going to be able to determine is whether anyone was accidentally stuck by the needle, or whether anyone is showing symptoms of the infectious disease it may have harbored. To answer those questions you’ll need a different, complementary approach.
In the complex ballet of defense-in-depth protection deployment, it is critical that organizations be able to qualify and prioritize actions in the face of the barrage of alerts they receive daily. When it comes to the lifecycle of a breach and construction of an incident response plan, how do you differentiate between threats? Surely a malware detection is a malware detection, is a malware detection?
Right off the bat, the detection of malware isn’t the same as the detection of an infection. The significance of a malware detection alert coming from your signature-based SMTP gateway is different from one coming from your proxy ICAP-driven dynamic malware analysis appliance, which is different again from an alert coming from the desktop antivirus solution. The ability to qualify whether the malware sample made it to the target is significant. If the malware was detected at the network level and never made it to the host, that’s a “gold star” for you. If you detected it at the network level and it still made it to the host, but the host-based antivirus product caught it, that’s a “bronze star”. Meanwhile, if you detected it at the network level and didn’t get an alert from the host-based antivirus, that’s a… well, it’s not going to be a star, I guess.
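As a rough sketch of that qualification logic (the boolean event fields are invented for illustration – real alert pipelines carry far richer context), correlating where a sample was seen tells you whether there’s anything left to respond to:

```python
# Sketch of the gold-star / bronze-star qualification described above.
def triage(seen_at_network: bool, blocked_at_network: bool,
           detected_on_host: bool) -> str:
    if seen_at_network and blocked_at_network:
        return "gold star: never reached the host; no response needed"
    if seen_at_network and detected_on_host:
        return "bronze star: host AV caught it; verify the cleanup"
    if seen_at_network:
        return "no star: reached the host undetected -- investigate now"
    return "host-only detection: ask how it slipped past the network layer"

# Example: seen on the wire, not blocked, and the desktop AV stayed silent.
print(triage(seen_at_network=True, blocked_at_network=False,
             detected_on_host=False))
```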
Regardless of what detection alerts you may have received, it’s even more important to differentiate between observing a malware binary and the identification of a subsequent infection. If the malware was unable to infect the host device, how much of a threat does it represent?
In the real world where alerts are plentiful, correlation between vendor alerts is difficult, and incident response teams are stretched to the breaking point, malware detections are merely a statistical device for executives to justify the continued spend on a particular protection technology. What really matters is how you differentiate and prioritize between all the different alerts – and move from malware detection to infection response.
Take, for example, a large organization that receives alerts that 100 devices within its network have encountered Zeus malware in a single day. First of all, “Zeus” is a name for several divergent families of botnet malware used by hundreds of different criminal operators around the world – it comes in a whole bunch of different flavors and capabilities, and is used by criminals in all sorts of ways. “Zeus” is a malware label – not a threat qualification. But I digress…
Let’s say that your network-based antivirus appliance detected and blocked 40 of those alertable instances (statistically signature-based antivirus solutions would probably have caught 2 of the 40, while dynamic malware analysis solutions would catch 38 of the 40). From an incident responder’s perspective there was no threat and no call to action from these 40 alerts.
That leaves 60 Zeus samples that made it to the end device. Now let’s say that 5 of those were detected by the local-host antivirus product and “removed”. Again, from an incident responder’s perspective: no harm, no foul.
Now the interesting part – what if you could differentiate between the other 55 Zeus malware installations? How does that affect things?
If we assume you’ve deployed an advanced threat detection system that manages to combine the features of malware binary detection and real-time network traffic analysis with counterintelligence on the criminals behind the threat, you could also identify the following communications:
- 5 of the infected devices are attempting to locate the command and control (C&C) infrastructure of a botnet that was shut down ten months ago. While the Zeus malware may be caching stolen credentials and data on the victim’s device, it cannot ever pass them to the criminals.
- 20 of the infected devices are attempting to reach old and well known criminal C&C infrastructure; however your content filtering and IPS technologies operate with blacklists that are now blocking these particular domains.
- 8 of the Zeus installations are old and not “proxy-aware”, and are incapable of reaching the bad guys’ C&C while those devices are within your network.
- 6 of the Zeus infected devices are communicating with an external C&C that has been “sinkholed” and is under the control of a security vendor somewhere. While the original criminal operators no longer have control of the botnet, the infected devices are still managing to navigate your network defenses and upload stolen data somewhere – and there’s no guarantee that the remote security vendor isn’t selling that intelligence on to someone else.
- Of the remaining 16 Zeus-infected devices that are successfully navigating your network defenses and engaging with the remote criminals, 3 belong to a botnet run by a Ukraine-based criminal organization specializing in banking Trojans, 6 belong to a botnet operated by Netherlands-based criminals who focus on click-fraud and DNS manipulation, and 7 belong to a China-based botnet operator that targets US financial sector organizations.
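Tallying those buckets (a toy sketch using the counts from the scenario above) makes the prioritization concrete – only the last two categories still move data off your network:

```python
# Toy tally of the Zeus scenario above; counts and categories come straight
# from the narrative, the code just makes the triage explicit.
alerts = 100
blocked_at_network = 40   # never reached a host
removed_by_host_av = 5    # landed, but host AV cleaned up

infections = {
    "C&C infrastructure shut down months ago (dormant)":  5,
    "C&C domains blocked by current blacklists":          20,
    "not proxy-aware, cannot call out from your network":  8,
    "sinkholed C&C run by a security vendor (leaking)":    6,
    "live criminal C&C (the real call to action)":        16,
}

# Sanity check: everything not blocked or removed is accounted for.
assert alerts - blocked_at_network - removed_by_host_av == sum(infections.values())

for status, count in infections.items():
    print(f"{count:>3}  {status}")

leaking = infections["sinkholed C&C run by a security vendor (leaking)"]
live = infections["live criminal C&C (the real call to action)"]
print(f"--> {live + leaking} of {alerts} alerts still move data off-network")
```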
Spotting a used syringe is one thing. It’s quite another to identify and support someone who’s been infected with the disease it contained.
Malware Uncertainty & False Positives
The antivirus industry has been trying to deal with false positive detection issues for a long, long time – and it’s not going to be fixed anytime soon. To better understand why, the physicist in me draws an analogy with Heisenberg’s Uncertainty Principle – where, in its simplest distillation, the more precisely you know where a particle is, the less precisely you can know its momentum (and vice versa) – aka the “observer effect”. In the malware detection world, the more positive you are that something is malware, the less likely you’ll catch other malware. And the reverse: the better you are at detecting a broad spectrum of malware, the less positive you can be that any given detection is actually malware.
If that particular geek-flash doesn’t make sense to you, let me offer an alternative insight. The highest-fidelity malware detection system is going to be signature based. The more exacting the signature (which optimally would be a unique hash value for a particular file), the greater the precision in detecting that particular malicious file – however, that same precision means other malicious files that don’t meet the exacting rule of the signature will slip by. On the other hand, a set of behaviors that together label a binary file as malicious is less exacting, but able to detect a broader spectrum of malware. The price for that flexibility and increased capability of detecting bad stuff is an increased probability of false positive detections.
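A toy sketch of that trade-off (the “known bad” digest is a placeholder and the heuristic traits are invented – real engines are vastly more involved): the exact-match detector never false-positives but only catches the one file it knows about, while the behavioral scorer catches variants at the price of occasionally flagging benign software.

```python
import hashlib

# Placeholder digest standing in for one specific known-bad sample.
KNOWN_BAD_SHA256 = {"0" * 64}

def signature_detect(file_bytes: bytes) -> bool:
    # Exact hash match: effectively zero false positives, but any variant
    # (even a single flipped byte) slips straight past.
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

SUSPICIOUS_TRAITS = (b"keylog", b"inject", b"CreateRemoteThread")

def heuristic_detect(file_bytes: bytes, threshold: int = 2) -> bool:
    # Behavioral-style scoring: catches a broad spectrum of variants, but a
    # benign debugger or automation tool can legitimately trip the same
    # traits -- hence the false positives.
    score = sum(trait in file_bytes for trait in SUSPICIOUS_TRAITS)
    return score >= threshold
```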
In physics there’s a quantity, ℏ – the reduced Planck constant – that acts a bit like the fulcrum of a teeter-totter (“seesaw” for the non-American rest-of-the-world); it’s also a fundamental constant of our universe, like the speed of light. In the antivirus world’s Uncertainty Principle, the fulcrum isn’t a universal constant; instead, you could probably argue that it’s a function of cash. The more money you throw at the uncertainty problem, the more gravity-defying the teeter-totter appears to become.
That may all sound a little discomforting. Yes, the more capable your antivirus detection technologies are in detecting malware, the more frequently false positives will crop up. But you should also bear in mind that, in general, the overall percentage of false positives tends to go down (if everyone is doing things properly). What does that mean in reality? If you’re rarely encountering false positives with your existing antivirus defenses, you’re almost certainly missing a whole lot of maliciousness. It would be nice to say that if you’re getting a whole lot of false positives you must, by corollary, be detecting (and stopping) a shed-load of malware — but I don’t think that’s always the case; it may be because you’re just doing it wrong. Or, as the French would say – C’est la vie.