Saturday, January 30, 2010

Network ADS - Playing at Botnet Detection

With botnets and, more recently, APTs plastered across the news, you'll struggle to find a security vendor that hasn't spent a furious couple of weeks repositioning their preemptive detection technologies as "anti-botnet" or "anti-APT".

Needless to say, digging a little deeper into these technologies - beyond their cursory marketing spin - is probably a good idea, especially if your company executives are thrashing about looking to take any steps that'll keep them from being the next Google-like victim and making headline news.

Yesterday I pulled together my thoughts on the use of network-based Anomaly Detection Systems (ADS) in their capacity as botnet detection tools. In a nutshell, NADS is fine for dealing with those big and noisy Internet botnets that everyone writes about in the news, but not much chop against the types of botnets normally found successfully operating within enterprise networks.

My thoughts and analysis can be found on the Damballa blog - Detecting Botnets with Network ADS - and are also cross-posted below...

-------

Many businesses have already deployed Anomaly Detection Systems (ADS) within their enterprises whether they know it or not. Most ADS technologies can be found operating at the host level – typically integrated into the popular desktop antivirus suites of the major security vendors – where they often function in a hybrid detection mode somewhere between a personal firewall and a behavioral analysis engine.

Network-based ADS (NADS), on the other hand, serve a different purpose within large enterprises. Their deployments are far fewer than host-based ADS, and they are often used by security teams to detect major changes in network activity – typically analyzing and regulating traffic flow. Optimized to view high volumes of network traffic across an enterprise as fast as possible (in real-time in many cases), they rely upon abstracted flow summaries of the traffic – exported via protocols such as NetFlow, J-Flow, NetStream, etc. – in which content visibility is sacrificed for analysis speed.
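
To make that speed-versus-visibility trade-off a little more concrete, here's a minimal sketch (Python, with made-up field names, baseline figures and file name) of the kind of volume-based check a NADS might run across exported flow records - note that it only ever sees flow metadata, never packet content:

    import csv
    from collections import defaultdict

    # Hypothetical per-host baseline learned during a "clean" training
    # window: host -> (mean bytes sent per hour, standard deviation).
    BASELINE = {"10.0.0.5": (2_000_000, 500_000)}

    def flag_anomalies(flow_csv, sigma=3.0):
        """Flag hosts whose hourly outbound volume exceeds their baseline
        by more than `sigma` standard deviations. Only flow metadata
        (src, dst, byte counts) is available - content was sacrificed
        for speed when the flow record was exported."""
        sent = defaultdict(int)
        with open(flow_csv) as f:
            for row in csv.DictReader(f):   # expects columns: src,dst,bytes
                sent[row["src"]] += int(row["bytes"])
        for host, total in sent.items():
            mean, stddev = BASELINE.get(host, (0, 0))
            if stddev and total > mean + sigma * stddev:
                print(f"ANOMALY: {host} sent {total} bytes (baseline ~{mean})")

    flag_anomalies("flows_last_hour.csv")   # hypothetical hourly export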

Over the last 2 years NADS technologies have increasingly been positioned as having an anti-botnet capability – which has caused much confusion amongst those responsible for managing ADS deployments and those responsible for enterprise-wide security. NADS do in fact have some value as an enterprise-level botnet detection tool, but their capabilities are all too often misrepresented.

How capable are NADS in detecting and mitigating botnets? What are their strengths and weaknesses? The following is a summary of my observations and experiences (garnered over the last 5+ years) in using NADS as an enterprise security technology – warts and all.

Botnet detection & mitigation strengths:

  1. A correctly configured and baselined NADS deployment is capable of detecting the high-volume attack output from certain classes of botnets. By identifying voluminous email sending (e.g. spam agent or spam proxy operation) or crafted port-specific traffic (e.g. DDoS agent operation) and tracking it back to specific hosts, the infected systems operating in this manner can often be classed as members of a botnet.
  2. Many general-purpose Internet botnet malware make use of worming capabilities to propagate around enterprise networks by exploiting unpatched software flaws in vulnerable hosts. If a NADS solution has been correctly baselined, it can be relatively easy to spot the anomalous traffic this propagating threat creates – thereby alerting security teams to a new malware outbreak.
  3. By configuring the NADS system to account for “normal working hours”, different detection thresholds can be utilized to aid in the detection of hosts that are actively communicating with botnet Command and Control (CnC) nodes. For example, if a host suddenly commences HTTP or IRC communication with an IP address located in Tibet at 3:00am, that is likely to be very suspicious.
  4. If an organization has suffered a sudden and large botnet infection, the constant polling of some botnet malware variants will quickly become apparent and will aid in the identification of the bot-infected hosts.
  5. If the NADS system supports the use of blacklists, a list of known botnet CnCs can be used as a means of tracking the volume of data that has been sent or received by enterprise hosts that are part of the botnet. Study of this logging can help reveal the scope of a breach and the types of information the criminal botnet operator is targeting (a minimal sketch of this kind of accounting follows the list).
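
For what it's worth, here's a minimal sketch of the blacklist-driven accounting described in point 5. It's Python, and the flow-record format and CnC addresses are hypothetical stand-ins:

    from collections import defaultdict

    # Hypothetical blacklist of known CnC endpoints.
    KNOWN_CNC = {"203.0.113.7", "198.51.100.23"}

    def tally_cnc_traffic(flows):
        """Sum bytes exchanged between internal hosts and blacklisted CnC
        IPs. `flows` is an iterable of (src_ip, dst_ip, bytes) tuples, as
        might be reconstructed from NetFlow-style records."""
        per_host = defaultdict(lambda: {"sent": 0, "received": 0})
        for src, dst, nbytes in flows:
            if dst in KNOWN_CNC:
                per_host[src]["sent"] += nbytes       # likely exfiltration
            elif src in KNOWN_CNC:
                per_host[dst]["received"] += nbytes   # commands/updates in
        return dict(per_host)

    # A host uploading far more than it downloads hints at the type and
    # scope of data the botnet operator is harvesting.
    flows = [("10.0.0.9", "203.0.113.7", 48_000_000),
             ("203.0.113.7", "10.0.0.9", 12_000)]
    print(tally_cnc_traffic(flows))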

Botnet detection & mitigation weaknesses:

  1. Very few botnets encountered within enterprises nowadays are noisy, spewing copious volumes of spam or participating in devastating DDoS attacks. Botnet masters have largely moved on from this activity – and will only order bot-infected hosts to operate this way if they’ve already exhausted all other value from the compromised hosts, or if they never bothered to figure out that the infected hosts were actually located within an enterprise (had they realized, they would probably have sold them to someone else for a good price). Therefore, basing detection of a botnet infection on copious volumes of attack traffic is either too late in the botnet lifecycle or was just bothersome (and not a security risk) to begin with.
  2. By basing botnet detection upon the identification of outbound malicious traffic, the enterprise security team has failed to preempt the malicious operation of the botnet and is forced to deal with the voluminous output of an ongoing attack. Detecting and mitigating the control instructions that triggered the attack would have been more efficient and less damaging.
  3. Baselining an enterprise NADS deployment – and keeping that baseline current – is almost impossible in the vast majority of businesses. New application deployments and software updates, along with roaming users and peer-to-peer communications, mean that enterprise network traffic is not as consistent and predictable as it was even only half-a-decade ago. As such, it becomes increasingly difficult to spot the worming traffic generated by botnets attempting to propagate around the network. This has become further complicated by the fact that the malware authors themselves have learned much from past attacks and have intentionally become more stealthy and deliberately slowed their propagation pace to avoid anomaly detection systems.
  4. Botnet malware is more often than not designed to function only when the infected host is actually in operational use by its authorized user. As such, it is increasingly difficult to distinguish anomalous traffic from legitimate traffic as the user goes about their regular Internet surfing.
  5. Most botnet malware found within enterprise networks is proxy-aware. This means that it borrows the user’s credentials and funnels all its CnC traffic through the corporate proxy servers – i.e. it does not use non-standard ports or protocols to navigate out to the Internet or between internal systems. The vast majority of botnet malware relies upon HTTP or HTTPS for communication.
  6. Timing is everything for botnet operators nowadays. No sooner has a host been compromised through a drive-by-download vector or Trojan file download than it connects back to its CnC, ready to receive both an updated malware package and a new set of instructions. As such, unless the NADS solution is configured to react in real-time to the identification of a new botnet infection, the threat will have either moved on or already become more severe.
  7. Many of the more commercially-minded botnet operators invest in fast-fluxing and domain-fluxing CnC technologies. The unrelenting changes in CnC IP addresses and host names can quickly overwhelm NADS systems (see the toy sketch after this list).
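
To illustrate point 7, here's a deliberately toy domain-generation algorithm (DGA) in Python. Real domain-fluxing implementations vary enormously, but the defender's problem is the same either way: the bot and its master can both derive today's rendezvous domains, while the NADS sees nothing but an endless churn of fresh host names:

    import datetime
    import hashlib

    def candidate_domains(seed, day, count=50):
        """Toy DGA: derive `count` pseudo-random .com domains from a shared
        seed and the current date. Bot and botmaster compute the same list;
        the operator registers one or two, the defender chases all fifty -
        and tomorrow the list is different again."""
        domains = []
        for i in range(count):
            digest = hashlib.md5(f"{seed}:{day.isoformat()}:{i}".encode())
            domains.append(digest.hexdigest()[:12] + ".com")
        return domains

    print(candidate_domains("examplebotnet", datetime.date.today())[:5])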

So, in summary, I’d say that NADS does have a role to play in botnet detection – but only a very minor one, and even that’s diminishing all the time. NADS deployments make for capable enterprise-wide network health monitoring systems, but have faltered against advanced threats like botnets and even stealthier threats such as Advanced Persistent Threats (APTs). I’d liken NADS to a school nurse – constantly overseeing the health of the entire student population and dealing with the odd knee scrape and cut lip – but not trained or equipped to deal with major head trauma or the results of a shooting spree.

Sunday, January 24, 2010

Ablative Security

The threats facing enterprise networks are incredibly diverse. Attack vectors are constantly changing, and a never-ending sea of zero-day vulnerabilities plagues those responsible for assuring corporate defenses.

While I'm familiar with just about every host and network security technology on the market today, I've been wondering if there are alternative ways of handling the quadratic equation of threats versus protection.

A concept I'm toying with relates to Ablative Armor - essentially a protective armor that is (partially) destroyed in the process of defense - and whether it has legs from a network security perspective.

Ablative materials were used for a time to protect space re-entry vehicles, and are currently used as advanced "reactive armor" on heavy tanks and other front-line vehicles. The material and reaction type doesn't matter for this discussion - merely the fact that these kinds of technologies provide some of the most advanced protection around. However, while they're carrying out their protection they are similarly consumed by the defense - but (most importantly) they leave the key equipment untouched.

Does ablative protection exist in IT security today? In some ways, perhaps it does. While at ISS, we came up with a technology called Buffer Overflow Exploit Protection (BOEP) which was designed to monitor system memory and, if it saw anything that looked like exploitation of a stack overflow, force the host to immediately shut down or reboot. BOEP works great as a preemptive protection technology - and you could argue that causing the system to reboot in this way is inelegant, but it may partially fit the bill of "ablative".

Perhaps the use of active honeypots/honeynets could constitute ablative security? They're throwaway systems designed to lure attacks (and attackers) to them - for both study and diversion - and are generally consumed in the process (i.e. once they're infected/compromised, they can't really be trusted and used for other tasks). But, at the end of the day, perhaps honeypots/honeynets aren't really a defense after all - being merely telescopes for studying attacks rather than front-line defenses of critical assets? Probably.
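
For the sake of illustration, a bare-bones honeypot really can be this simple - a throwaway listener (a Python sketch, arbitrary port and log file) that accepts anything, records it, and is never trusted again afterwards:

    import datetime
    import socket

    def tiny_honeypot(port=2323, logfile="honeypot.log"):
        """Throwaway listener: accept any connection, record who touched it
        and what they sent. Once something hostile has been at it, the box
        is considered burned - studied, then rebuilt, never reused."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(5)
        with open(logfile, "a") as log:
            while True:
                conn, addr = srv.accept()
                data = conn.recv(4096)
                log.write(f"{datetime.datetime.now()} {addr[0]} {data!r}\n")
                log.flush()
                conn.close()

    # tiny_honeypot()   # runs until killed; deploy only on a sacrificial host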

What about sinkholes? By dynamically/automatically hijacking the command and control domain names used by botnet masters and diverting all traffic to a sinkhole - does that constitute an aspect of ablative security? Maybe - after all, once you sinkhole that domain, neither the attacker nor the target can reuse that domain (or IP address) for much afterwards - and the attackers are alerted to who the defenders are. But still, I'm not so sure.

So, what would ablative security (or armor) look like in a network security sense? I doubt that we'd want the firewall to suddenly start smoking like a Gemini heat shield and shut itself off (permanently) upon thwarting the latest zero-day exploit.

If ablative security revolves around the targeted system being consumed in the process of thwarting an attack, perhaps automatic nuke-and-pave host-level responses are in order. For example, a virtual watcher program monitors the health of the virtually hosted operating system a user is using. They browse a malicious drive-by-download Web site, the "host" gets infected and starts doing bad things. The canaries within the compromised "host" inform the virtual watcher, which then notifies the user to the fact that they're compromised (perhaps even telling them to take a couple of minutes off to grab a coffee), then automatically proceeds with re-imaging the "host" from a known good/safe version. In this model the "canaries" are consumed in the defense of the system.
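
As a rough sketch of what that virtual watcher might look like (Python, with a hypothetical "vmctl" hypervisor command standing in for whatever snapshot/revert tooling you actually have):

    import hashlib
    import subprocess
    import time

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def watcher_loop(canary_paths, poll_seconds=60):
        """Watcher running *outside* the guest: record known-good hashes of
        the planted canary files, then poll them. Any change or deletion
        trips the canary and triggers the nuke-and-pave - the canaries
        (and the whole guest image) are consumed in the defense."""
        baseline = {p: sha256(p) for p in canary_paths}
        while True:
            for path, good_hash in baseline.items():
                try:
                    tripped = sha256(path) != good_hash
                except OSError:
                    tripped = True   # a deleted canary also counts
                if tripped:
                    print("Canary tripped - go grab that coffee...")
                    # 'vmctl' is a stand-in for your hypervisor's tooling.
                    subprocess.run(["vmctl", "revert", "user-desktop",
                                    "--snapshot", "known-good"])
                    return
            time.sleep(poll_seconds)

    # e.g. watcher_loop(["/mnt/guest/canary1.bait", "/mnt/guest/canary2.bait"])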

I think that this approach may be one take on the concept of ablative security, and there must be others. You could argue that the use of canaries for detection borders on honeypot functionality or something else. That said, there's nothing to say that the canaries have to exist within the host environment - they could just as easily (or perhaps more easily) exist at the network level instead.

My gut feel is that the concept of ablative security has a degree of unseen usefulness in protecting against some of the threats out there today and coming at us in the future. I'm going to ponder on it for a while. If anyone has thoughts on the topic, I'd love to hear them.

What's more important - preemptive, or post-preemptive?

Preemptive security technologies - they're great, and you can't beat them. Well, that's how it's supposed to work anyway. If only life were so simple.

The core idea behind preemptive protection technologies is to detect and stop entire classes of threat from successfully compromising the integrity of a host, network or application. Sales and marketing teams are only too eager to throw around the "preemptive" term - which can lead to rather embarrassing discussions between customers and technical engineers as the two work out why a particular threat that was supposed to have been stopped managed to get through the defenses. It's very rarely because the "preemptive" technology failed to do what it was designed to do - and almost exclusively because of nuances in the attack.

For example, there are literally dozens of technologies out there being touted as "preemptive" protection against drive-by-download attacks. Some of these technologies focus upon detecting the presence of a shellcode payload, while others home in on JavaScript obfuscation techniques. However, despite these preemptive detection technologies, there is a growing list of vectors that bypass each one. For example, drive-by-download attacks that use Flash scripting instead of JavaScript, or that embed the shellcode in a different file rather than within the JavaScript. The net result is customers scratching their heads and looking for answers. The nuances will often escape them - and the vendor's R&D team will have a few late nights adding detection capabilities for the new evasion technique or encoding scheme (if they're lucky).
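
To give a flavor of why these detections are so brittle, here's a toy JavaScript-obfuscation scorer (a Python sketch; the patterns and thresholds are illustrative). It's exactly the kind of heuristic described above - and exactly the kind that the Flash-scripting trick sails straight past:

    import math
    import re
    from collections import Counter

    def entropy(s):
        """Shannon entropy in bits per character - long high-entropy string
        literals often indicate packed or encoded payloads."""
        counts = Counter(s)
        return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

    SUSPICIOUS = [r"\beval\s*\(", r"\bunescape\s*\(",
                  r"String\.fromCharCode", r"document\.write\s*\("]

    def score_script(js_source):
        """Crude obfuscation score: suspicious-call density plus the entropy
        of long string literals. Trivially evaded by, for example, moving
        the logic into Flash scripting - exactly as described above."""
        score = sum(len(re.findall(p, js_source)) for p in SUSPICIOUS)
        literals = re.findall(r'"([^"]{40,})"', js_source)
        if literals and max(entropy(s) for s in literals) > 4.5:
            score += 5
        return score

    print(score_script('eval(unescape("%75%72%6c"));'))   # higher = shadier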

Don't get me wrong, "Preemptive" protection is damned important. You need it a lot more than some just-in-time signature update. But you've also got to realize that the more ground-breaking the "preemptive" protection is, the narrower its focus is in threat mitigation.

What's more important than "preemptive" protection? In my mind it's post-preemptive detection - i.e. being able to rapidly detect when all your combinations of "pre" protection didn't quite work, and your network got nailed despite the effort (and resources) you expended. If you focus exclusively on trying to prevent hosts, networks and applications from being compromised, you're going to have a damned hard time detecting when your systems do in fact get p0w3d by some Internet criminals.

This is particularly important when you're facing a more organized and motivated opponent - such as those running an APT operation against your organization.

I discussed this in more detail the other day in my Damballa blog - “Preemptive Protection” Isn’t – If You’re Battling APT’s - and cross-posted it below...

------>

There’s been no shortage of press covering Advanced Persistent Threats (APTs) this week. While there have been plenty of post-hack discussions over the past few years following the big public breaches, this one’s different – there’s almost a kind of relief that this one’s made it out into the open. I can liken it to the relief and revelations that followed the first major tobacco manufacturer’s decision to admit that smoking actually probably wasn’t so good for you after all…

Unfortunately, the revelation of several dozen major organizations being the victim of this particular APT example has just about every security vendor on the planet clamoring to extol and position their latest nicotine patch equivalent. Or, perhaps more appropriately, a lock-box to prevent you from reaching for another cigarette.

In the hustle and bustle of vendors claiming “First” or “Preemptive”, there’s a lot of weighted wordage flying about. But if that’s all true – if a particular vendor was “First” in its discovery – why didn’t they stop the threat or protect the currently known victims? Didn’t they understand the significance of what they had already discovered? Did they choose to keep the information to themselves for competitive advantage? I can’t answer those questions – and frankly, any answers I’d likely receive in return from these “First” vendors would probably be carefully word-smithed by a gaggle of marketing folks.

What about “Preemptive”? I like that word – it’s important. Having developed and invented many security technologies that fall into that bucket over the last decade, I can categorically state that “Preemptive” is good. But (and you knew there’d be a “but”), it’s not good enough…

Those nicotine patch equivalent vendors are going on about how they could/would/will/have/might preemptively…

  • …detect the fact that the user is visiting a URL that’s probably dangerous
  • …detect the malicious JavaScript or HTML that delivered the exploit
  • …detect the exploit shellcode
  • …detect the buffer overflow
  • …detect the memory manipulation of the exploit
  • …detect the malicious payload
  • …detect the malware component
  • …detect the malicious behaviors of the compromised application
  • …detect the inappropriate behaviors of the compromised host
  • …detect the malicious network behaviors

…and by “detecting” the APT, they’d have been able to protect against it (or an aspect of it). But at the end of the day, all those technologies, for one reason or another, failed to protect these organizations from being (very public) victims of the APT.

Why? Because APTs aren’t like average-Joe malware, botnets, script-kiddies, hackers, fraud artists and cybercriminal attacks. The thing that makes APT attacks different from other forms of cyber-attack can best be summed up with the mantra “if at first you don’t succeed – try, try and try again.”

The vast majority of Internet attacks – especially mass Internet botnets – are opportunistic attacks. The bad guys have a broad objective in mind, along with a number of tools they specialize in and a ceiling to the amount of effort they’re willing to expend. They will optimize a particular attack vector, select the preferred delivery method, and pound the Internet (and everyone on it) with that toolset until they’ve acquired enough victims. So, while many of the attacks may appear to be “targeted” (e.g. spear phishing), their objectives are rather limited (e.g. immediate financial fraud), and if they don’t succeed against the currently highlighted target they’ll simply move on to the next.

APTs don’t follow this model. If a particular attack vector, tool, technology or exploit doesn’t (or is unlikely to) work, they switch to another – never changing targets or focus.

What does that mean in practice? Regardless of the perimeter or host security technology you deploy, and however “preemptive” it is, it isn’t going to stop an APT. Sure, each “preemptive” technology worked just fine – stopping each and every attack vector, malicious payload or strange behavior it was supposed to – but the criminal operators targeting your organization just move on to the next tool or vector until they find one that works. And let’s not forget (or kid ourselves): this probing of network defenses and “preemptive” protection doesn’t happen as an overnight barrage of simultaneous attacks from a small cluster of IP addresses tracked down to the Chinese Army. No, this is low-and-slow stuff spread over many days, weeks or months, routed via a variety of sources and proxies from around the world – or even through your business partners.

So, can all of these nicotine patch sellers protect your organization against APTs? No, of course not. They can protect against many of the vectors that may be tried and probably identify the particular exploit or malware the attackers end up using, but at the end of the day APTs will win.

Which brings me to my final point. I don’t care how you got infected or became the latest APT victim – because you will be – so get over it and do something already. If a criminal operations team is willing to spend the time, effort and money to target your organization, they will win! So, how do you defeat APTs? Simple: you detect their presence as fast as you possibly can, and remediate the victims almost as fast.

OK, so “preemptive” protection is important – but being able to know when that “preemptive” protection has failed is even more important!

FailSafe

Let me put on my Damballa hat for a moment. I’ve been getting a bunch of queries about whether the Damballa FailSafe solution detects the “Google APT thing in the news”. The answer is yes – and many of the other APTs that you haven’t heard about (and are unlikely to hear about). You see, from our technology perspective, we don’t care how you became a victim either (you can debate whether that’s my influence or my cynicism leaking through). Lying at the heart of our technology is the ability to identify the suspicious and unauthorized remote control of systems within the enterprise. All this is done at the network level, and an APT’s command and control (CnC) is generally no different from that of a successful mass-Internet botnet, an insider threat or even a remote access trojan hand-placed by a criminal operative. The motivations behind a botnet, insider threat and APT may be wildly different – but the CnC communications are not.

It gets a little tougher to distinguish between a brand-new targeted botnet, an insider threat and an APT purely from their CnC traffic. But in reality the trick is to identify those threats that have already navigated your layers of corporate defenses and shut them down. Deciding whether a particular threat was politically/financially/ethically motivated comes afterwards.

Was this “Google APT thing in the news” the first APT to place Google in its cross-hairs? No. Is it the only APT targeting Google? No. Will it be the last APT to target Google? No. Will targeted enterprises be able to prevent APTs from getting in? No. Is it possible to detect when an APT has successfully bypassed all your “preemptive” protection technologies and compromised your systems? Yes.

Thursday, January 21, 2010

Advanced Persistent Threats

I've been getting lots of questions - from all kinds of angles - on what precisely an Advanced Persistent Threat (or APT for short) actually is.

As such, the Damballa team has created an executive two-pager that helps answer "What is an Advanced Persistent Threat?"

Wednesday, January 13, 2010

Tethered Espionage

News of corporate espionage amongst the Fortune-100 - with targets like Google and Adobe - has been breaking all day. It's interesting to note the thoughts of the different commentators and their take on the China slant.

Earlier today I blogged (rather extensively) on my take of the news. You can find those comments posted here - Corporate Espionage and Tethered Criminal Actions - and copied below...

--------------------

The media is buzzing with the latest news concerning Google and Adobe and the targeted attacks directed at their corporate systems. While it’s news, it’s important to understand that this isn’t something that’s only just happened – rather, it’s something that both these organizations (and dozens more) have been subjected to for quite some time; it’s just become public, and they’re admitting to being the victims. But this is important.

I’ve been providing security consultancy advice for a couple of decades. I’ve been pulled in to do post-attack forensics along with specialized pentesting, bug-hunting and reverse engineering for the majority of the Fortune 500 companies, and in all that time, unless they were required to by law, not one has gone public about the attacks they were subjected to and the losses they incurred. That’s why this Google/Adobe/etc. news is so significant – some Fortune 500 companies are actually saying “hey, enough already, we’re under constant attack – we need to do something collectively about this!”

What’s the primary vehicle for these (ongoing) attacks? You’ll hear plenty of discussion portraying viruses and malware as the problem, and plenty of implications that the Chinese government lies behind the attack(s). But let’s be clear – that’s a fantastically simplistic view of the threat. Implying that the threat lies with targeted malware and China is like saying that drunk-driving deaths are due to poor car design, and that the underlying cause is a particular beer brewery.

Malware is just a tool. The fundamental element of these (and any) espionage attacks lies in the tether that connects the victim to the attacker. Advanced Persistent Threats (APTs), like their bigger and more visible brother “botnets”, are meaningless without that tether – which is more often labeled Command and Control (CnC).

The methods for getting a malware agent into an organization and on to key/critical hosts are incredibly diverse but, most importantly, can best be described as “trivial”. If someone wants to infect systems within a targeted organization and is willing to spend more than a few thousand dollars’ worth of effort to do so, it’ll happen – simple as that. Just as importantly, the malware being distributed and used in these kinds of attacks can be thought of as a Swiss Army knife with Klingon cloaking capabilities.

I jest only in part about the Klingon cloaking – but it actually works well as a visual metaphor. Just as the Klingon warbirds must decloak in order to launch their attack with photon torpedoes etc., APTs and botnets must decloak themselves at the network level in order to maintain their CnC connections and be successful in harvesting espionage data. While APTs are more surreptitious when it comes to CnC connectivity, their weakness lies in their network communications. At the host level, the probability of detecting an installation prior to actual financial/legal damage lies largely in the realm of dragons and mermaids.

Looking at the botnets we identify and track at Damballa that target enterprise networks, many of them fall into the classification of APTs. The malware component is under constant change – often being updated on a daily basis. Meanwhile, the low-and-slow stealthy CnC traffic navigates the corporate network, weaves its way through fast-fluxing networks and stratified levels of command relays, and makes it back to the team who’s really in control of the compromised assets – a bunch of contracted criminals located somewhere safe and far away. I use the term “team” on purpose because this is an organized collective of professional operators – each with their own skills and specialties.

I see a lot of discussions about preventing systems from being compromised – in fact most of the security business today is exclusively focused on threat prevention. But, you know what? Every year (for the last two decades at least), as antivirus vendors release their annual threat reports, the percentage of hosts known (or suspected) to be a victim and running malware has increased. As we launch into 2010, the figure most industry experts and veterans would throw about is that 35-40 percent of all Internet-connected systems are compromised and currently running malware. Despite the terrific advances in detection, mitigation and cleanup, the numbers continue to go up. Despite the new detection technologies, the bad guys retain their lead. APT-related malware lies in a particular niche, but it isn’t being prevented from getting into a targeted organization. Let’s just face facts – if someone wants in on your organization and is willing to invest the time and resources to do so, the odds certainly favor them.

Detecting and mitigating the CnC – breaking that tether of control – lies at the heart of dealing with this threat. By blocking those CnC channels, the bad guys can’t remotely control your enterprise systems, and they can’t extract the secret data they want. Tracing back who lies at the end of the CnC communication ultimately leads to the contracted criminals running the operation. The fact that those criminals happen to be located in a particular country is only part of identifying the instigators of the threat – but it’s probably as far as we’ll get.

Like I said earlier, I’ve had to deal with many of these threats before. In the UK, it appeared that many of the corporate espionage attacks were masterminded by French or US entities. In Taiwan it appeared to be China and South Korea. In China it appeared to be Taiwan and Australia. In Greece it appeared to be Turkey and Egypt. And so on… but those are only my specific experiences. [unfortunately, not a single corporate victim ever went public about the attacks they fell victim to - and probably never will... sigh]

With regard to the APTs and botnets that Damballa tracks, detects and mitigates… well, those CnCs are spread around all over the world and most likely reflect the locations of the professional teams that contract out their services, rather than the locations of their ultimate customers.

My advice to organizations being targeted with APTs, botnets and unauthorized remote control of corporate resources? Focus on the network CnC – and mitigate there. By all means protect your perimeter and clean up your hosts – that’ll keep the unsophisticated script-kiddies and riff-raff off your systems – but it means very little to the pros. Success in dealing with this threat – the threat that Google, Adobe, and most global businesses (and governments) face constantly – is to identify which assets are currently compromised and “nuke-and-pave” them asap. I.e. identify systems that are trying to connect to their remote CnC, immediately cut that tether, and rapidly rebuild each system from a known good state (which is increasingly looking like a bare-metal state). If you can get that notification-to-rebuild process down to 20 minutes or less, you’ll be in a good position to deal with this class of threat long term. Until then, you’re just messing around playing detective.
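
As a crude sketch of the "cut the tether" half of that process (Python, with illustrative Linux iptables syntax and made-up CnC addresses - substitute your own perimeter tooling and provisioning system):

    import subprocess

    # Hypothetical feed of CnC addresses currently being called from inside.
    CNC_INDICATORS = ["203.0.113.7", "198.51.100.23"]

    def cut_tether(cnc_ips, infected_hosts):
        """Two halves of the response: (1) drop the CnC egress at the
        perimeter, (2) queue each victim for a bare-metal rebuild. The
        iptables syntax is illustrative - use whatever your perimeter
        and provisioning systems actually speak."""
        for ip in cnc_ips:
            subprocess.run(["iptables", "-A", "FORWARD", "-d", ip, "-j", "DROP"])
        for host in infected_hosts:
            print(f"{host}: tether cut - scheduling rebuild from known-good image")

    cut_tether(CNC_INDICATORS, ["10.0.0.9"])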

Sunday, January 10, 2010

Database of DIY Trojans and Bots

What does it take to search, locate and acquire free copies of the current generation of Trojan and Bot DIY construction kits? Practically nothing nowadays.

I noticed this morning that it's actually getting even easier to get your hands on these kinds of nefarious technologies, with the public availability of an online database from the folks over at Indetectables.

This new DIY Trojan and Bot database is currently online and serving up multiple public versions of the popular kits - such as Bifrost and Poison - along with a growing selection of plugins for them. For example, if you're a Poison Trojan developer, the site hosts multiple versions ranging from 0.0 through to 3.2, along with the "free" plugins - such as "Firefox password recovery", WiFi scanning, host power controls and remote port scanning (to name a few).

If you're thinking of downloading the DIY kits and using them, remember the following:
1) Using them against a system you're unauthorized to access is illegal in most countries.
2) The probability that the DIY kits and/or the malware agents they create are backdoored is typically very high.
3) Your traffic to this database (and other similar sites) is logged, and those logs may be requested by legal authorities in the future.

Sunday, January 3, 2010

Old Zeus DIY Still Evading Antivirus

The Zeus DIY malware construction kits can be purchased for anything from $4,000 down to $0.00 - depending upon the age of the kit and the exploit packs shipped with it. One of the "most recent" Zeus kits circulating the bargain-basement hacking forums is version 1.2.4.2 - dated May 2009.

A colleague of mine over at Damballa, Christopher Elisan, posted a short educational walk-through of this Zeus version for the uninitiated - Zeus 4 U. It's worth noting just how easy it's become to generate new Zeus botnet agents - and what the configuration defaults are (e.g. the default banks the keylogger functions target).

Most surprising (and disappointing) is that commercial antivirus detection of the malware created by this DIY kit is still languishing after seven months!