
Monday, November 9, 2015

The Incredible Value of Passive DNS Data

If a scholar were to look back upon the history of the Internet in 50 years’ time, they’d likely be able to construct an evolutionary timeline based upon threats and countermeasures relatively easily. Having transitioned through the ages of malware, phishing, and APTs, and the countermeasures of firewalls, anti-spam, and intrusion detection, I’m guessing those future historians would refer to the current evolutionary period as that of “mega breaches” (from a threat perspective) and “data feeds”.
Today, anyone can go out and select from a near-infinite number of data feeds that run the gamut from malware hashes and phishing URLs through to botnet C&C channels and fast-flux IPs.

Whether you want live feeds, historical data, bulk data, or just APIs you can hook into and query ad hoc, more than one person or organization appears to be offering it somewhere on the Internet, for free or as a premium service.

In many ways security feeds are like water. They’re available almost everywhere if you take the time to look, but their usefulness, cleanliness, volume, and ease of acquisition may vary considerably. Hence their value depends upon the source and the acquirer’s needs. Even then, pure spring water may be free from the local stream, or come bottled and be more expensive than a coffee at Starbucks.

At this juncture in history the security industry is still trying to figure out how to really take advantage of the growing array of data feeds. Vendors and enterprises like to throw around terms such as “intelligence feeds” and “threat analytics” to differentiate their data feeds from competitors’ after they have processed multiple lists and data sources to (essentially) remove stuff – just like filtering water and reducing the mineral count – increasing the price and “value”.
Although we’re likely still a half-decade away from living in a world where “actionable intelligence” is the norm (where data feeds have evolved beyond disparate lists and amalgamations of data points into real-time sentry systems that proactively drive security decision making), there exist some important data feeds that add new and valuable dimensions to other bulk data feeds, providing stepping stones towards those lofty actionable security goals.

From my perspective, the most important additive feed in progressing towards actionable intelligence is Passive DNS data (pDNS).

For those readers unfamiliar with pDNS, it is traditionally a database containing data related to successful DNS resolutions – typically harvested from just below the recursive or caching DNS server.

Whenever your laptop or computer wants to find out the IP address of a domain name, your local DNS agent will delegate that resolution to a nominated recursive DNS server (listed in your TCP/IP configuration settings), which will either supply an answer it already knows (e.g. a cached answer) or, in turn, attempt to locate a nameserver that does know the domain name and return an authoritative answer from that source.
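
As a minimal illustration of that stub-resolver step, the Python snippet below (the hostname is just a placeholder) asks the operating system to resolve a name; the cache checks and the walk of the authoritative hierarchy happen inside whichever recursive resolver the host is configured to use.

    import socket

    # Stub-resolver lookup: the OS hands this query to the recursive DNS server
    # configured in the host's network settings, which answers from its cache
    # or walks the authoritative hierarchy on our behalf.
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", None):
        print(family.name, sockaddr[0])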

By retaining all the domain name resolution data and collecting from a wide variety of sources for a prolonged period of time, you end up with a pDNS database capable of answering questions such as “where did this domain name point to in the past?”, “what domain names point to a given IP address?”, “what domain names are known by a nameserver?”, “what subdomains exist below a given domain name?”, and “what IP addresses will a domain or subdomain resolve to around the world?”.
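
To make those query types concrete, here is a toy, in-memory sketch; the observation tuples are invented for illustration, and a real pDNS store would be a large, sensor-fed database spanning years of resolutions rather than a pair of Python dictionaries.

    from collections import defaultdict

    # Each observation is (date, queried name, answer) harvested below a
    # recursive resolver. The data here is fabricated for the example.
    observations = [
        ("2015-01-02", "login.example.com", "203.0.113.10"),
        ("2015-03-14", "login.example.com", "198.51.100.7"),
        ("2015-03-15", "billing.example.com", "198.51.100.7"),
    ]

    domain_to_ips = defaultdict(set)   # "where did this domain point, and when?"
    ip_to_domains = defaultdict(set)   # "what domain names pointed at this IP?"
    for date, qname, answer in observations:
        domain_to_ips[qname].add((date, answer))
        ip_to_domains[answer].add((date, qname))

    print(sorted(domain_to_ips["login.example.com"]))
    print(sorted(ip_to_domains["198.51.100.7"]))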

pDNS, by itself, is very useful, but when used in conjunction with other data feeds its contributions towards actionable intelligence may be akin to turning water into wine.

For example, a streaming data feed of suspicious or confirmed malicious URLs (extracted from captured spam and phishing email sources) can provide insight as to whether the customers of a company or its brands have been targeted by attackers. However, because email delivery is asynchronous, a real-time feed does not necessarily translate to a current window of visibility on the threat. By including pDNS in the processing of this class of threat feed it is possible to identify both the current and past states of the malicious URLs and to cluster together previous campaigns by the attackers – thereby allowing an organization to prioritize efforts on current threats and optimize responses.
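
As a rough sketch of that enrichment step, the snippet below clusters suspect URLs whose domains share historical hosting IPs; the pdns_history() helper is purely hypothetical and stands in for whatever pDNS provider or API you actually query.

    from collections import defaultdict
    from urllib.parse import urlparse

    def pdns_history(domain):
        """Hypothetical stand-in for a query against your pDNS provider; it
        should return the set of IPs the domain has historically resolved to."""
        raise NotImplementedError

    def cluster_by_hosting(malicious_urls):
        """Group suspect URLs whose domains share historical hosting IPs, a
        crude way to tie fresh phishing URLs back to earlier campaigns."""
        clusters = defaultdict(set)
        for url in malicious_urls:
            domain = urlparse(url).hostname
            if not domain:
                continue
            for ip in pdns_history(domain):
                clusters[ip].add(domain)
        # An IP shared by several domains hints at common attacker infrastructure.
        return {ip: domains for ip, domains in clusters.items() if len(domains) > 1}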

While pDNS is an incredibly useful tool and intelligence aid, it is critical that users understand that acquiring and building a useful pDNS DB isn’t easy and, as with all data feeds, results are heavily dependent upon the quality of the sources. In addition, because historical and geographical observations are key, the further back the pDNS data goes (ideally 3+ years) and the broader its coverage of global ISPs (ideally a few dozen tier-1 operators), the more reliable and useful the data will be. So select your provider carefully – this isn’t something you ordinarily build yourself (although you can contribute to a bigger collector if you wish).

If you’re looking for more ideas on how to use DNS data as a source and aid to intelligence services and even threat attribution, you can find a walk-through of techniques I’ve presented or discussed in the past here and here.

-- Gunter

Monday, October 6, 2014

If Compliance were an Olympic Sport

First published on the NCC Group blog - 6th October 2014...

It probably won’t raise any eyebrows to know that for practically every penetration tester, security researcher, or would-be hacker I know, nothing is more likely to make their eyes glaze over and send them to sleep faster than a discussion on Governance, Risk, and Compliance (i.e. GRC); yet the dreaded “C-word” (Compliance) is a core tenet of modern enterprise security practice.

Security professionals who come from an “attacker” background often contend that Compliance represents the lowest hurdle – with some vehemently arguing that too many security standards appear to be developed by committee and only reach fruition through consensus on the minimum criteria. Meanwhile, there is continuous pressure for businesses to master their information system security practices and reach an acceptable compliance state.

Compliance against public standards has been the norm for the majority of brand-name businesses for over a decade now, and there’s been a general pull-through elevation of security performance (or should that be appreciation?) for other businesses riding the coat-tails of the big brands. But is it enough?

When I think of big businesses competing against each other in any industry vertical, I tend to draw parallels with international sporting events – particularly the Olympic Games. In my mind, each industry vertical is analogous to a different sporting event. Just as athletes may specialise in the marathon or the javelin, businesses may specialise in financial services or vehicle assembly, with each vertical – each sport – requiring a different level of specialisation and training.

While professional athletes may target the Olympic Games as the ultimate expression of their career, they must first navigate their way through the ranks and win at local events and races. In order to achieve success they must, of course, also train relentlessly. And, as a former sporting coach of mine used to say, “the harder you train, the easier you’ll succeed.”

I see compliance as a training function for businesses. Being fully compliant is like spending three hours a day running circuits around the track in preparation for being a marathon runner. Compliance with a security policy or standard isn’t about winning the race; it’s about making sure you’re prepared and ready to run the race when it’s time to do so.

That said, not all compliance policies or standards are equal. For example, I only half-heartedly jest when I say that PCI compliance is the sporting equivalent of being able to tie your shoe-laces. It’s not much in the grand scheme of security, and it’s not going to help you win any races, but it’s one less thing to trip you up.

If compliance standards represent the various training regimes that an organisation could choose to follow, then “best practices” may as well be interpreted as the hiring of a professional coach; for it’s the coach’s responsibility to optimise the training, review the latest intelligence and scientific breakthroughs, and to push the athlete on to ever greater success.

In the world of information security, striving to meet (and exceed) industry best practices allows an organisation to counter a much broader range of attacks, to be better prepared for more sophisticated threats, and to be more successful and efficient when recovering from the unexpected. It’s like elevating your sporting preparedness from limping into 64th place in the local high school 5k run due to a cramp in your left leg, to being fit and able to represent your country at the Olympic Games.

My advice to organisations that don’t want to find themselves listed in some future breach report, or to watch their customers migrate to competitors because of yet another embarrassing security incident, or trip over their untied shoe-laces, is to move beyond the C-word and adopt best practices. Constant commitment and adherence to best security practices goes a long way to removing unnecessary risk from a business.

However, take caution. “Best practice” in security isn’t a static goal. The coach’s playbook is always developing. As the threat landscape evolves and a litany of new technologies allow you to interface and interact with clients and customers in novel and productive ways, best practices in security will also evolve and improve over time as new exercises and techniques are added to the roster.

Improve the roster and develop the playbook and you’re sure to beat those looming threats and push your business and customer service across the finish line.

The Pillars of Trust on the Internet

As readers may have seen recently, I've moved on from IOActive and joined NCC Group. Here is my first blog under the new company... first published September 15th 2014...

The Internet of today in many ways resembles the lawless Wild West of yore. There are the land-rushes as corporations and innovators seek new and fertile grounds; over yonder there are the gold-diggers panning for nuggets in the flow of big data; and crunching underfoot are the husks of failed businesses and discarded technology.

For many years various star-wielding sheriffs have tried to establish a brand of law and order over the Internet, but for every step forward a menagerie of robbers and scoundrels have found new ways to pick-pocket and harass those trying to earn a legitimate crust. Does it really have to continue this way?

Over the years I’ve seen many technologies invented and embraced with the goal of thwarting the attackers and miscreants that inhabit the Internet.

I’m sure I’m not alone in the feeling that with each new threat (or redefinition of a threat) that comes along someone volunteers another “solution” that’ll provide temporary relief; yet we continue to find ourselves in a never-ending swatting match with the tentacles of cyber crime.

With so many threats to be faced and a slew of jargon to wade through, it shouldn’t be surprising to readers that most organisations (and their customers) often appear baffled and bewildered when they become victims of cyber crime – whether that is directly or indirectly.

While the newspapers and media outlets may discuss the scale of stolen credit cards from the latest batch of mega-breaches and strive to provide common sense (and utterly ignored) advice on password sophistication and how to be mindful of what we’re clicking on, the dynamics of the attack are easily glossed over and subsequently lost to those that are in the best position to mitigate the threat.

The vast majority of successful breaches begin with deception, and depend upon malware. The deception tactics usually take the form of social engineering – such as receiving an email pretending to be an invoice from a trusted supplier – with the primary objective being the installation of a malicious payload.

The dynamics of the trickery and the exploits used to install the malware are ingeniously varied but, all too often, it’s the capabilities of the malware that dictate the scope and persistence of the breach.

While there exists a plethora of technologies that can be layered one atop another like some gargantuan wedding cake to combat each tactic, tool, or subversive technique the cyber criminal may seek to employ in their exploitation of a system, doing so successfully is as difficult as attempting to stack a dozen feral cats – and just as likely to leave you scratched and scarred.

In the past I’ve publicly talked about the paradigm change in the way organisations have begun to approach breaches… to accept that they will happen repeatedly and to prioritise on the rapid (and near instantaneous) detection and automated remediation of the compromised systems, rather than waste valuable cycles analysing yesterday’s malware or exploits, or churning over attribution possibilities.

But I think there’s a second paradigm change underway – one which doesn’t attempt to change the entire Internet, but instead focuses on mitigating the deception tactics used by the attackers at the root and creating a safe and trusted environment to conduct business within.

I think the time has come to build (rather than give lip-service to) a safe corner of the Internet and expand from there. It’s the reason I’ve come and joined NCC Group. It is my hope and aspiration that the Domain Services division will provide that anchor point, that Rock of Gibraltar, that technical credibility and wherewithal necessary to regain trust in doing business over the Internet once again.

A core tenet of building a trusted and safe platform for business is that you have to start with the core building blocks of the Internet. The Domain Name System (DNS) and domain registration lie at the very heart of the Internet and yet, from a security perspective, they’ve been largely neglected as a means of neutering the most common and vile social engineering vectors of attack.

Couple tight control of domain registration and DNS with perpetual threat monitoring and scanning, merge it with vigilant policing of secure configuration policies and best practices (not some long-in-the-tooth consensus-strained minimum standards of a decade ago), and you have the pillars necessary to elevate a corner of the Internet beyond the reach of the general lawlessness that’s plaguing business today. And that’s before we get really innovative.

It wasn’t guns or graves that tamed the West of yore; it was the juggernaut of technology that began with railway lines and the telegraph. The mechanisms for restoring business trust in the Internet are now in play. Exciting times lie ahead.

Sunday, November 25, 2012

Persistent Threat Detection (on a Budget)


If there were one simple, high-impact thing you could do to quickly check whether your network has been taken over by a criminal entity, or to uncover whether some nefarious character is rummaging through your organization’s most sensitive intellectual property out of business hours, what would it be? In a nutshell, I’d look to my DNS logs.

It’s staggering to me how few security teams have gotten wise to regularly interrogating the logs from their recursive DNS servers. In many ways DNS logging is like sprinkling flour on the floor to track the footsteps of the culprit who’s been raiding the family fridge. Each step leaves a visible impression of where and how the intruder navigated the kitchen – and of their shoe size.

Whenever an electronic intruder employs their tools to navigate your network, tries to connect back to their command-and-control server, or attempts to automatically update the malicious binaries they’ve installed upon the systems they have control over (or wish to control), those victim devices tend to repeatedly resolve the domain names that the attacker is operating from. Therefore, armed with a list of known bad domain names and/or IP addresses, it’s a trivial task to mine the DNS logs and identify any successful intrusions.
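
A minimal sketch of that mining step might look like the following; the log format and field positions are assumptions, so adjust the parsing to whatever your recursive DNS server actually emits.

    # Assumed log format: "<timestamp> <client-ip> <queried-name> ...".
    def load_blacklist(path):
        with open(path) as fh:
            return {line.strip().lower().rstrip(".")
                    for line in fh if line.strip() and not line.startswith("#")}

    def suspicious_lookups(log_path, blacklist):
        hits = []
        with open(log_path) as fh:
            for line in fh:
                fields = line.split()
                if len(fields) < 3:
                    continue
                client, qname = fields[1], fields[2].rstrip(".").lower()
                if qname in blacklist or any(qname.endswith("." + bad) for bad in blacklist):
                    hits.append((client, qname))
        return hits

    # "bad_domains.txt" and "dns_queries.log" are placeholder file names.
    blacklist = load_blacklist("bad_domains.txt")
    for client, qname in suspicious_lookups("dns_queries.log", blacklist):
        print("possible compromise: %s resolved %s" % (client, qname))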

Depending upon how authoritative your “blacklist” of criminal domains is, and how picky you are about the IP destinations that the domain names are resolving to, you can rapidly spot those nefarious shoe impressions in the flour.

One word of caution though: this isn’t a comprehensive technique for detecting persistent threats operating within your network – but it is one of the simplest! It also has the highest impact – particularly if you’re operating on a shoestring budget.

An obvious limitation of DNS log mining is the depth and accuracy of the blacklist you’re matching DNS events to – so you’ll want to ensure that the list you’re using covers the types and classes of threats you’re most interested in detecting. While there are plenty of free blacklists out there, the vast majority of them deal with spam, phishing and drive-by hosts… so you’ll want to invest some time shopping around a little.

Here are a few tips to using DNS as a means of detecting persistent threats (advanced or otherwise):

  • Turn DNS logging on! Seriously, do it now… you can read the rest of this blog after you've turned it on.
  • Select a bunch of relevant blacklists that contain malicious domains associated with the threats (and criminal actors) you’re most interested in.
  • Create a list of IP address ranges for countries, companies or regions that computer systems within your organization shouldn’t be communicating with, and use this as a second-pass filter for spotting other unwanted or likely malicious traffic.
  • Scrape your DNS logs frequently – ideally at least once per week.
  • If you’re worried about a handful of specific threats (e.g. a criminal operator or state that is likely targeting your organization), scrape your DNS logs for relevant domain names hourly – and alert upon their discovery.
  • Even if you’re only scraping your DNS logs weekly, don’t throw them away after you’re done. Try to keep your DNS logs for multiple years at least. (Note: DNS logs compress to almost nothing – so they’re not going to take up much space.)
  • Consider scraping all the older logs (going back up to 5 years) once a month or so. New domains will be added to the blacklists you’re using over time, and new intelligence can shed new insight into past intrusions. It’ll also help establish when an intruder first compromised your network once you do find one.
  • If your DNS server allows it, turn on logging of “failed” lookups – i.e. NXDOMAIN requests. While these events won’t help in your blacklist lookups, they will help identify malware families that make use of domain generation algorithms (DGAs), as well as “broken” applications within your network that need some tuning to find their legitimate destinations (see the sketch after this list).
  • DNS log scraping can be conveniently done off-line through simple batch script processing. So the impact on the team responsible for securing the corporate infrastructure is minimal after a nominal development investment.
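
As a rough example of the NXDOMAIN tip above, the sketch below counts failed lookups per internal host; a machine generating hundreds of NXDOMAIN responses for random-looking names is a classic sign of DGA-based malware hunting for its live C&C domain. The log format (second field assumed to be the client IP) and the file name are assumptions.

    from collections import Counter

    def nxdomain_counts(log_path):
        # Count NXDOMAIN responses per client; adapt the parsing to whatever
        # your DNS server actually writes to its response log.
        counts = Counter()
        with open(log_path) as fh:
            for line in fh:
                if "NXDOMAIN" not in line:
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    counts[fields[1]] += 1
        return counts

    for client, total in nxdomain_counts("dns_responses.log").most_common(10):
        print("%s generated %d failed lookups" % (client, total))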

If you’re not happy with the quality of the blacklist you’re able to bring to bear in uncovering the persistent threats likely already operating within your environment, or if it would be helpful to do a “one-off” check of your DNS logs to help build the internal business case for investing in a more permanent detection solution, let me know.

Thursday, June 25, 2009

New Bot-powered Pharmaceutical Scam Network

The other day we stumbled across a strange botnet. It was small as far as IP addresses were concerned, but gigantic as far as domains under management (more than 25,000 currently in use). The cost of registering that number of domains represents a significant investment by the botnet operators – at $20 each, you’re looking at $500k in setup costs alone.

But the strangeness doesn’t end there. The botnet is being used for pharmaceutical scams (i.e. Canada Drugs), but the DNS lookup process is messed up. Somehow the botnet operators have figured out how to manipulate the .com TLD servers into doing some weirdness – having them act as the authoritative resolvers for the 3LDs.

I’m not sure how this situation arose, but the criminals behind this are making good use of the flaw/exploit/manipulation. What they now effectively have is a system that prevents the good guys from shutting down the resolution of their scam Web sites. Not good.

I’m still looking into how this arose and what the real (longer-term) ramifications are. But it’s new to me and definitely in the strange/weird department.

I’ve posted a full blog about this botnet over on the Damballa site – Strange Bot-powered Scam Network. Take a look at what’s going on.