Friday, November 20, 2015

Battling Cyber Threats Using Lessons Learned 165 Years Ago

When it comes to protecting the end user, the information security community is awash with technologies and options. Yet, despite the near endless array of products and innovation focused on securing that end user from an equally broad and expanding array of threats, the end user remains more exposed and vulnerable than at any other period in the history of personal computing.

Independent of these protection technologies (or possibly because of them), we’ve also tried to educate the user in how best (i.e. more safely) to browse the Internet and take actions to protect themselves. With a cynical eye, it’s almost like a government handing out maps to its citizens and labeling the streets, homes, and businesses that are known to be dangerous and shouldn’t be visited – because not even the police or military have been effective there.

Today we instruct our users (and, at home, our children) to be careful about what they click on, which pages or sites they visit, what information they share, and which files they download. These instructions are not just onerous and confusing; more often than not they’re irrelevant, because even after following them to the letter the user can still fall victim.

The fact that a user can’t click on whatever they want, browse wherever they need to, and open whatever they’ve received should be interpreted as a mile-high flashing neon sign saying “infosec has failed and continues to fail” (maybe reworded with a bunch of four-letter expletives for good measure too).

For decades now, thousands of security vendors have brought to market technologies that, in effect, are predominantly tools designed to fill vulnerable and exploited gaps in the operating systems lying at the core of the devices end users rely upon. If we’re ever to make progress against the threat and reach the utopia of users being able to “carelessly” use the Internet, those operating systems must get substantially better.

In recent years, great progress has been made on the OS front – primarily in smartphone operating systems. The operating systems running on our most pocket-friendly devices are considerably more secure than those we rely upon for our PCs, notebooks, or servers at home or work. There’s a bunch of reasons why, of course – and I’ll not get into them here – but there’s still so much more that can be done.

I do believe that there are many lessons that can be learned from the past; lessons that can help guide future developments and technologies. Reaching back a little further into the past than usual – way before the Internet, and way before computers – there are a couple of related events that could shine a brighter light on newer approaches to protecting the end user.

Back in 1850 a Hungarian doctor named Ignaz Semmelweis was working in the maternity clinic at the General Hospital in Vienna, where he noted that many women in the maternity wards were dying from puerperal fever – commonly known as childbed fever. He studied two medical wards in the hospital – one staffed by male doctors and medical students, and the other by female midwives – and counted the number of deaths in each ward. What he found was that death from childbirth was five times higher in the ward with the male doctors.

Dr. Semmelweis tested numerous hypotheses as to the root cause of the deadly difference – ranging from mothers giving birth on their sides versus their backs, through to the route priests traversed the ward and the bells they rang. It appears that his Eureka moment came after the death of a male pathologist who, upon pricking his finger while doing an autopsy on a woman who had died of childbed fever, succumbed to the same fate (apparently being a pathologist in the mid-19th century was not conducive to a long life). Joining the dots, Dr. Semmelweis noted that the male doctors and medical students were doing autopsies while the midwives were not, and that “cadaverous particles” (this being a period before germs were known) were being spread to the birthing mothers.

Dr. Semmelweis’ medical innovation? “Wash your hands!” The net result, after doctors and midwives started washing their hands (in lime water, then later in chlorine), was that the rate of childbed fever dropped considerably.

Now, if you’re in the medical trade and washing your hands multiple times per day in chlorine or (by the late 1800s) carbolic acid, you’ll note that it isn’t so good for your skin or hands.

In 1890, William Stewart Halsted of Johns Hopkins University asked the Goodyear Tire and Rubber Company if they could make a rubber glove that could be dipped in carbolic acid in order to protect the hands of his nurses – and so the first sterilized medical gloves were born. The first disposable latex medical gloves were manufactured by Ansell and didn’t appear until 1964.

What does this foray into 19th-century medical history mean for Internet security, I hear you say? Simple really: every time the end user needs to use a computer to access the Internet and do work, it needs to be clean and pristine. Whether that means a clean new virtual image (e.g. “wash your hands”) or a disposable environment that sits on top of the core OS and authorized application base (e.g. “disposable gloves”), the assumption needs to be that nothing the user encounters over the Internet can persist on the device they’re using after they’ve finished their particular actions.
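To make the “disposable gloves” idea a little more concrete, here is a minimal sketch of what a throwaway, non-persistent session could look like with today’s tooling. It assumes Docker is installed and that an image named browser-sandbox exists – both are illustrative assumptions on my part, not a product recommendation or the only way to achieve this.

```python
import subprocess

# A minimal sketch of the "disposable gloves" idea: launch a throwaway,
# non-persistent environment for an Internet session and discard it on exit.
# Assumes Docker is installed and a (hypothetical) image named "browser-sandbox"
# exists; neither assumption comes from the original post.

def run_disposable_session(image: str = "browser-sandbox") -> None:
    """Start an ephemeral container; --rm ensures nothing persists after the session."""
    subprocess.run(
        [
            "docker", "run",
            "--rm",                 # delete the container and its filesystem on exit
            "--read-only",          # keep the base image pristine during the session
            "--tmpfs", "/tmp",      # scratch space lives in RAM only
            image,
        ],
        check=True,
    )

if __name__ == "__main__":
    run_disposable_session()
```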

This obviously isn’t a solution for every class of cyber threat out there, but it’s an 80% solution – just as washing your hands and wearing disposable gloves as a triage nurse isn’t going to protect you (or your patient) from every post-surgery ailment.

Operating system providers or security vendors that can seamlessly and automatically provision a clean, pristine environment for the end user every time they need to conduct activities on or related to the Internet will fundamentally change the security game – altering the battlefield for attackers and the tools of their trade.

Exciting times ahead.


-- Gunter

Wednesday, November 18, 2015

Exploiting Video Console Chat for Cybercrime or Terrorism

A couple of days ago there was a lot of interest in how terrorists may have been using the chat features of popular video console platforms (e.g. PS4, Xbox One) to secretly communicate and plan their attacks. Several journalists on tight deadlines reached out to me for insight into the threat. Here are some technical snippets on the topic that may be useful for future reference:

  • In-game chat systems have been used by cyber-criminals for over a decade to conduct business and organize transfers of stolen data. Because the chat systems within games tend to use proprietary protocols and exist solely within a secure connection to the game vendor’s servers, it is not ordinarily possible to eavesdrop on or collectively intercept these communications without some level of legal access to the central server farm. While the game vendors have the ability to inspect the chat traffic, this inspection (when conducted, which is rare) tends to focus on inappropriate language and bullying, and that inspection or evidence gathering is almost exclusively limited to text-based communications.
  • As games (particularly multi-player first-person shooter games) have embraced real-time voice chat protocols, it has become considerably more difficult to inspect traffic and identify inappropriate communications. Most responses to abuse are driven by multiple individuals complaining about another in-game player, rather than dynamic detection of abuse.
  • This difficulty in monitoring communications is well known in the criminal community and is conveniently abused. Criminals tend not to use their own personal account details, instead using aliases or, more frequently, stolen user credentials – and may proxy their communications via Tor and other anonymizing proxy services to prevent anyone from working out their physical location. There is a sizable underground market for stolen online gaming credentials. When using stolen credentials, the criminals will often join specific game servers and use pre-arranged times for games (and sub-types of games) to ensure that they will be online with the right group(s) of associates. These game times and details are often discussed on private message boards.
  • While US law enforcement has expended effort in the past to intercept communications and ascertain geographical location information from Tor and proxy services, it is difficult – since the communications themselves are typically encrypted. Intercepting in-game communications is very difficult because of the complex legal and physical server relationships between (let’s say, for example) Sony (running the PlayStation Network), Electronic Arts (running the account management system and some of the gaming server farm), and the game development team (who implemented the communication protocol and run the in-game service). For law enforcement, getting the appropriate legal interception rights to target an individual (criminal) is complex in this situation, and may be thwarted anyway if the criminals choose to use their own encryption tools on top of the game – i.e. the in-game communications are encrypted by the criminals using a third-party, non-game tool.
  • Console chat typically takes the form of either text- or voice-based chat. Text-based chat is much easier to analyze and consequently easier for console operators and law enforcement to identify threats and abuse within. In addition, text-based communications are much easier to store or archive – which means that, after an event, it is often possible for law enforcement to obtain the historical communication logs and perform analysis (a simple sketch of what that analysis might look like follows this list). Voice-based chat is much more difficult to handle and will typically only be inspected in a streaming fashion because the data volumes are so large – making it impractical to store for any extended period of time. There are also more difficulties in searching voice traffic for key words and threats. Video-based chat is even more difficult to dynamically inspect, monitor, and store.
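For illustration only, here is a minimal sketch of the kind of keyword flagging that is practical against archived text chat. The watch list, log format, and function names are my own assumptions for the example, not any console operator’s actual system.

```python
import re
from typing import Iterable, List, Tuple

# Illustrative only: simple keyword flagging over archived text chat logs.
# The watch list and (user, message) log format are assumptions for this sketch.

WATCH_TERMS = [r"\bdrop\s+site\b", r"\bwire\s+transfer\b", r"\bcarding\b"]
PATTERNS = [re.compile(term, re.IGNORECASE) for term in WATCH_TERMS]

def flag_messages(chat_log: Iterable[Tuple[str, str]]) -> List[Tuple[str, str, str]]:
    """Return (user, message, matched_pattern) for every archived message that hits the watch list."""
    hits = []
    for user, message in chat_log:
        for pattern in PATTERNS:
            if pattern.search(message):
                hits.append((user, message, pattern.pattern))
    return hits

if __name__ == "__main__":
    sample_log = [
        ("player_one", "gg, nice headshot"),
        ("player_two", "send the wire transfer details after the match"),
    ]
    for user, message, term in flag_messages(sample_log):
        print(f"FLAGGED [{term}] {user}: {message}")
```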
-- Gunter

Tuesday, November 17, 2015

Panel Selection of Penetration Testing Vendors

Most large companies have settled into a repeatable model in the way they undertake penetration testing and engage with their penetration testing suppliers. A popular model for companies that need to have several dozen pentests performed per year is to have a “board” or “panel” of three or four vetted companies and to rotate one provider in and out of the scheme per year – meaning that there is potentially a total refresh of providers every few years.

As vendor performance models go, there is a certain undeniable logic to the process. However, it is worth questioning whether these “board” models actually deliver better results – in particular, whether the full spectrum of vulnerabilities is being examined and whether the individual consultants are capable of delivering the work. In general, I’d argue that such a model often fails to meet these core requirements.


Advanced companies (e.g. brand-name software manufacturers) that require access to the most skilled talent pool of penetration testers and reverse engineers tend to select vendors based upon the skills and experience of the consultants they employ – often specifically calling out individual consultants by name within the terms of the contract. They also pay premium rates for access to that exclusive talent pool. In turn, the vendors that employ those consultants market and position their own companies as advanced service providers. For these companies, talent is the critical buying decision, and it is not uncommon for the client organization to engage with new vendors when highly skilled or specialized consultants move between service providers.

Most large companies are not as sophisticated in discerning the talent pool needed to review and secure their products – yet they still have many of the same demands and needs from a penetration testing skills perspective. For them, vendor selection is often about the responsiveness of the service provider (e.g. can they have six penetration testers onsite within two weeks in Germany or Boston?) and the negotiated hourly rate for their services. The churn of vendors through the “board” model is typically a compromise as they try to balance negotiating more favorable contractual terms, overcoming a perception of skill gaps within their providers’ consulting pools, and maintaining a mechanism for tapping a larger pool of vetted consultants.

From past observation, there are several flaws in this model (although several of its elements are not unique to it).
  1. Today's automated vulnerability scanners (for infrastructure, web application, and code review) are capable of detecting up to 90% of the vulnerabilities an “average” penetration tester can uncover manually using their own scripts and tools. Managed vulnerability scanning services (e.g. those delivered by managed security service providers (MSSPs)) typically reach the same 90% level, but tend to provide the additional value of removing false positives and confirming true positives. If these automated tools and services already cover 90% of the vulnerability spectrum, organizations need to determine whether closing the gap on the remaining 10% is worth the consulting effort and price. Most often, the answer is “yes, but…”, where the “but…” piece is assigned a discrete window of time and effort to uncover or solve – and hence value. Organizations that adopt the “board” approach often fail to strike the right balance between tools, MSSPs, and consultant-led vulnerability discovery programs. There are significant cost savings to be had when the right balances have been struck.
  2. Very few consultants share the same depth of skills and experience. If an organization is seeking to uncover vulnerabilities that lie out of reach of automated discovery tools, it is absolutely critical that the consultant(s) undertaking the work have the necessary skills and experience. There is little point throwing a 15-year veteran of Windows OS security at an Android mobile application served from the AWS cloud – and vice versa. To this end, clients must evaluate the skill sets of the consultants being offered up by the vendor who are expected to do the work. The reality of the situation is that clients who don’t pay the necessary attention can almost guarantee that they’ll get second-rung consultants (pending availability) to perform this important work. The exception is when a new vendor is being evaluated: they’ll often throw their best at the engagement for a period of time in order to show their corporate value – but clients should not anticipate the same level of results in subsequent engagements unless they are specific about the consultants they need on the job.
  3. Rotating a vendor in or out of a program on an annual schedule, independent of evaluating the consultants employed by the company, makes little sense. Many penetration testing companies have a high churn of technical staff to begin with, and their overall technical delivery capabilities and depth of skills specialization will fluctuate throughout the year. By understanding in advance which skill sets the client organization needs and how much experience is required in each skill area, those organizations can better rationalize their service providers’ consulting capabilities – and negotiate better terms.
  4. Because consultant skills and experience play such an important role in being able to uncover new vulnerabilities, client organizations should consider cross-vendor teams when seeking to penetration test and secure higher-priority infrastructure, applications, and products. Cherry-picking named consultants from multiple vendors to work on an important security requirement tends to yield the best and most comprehensive findings. Often there is the added advantage of those vendors competing to ensure that their consultants do the best teamwork on the joint project – hoping that more follow-on business will fall in their direction.


While “board” or “panel” approaches to penetration testing vendor management may have an appeal from a convenience perspective, the key to getting the best results (both economic and in terms of vulnerability discovery) lies with the consultants themselves.

Treating the vendor companies as convenient payment shells for the consultants you want or need working on your security assignments is fine, as long as you evaluate the consultants they employ and are specific about which consultants you want working to secure your infrastructure, applications, and products. To do otherwise is a disservice to your organization.

-- Gunter

Monday, November 9, 2015

The Incredible Value of Passive DNS Data

If a scholar were to look back upon the history of the Internet in 50 years’ time, they’d likely be able to construct an evolutionary timeline based upon threats and countermeasures relatively easily. Having transitioned through the ages of malware, phishing, and APTs, and the countermeasures of firewalls, anti-spam, and intrusion detection, I’m guessing those future historians would refer to the current evolutionary period as that of “mega breaches” (from a threat perspective) and “data feeds” (from a countermeasure perspective).

Today, anyone can go out and select from a near-infinite number of data feeds that run the gamut from malware hashes and phishing URLs, through to botnet C&C channels and fast-flux IPs.

Whether you want live feeds, historical data, bulk data, or just APIs you can hook into and query ad hoc, more than one person or organization appears to be offering it somewhere on the Internet – for free or as a premium service.

In many ways security feeds are like water. They’re available almost everywhere if you take the time to look; however, their usefulness, cleanliness, volume, and ease of acquisition may vary considerably. Hence their value depends upon the source and the acquirer’s needs. Even then, pure spring water may be free from the local stream, or come bottled and be more expensive than a coffee at Starbucks.

At this juncture in history the security industry is still trying to figure out how to really take advantage of the growing array of data feeds. Vendors and enterprises like to throw around terms such as “intelligence feeds” and “threat analytics” as a means of differentiating their data feeds from competitors’ after they have processed multiple lists and data sources to (essentially) remove stuff – just like filtering water and reducing the mineral count – increasing the price and the “value”.

Although we’re likely still a half-decade away from living in a world where “actionable intelligence” is the norm (where data feeds have evolved beyond disparate lists and amalgamations of data points into real-time sentry systems that proactively drive security decision making), there exist some important data feeds that add new and valuable dimensions to other bulk data feeds – providing the stepping stones to those lofty actionable security goals.

From my perspective, the most important additive feed in progressing towards actionable intelligence is Passive DNS data (pDNS).

For those readers unfamiliar with pDNS, it is traditionally a database containing data related to successful DNS resolutions – typically harvested from just below the recursive or caching DNS server.

Whenever your laptop or computer wants to find out the IP address of a domain name, your local DNS agent will delegate that resolution to a nominated recursive DNS server (listed in your TCP/IP configuration settings), which will either supply an answer it already knows (e.g. a cached answer) or, in turn, attempt to locate a nameserver that does know the domain name and can return an authoritative answer from that source.
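For readers who like to see the moving parts, here is a tiny illustration of that resolution step using the third-party dnspython library. The choice of library, the 8.8.8.8 resolver, and example.com are purely assumptions for the example.

```python
import dns.resolver  # third-party "dnspython" package (2.x API assumed)

# A tiny illustration of the step described above: hand the resolution off to a
# nominated recursive resolver (8.8.8.8 is used purely as an example) and let it
# either answer from cache or chase down an authoritative nameserver for us.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]

answer = resolver.resolve("example.com", "A")
for record in answer:
    print(record.address)
```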

By retaining all of this domain name resolution data, collected from a wide variety of sources over a prolonged period of time, you end up with a pDNS database capable of answering questions such as “where did this domain name point to in the past?”, “what domain names point to a given IP address?”, “what domain names are known by a nameserver?”, “what subdomains exist below a given domain name?”, and “what IP addresses will a domain or subdomain resolve to around the world?”.
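To make those questions concrete, here is a deliberately simplified sketch of a pDNS record store and two of the queries above, using an in-memory SQLite table. The schema, field names, and example values are hypothetical; real pDNS platforms expose their own schemas and APIs.

```python
import sqlite3

# A deliberately simplified, hypothetical pDNS store; real pDNS platforms have
# their own schemas and query APIs, so treat this purely as a mental model.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE pdns (
        rrname     TEXT,  -- queried name, e.g. 'www.example.com'
        rrtype     TEXT,  -- record type, e.g. 'A'
        rdata      TEXT,  -- observed answer, e.g. '93.184.216.34'
        first_seen TEXT,  -- first observation timestamp
        last_seen  TEXT   -- most recent observation timestamp
    )
""")

# "Where did this domain name point to in the past?"
history = db.execute(
    "SELECT rdata, first_seen, last_seen FROM pdns "
    "WHERE rrname = ? AND rrtype = 'A' ORDER BY first_seen",
    ("www.example.com",),
).fetchall()

# "What domain names point to a given IP address?"
co_hosted = db.execute(
    "SELECT DISTINCT rrname FROM pdns WHERE rdata = ?",
    ("93.184.216.34",),
).fetchall()

print(history, co_hosted)
```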

pDNS, by itself, is very useful, but when used in conjunction with other data feeds its contribution towards actionable intelligence may be akin to turning water into wine.

For example, a streaming data feed of suspicious or confirmed malicious URLs (extracted from captured spam and phishing email sources) can provide insight as to whether the customers of a company or its brands have been targeted by attackers. However, because email delivery is asynchronous, a real-time feed does not necessarily translate to a current window of visibility on the threat. By including pDNS in the processing of this class of threat feed it is possible to identify both the current and past states of the malicious URLs and to cluster together previous campaigns by the same attackers – thereby allowing an organization to prioritize efforts on current threats and optimize responses.
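As a rough sketch of that enrichment step (the lookup_pdns() helper, the record fields, and the clustering approach are all illustrative assumptions rather than any particular product’s workflow):

```python
from collections import defaultdict
from urllib.parse import urlparse

# A hedged sketch of the enrichment described above: pull the domains out of a
# suspicious-URL feed and use pDNS history to cluster campaigns that share
# hosting infrastructure. lookup_pdns() is a placeholder for whatever pDNS
# provider or local store is actually available.

def lookup_pdns(domain: str):
    """Placeholder: return pDNS records as dicts with at least an 'rdata' (IP) field."""
    return []

def cluster_by_infrastructure(malicious_urls):
    """Group feed domains by the IP addresses they have historically resolved to."""
    clusters = defaultdict(set)
    for url in malicious_urls:
        domain = urlparse(url).hostname
        if not domain:
            continue
        for record in lookup_pdns(domain):
            clusters[record["rdata"]].add(domain)
    return clusters

if __name__ == "__main__":
    feed = ["http://login.example.net/verify", "http://secure.example.org/update"]
    print(dict(cluster_by_infrastructure(feed)))
```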

While pDNS is an incredibly useful tool and intelligence aid, it is critical that users understand that acquiring and building a useful pDNS database isn’t easy and, as with all data feeds, results are heavily dependent upon the quality of the sources. In addition, because historical and geographical observations are key, the further back the pDNS data goes (ideally 3+ years) and the more global ISP coverage the sources provide (ideally a few dozen tier-1 operators), the more reliable and useful the data will be. So select your provider carefully – this isn’t something you ordinarily build yourself (although you can contribute to a bigger collector if you wish).

If you’re looking for more ideas on how to use DNS data as a source and aid to intelligence services and even threat attribution, you can find a walk-through of techniques I’ve presented or discussed in the past here and here.

-- Gunter