Sunday, July 10, 2016

The Future of Luxury Car Brands in a Self-Driving City

For the automotive manufacturing industry, the next couple of decades are going to be make or break for most of the well-known brand names we're familiar with today.

With the near-term prospect of "self-driving" cars and city-level smart traffic routing (and monitoring) infrastructure fundamentally changing the way we drive, and with shifting city demographics promoting a growing move away from wanting (or being able to afford) a personal vehicle, it should be clear to all that the motoring practices of the last century are on a trajectory to disappear pretty quickly.

As self-driving cars eventually negate the "love of driving" and city traffic routing and control systems begin to rigorously enforce variable speed limits, congestion charging, and overall traffic management, the personal car becomes just another impersonal transport system. If that turns out to be the case (or even partially the case), then what does the future hold for the manufacturers of luxury cars?

Earlier this month I spent a week in Bavaria, Germany, visiting customers and prospects. The economies of southern German cities like Stuttgart and Munich fundamentally revolve around the luxury automotive industry. Companies like BMW, Audi, and Porsche define the standard in personal vehicle luxury and generally lead the world in technical innovation (especially in safety features). Speaking with locals around Bavaria, I heard a very real fear that the next two decades could see the fall and eventual demise of these brands.

If the act of "driving" is completely replaced by computer control systems and the vehicle itself eventually becomes a commodity (because every vehicle performs the same way, travels at the same speeds, and is carefully governed by city traffic management systems), luxury vehicle "performance" is no longer a perceived value. As mandated vehicle safety designs are achieved by all manufacturers and there's only a small percentage difference between the best and the worst (yet all get "five stars"), advanced safety innovation is no longer a distinguishing factor. Finally, as Millennials (and a majority of city-bound Generation Xers) give up the love, desire, and financial capability to own a personal vehicle - and instead seek the kind of "on-demand" transport systems that Uber and its kin will spawn - "luxury" becomes a style choice without a premium.

As those Bavarians I spoke with fear, these luxury car manufacturers are going to have to change dramatically if they are to remain the brands they are today. Despite all the technical innovation they've been renowned for over the last century, they appear to be late to the party and need to change their businesses in pretty short order.

As a BMW owner myself, I'm surprised at how far the company appears to be behind these global changes. I'd have thought that such a technically innovative company would have grasped the social and economic effects on luxury vehicle sales to city dwellers over the coming decade or two. While BMW (and other luxury car brands) have doubled down on vehicle performance, emission controls, renewable energy, and environmentally friendly design, it feels like they've been caught flat-footed by the drive of people (and city planners) to remove the weakness behind the steering wheel... and by its implications for all luxury vehicle brands.

I'm positive that the engineers at BMW and other traditionally innovative vehicle manufacturers have many relevant technologies tested and maybe shelved in their laboratories and around test tracks.

While I suspect that "luxury" will factor less into vehicle buying decisions in the future - especially as the trend moves towards fleet management of such vehicles (e.g. taxis, delivery, etc.) - I think that, for these companies to survive, they're going to have to become "technology companies".

Although late to the party, present-day luxury vehicle manufacturers can transform into strong technology companies. For example, some opportunities could include:
  • With several decades of technical safety R&D (e.g. collision avoidance, LIDAR, automated parking, route guidance systems, lane management, drowsy driver recognition, etc.) they already have the credentials and respect within the industry (and with consumers) as the research leaders... so why not bundle up these safety features and license them under their brands? For example, the future Google self-driving car... music by Bose, safety by BMW.
  • As designers of engines (combustion, hybrid, and electric) they have decades of experience in design and performance. That could translate into innovating city-wide refueling management platforms and systems.
  • "Smart Cities" are still mostly a desire rather than a reality. There is huge opportunity for proven technology companies to come in and define the rules, criteria, monitoring, and management of city-wide traffic control systems. Detailed knowledge of vehicle performance, capabilities, and safety controls whole be an ideal platform for building upon.
  • Regardless of just how many driver-less cars come to market over the coming decades, there will still be hundreds of millions of cars that were never built or designed to be "driver-less". There is an obvious requirement for supplemental or conversion kits for older vehicles - not just their own models.

The list above could be expanded considerably, and I have no doubt that similar thoughts have been discussed at various points over the last half-decade by the luxury brands themselves. However, it seems to me that now is the time for action.

It'll be very interesting to see how these luxury vehicle manufacturers reinvent themselves. If they have the funds now, then not only should they continue to innovate along safety technology paths, but they should probably also be looking down the acquisition path... bringing into the fold new tech companies specializing in fleet and city vehicle management, taxi and courier management and control systems, city traffic monitoring and control systems, and maybe even a new generation of refueling stations.

Saturday, July 9, 2016

Next Generation Weapons: The Eye Burner Rifle

The fantasy worlds of early 20th century science fiction writers, in many ways, appear to be "now-ish" in terms of the technologies with which we'll wage war or police the civilian population. Many of the weapons proposed a century ago were nuclear-based... well, perhaps "Atomic" was the more appropriate label at the time. Some authors pursued electric guns or "lightning" throwers, and by the mid-20th century the more commonly imagined man-portable weapon systems were based upon high-powered lasers.

When I think of new weapon systems... man-portable... and likely to be developed and employed within the coming quarter century, I think that many of them will integrate automatic target acquisition and coherent light - for "less lethal" confrontations. The term "less lethal" is of course relative and doesn't exclude weapon systems that are proficient at maiming and causing great pain or suffering.

One such system that, given current technological advances, lies within reach of today's weapon designers could combine high-intensity light, automated facial feature recognition, and "high-powered" laser light - and achieve a higher degree of target incapacitation than today's personal small arms.

The concept is a handheld configuration (similar in size and dimensions to a rifle) that, when manually pointed in the direction of a target, bathes the target in a high-intensity "white light" (giving the weapon system a range of, say, 50 meters) for a short period of time. At that point an embedded high-definition video device uses facial recognition to locate the eyes of the target currently "lit up", then automatically aligns a built-in high-powered laser with the target's eyes and fires. The laser, depending upon the power of the light source, would either temporarily or permanently blind the target.

A single trigger pull would bathe the target with the main light function (which may temporarily disorient them anyway), but during that trigger pull the automated eye acquisition, eye targeting, and laser firing would happen in a fraction of a second (faster than a bullet could traverse the distance between shooter and target). I'd guess that after the laser has successfully acquired the eyes and fired, the main light function would end... like a half-second burst of white light. To an external observer, the weapon user would appear to have simply fired a burst of white light at the head or torso of the target.

Obviously there are a lot of nuances to a "future" weapon like this. For example, would the target blink or close their eyes if the initial "white light" was directed at them? At night, the answer is likely yes; however, the facial recognition systems would still work, and even a current "off-the-shelf" laser in the 5-20W range is strong enough to "burn through" the eyelids and damage the eyes. During the day it would obviously be easier... in fact perhaps the "white light" component is not required... instead the shooter merely targets the "head" and the rest of the system figures out the eyes and fires at (or fries) the eyes of the target.

There are of course questions about ethics. But, compared to several ounces of hollow-point lead flying at several times the speed of sound, even permanent blindness is a survivable outcome for the target.

[Wandering thoughts in SciFi]

Friday, January 29, 2016

Watching the Watchers Watching Your Network

It seems that this last holiday season didn’t bring much cheer or goodwill to corporate security teams. With the public disclosure of remotely exploitable vulnerabilities and backdoors in the products of several well-known security vendors, many corporate security teams spent a great deal of time yanking cables, adding new firewall rules, and monitoring their networks with extra vigilance.

It’s not the first time that products from major security vendors have been found wanting.

It feels as though some vendors' host-based security defenses fail on a monthly basis, while network defense appliances fail less frequently - maybe twice per year. At least that's what a general perusal of press coverage may lead you to believe. However, the reality is quite different. Most security vendors fix and patch security weaknesses on a monthly basis. Generally, the issues are ones that they themselves have identified (through internal SDL processes or the use of third-party code reviews and assessments) or that have been identified by customers. And, every so often, critical security flaws will be "dropped" on the vendor by an independent researcher or security company and will need to be fixed quickly.

Two decades ago, the terms "bastion host", "DMZ", and "firewall" pretty much summed up the core concepts of network security, and it was a simpler time for most organizations - both for vendors and their customers. The threat spectrum was relatively narrow, the attacks largely manual, and an organization's online presence consisted of mostly static material. Yet, even then, if you picked up a book on network security you were instructed in no uncertain terms that you needed to keep your networks separate: one for the Internet, one for your backend applications, one for your backups, and a separate one for managing your security technology.

Since that time, many organizations have either forgotten these basic principles or have intentionally opted for riskier (yet cheaper) architectures, hoping that their protection technologies are up to the task. Alas, as the events of December 2015 have shown us, every device added to a network introduces a new set of security challenges and weaknesses.

From a network security perspective, when looking at the architecture of critical defenses, there are four core principles:

  1. Devices capable of monitoring or manipulating network traffic should never have their management interfaces directly connected to the Internet. If these security devices need to be managed over the Internet, it is critical that, at a minimum, only encrypted protocols be used, multi-factor authentication be employed, and approved in-bound management IP addresses be whitelisted.
  2. The management and alerting interfaces of security appliances must be on a “management” network – separated from other corporate and public networks. It should not be possible for an attacker who may have compromised a security device to leverage the management network to move laterally onto other guest systems or provide a route to the Internet. 
  3. Span ports and network taps that observe Internet and internal corporate traffic should by default only operate in “read-only” mode. A compromised security monitoring appliance should never be capable of modifying network traffic or communicating with the Internet from such an observation port. 
  4. Monitor your security products and their management networks. Security products (especially networking appliances such as core routers, firewalls, and malware defenses) will always be a high-value target for both external and internal attackers. These core devices and their management networks must be continuously monitored for anomalies and audited - a minimal monitoring sketch follows below.
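
To make the first and fourth principles a little more concrete, here is a minimal monitoring sketch in Python. The whitelist ranges, the CSV log format, and the file name are all invented for the example (they are not taken from any particular product); the point is simply that auditing management-plane access against an approved source list is a small, continuous job rather than a grand project.

    import csv
    import ipaddress

    # Assumption: these approved in-bound management ranges are placeholders.
    APPROVED_SOURCES = [ipaddress.ip_network(n) for n in ("10.20.0.0/24", "192.0.2.10/32")]

    def is_approved(src_ip: str) -> bool:
        """True if the source address falls inside an approved management range."""
        addr = ipaddress.ip_address(src_ip)
        return any(addr in net for net in APPROVED_SOURCES)

    def audit(log_path: str) -> None:
        # Assumption: a hypothetical CSV export with columns timestamp, source_ip, target_device.
        with open(log_path, newline="") as fh:
            for row in csv.DictReader(fh):
                if not is_approved(row["source_ip"]):
                    print(f"ALERT {row['timestamp']}: unapproved management access "
                          f"to {row['target_device']} from {row['source_ip']}")

    if __name__ == "__main__":
        audit("management_access.csv")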

In an age where state-sponsored reverse engineers, security research teams, and online protagonists are actively hunting for flaws and backdoors in the widely deployed products of major security vendors as a means of gaining privileged and secret access to their target’s networks, it is beyond prudent to revisit the core tenets of secure network architecture.

Corporate security teams and network architects should assume not only that new vulnerabilities and backdoors will be disclosed throughout the year, but that those holes may have been accessible and exploited for several months beforehand. As such, they should adopt a robust defense-in-depth strategy including “watchers watching watchers.”

Shodan's Shining Light

The Internet is chock full of really helpful people and autonomous systems that silently probe, test, and evaluate your corporate defenses every second of every minute of every hour of every day. If those helpful souls and systems aren’t probing your network, then they’re diligently recording and cataloguing everything they’ve found so others can quickly enumerate your online business or list systems like yours that are similarly vulnerable to some kind of attack or other.

Back in the dark ages of the Internet (circa the 20th century) everyone had to run their own scans to map the Internet in order to spot vulnerable systems on the network. Today, if you don’t want to risk falling foul of some antiquated hacking law in some country by probing IP addresses and shaking electronic hands with the services you encounter, you can easily find a helpful soul that’s figured it all out on your behalf and turn on the faucet of knowledge for a paltry sum.

One of the most popular services to shine light on and enumerate the darkest corners of the Internet is Shodan. It’s a portal-driven service through which subscribers can query its vast database of IP addresses, online applications and service banners that populate the Internet. Behind the scenes, Shodan’s multiple servers continually scan the Internet, enumerating and probing every device they encounter and recording the latest findings.
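
For illustration, here is a minimal sketch of how a security team might ask Shodan what it has already catalogued about one of their own addresses, assuming the official shodan Python package and a valid API key; both the key and the IP address below are placeholders.

    import shodan

    API_KEY = "YOUR_API_KEY"        # placeholder - your Shodan API key
    MY_PUBLIC_IP = "203.0.113.10"   # placeholder - one of your own public addresses

    api = shodan.Shodan(API_KEY)
    try:
        host = api.host(MY_PUBLIC_IP)
        print(f"Ports catalogued by Shodan: {host.get('ports', [])}")
        for service in host.get("data", []):
            # Print the port and the first line of the recorded banner, if any.
            first_line = service.get("data", "").splitlines()[:1]
            print(f"  port {service.get('port')}: {first_line}")
    except shodan.APIError as err:
        print(f"Shodan lookup failed: {err}")

Checking your own footprint this way, before a researcher, journalist, or attacker does, is arguably a far better use of Shodan than trying to block its scanners.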

As an online service that diligently catalogues the Internet, Shodan behaves rather nicely. The servers that do the scanning aren't overly aggressive, and they provide DNS information that doesn't obfuscate who and what they are. In that respect they are little more troublesome than Google in its efforts to map out Web content on the Internet.

In general, most people don't consider what Google (or Microsoft, Yahoo, or any other commercial search engine) does to be bad, let alone illegal. But if you are familiar with the advanced search options these sites offer, or have read any number of books or blogs on "Google Dorks," you'll likely be more fearful of them than of something with a limited scope like Shodan.

Unfortunately, Shodan is increasingly perceived as a threat by many organizations. This might be due to its overwhelming popularity or its frequent citation amongst the infosec community and journalists as a source of embarrassing statistics. Consequently, security companies like Check Point have included alerts and blocking signatures in a vain attempt to thwart Shodan and its ilk.

On one hand, you might empathize with many organizations on the receiving end of a Shodan scan. Their Internet-accessible systems are constantly probed, their services are enumerated, and every embarrassing misconfiguration or unpatched service is catalogued and could be used against them by evil hackers, researchers and journalists.

In some realms, you’ll also hear that the bad guy competitors to Shodan (e.g. cyber criminals mapping the Internet for their own financial gain) are copying the scanning characteristics of Shodan so the target’s security and incident response teams assume it’s actually the good guys and ignore the threat.

On the other hand, with it being so easy to modify the scanning process – changing scan types, modifying handshake processes, using different domain names, and launching scans from a broader range of IP addresses – you’d be forgiven for thinking that it’s all a bit of wasted effort… about as useful as a “keep-off-the-grass” sign in Hyde Park.

Although “robots.txt” in its own way serves as a similarly polite request for commercial Web search scanners to not navigate and cache pages on a site, it is most often ignored by scanning providers. It also serves as a flashing neon arrow that directs hackers and security researchers to the more sensitive content.

It’s a sad indictment of current network security practices that a reputable security vendor felt the need and justification to add detection rules for Shodan scans and that their customer organizations may feel more protected for implementing them.

While the virtual "keep-off-the-grass" warning isn't going to stop anyone, it does empower the groundskeeper to shout, "Get off my land!" (in the best Cornish accent they can muster) and feel justified in doing so. In the meantime, the plague of ever-helpful souls and automated systems will continue to probe away to their hearts' content.

Friday, November 20, 2015

Battling Cyber Threats Using Lessons Learned 165 Years Ago

When it comes to protecting the end user, the information security community is awash with technologies and options. Yet, despite the near endless array of products and innovation focused on securing that end user from an equally broad and expanding array of threats, the end user remains more exposed and vulnerable than at any other period in the history of personal computing.

Independent of these protection technologies (or possibly because of them), we've also tried to educate the user in how best (i.e. more safely) to browse the Internet and take actions to protect themselves. With a cynical eye, it's almost like a government handing out maps to its citizens and labeling the streets, homes, and businesses that are known to be dangerous and shouldn't be visited - because not even the police or military have been effective there.

Today we instruct our users (and at home, our children) to be careful what they click on, what pages or sites they visit, what information they share, and what files they download. These instructions are not just onerous and confusing; more often than not they're irrelevant - even after following them to the letter, the user can still fall victim.

The fact that a user can't click on whatever they want, browse wherever they need to, and open whatever they've received should be interpreted as a mile-high flashing neon sign saying "infosec has failed and continues to fail" (maybe reworded with a bunch of four-letter expletives for good measure too).

For decades now, thousands of security vendors have brought to market technologies that, in effect, are predominantly tools designed to fill vulnerable and exploited gaps in the operating systems lying at the core of the devices end users rely upon. If we're ever to make progress against the threat and reach the utopia of users being able to use the Internet "carelessly", those operating systems must get substantially better.

In recent years, great progress has been made on the OS front - primarily in smartphone OSs. The operating systems running on our most pocket-friendly devices are considerably more secure than those we rely upon for our PCs, notebooks, or servers at home or work. There's a bunch of reasons why, of course - and I'll not get into that here - but there's still so much more that can be done.

I do believe that there are many lessons that can be learned from the past; lessons that can help guide future developments and technologies. Reaching back a little further into the past than usual - way before the Internet, and way before computers - there are a couple of related events that could shine a brighter light on newer approaches to protecting the end user.

Back in 1850 a Hungarian doctor named Ignaz Semmelweis was working in the maternity clinic at the General Hospital in Vienna, where he noted that many women in the maternity wards were dying from puerperal fever - commonly known as childbed fever. He studied two medical wards in the hospital - one staffed by all-male doctors and medical students, and the other by female midwives - and counted the number of deaths in each ward. What he found was that death from childbirth was five times higher in the ward with the male doctors.

Dr. Semmelweis tested numerous hypotheses as to the root cause of the deadly difference - ranging from mothers giving birth on their sides versus their backs, through to the route priests traversed the ward and the bells they rang. It appears that his Eureka moment came after the death of a male pathologist who, upon pricking his finger while doing an autopsy on a woman who had died of childbed fever, succumbed to the same fate (apparently being a pathologist in the mid-19th century was not conducive to a long life). Joining the dots, Dr. Semmelweis noted that the male doctors and medical students were doing autopsies while the midwives were not, and that "cadaverous particles" (this was before germs were known) were being spread to the birthing mothers.

Dr. Semmelweis' medical innovation? "Wash your hands!" The net result, after doctors and midwives started washing their hands (in lime water, then later in chlorine), was that the rate of childbed fever dropped considerably.

Now, if you're in the medical trade, washing your hands multiple times per day in chlorine or (by the late 1800s) carbolic acid isn't so good for your skin or hands.

In 1890 William Stewart Halsted of Johns Hopkins University asked the Goodyear Tire and Rubber Company if they could make a rubber glove that could be dipped in carbolic acid in order to protect the hands of his nurses - and so were born the first sterilized medical gloves. The first disposable latex medical gloves were manufactured by Ansell and didn't appear until 1964.

What does this foray into 19th century medical history mean for Internet security, I hear you say? Simple really: every time the end user needs to use a computer to access the Internet and do work, it needs to be clean and pristine. Whether that means a clean new virtual image (i.e. "wash your hands") or a disposable environment that sits on top of the core OS and authorized application base (i.e. "disposable gloves"), the assumption needs to be that nothing the user encounters over the Internet can persist on the device they're using after they've finished their particular actions.
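
As a rough sketch of the "disposable gloves" idea, here is what kicking off an ephemeral, throwaway browsing session might look like using Python to drive Docker. The container image name is hypothetical, and the sketch leaves out the display forwarding and sandboxing details a real deployment would need; the essential point is that a container started with --rm leaves nothing behind on the host once the session ends.

    import subprocess
    import uuid

    # Hypothetical hardened browser image - substitute whatever image you trust.
    IMAGE = "example/disposable-browser:latest"

    def disposable_session() -> None:
        name = f"throwaway-{uuid.uuid4().hex[:8]}"
        # --rm removes the container and its writable layer when it exits,
        # so nothing the user encounters during the session persists afterwards.
        cmd = ["docker", "run", "--rm", "--name", name, IMAGE]
        subprocess.run(cmd, check=False)

    if __name__ == "__main__":
        disposable_session()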

This obviously isn’t a solution for every class of cyber threat out there, but it’s an 80% solution – just as washing your hands and wearing disposable gloves as a triage nurse isn’t going to protect you (or your patient) from every post-surgery ailment.

Operating system providers or security vendors that can seamlessly adopt and automatically procure a clean and pristine environment for the end user every time they need to conduct activities on or related to the Internet will fundamentally change the security game – altering the battle field for attackers and the tools of their trade.

Exciting times ahead.

-- Gunter

Wednesday, November 18, 2015

Exploiting Video Console Chat for Cybercrime or Terrorism

A couple of days ago there was a lot of interest in how terrorists may have been using the chat features of popular video console platforms (e.g. PS4, Xbox One) to secretly communicate and plan their attacks. Several journalists on tight deadlines reached out to me for insight into the threat. Here are some technical snippets on the topic that may be useful for future reference:

  • In-game chat systems have been used by cyber-criminals for over a decade to conduct business and organize transfers of stolen data. Because the chat systems within games tend to use proprietary protocols and exist solely within a secure connection to the game vendor's servers, it is not ordinarily possible to eavesdrop on or collectively intercept these communications without some level of legal access to the central server farm. While the game vendors have the ability to inspect chat traffic, this level of inspection (when conducted - which is rare) tends to focus on inappropriate language and bullying, and that inspection or evidence gathering is almost exclusively limited to text-based communications.
  • As games (particularly multi-player first-person shooters) have embraced real-time voice chat protocols, it has become considerably more difficult to inspect traffic and identify inappropriate communications. Most responses to abuse are driven by multiple individuals complaining about another in-game player - rather than by dynamic detection of abuse.
  • This difficulty in monitoring communications is well known in the criminal community and is conveniently abused. Criminals tend not to use their own personal account details, instead using aliases or, more frequently, stolen user credentials - and may electronically proxy their communications via Tor and other anonymizing proxy services to prevent anyone working out their physical location. There is a sizable underground market for stolen online gaming credentials. When using stolen credentials, criminals will often join specific game servers and use pre-arranged times for games (and sub-types of games) to ensure that they will be online with the right group(s) of associates. These game times and details are often discussed on private message boards.
  • While US law enforcement has expended effort in the past to intercept communications and ascertain geographical location information from Tor and proxy services, it is difficult - since the communications themselves are typically encrypted. Intercepting in-game communications is very difficult because of the complex legal and physical server relationships between (let's say, for example) Sony (running the PlayStation network), Electronic Arts (running the account management system and some of the gaming server farm), and the game development team (who implemented the communication protocol and run the in-game service). For law enforcement, getting the appropriate legal interception rights to target an individual (criminal) is complex in this situation and may be thwarted anyway if the criminals choose to use their own encryption tools on top of the game - i.e. the in-game communications are encrypted by the criminals using a third-party non-game tool.
  • Console chat typically takes the form of either text- or voice-based chat. Text-based chat is much easier to analyze (a trivial keyword-flagging sketch follows below) and consequently easier for console operators and law enforcement to use in identifying threats and abuse. In addition, text-based communications are much easier to store or archive - which means that, after an event, it is often possible for law enforcement to obtain the historical communication logs and perform analysis. Voice-based chat is much more difficult to handle and will typically only be inspected in a streaming fashion because the data volumes are so large - making it impractical to store for any extended period of time. There are also more difficulties in searching voice traffic for key words and threats. Video-based chat is more difficult still to dynamically inspect, monitor, and store.
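
As an illustration of why text chat is the "easy" case, here is a trivial keyword-flagging sketch in Python. The watch-list, message format, and alert handling are all invented for the example; real moderation and lawful-intercept pipelines are far more sophisticated (language detection, slang normalization, context scoring, and so on).

    # Minimal sketch: flag text-chat messages containing watch-listed terms.
    WATCH_LIST = {"example-term-1", "example-term-2"}   # invented terms

    def flag_messages(messages):
        """Yield (sender, text) pairs whose text contains a watch-listed term."""
        for sender, text in messages:
            tokens = {token.strip(".,!?").lower() for token in text.split()}
            if tokens & WATCH_LIST:
                yield sender, text

    if __name__ == "__main__":
        sample = [("player_one", "nothing to see here"),
                  ("player_two", "this mentions example-term-1 explicitly")]
        for sender, text in flag_messages(sample):
            print(f"FLAGGED message from {sender}: {text}")
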
-- Gunter

Tuesday, November 17, 2015

Panel Selection of Penetration Testing Vendors

Most large companies have settled into a repeatable model in the way they undertake penetration testing and engage with their penetration testing suppliers. A popular model for companies that need to have several dozen pentests performed per year is to have a “board” or “panel” of three or four vetted companies and to rotate one provider in and out of the scheme per year – meaning that there is potentially a total refresh of providers every few years.

As vendor performance models go, there is a certain undeniable logic to the process. However, it is worth questioning whether these "board" models actually deliver better results - in particular, is the full spectrum of vulnerabilities being examined, and are the individual consultants capable of delivering the work? In general, I'd argue that such a model often fails to meet these core requirements.

Advanced companies (e.g. brand-name software manufacturers) that require access to the most skilled talent pool of penetration testers and reverse engineers tend to select vendors based upon the skills and experience of the consultants they employ - often specifically calling out individual consultants by name within the terms of the contract. They also pay premium rates for access to that exclusive talent pool. In turn, the vendors that employ those consultants market and position themselves as advanced service providers. For these companies, talent is the critical buying decision, and it is not uncommon for the client organization to engage with new vendors when highly skilled or specialized consultants move between service providers.

Most large companies are not as sophisticated in discerning the talent pool needed to review and secure their products - yet still have many of the same demands and needs from a penetration testing skills perspective. For them, vendor selection is often about the responsiveness of the service provider (e.g. can they have six penetration testers onsite within two weeks in Germany or Boston) and the negotiated hourly rate for their services. The churn of vendors through the "board" model is typically a compromise as they try to balance negotiating more favorable contractual terms, overcoming a perceived skills gap within their providers' consulting pools, and maintaining a mechanism for tapping a larger pool of vetted consultants.

From past observations, there are several flaws to this model (although some elements are not unique to it).
  1. Today's automated vulnerability scanners (for infrastructure, web applications, and code review) are capable of detecting up to 90% of the vulnerabilities an "average" penetration tester can uncover manually using their own scripts and tools. Managed vulnerability scanning services (e.g. those delivered by managed security service providers (MSSPs)) typically reach the same 90% level, but tend to provide the additional value of removing false positives and confirming true positives. If these automated tools and services already cover 90% of the vulnerability spectrum, organizations need to determine whether closing the gap on the remaining 10% is worth the consulting effort and price. Most often, the answer is "yes, but…", where the "but…" piece is assigned a discrete window of time and effort to uncover or solve - and hence a value. Organizations that adopt the "board" approach often fail to strike the right balance between tools, MSSPs, and consultant-led vulnerability discovery programs. There are significant cost savings to be had when the right balance has been struck.
  2. Very few consultants share the same depth of skills and experience. If an organization is seeking to uncover vulnerabilities that lie out of reach of automated discovery tools, it is absolutely critical that the consultant(s) undertaking the work have the necessary skills and experience. There is little point throwing a 15-year veteran of Windows OS security at an Android mobile application served from the AWS cloud - and vice versa. To this end, clients must evaluate the skill sets of the consultants being offered up by the vendor and expected to do the work. The reality is that clients that don't pay the necessary attention can almost guarantee they'll get second-rung consultants (pending availability) to perform this important work. The exception is when a new vendor is being evaluated; they'll often throw their best at the engagement for a period of time in order to show their corporate value - but clients should not anticipate the same level of results in subsequent engagements unless they are specific about the consultants they need on the job.
  3. Rotating a vendor in or out of a program on an annual schedule, independent of evaluating the consultants employed by the company, makes little sense. Many penetration testing companies have a high churn of technical staff to begin with, and their overall technical delivery capabilities and depth of skills specialization will fluctuate throughout the year. By understanding in advance what skill sets the client organization needs and how much experience is required in each skill area, organizations can better rationalize their service providers' consulting capabilities - and negotiate better terms.
  4. Because consultant skills and experience play such an important role in being able to uncover new vulnerabilities, client organizations should consider cross-vendor teams when seeking to penetration test and secure higher-priority infrastructure, applications, and products. Cherry-picking named consultants from multiple vendors to work on an important security requirement tends to yield the best and most comprehensive findings. Often there is the added advantage of those vendors competing to ensure that their consultants do the best teamwork on the joint project - hoping that more follow-on business will fall in their direction.

While "board" or "panel" approaches to penetration testing vendor management may have an appeal from a convenience perspective, the key to getting the best results (both economic and in terms of vulnerability discovery) lies with the consultants themselves.

Treating the vendor companies as convenient payment shells for the consultants you want or need working on your security assignments is fine, as long as you evaluate the consultants they employ and are specific about which consultants you want working to secure your infrastructure, applications, and products. To do otherwise is a disservice to your organization.

-- Gunter

Monday, November 9, 2015

The Incredible Value of Passive DNS Data

If a scholar were to look back upon the history of the Internet in 50 years' time, they'd likely be able to construct an evolutionary timeline based upon threats and countermeasures relatively easily. Having transitioned through the ages of malware, phishing, and APTs, and the countermeasures of firewalls, anti-spam, and intrusion detection, I'm guessing those future historians would refer to the current evolutionary period as that of "mega breaches" (from a threat perspective) and "data feeds" (from a countermeasure perspective).

Today, anyone can select from a near-infinite number of data feeds that run the gamut from malware hashes and phishing URLs, through to botnet C&C channels and fast-flux IPs.

Whether you want live feeds, historical data, bulk data, or just APIs you can hook into and query ad hoc, more than one person or organization appears to be offering it somewhere on the Internet - for free or as a premium service.

In many ways security feeds are like water. They're available almost everywhere if you take the time to look; however, their usefulness, cleanliness, volume, and ease of acquisition vary considerably. Hence their value depends upon the source and the acquirer's needs. Even then, pure spring water may be free from the local stream, or come bottled and be more expensive than a coffee at Starbucks.

At this juncture in history the security industry is still trying to figure out how to really take advantage of the growing array of data feeds. Vendors and enterprises like to throw around the terms "intelligence feeds" and "threat analytics" as a means of differentiating their data feeds from competitors' after they have processed multiple lists and data sources to (essentially) remove stuff - just like filtering water and reducing the mineral count - increasing the price and "value".

Although we're likely still a half-decade away from living in a world where "actionable intelligence" is the norm (where data feeds have evolved beyond disparate lists and amalgamations of data points into real-time sentry systems that proactively drive security decision making), there exist some important data feeds that add new and valuable dimensions to other bulk data feeds, providing the stepping stones towards those lofty actionable security goals.

From my perspective, the most important additive feed in progressing towards actionable intelligence is Passive DNS data (pDNS).

For those readers unfamiliar with pDNS, it is traditionally a database containing data related to successful DNS resolutions – typically harvested from just below the recursive or caching DNS server.

Whenever your laptop or computer wants to find out the IP address of a domain name, your local DNS agent delegates that resolution to a nominated recursive DNS server (listed in your TCP/IP configuration settings), which will either supply an answer it already knows (e.g. a cached answer) or, in turn, attempt to locate a nameserver that does know the domain name and can return an authoritative answer.

By retaining all of this domain name resolution data, collected from a wide variety of sources over a prolonged period of time, you end up with a pDNS database capable of answering questions such as "where did this domain name point in the past?", "what domain names point to a given IP address?", "what domain names are known to a nameserver?", "what subdomains exist below a given domain name?", and "what IP addresses will a domain or subdomain resolve to around the world?".
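
To make that concrete, here is a toy collector sketch in Python: it resolves a name (assuming the dnspython package is installed) and records each observed answer, with first-seen and last-seen timestamps, into a local SQLite table - essentially a single-vantage-point, miniature pDNS store. A real pDNS platform passively harvests resolutions below recursive resolvers at scale rather than issuing its own queries, so treat this purely as an illustration of the data model.

    import sqlite3
    import time

    import dns.resolver   # assumption: the dnspython package is installed

    DB = sqlite3.connect("mini_pdns.db")
    DB.execute("""CREATE TABLE IF NOT EXISTS pdns (
                      name TEXT, rrtype TEXT, rdata TEXT,
                      first_seen INTEGER, last_seen INTEGER,
                      PRIMARY KEY (name, rrtype, rdata))""")

    def record(name: str, rrtype: str = "A") -> None:
        """Resolve a name and upsert each answer with first/last-seen timestamps."""
        now = int(time.time())
        for rdata in dns.resolver.resolve(name, rrtype):
            DB.execute("""INSERT INTO pdns VALUES (?, ?, ?, ?, ?)
                          ON CONFLICT(name, rrtype, rdata)
                          DO UPDATE SET last_seen = excluded.last_seen""",
                       (name, rrtype, rdata.to_text(), now, now))
        DB.commit()

    record("example.com")
    for row in DB.execute("SELECT * FROM pdns WHERE name = ?", ("example.com",)):
        print(row)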

pDNS, by itself, is very useful, but when used in conjunction with other data feeds its contribution towards actionable intelligence can be akin to turning water into wine.

For example, a streaming data feed of suspicious or confirmed malicious URLs (extracted from captured spam and phishing email sources) can provide insight as to whether the customers of a company or its brands have been targeted by attackers. However, because email delivery is asynchronous, a real-time feed does not necessarily translate into a current window of visibility on the threat. By including pDNS in the processing of this class of threat feed it is possible to identify both the current and past states of the malicious URLs and to cluster together previous campaigns by the same attackers - thereby allowing an organization to prioritize efforts on current threats and optimize its responses.
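
Continuing the toy example above, enriching a suspicious URL against such a store is little more than a lookup on the URL's hostname. The clustering logic a real team would layer on top (shared IP addresses, shared nameservers, co-occurring campaigns) is deliberately left out of this sketch, and the URL below is a placeholder.

    import sqlite3
    from urllib.parse import urlparse

    DB = sqlite3.connect("mini_pdns.db")   # the toy store built in the earlier sketch

    def enrich(url: str):
        """Return the historical resolutions recorded for a URL's hostname."""
        host = urlparse(url).hostname
        rows = DB.execute(
            "SELECT rrtype, rdata, first_seen, last_seen FROM pdns WHERE name = ?",
            (host,)).fetchall()
        return host, rows

    host, history = enrich("http://example.com/phish/landing")   # placeholder URL
    print(f"{host}: {len(history)} historical record(s)")
    for rrtype, rdata, first_seen, last_seen in history:
        print(f"  {rrtype} {rdata} (first seen {first_seen}, last seen {last_seen})")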

While pDNS is an incredibly useful tool and intelligence aid, it is critical that users understand that acquiring and building a useful pDNS database isn't easy and, as with all data feeds, results are heavily dependent upon the quality of the sources. In addition, because historical and geographical observations are key, the further back the pDNS data goes (ideally 3+ years) and the more the data sources cover global ISPs (ideally a few dozen tier-1 operators), the more reliable and useful the data will be. So select your provider carefully - this isn't something you would ordinarily build yourself (although you can contribute to a bigger collector if you wish).

If you’re looking for more ideas on how to use DNS data as a source and aid to intelligence services and even threat attribution, you can find a walk-through of techniques I’ve presented or discussed in the past here and here.

-- Gunter