Thursday, December 1, 2016

NTP: The Most Neglected Core Internet Protocol

The Internet of today is awash with networking protocols, but at its core lies a handful that fundamentally keep the Internet functioning. From my perspective, there is no modern Internet without DNS, HTTP, SSL, BGP, SMTP, and NTP.

Of these most important Internet protocols, NTP (Network Time Protocol) is likely the least understood and receives the least attention and support. Until very recently, it was supported (part-time) by just one person - Harlan Stenn - "who had lost the root passwords to the machine where the source code was maintained (so that machine hadn't received security updates in many years), and that machine ran a proprietary source-control system that almost no one had access to, so it was very hard to contribute to".

Just about all secure communication protocols and server synchronization processes require that the participating systems' internal clocks agree. NTP is the protocol that makes this possible.
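At its simplest, that synchronization boils down to a 48-byte UDP exchange. A minimal sketch in Python of the two essential pieces - building a client (mode 3) request and decoding a server's 64-bit transmit timestamp - illustrates the mechanics (packet layout per RFC 5905; note NTP counts seconds from 1900, not the Unix epoch of 1970):

```python
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_UNIX_DELTA = 2208988800

def build_client_request() -> bytes:
    """48-byte SNTP request: LI=0, VN=4, Mode=3 (client)."""
    first_byte = (0 << 6) | (4 << 3) | 3  # 0b00100011 == 0x23
    return bytes([first_byte]) + b"\x00" * 47

def transmit_time(packet: bytes) -> float:
    """Decode the server's transmit timestamp (bytes 40-47) as Unix time."""
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return (seconds - NTP_UNIX_DELTA) + fraction / 2**32
```

Sending that request to UDP port 123 of any NTP server and decoding the reply's transmit timestamp is the essence of simple (S)NTP synchronization; full NTP layers round-trip delay and offset calculations across repeated exchanges on top of this.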

ICEI and CACR have gotten involved in supporting NTP, and several related protocol advancements are underway to increase the security of this vital component of the Internet. NTS (Network Time Security), currently an Internet Engineering Task Force (IETF) draft, aims to give administrators a way to add security to NTP and promote secure time synchronization.

While there have been remarkably few exploitable vulnerabilities in NTP over the years, the recent growth of DDoS botnets (such as Mirai) utilizing NTP reflection attacks has shone a new light on its frailties and importance.
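The power of a reflection attack comes from amplification: a small query with a spoofed source address elicits a far larger response aimed at the victim. A back-of-the-envelope sketch shows why the legacy monlist query was so attractive to botnet operators (the packet sizes below are illustrative assumptions for the example, not measurements):

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification: response bytes per byte of spoofed request."""
    return response_bytes / request_bytes

# Illustrative sizes: a single small monlist query could trigger up to
# 100 response packets listing recently seen clients (sizes are assumptions).
request = 234            # one spoofed monlist request
response = 100 * 482     # up to 100 packets of ~482 bytes each
print(round(amplification_factor(request, response)))  # → 206
```

A two-order-of-magnitude multiplier like this is why disabling monlist (or rate-limiting responses) became standard hardening advice for NTP operators.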

Some relevant stories on the topic of how frail and vital NTP has become, and what's being done to correct the problem, can be found at:



Tuesday, November 29, 2016

The Purple Team Pentest

It’s not particularly clear whether a marketing intern thought he was being clever or a fatigued pentester thought she was being cynical when the term “Purple Team Pentest” was first thrown around like spaghetti at the fridge door, but it appears we’re now stuck with the term for better or worse.

Just as the definition of penetration testing has broadened to the point that we commonly label a full-scope penetration of a target's systems - with the prospect of lateral compromise and social engineering - as a Red Team Pentest, delivered by a "Red Team" operating from a sophisticated hacker's playbook, we now often acknowledge the client's vigilant security operations and incident response team as the "Blue Team" - charged with detecting and defending against security threats or intrusions on a 24x7 response cycle.

Requests for penetration tests (Black-box, Gray-box, White-box, etc.) are typically initiated and procured by a core information security team within an organization. This core security team tends to operate at a strategic level within the business - advising business leaders and stakeholders of new threats, reviewing security policies and practices, coordinating critical security responses, evaluating new technologies, and generally being the go-to guys for out-of-the-ordinary security issues. When it comes to penetration testing, the odds are high that some members are proficient with common hacking techniques and understand the technical impact of threats upon the core business systems.

These are the folks who typically scope and eventually review the reports from a penetration test. They are, however, NOT the "Blue Team" - though they may help guide and at times provide third-line support to security operations staff. No, the nucleus of a Blue Team is the front-line personnel watching over SIEMs, reviewing logs, initiating and responding to support tickets, and generally swatting down each detected threat as it appears during their shift.

Blue Teams are defensively focused and typically proficient at their operational security tasks. The highly focused nature of their role does, however, often mean that they lack what can best be described as a "hacker's-eye view" of the environment they're tasked with defending.

Traditional penetration testing approaches are often adversarial. The Red Team must find flaws, compromise systems, and generally highlight the failures in the target's security posture. The Blue Team faces the losing proposition of having to have already secured and remediated all possible flaws before the pentest begins, and then reactively responding to each vulnerability they missed - typically without any insight into the tools or techniques the Red Team leveraged in the attack. Is it any wonder that Blue Teams hate traditional pentests? Is it any wonder the Red Team consultants aren't surprised when the same tools and attack vectors work a year later against the same targets?

A Purple Team Pentest should be thought of as a dynamic amalgamation of Red Team and Blue Team members with the purpose of overcoming communication hurdles, facilitating knowledge transfer, and generally arming the Blue Team with newly practiced skills against a more sophisticated attacker or series of attack scenarios.

How to Orchestrate a Purple Team Pentest Engagement

Very few organizations have their own internal penetration testing team, and even those that do regularly engage external consulting companies to augment the internal team - ensuring the appropriate skills are on hand to tackle more sophisticated pentesting demands.

A Purple Team Pentest almost always utilizes the services of an external pentest team – ideally one that is accomplished and experienced in Red Team pentesting.

Bringing together two highly skilled security teams – one in attack, the other in defense – and having them not only work together, but to also achieve all the stated goals of a Purple Team pentest, requires planning and leadership.

To facilitate a successful Purple Team Pentest, the client organization should consider the following key elements:

  • Scope & Objectives - Before reaching out and engaging with a Red Team provider, carefully define the scope and objectives of the Purple Team Pentest. Be specific as to what the organization's primary goals are and which business applications or operational facilities will be in scope. Since a key objective of conducting a Purple Team Pentest is to educate and better arm the internal Blue Team and to maximize the return on a Red Team's findings, identify and list the gaps that need to be addressed in order to define success.
  • Blue Team Selection - Be specific in defining which parts of the organization and which personnel constitute the "Blue Team". Go beyond merely informing various security operations staff that they are now part of a Blue Team; it is critical that the members feel they are a key component of the company's new defensive strategy. Educate them about the Blue Team's roles and responsibilities. Prior to engaging a Red Team provider and launching a Purple Team Pentest, socialize and refine the scope and objectives of the proposed engagement with the team directly.
  • Red Team Selection - It is important that the client select a Red Team that consists of experienced penetration testers. The greater the skills and experience of the Red Team members, the more they will be able to contribute to the Purple Team Pentest objectives. Often, in pure Red Team Pentest engagements, the consulting team will contain a mix of experienced and junior consultants - with the junior consultants performing much of the tool-based activity under the supervision of the lead consultant. Since a critical component of a Purple Team Pentest lies in the ability to communicate and educate a Blue Team about the attacker's methodologies and motivations, junior-level consultants add little value to that dialogue. Clients are actively encouraged to review the resumes of the consultants proposed for the Red Team in advance of testing.
  • Playbook Definition - Both sides of the Purple Teaming exercise have unique objectives and methodologies. Creating a playbook in advance of testing is encouraged, as is sharing and agreeing on it between the teams. This playbook loosely defines the rules of the engagement and is largely focused on environment stability (e.g. rules for patch management and rollout during the testing period) and defining exceptions to standard Blue Team responses (e.g. identifying but not blocking the inbound IP addresses associated with the Red Team's C&C).
  • Arbitrator or Referee - Someone must be the technical “Referee” for the Purple Team Pentest. They need to be able to speak both Red Team and Blue Team languages, interpret and bridge the gap between them, manage the security workshops that help define and resolve any critical threat discoveries, and generally arbitrate according to the playbook (often adding to the playbook throughout the engagement). Ideally the arbitrator or referee for the engagement is not directly associated with, or a member of, either the Red or Blue teams.
  • Daily Round-table Reviews - Daily round-table discussions and reviews of Red Team findings are the centerpiece of a successful Purple Team Pentest. Best conducted at the start of each day (mitigating the prospect of long, tiring days and overflowing working hours curtailing discussion), the Red Team lays out the successes and failures of the previous day's testing, while the Blue Team responds with what they detected and how they responded. The review facilitates discussion of what the Red Team targeted and why, explains how they proceeded, and allows the Blue Team to query and understand what evidence they could have collected to detect and thwart such attacks. For example, daily discussions should cover: what traffic did the tool or methodology generate, where could that evidence have been captured, how could that evidence be interpreted, and which responses would pose the biggest hurdle to the attacker?
  • Paired Deep Dives - Allowing members of the teams to pair off after the morning review and dive deeper into the technical details and projected responses to a particular attack vector or exploitation is highly encouraged.
  • Evaluate Attack and Defense Success in Real-time - Throughout the engagement the "Arbitrator" should engage with both teams and remain constantly aware of what attacks are in play by the Red Team and what responses are being undertaken by the Blue Team. In some attack scenarios it may be worthwhile allowing the Red Team to persist in an attack even if it has been detected and countered by the Blue Team, or is known to be unsuccessful and unlikely to lead to compromise. However, overall efficiency can be increased and the cost of a Purple Team Pentest reduced by brokering conversations between the teams when attack vectors are stalled, irrelevant, already successful, or known to eventually become successful. For example, suppose the Red Team gains a foothold on a compromised host and then proceeds to brute-force the credentials of an accessible internal database server. Once the Red Team has started its brute-force attack, it may be opportune to validate with the Blue Team that they have already been alerted to the attack in progress and are initiating countermeasures. At that point, to speed up the testing and progress to another approved attack scenario, a list of known credentials can be passed directly to the Red Team, allowing them to continue with a newly created test credential on that (newly) compromised host.
  • Triage and Finding Review - Most Red Team pentests will identify a number of security vulnerabilities and exploit paths that were missed by the Blue Team and will require vendor software patches or software development time to remediate. In a pure Red Team Pentest engagement, a "Final Report" would be created listing all findings, with brief descriptions of recommended and generic best-practice fixes. In a Purple Team Pentest, rather than producing a vulnerability findings report, an end-of-pentest workshop should be held between the two teams. During this workshop each phase of the Red Team testing is reviewed - discoveries, detection, remediation, and mitigation - with an open Q&A dialogue between the teams and, at the conclusion of the workshop, a detailed remediation plan is created along with owner assignments.
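The playbook element described above lends itself to being captured as structured data rather than prose - easy to version, diff, and consult mid-engagement by the arbitrator and both teams. A minimal sketch (every field name and value here is a hypothetical example, not a standard):

```python
# Hypothetical playbook fragment for a Purple Team engagement.
playbook = {
    "engagement": "purple-team-2016-q4",
    "testing_window": {"start": "2016-12-05", "end": "2016-12-16"},
    "environment_stability": {
        "patch_freeze": True,               # no patch rollouts mid-engagement
        "change_control_contact": "arbitrator",
    },
    "response_exceptions": [
        # Identify, but do not block, Red Team C&C infrastructure.
        {"action": "alert_only", "indicator": "redteam-c2-ip-range"},
    ],
}

def is_alert_only(playbook: dict, indicator: str) -> bool:
    """Should the Blue Team alert on, but not block, this indicator?"""
    return any(
        rule["action"] == "alert_only" and rule["indicator"] == indicator
        for rule in playbook["response_exceptions"]
    )
```

Because the arbitrator often amends the playbook mid-engagement, keeping it in a reviewable, machine-checkable form avoids the "he said, she said" disputes a prose document invites.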

The Future is Purple

While the methodologies used in a Purple Team pentest are the same as those of a stand-alone Red Team Pentest, the business objectives and communication methods are considerably different. Even though the Purple Team Pentest concept is relatively new, it is an increasingly important vehicle for improving an organization's security posture and reducing overall costs.

The anticipated rewards from conducting a successful Purple Team pentest include increased Blue Team knowledge of threats and adversaries, muscle-memory threat response and mitigation, validation of playbook response to threats in motion, confidence in sophisticated attacker incident response, identification and enumeration of new vulnerabilities or attack vectors, and overall team-building.

As businesses become more aware of Purple Teaming concepts and develop an increased understanding of internal Blue Team capabilities and benefits, it is anticipated that many organizations will update their annual penetration testing requirements to incorporate the Purple Team Pentest as a cornerstone of their overall information security and business continuity strategy.

-- Gunter Ollmann

Monday, November 28, 2016

Navigating the "Pentest" World

The demand for penetration testing and security assessment services worldwide has been growing year-on-year. Driven largely by Governance, Risk, and Compliance (GRC) concerns, plus an evolving pressure to be seen taking information security and customer privacy seriously, most CIO/CSO/CISOs can expect to conduct regular "pentests" as a means of validating their organization's or product's security.

An unfortunate consequence of two decades of professional-services delivery of pentests is that the very term "penetration testing" now covers a broad range of security services and risk attributes - with most consulting firms providing a smorgasbord of differentiated service offerings, intermixing terms such as security assessment and pentest, and constructing hybrid testing methodologies.

For those newly tasked with finding and retaining a team capable of delivering a pentest, the prospect of having to decipher the lingo and identify the right service is often daunting - failure to get it right is not only financially costly, but may also be career-ending if the testing is later proven inadequate.

What does today’s landscape of pentesting look like?

All penetration testing methodologies and delivery approaches are designed to factor-in and illustrate a threat represented by an attack vector or exploitation. A key differentiator between many testing methodologies lies in whether the scope is to identify the presence of a vulnerability, or to exploit and subsequently propagate an attack through that vulnerability. The former is generally bucketed in the assessment and audit taxonomy, while the latter is more commonly a definition for penetration testing (or an ethical hack).

The penetration testing market and categorization of services is divided along two primary factors - the level of detail that will be provided by the client, and the range of "hacker" tools and techniques that will be allowed as part of the testing. Depending upon the business drivers behind the pentest (e.g. compliance, risk reduction, or attack simulation), there is often a graduated scale of services. Some of the most common terms used are:
  • Vulnerability Scanning
    The use of automated tools to identify hosts, devices, infrastructure, services, applications, and code snippets that may be vulnerable to known attack vectors or have a history of security issues and vulnerabilities.
  • Black-box Pentest
    The application of common attack tools and methodologies against a client-defined target or range of targets in which the pentester is tasked with identifying all the important security vulnerabilities and configuration failures of the scoped engagement. Typically, the penetration scope is limited to approved systems and windows of exploitation to minimize the potential for collateral damage. The client provides little information beyond the scope and expects the consultant to replicate the discovery and attack phases of an attacker who has zero insider knowledge of the environment. 
  • Gray-box Pentest
    Identical methodology to the Black-box Pentest, but with some degree of insider knowledge transfer. When an important vulnerability is uncovered the consultant will typically liaise with the client to obtain additional “insider information” which can be used to either establish an appropriate risk classification for the vulnerability, or initiate a transfer of additional information about the host or the data it contains (that could likely be gained by successfully exploiting the vulnerability), without having to risk collateral damage or downtime during the testing phase.
  • White-box Pentest (also referred to as Crystal-box Pentest)
    Identical tools and methodology to the Black-box Pentest, but the consultants are supplied with all networking documentation and details ahead of time. Often, as part of a White-box Pentest, the client will provide network diagrams, the results of vulnerability scanning tools, and past pentest reports. The objective of this type of pentest is to maximize the consultant's time spent identifying new and previously undocumented security vulnerabilities and issues.
  • Architecture Review
    Armed with an understanding of common attack tools and exploitation vectors, the consultant reviews the underlying architecture of the environment. Methodologies often include active testing phases, such as network mapping and service identification, but may include third-party hosting and delivery capabilities (e.g. domain name registration, DNS, etc.) and resilience to business disruption attacks (e.g. DDoS, Ransomware, etc.). A sizable component of the methodology is often tied to the evaluation and configuration of existing network detection and protection technologies (e.g. firewall rules, network segmentation, etc.) – with configuration files and information being provided directly by the client.
  • Redteam Pentest
    Closely related to the Black-box pentest, the Redteam pentest most closely resembles a real attack. The scope of the engagement (targets and tools that can be used) is often greater than a Black-box pentest, and testing is typically conducted in a manner designed not to alert the client's security operations and incident response teams. The consultant will try to exploit any vulnerabilities they reasonably believe will provide access to client systems and, from a compromised device, attempt to move laterally within the network - seeking to gain access to a specific (hidden) target, or to deliver proof of control of the entire client network.
  • Code Review
    The consultant is provided access to all source code material and will use a mix of automated and manual code analysis processes to identify security issues, vulnerabilities, and weaknesses. Some methodologies will encompass the creation of proof-of-concept (PoC) exploitation code to manually confirm the exploitability of an uncovered vulnerability.
  • Controls Audit
    Typically delivered on-site, the consultant is provided access to all necessary systems, logs, policy-derived configuration files, reporting infrastructure, and data repositories, and performs an audit of existing security controls against a defined list of attack scenarios. Depending upon the scope of the engagement, this may include validation against multiple compliance standards and use a mix of automated, manual, and questionnaire-based evaluation techniques.
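The first rung of that ladder - vulnerability scanning - begins with nothing more exotic than service discovery. A minimal TCP connect-scan sketch in Python illustrates the starting point (real scanners such as Nmap layer banner grabbing, version matching, and a vulnerability database on top of this basic probe):

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5) -> list:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

A call such as `scan_ports("127.0.0.1", range(1, 1025))` enumerates listening services on the well-known ports; mapping each open port to the software (and version) behind it is what turns a port list into a vulnerability report.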
The Hybrid Pentest Landscape

In recent years the pentest landscape has evolved further with the addition of hybrid services and community-sourcing solutions. 
Overlapping the field of pentesting, there are three important additions:
  • Bug Bounty Programs
    Public bug bounty programs seek to crowdsource penetration testing skills and directly incentivize participants to identify vulnerabilities in the client’s online services or consumer products. The approach typically encompasses an amalgamation of Vulnerability Scanning and Black-box Pentest methodologies – but with very specific scope and limitations on exploitation depth. With (ideally) many crowdsourced testers, the majority of testing is repeated by each participant. The hope is that, over time, all low-hanging fruit vulnerabilities will be uncovered and later remediated. 
  • Purple Team Pentest
    This hybrid pentest combines Redteam and Blueteam (i.e. the client's defense or incident response team) activities into a single coordinated testing effort. The Redteam employs all the tools and tricks of a Redteam Pentest methodology, but each test is watched and responded to in real-time by the client's Blueteam. As a collaborative pentest, there is regular communication between the teams (typically end-of-day calls) and syncing of events. The objectives of Purple Team pentesting are both to assess the capabilities of the Blueteam and to reduce the time typically taken to conduct a Redteam Pentest - by quickly validating the success or failure of various attack and exploitation techniques, and limiting the possibility of downtime on targeted and exploited systems.
  • Disaster Recovery Testing
    By combining a Whitebox Pentest with incident response preparedness testing and a scenario-based attack strategy, Disaster Recovery Testing is a hybrid pentest designed to review, assess, and actively test the organization's capability to respond and recover from common hacker-initiated threats and disaster scenarios.
Given the broad category of “pentest” and the different testing methodologies followed by security consulting groups around the globe, prospective clients of these services should ensure that they have a clear understanding of what their primary business objectives are. Compliance, risk reduction, and attack simulation are the most common defining characteristics driving the need for penetration testing – and can typically align with the breakdown of the various pentest service definitions.

[Update: First graph adapted from Patrick Thomas' tweet - https://twitter.com/coffeetocode/status/794593057282859008]

Sunday, July 10, 2016

The Future of Luxury Car Brands in a Self-Driving City

For the automotive manufacturing industry, the next couple of decades are going to be make or break for most of the well known brand names we're familiar with today. 

With the near-term prospect of "self-driving" cars and city-level smart traffic routing (and monitoring) infrastructure fundamentally changing the way we drive, and a shift in city demographics away from wanting (or being able to afford) a personal vehicle, it should be clear to all that the motoring practices of the last century are on a trajectory to disappear pretty quickly.

As self-driving cars eventually negate the "love of driving" and city traffic routing and control systems begin to rigorously enforce variable speed limits, congestion charging, and overall traffic management, the personal car becomes more and more just another impersonal transport system. If that's likely the case (or even partially the case), then what does the future hold for the manufacturers of luxury cars?

Earlier this month I spent a week in Bavaria, Germany, visiting customers and prospects. The economies of southern German cities like Stuttgart and Munich fundamentally revolve around the luxury automotive industry. Companies like BMW, Audi, and Porsche define the standard in personal vehicle luxury and generally lead the world in technical innovation (especially in safety features). Speaking with locals around Bavaria, there is a very real fear that the next two decades could see the fall and eventual demise of these brands.

If the act of "driving" is completely replaced by computer control systems and the vehicle itself eventually becomes a commodity (because every vehicle performs the same way, travels at the same speeds, and is carefully governed by city traffic management systems), luxury vehicle "performance" is no longer a perceived value. As mandated vehicle safety designs are achieved by all manufacturers and there's only a small percentage difference between the best and the worst (yet all get "five stars"), advanced safety innovation is no longer a distinguishing factor. Finally, as Millennials (and the majority of city-bound Generation X and Y) give up the love, desire, and financial capability to own a personal vehicle - and instead seek the kind of "on-demand" transport systems that Uber and its kin will spawn - "luxury" becomes a style choice without a premium.

As those Bavarians I spoke with recognize, these luxury car manufacturers are going to have to change dramatically if they are to remain the brands they are today. Despite all the technical innovation they've been renowned for over the last century, it does appear that they are late to the party and need to change their businesses in pretty short order.

As a BMW owner myself, I'm surprised at how far the company appears to be behind the global changes. I'd have thought that such a technically innovative company would have grasped the social and economic effects on luxury vehicle sales to city dwellers over the coming decade or two. While BMW (and other luxury car brands) have doubled down on vehicle performance, emission controls, renewable energy, and environmentally friendly design, it feels like they've been caught flat-footed by the drive of people (and city planners) to remove the weakness behind the steering wheel... and its implications for all luxury vehicle brands.

I'm positive that the engineers at BMW and other traditionally innovative vehicle manufacturers have many relevant technologies tested and maybe shelved in their laboratories and around test tracks.

While I suspect that "luxury" will factor less into vehicle buying decisions in the future - especially as the trend moves toward fleet management of such vehicles (e.g. taxis, delivery, etc.) - I think that, for these companies to survive, they're going to have to become "technology companies".

Although late to the party, present-day luxury vehicle manufacturers can transform into strong technology companies. For example, some opportunities could include:
  • With several decades of technical safety R&D innovation (e.g. collision avoidance, LADAR, automated parking, route guidance systems, lane management, sleepy-driver recognition, etc.), they already have the credentials and respect in the industry (and with consumers) as research leaders... so why not bundle up these safety features and license them under their brands? For example, the future Google self-driving car... music by Bose, safety by BMW.
  • As designers of engines (combustion, hybrid, and electric) they have decades of experience in design and performance. That could translate in to innovating city-wide refueling management platforms and systems.
  • "Smart Cities" are still mostly a desire rather than a reality. There is huge opportunity for proven technology companies to come in and define the rules, criteria, monitoring, and management of city-wide traffic control systems. Detailed knowledge of vehicle performance, capabilities, and safety controls would be an ideal platform to build upon.
  • Regardless of just how many driver-less cars come to market over the coming decades, there are still going to be hundreds of millions of cars that were never built or designed to be "driver-less". There is an obvious requirement for supplemental or conversion kits for older vehicles - not just their own models.
The list above could be expanded considerably, and I don't doubt that similar thoughts have been discussed at various points over the last half-decade by the luxury brands themselves. However, it seems to me that now is the time for action.

It'll be very interesting to see how these luxury vehicle manufacturers reinvent themselves. If they have the funds now, then not only should they continue to innovate down safety technology paths, they should probably also be looking down the acquisition path... bringing into the fold new tech companies specializing in fleet and city vehicle management, taxi and courier management and control systems, city traffic monitoring and control systems, and maybe even a new generation of refueling stations.

Saturday, July 9, 2016

Next Generation Weapons: The Eye Burner Rifle

The fantasy worlds of early 20th Century science fiction writers, in many ways, appear to be "now-ish" in terms of the technologies with which we'll wage war or police the civil population. Many of the weapons proposed a century ago were nuclear-based... well, perhaps "Atomic" was the more appropriate label at the time. Some authors pursued electric guns or "lightning" throwers, and by the mid-20th century the more commonly imagined man-portable weapon systems were based upon high-powered lasers.

When I think of new weapon systems... man-portable... and likely to be developed and employed within the coming quarter century, I think that many of them will integrate automatic target acquisition processes and coherent light - for "less lethal" confrontations. The term "less lethal" is of course relative and doesn't exclude weapon systems that are proficient at maiming and causing great pain or suffering.

One such system that, given current technological advances, lies at the fingertips of today's weapon designers could encompass the use of high-intensity light, automated facial-feature recognition, and a "high-powered" laser - and achieve a higher degree of target incapacitation than current personal small arms.

The concept would be a handheld configuration (similar in size and dimensions to a rifle) that, when manually pointed in the direction of a target, bathes the target in a high-intensity "white light" (giving the weapon system a range of, say, 50 meters) for a short period of time, at which point an embedded high-definition video device uses facial recognition to identify the eyes of the target currently "lit up", then automatically aligns a built-in high-powered laser with the target's eyes and fires. The laser, depending upon the power of the light source, would either temporarily or permanently blind the target.

A single trigger pull would bathe the target in the main light (which may temporarily disorient them anyway), and during that trigger pull the automated eye acquisition, eye targeting, and laser firing would happen in a fraction of a second (faster than a bullet could traverse the distance between shooter and target). I imagine that after the laser has successfully acquired the eyes and fired, the main light function would end... like a half-second burst of white light. To an external observer, the weapon user would appear to have simply fired a burst of white light at the head or torso of the target.

Obviously there are a lot of nuances to a "future" weapon like this. For example, would the target blink or close their eyes if the initial "white light" was directed at them? At night, the answer is likely yes; however, the facial recognition systems would still work, and even a current "off-the-shelf" laser in the 5-20W range is strong enough to "burn through" the eyelids and damage the eyes. During the day it would obviously be easier... in fact, perhaps the "white light" component is not required... instead the shooter merely targets the "head" and the rest of the system locates and fires upon (or fries) the eyes of the target.

There are of course questions of ethics. But, compared to several ounces of hollow-point lead flying at several times the speed of sound, even permanent blindness is a survivable outcome for the target.

[Wandering thoughts in SciFi]

Friday, January 29, 2016

Watching the Watchers Watching Your Network

It seems that this last holiday season didn’t bring much cheer or goodwill to corporate security teams. With the public disclosure of remotely exploitable vulnerabilities and backdoors in the products of several well-known security vendors, many corporate security teams spent a great deal of time yanking cables, adding new firewall rules, and monitoring their networks with extra vigilance.

It’s not the first time that products from major security vendors have been found wanting.

It feels as though some vendors’ host-based security defenses fail on a monthly basis, while network defense appliances fail less frequently – maybe twice per year. At least that’s what a general perusal of press coverage may lead you to believe. However, the reality is quite different. Most security vendors fix and patch security weaknesses on a monthly basis. Generally, the issues are ones that they themselves have identified (through internal SDL processes or the use of third-party code reviews and assessments), or they are issues identified by customers. And, every so often, critical security flaws that need to be fixed quickly will be “dropped” on the vendor by an independent researcher or security company.

Two decades ago, the terms “bastion host”, “DMZ”, and “firewall” pretty much summed up the core concepts of network security, and it was a simpler time for most organizations – both for vendors and their customers. The threat spectrum was relatively narrow, the attacks were largely manual, and an organization’s online presence consisted of mostly static material. Yet, even then, if you picked up a book on network security you were instructed in short order that you needed to keep your networks separate: one for the Internet, one for your backend applications, one for your backups, and a separate one for managing your security technology.

Since that time, many organizations have either forgotten these basic principles or have intentionally opted for riskier (yet cheaper) architectures, hoping that their protection technologies are up to the task. Alas, as the events of December 2015 have shown us, every device added to a network introduces a new set of security challenges and weaknesses.

From a network security perspective, when looking at the architecture of critical defenses, there are four core principles:

  1. Devices capable of monitoring or manipulating network traffic should never have their management interfaces directly connected to the Internet. If these security devices must be managed over the Internet, it is critical that, at a minimum, only encrypted protocols be used, multi-factor authentication be employed, and approved in-bound management IP addresses be whitelisted. 
  2. The management and alerting interfaces of security appliances must be on a “management” network – separated from other corporate and public networks. It should not be possible for an attacker who may have compromised a security device to leverage the management network to move laterally onto other guest systems or provide a route to the Internet. 
  3. Span ports and network taps that observe Internet and internal corporate traffic should by default only operate in “read-only” mode. A compromised security monitoring appliance should never be capable of modifying network traffic or communicating with the Internet from such an observation port. 
  4. Monitor your security products and their management networks. Security products (especially networking appliances such as core routers, firewalls, and malware defenses) will always be a high-value target to both external and internal attackers. These core devices and their management networks must be continuously monitored for anomalies and audited. 
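The first principle above lends itself to mechanical checking. Below is a minimal Python sketch of such an audit - the rule format, management port list, and whitelist are hypothetical illustrations, not any vendor's configuration syntax - that flags firewall rules exposing a management port to sources outside the approved networks:

```python
# Sketch: flag firewall rules that open a management port to sources
# beyond an approved whitelist. Rule format, ports, and networks below
# are hypothetical examples, not a real vendor's syntax.
import ipaddress

MGMT_PORTS = {22, 443, 8443}          # assumed management ports
APPROVED_SOURCES = [                  # assumed in-bound management whitelist
    ipaddress.ip_network("203.0.113.0/28"),
]

def rule_violations(rules):
    """Return the rules that expose a management port beyond the whitelist."""
    findings = []
    for rule in rules:
        if rule["dst_port"] not in MGMT_PORTS:
            continue
        src = ipaddress.ip_network(rule["src"])
        # A management rule is acceptable only if its source range is
        # fully contained within one of the approved networks.
        if not any(src.subnet_of(net) for net in APPROVED_SOURCES):
            findings.append(rule)
    return findings

rules = [
    {"src": "0.0.0.0/0",      "dst_port": 443},  # management open to the world
    {"src": "203.0.113.4/32", "dst_port": 22},   # whitelisted admin host
    {"src": "0.0.0.0/0",      "dst_port": 80},   # public web - not management
]
print([r["src"] for r in rule_violations(rules)])  # -> ['0.0.0.0/0']
```

A check like this belongs in the continuous monitoring loop of principle 4: run it against every rule-base change, and alert when a management port drifts outside the whitelist.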

In an age where state-sponsored reverse engineers, security research teams, and online protagonists are actively hunting for flaws and backdoors in the widely deployed products of major security vendors as a means of gaining privileged and secret access to their target’s networks, it is beyond prudent to revisit the core tenets of secure network architecture.

Corporate security teams and network architects should assume not only that new vulnerabilities and backdoors will be disclosed throughout the year, but that those holes may have been accessible and exploited for several months beforehand. As such, they should adopt a robust defense-in-depth strategy including “watchers watching watchers.”

Shodan's Shining Light

The Internet is chock full of really helpful people and autonomous systems that silently probe, test, and evaluate your corporate defenses every second of every minute of every hour of every day. If those helpful souls and systems aren’t probing your network, then they’re diligently recording and cataloguing everything they’ve found so others can quickly enumerate your online business or list systems like yours that are similarly vulnerable to some kind of attack or other.

Back in the dark ages of the Internet (circa the 20th century) everyone had to run their own scans to map the Internet in order to spot vulnerable systems on the network. Today, if you don’t want to risk falling foul of some antiquated hacking law in some country by probing IP addresses and shaking electronic hands with the services you encounter, you can easily find a helpful soul that’s figured it all out on your behalf and turn on the faucet of knowledge for a paltry sum.

One of the most popular services to shine light on and enumerate the darkest corners of the Internet is Shodan. It’s a portal-driven service through which subscribers can query its vast database of IP addresses, online applications and service banners that populate the Internet. Behind the scenes, Shodan’s multiple servers continually scan the Internet, enumerating and probing every device they encounter and recording the latest findings.

As an online service that diligently catalogues the Internet, Shodan behaves rather nicely. The servers that do the scanning aren’t overly aggressive, and their DNS information doesn’t obfuscate who and what they are. All told, they are little more troublesome than Google in its efforts to map out Web content on the Internet.

In general, most people don’t identify what Google (or Microsoft, Yahoo or any other commercial search engine) does as bad, let alone illegal. But if you are familiar with the advanced search options these sites offer or read any number of books or blogs on “Google Dorks,” you’ll likely be more fearful of them than something with limited scope like Shodan.

Unfortunately, Shodan is increasingly perceived as a threat by many organizations. This might be due to its overwhelming popularity or its frequent citation amongst the infosec community and journalists as a source of embarrassing statistics. Consequently, security companies like Check Point have included alerts and blocking signatures in a vain attempt to thwart Shodan and its ilk.

On one hand, you might empathize with many organizations on the receiving end of a Shodan scan. Their Internet-accessible systems are constantly probed, their services are enumerated, and every embarrassing misconfiguration or unpatched service is catalogued and could be used against them by evil hackers, researchers and journalists.

In some realms, you’ll also hear that the bad-guy competitors to Shodan (e.g. cyber criminals mapping the Internet for their own financial gain) copy Shodan’s scanning characteristics, so that the target’s security and incident response teams assume it’s actually the good guys and ignore the threat.

On the other hand, with it being so easy to modify the scanning process – changing scan types, modifying handshake processes, using different domain names, and launching scans from a broader range of IP addresses – you’d be forgiven for thinking that it’s all a bit of wasted effort… about as useful as a “keep-off-the-grass” sign in Hyde Park.

Although “robots.txt” serves in its own way as a similarly polite request - asking Web search scanners not to navigate and cache pages on a site - it is routinely ignored by less scrupulous scanning operators. Worse, it serves as a flashing neon arrow that directs hackers and security researchers to the more sensitive content.
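To illustrate both sides of that coin, here is a small Python sketch using the standard library's robotparser (the robots.txt content and the URLs are made-up examples): a well-behaved crawler honors the file, while anyone else can read the very same file as a map of the paths the site considers sensitive.

```python
# Sketch: robots.txt is a polite request, not a control - and its Disallow
# lines double as a directory of what a site would rather you not see.
# The robots.txt content below is a hypothetical example.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /backup/
Disallow: /staging/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks before fetching...
print(parser.can_fetch("GoodBot", "https://example.com/admin/"))      # -> False
print(parser.can_fetch("GoodBot", "https://example.com/index.html"))  # -> True

# ...but nothing stops anyone from reading the file itself and treating
# every Disallow entry as a pointer to the interesting content.
targets = [line.split(":", 1)[1].strip()
           for line in robots_txt.splitlines()
           if line.lower().startswith("disallow")]
print(targets)  # -> ['/admin/', '/backup/', '/staging/']
```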

It’s a sad indictment of current network security practices that a reputable security vendor felt the need and justification to add detection rules for Shodan scans and that their customer organizations may feel more protected for implementing them.

While the virtual “keep-off-the-grass” warning isn’t going to stop anyone, it does empower the groundskeeper to shout, “Get off my land!” (in the best Cornish accent they can muster) and feel justified in doing so. In the meantime, the plague of ever-helpful souls and automated systems will continue to probe away to their hearts’ content.

Friday, November 20, 2015

Battling Cyber Threats Using Lessons Learned 165 Years Ago

When it comes to protecting the end user, the information security community is awash with technologies and options. Yet, despite the near endless array of products and innovation focused on securing that end user from an equally broad and expanding array of threats, the end user remains more exposed and vulnerable than at any other period in the history of personal computing.

Independent of these protection technologies (or possibly because of them), we’ve also tried to educate the user in how best (i.e. more safely) to browse the Internet and take actions to protect themselves. With a cynical eye, it’s almost like a government handing out maps to their citizens and labeling streets, homes, and businesses that are known to be dangerous and shouldn’t be visited – because not even the police or military have been effective there.

Today we instruct our users (and at home, our children) to be careful about what they click on, which pages or sites they visit, what information they share, and which files they download. These instructions are not just onerous and confusing; more often than not they’re irrelevant, for even after following them to the letter, the user can still fall victim.

The fact that a user can’t click on whatever they want, browse wherever they need to, and open whatever they’ve received should be interpreted as a mile-high flashing neon sign saying “infosec has failed and continues to fail” (maybe reworded with a bunch of four-letter expletives for good measure too).

For decades now, thousands of security vendors have brought to market technologies that, in effect, are predominantly tools designed to fill vulnerable and exploited gaps in the operating systems lying at the core of the devices end users rely upon. If we’re ever to make progress against the threat and reach the utopia of users being able to “carelessly” use the Internet, those operating systems must get substantially better.

In recent years, great progress has been made on the OS front – primarily in smartphone OS’s. The operating systems running on our most pocket-friendly devices are considerably more secure than those we rely upon for our PCs, notebooks, or servers at home or work. There’s a bunch of reasons why, of course – and I’ll not get into that here – but there’s still so much more that can be done.

I do believe that there are many lessons to be learned from the past; lessons that can help guide future developments and technologies. Reaching back a little further into the past than usual – way before the Internet, and way before computers – there are a couple of related events that could shine a brighter light on newer approaches to protecting the end user.

Back in 1850, a Hungarian doctor named Ignaz Semmelweis was working in the maternity clinic at the General Hospital in Vienna, where he noted that many women in the maternity wards were dying from puerperal fever - commonly known as childbed fever. He studied two medical wards in the hospital – one staffed by all-male doctors and medical students, and the other by female midwives – and counted the number of deaths in each ward. What he found was that death from childbirth was five times higher in the ward with the male doctors.

Dr. Semmelweis tested numerous hypotheses as to the root cause of the deadly difference – ranging from mothers giving birth on their sides versus their backs, through to the route priests took through the ward and the bells they rang. His Eureka moment appears to have come after the death of a male pathologist who, upon pricking his finger while doing an autopsy on a woman who had died of childbed fever, succumbed to the same fate (apparently being a pathologist in the mid-19th century was not conducive to a long life). Joining the dots, Dr. Semmelweis noted that the male doctors and medical students were doing autopsies while the midwives were not, and that “cadaverous particles” (this being a period before germs were known) were being spread to the birthing mothers.

Dr. Semmelweis’s medical innovation? “Wash your hands!” The net result, after doctors and midwives started washing their hands (in lime water, and later in chlorine), was that the rate of childbed fever dropped considerably.

Now, if you’re in the medical trade and washing your hands multiple times per day in chlorine or (by the late 1800s) carbolic acid, you’ll note that it isn’t so good for your skin or hands.

In 1890, William Stewart Halsted of Johns Hopkins University asked the Goodyear Tire and Rubber Company whether they could make a rubber glove that could be dipped in carbolic acid in order to protect the hands of his nurses – and so were born the first sterilized medical gloves. The first disposable latex medical gloves, manufactured by Ansell, didn’t appear until 1964.

What does this foray into 19th-century medical history mean for Internet security, I hear you say? Simple, really: every time an end user needs to use a computer to access the Internet and do work, it needs to be clean and pristine. Whether that means a clean new virtual image (i.e. “wash your hands”) or a disposable environment that sits on top of the core OS and authorized application base (i.e. “disposable gloves”), the assumption needs to be that nothing the user encounters over the Internet can persist on the device after they’ve finished their particular actions.
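As a rough illustration of the "disposable gloves" idea, the Python sketch below creates a pristine per-session workspace and destroys it when the session ends. The session directory and its contents are purely illustrative - a real implementation would, for example, point a browser's profile directory at it, or back it with a throwaway virtual machine.

```python
# Sketch: a "disposable gloves" browsing session - an ephemeral directory
# created clean for each session and destroyed afterwards, so nothing
# encountered online persists on the device. Purely illustrative.
import contextlib
import shutil
import tempfile
from pathlib import Path

@contextlib.contextmanager
def disposable_session(prefix="throwaway-"):
    """Yield a pristine per-session directory; wipe it on exit."""
    profile = Path(tempfile.mkdtemp(prefix=prefix))
    try:
        yield profile   # e.g. launch a browser with its profile pointed here
    finally:
        shutil.rmtree(profile, ignore_errors=True)  # "wash your hands"

with disposable_session() as profile:
    # Anything the session writes - cookies, cache, downloads - lands here...
    (profile / "cookies.db").write_text("session state")
    assert (profile / "cookies.db").exists()
    saved = profile

# ...and is gone once the session ends.
print(saved.exists())  # -> False
```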

This obviously isn’t a solution for every class of cyber threat out there, but it’s an 80% solution – just as washing your hands and wearing disposable gloves as a triage nurse isn’t going to protect you (or your patient) from every post-surgery ailment.

Operating system providers or security vendors that can seamlessly and automatically procure a clean, pristine environment for the end user every time they need to conduct activities on or related to the Internet will fundamentally change the security game – altering the battlefield for attackers and the tools of their trade.

Exciting times ahead.


-- Gunter