Friday, December 23, 2016

Body Worn Camera Technologies – Futures and Security

“Be careful what you wish for” is an appropriate adage for the flourishing use and advancement of body worn camera (BWC) technologies. As police forces around the world adapt to increased demands for accountability – where every decision, reaction, and word can be analyzed in post-event forensic fashion – the need and desire to equip each police or federal agent with a continuously recording camera has grown.

There are pros and cons to every technology – both in its technical capabilities and in the societal changes it brings. The impartial and continuous recording of an event or confrontation places new stresses on those whose job is to enforce the thousands of laws society must operate within on a daily basis, in the knowledge that each interpretation and action could be dissected in a court of law at some point in the future. Meanwhile, “offenders” must assume that each action – hostile or otherwise – could run afoul of some hitherto unknown law, in fully recorded technicolor.

Recently the National Institute of Justice released a market survey on Body Worn Camera Technologies. There are over 60 different BWCs specifically created for law enforcement use and the document provides information on the marketed capabilities of this relatively new class of technology.

The technological features of the current generation of BWCs are, overall, quite rudimentary – given limitations of battery power, processing capability, and network bandwidth. There is, however, a desire by the vendors to advance the technology substantially – not just in recording capability, but in areas such as facial recognition and cloud integration.

Today’s generation of BWCs truly are the 1.0 version of a policing platform that will evolve rapidly over the coming decade.

I’ve had a chance to take a closer look at the specifications and capabilities of today’s BWC solutions and have formulated some thoughts on how these platforms will likely advance over the coming years (note that some of these capabilities already exist within specialized military units around the world – and will be easy additions to the BWC platform once production costs come down):
  1. Overcome the bandwidth problem to allow real-time streaming and remote analysis of the video data. As cellular capabilities increase and 4G/5G becomes cheaper and more reliable in metro centers, “live action” can be streamed to the law enforcement SOC (just like existing CCTV capabilities). Where cellular coverage isn’t reliable, or where multiple law enforcement officers are working in close geographic proximity, mobile cellular towers (e.g. as a component of the police vehicle) will likely serve as the local node – offering higher-definition and longer recording possibilities, plus remote SOC “dial-in” to oversee operations with minimal bandwidth demands.
  2. Cloud integration of collected facial recognition data. As the video processing capability of the BWC improves, it will become possible to create a unique codification of each face being recorded (see the embedding-matching sketch after this list). This facial recognition data could then be relayed to the cloud for matching against known-offender databases, or for geographic tracking of individuals whose names aren’t yet known – but who could later be matched against government-issued photo IDs, such as driver’s license or passport images. While the law enforcement officer may not have immediately recognized the face – or may have caught only a second’s passing glimpse – a centralized system could alert the officer to the person’s presence. In addition, while an officer is questioning or detaining a suspect, facial recognition can be used to confirm their identity in real time.
  3. BWC, visor, and SOC communication integration. As BWCs transition from a “passive recording” system into a real-time integrated policing technology, it is reasonable to assume advancements in visual alerting – for example, a tactical visor that presents information to the law enforcement officer in real time, overlaying virtual representations and metadata on their live view of the situation. Such a technology advance would allow for rapid crowd scanning (e.g. identifying and alerting on wanted criminals passing through a crowd or mall), vehicle processing (e.g. license plate look-up), or notable item classification (e.g. distinguishing a firearm from a replica toy).
  4. Broad spectrum cameras and processing. The cameras used with today’s BWC technology are typically limited to standard visible frequencies, with some offering low-light recording capabilities. It is reasonable to assume that broader spectral coverage will expand what can be recorded and determined using local or cloud-based processing. Infrared recording (e.g. enabling heat mapping) could help identify sick or ailing detainees (e.g. a bird flu outbreak victim, or the hypothermic state of a rescued person), as well as provide additional facial recognition capability independent of facial coverings (e.g. beard, balaclava, glasses) – along with improved night-time recording and (when used with a visor or ocular accessory) tracking of a runaway.
  5. Health and anxiety measurement. Using existing machine learning and signal processing techniques, it is possible to measure heart rate variability (HRV) from a recorded video stream (a minimal sketch of the underlying signal processing appears after this list). As the per-unit compute power of BWC devices increases, it will become possible to accurately measure an individual’s heart rate merely by focusing on their face, and to relay that to the law enforcement officer. Such a capability could be used to identify possible health issues, recent exertion, or anxiety-related stress. Real-time HRV measurements could aid in determining whether a detainee is lying or needs medical attention. Using these machine learning techniques, HRV can be determined even if the subject is wearing a mask, or if only the back of the head is visible.
  6. Hidden weapon detection. Advanced signal processing and AI can be used to determine whether an object is hidden on a moving subject based on fabric movement. As a clothed person moves, the fabrics in their clothing fold, slide, oscillate, and shift in many different ways. AI systems can be harnessed to analyze frame-by-frame movements, identify hard points and layered stress points, and outline the shape and density of objects or garments hidden or obscured by the outermost visible layer of clothing. Pattern matching systems could determine, in real time, the size, shape, and relative density of a weapon or other hidden item on the person. In its most basic form, the system could verbally alert the BWC user that the subject has a holstered gun under the left breast of their jacket, or a bowie knife taped to their right leg. With a more advanced BWC platform (as described in #3 above), a future visor may overlay the accumulated weapon and hard-point detections on the law enforcement officer’s view of the subject – providing pseudo x-ray vision without requiring any active probing signals.
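On #2 above: the cloud-matching step reduces to comparing compact face “codifications” (embeddings). Here is a minimal, hedged sketch – the watch-list contents, vector size, and threshold are illustrative assumptions, and the embedding itself is assumed to come from some on-camera face-codification model:

```python
import numpy as np

# Hypothetical watch-list: name -> face embedding (e.g. from a neural network)
WATCH_LIST = {"subject-042": np.random.randn(128)}
for name in WATCH_LIST:
    WATCH_LIST[name] /= np.linalg.norm(WATCH_LIST[name])  # normalize once

def match_face(embedding, threshold=0.6):
    """Return the best watch-list match for a face embedding, or None.
    On unit-length vectors, cosine similarity is just a dot product."""
    e = np.asarray(embedding, dtype=float)
    e /= np.linalg.norm(e)
    best_name, best_score = None, threshold
    for name, ref in WATCH_LIST.items():
        score = float(np.dot(e, ref))
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```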
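And on #5: the core signal-processing trick (remote photoplethysmography) is to find the dominant frequency in the subtle frame-to-frame color changes of skin. A sketch, assuming face detection and tracking have already happened elsewhere and that we are handed the mean green-channel intensity of the face region for each frame:

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate pulse (in BPM) from the mean green-channel intensity of a
    tracked face region across a few hundred video frames."""
    x = np.asarray(green_means, dtype=float)
    x -= x.mean()                                  # remove the DC component
    windowed = x * np.hanning(len(x))              # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)         # 42-240 BPM: plausible pulse
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0
```

HRV would then follow from tracking how that dominant beat interval varies over time; doing this robustly on a moving, partially occluded subject is of course the hard research problem.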

Given the state of current and anticipated advances in camera performance, Edge Computing capability, broadband increases, and smart-device inter-connectivity over the coming decade, it is reasonable to assume that the BWC technology platform will incorporate most, if not all, of the capabilities listed above.

As video evidence from BWCs becomes more important to successful policing, it is vital that a parallel path for the data security, integrity, and validation of that video content be advanced.

The anti-tampering capabilities of today’s BWC systems are severely limited. Given the capabilities of current-generation off-the-shelf video editing suites, manipulated video can already be very difficult, if not impossible, to detect – and these editing capabilities will only continue to advance. Therefore, for trust in BWC footage to remain (and ideally grow), new classes of anti-tamper and frame-by-frame signing will be required – along with advanced digital chain-of-custody tracking (a sketch of one signing approach follows).
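To illustrate what frame-by-frame signing could look like – a hedged sketch only, assuming a signing key held in tamper-resistant camera hardware and ignoring the key-management problem entirely – each frame’s authentication tag can be chained to every tag before it, so that altering, dropping, or reordering frames breaks verification from that point onward:

```python
import hashlib
import hmac

GENESIS = b"\x00" * 32  # starting value for the tag chain

def sign_frames(frames, key):
    """Produce one chained HMAC tag per frame: tag[i] commits to frame i's
    bytes AND to tag[i-1], so any edit invalidates all subsequent tags."""
    chain, tags = GENESIS, []
    for frame in frames:
        frame_digest = hashlib.sha256(frame).digest()
        chain = hmac.new(key, chain + frame_digest, hashlib.sha256).digest()
        tags.append(chain)
    return tags

def verify_frames(frames, tags, key):
    """Recompute the chain and compare; True only if nothing was tampered."""
    return hmac.compare_digest(b"".join(sign_frames(frames, key)), b"".join(tags))
```

Anchoring the final tag (or periodic checkpoints) somewhere external to the camera is precisely where the next idea comes in.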

Advances in, and the commercialization of, blockchain technology would appear at first glance to be ideally suited to digital chain-of-custody tracking.

Wednesday, December 21, 2016

Edge Computing, Fog Computing, IoT, and Securing them All

The oft-used term “the Internet of Things” (IoT) has expanded to encapsulate practically any device (or “thing”) with some modicum of compute power that in turn can connect to another device that may or may not be connected to the Internet. The range of products and technologies falling into the IoT bucket is immensely broad – ranging from household refrigerators that can order and restock goods via Amazon, through to Smart City traffic flow sensors that feed navigation systems to avoid jams, and even implanted heart monitors that can send emergency updates via the patient’s smartphone to a cardiovascular surgeon on vacation in the Maldives.

The information security community – in fact, the InfoSec industry at large – has struggled and mostly failed to secure the “IoT”. This does not bode well for the next evolutionary advancement of networked compute technology.

Today’s IoT security problems are caused and compounded by some pretty hefty design limitations – power consumption, physical size and shock resistance, environmental exposure, cost per unit, and the manufacturer’s overall security knowledge and development capability.
The next evolutionary step is already underway – and it exposes a different kind of threat and attack surface than the IoT does.

As each device we use, and each component we incorporate into our products and services, becomes smart, there is a growing need for a “brain of brains”. In most technology use cases, it makes no sense to have every smart device independently connecting to the Internet and expecting a cloud-based system to make sense of it all and to control it.

It’s simply not practical for every device to use the cloud the way smartphones do – sending everything to the cloud to be processed, having their data stored in the cloud, and having the cloud return the processed results back to the phone.

Consider the coming generation of automobiles. Every motor, servo, switch, and meter within the vehicle will be independently smart – monitoring the device’s performance, configuration, optimal tuning, and fault status. A self-driving car needs to instantaneously process this huge volume of data from several hundred devices, and passing it to the cloud and back again just isn’t viable. Instead, the vehicle needs its own processing and storage capabilities – independent of the cloud – yet still interconnected.

The concepts behind this shift in computing power and intelligence are increasingly referred to as “Fog Computing”. In essence, computing nodes closest to the collective of smart devices within a product (e.g. a self-driving car) or environment (e.g. a product assembly line) must be able to handle the high volumes and velocity of data generation, and provide services that standardize, correlate, reduce, and control the data elements that will be passed to the cloud. These smart(er) aggregation points are in turn referred to as “Fog Nodes”.
[Figure: Fog Computing and Fog Nodes – Source: Cisco]
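To make the Fog Node role concrete, here is a minimal sketch (the class name, thresholds, and uplink are all my own illustrative placeholders) of a node that standardizes and reduces local device readings, forwarding only summaries and urgent anomalies to the cloud:

```python
from statistics import mean

class FogNode:
    """Aggregate raw readings from local smart devices; push reduced
    summaries (and urgent anomalies) upstream instead of raw streams."""

    def __init__(self, window=100, alert_threshold=95.0):
        self.window = window
        self.alert_threshold = alert_threshold
        self.buffer = []

    def ingest(self, device_id, value):
        self.buffer.append(value)
        if value > self.alert_threshold:            # urgent: bypass batching
            self.push_to_cloud({"alert": device_id, "value": value})
        if len(self.buffer) >= self.window:         # otherwise: reduce first
            self.push_to_cloud({"count": len(self.buffer),
                                "mean": mean(self.buffer),
                                "max": max(self.buffer)})
            self.buffer.clear()

    def push_to_cloud(self, summary):
        print("-> cloud:", summary)                 # placeholder uplink
```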
In evolutionary terms, this means that computing power is shifting to the edges of the network. Centralization of computing resources and processing within the Cloud revolutionized the Information Technology industry. “Edge Computing” is the next advancement – and it’s already underway.

If the InfoSec industry has been so unsuccessful in securing the IoT, what is the probability it will be more successful with Fog Computing and eventually Edge Computing paradigms?

My expectation is that securing Fog and Edge computing environments will actually be simpler, and many of the problems with IoT will likely be overcome as the insecure devices themselves become subsumed in the Fog.

A limitation of securing the IoT has been the processing power of the embedded computing system within each device. As these devices begin to report in and communicate through aggregation nodes, I anticipate that those nodes will have substantially more computing power and will be capable of securing and validating the communications of all the dumb-smart devices beneath them (see the sketch below).
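A sketch of that idea – the per-device secrets and message format are hypothetical – in which the aggregation node authenticates each constrained device’s message before it is trusted, aggregated, or forwarded:

```python
import hashlib
import hmac
import json

DEVICE_KEYS = {"thermostat-07": b"hypothetical-per-device-secret"}

def validate_message(device_id, payload, tag):
    """Fog-node check: the dumb-smart device only needs cheap HMAC support;
    the node does the heavier work of verification and normalization."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return None                                # unknown device: drop
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return None                                # spoofed or tampered: drop
    return json.loads(payload)                     # now safe to normalize
```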

As computing power shifts to the edge of the network, so too will security.

Over the years corporate computing needs have shifted from centralized mainframes, to distributed workstations, to centralized and public cloud, and next into decentralized Edge Computing. Security technologies and threat analytics have followed a parallel path. While the InfoSec industry has failed to secure the millions upon millions of IoT devices already deployed, the cure likely lies in the more powerful Fog Nodes and smart edges of the network that do have the compute power necessary to analyze threats and mitigate them.

That all said, Edge Computing also means that there will be an entirely new class of device isolated and exposed to attack. These edge devices will not only have to protect the less-smart devices they proxy control for, but will have to be able to protect themselves too.

Nobody ever said the life of an InfoSec professional was dull.

Wednesday, December 7, 2016

Sledgehammer DDoS Gamification and Future Bugbounty Integration

Monetization of DDoS attacks has been core to online crime way before the term cybercrime was ever coined. For the first half of the Internet’s life DDoS was primarily a mechanism to extort money from targeted organizations. As with just about every Internet threat over time, it has evolved and broadened in scope and objectives.

The new report by Forcepoint Security Labs covering their investigation of the Sledgehammer gamification of DDoS attacks is a beautiful example of that evolution. Their analysis paper walks through both the malware agents and the scoreboard/leaderboard mechanics of a Turkish DDoS collaboration program (named Sath-ı Müdafaa or “Surface Defense”) behind a group that has targeted organizations with political ties deemed inconsistent with Turkey’s current government.

In this most recent example of DDoS threat evolution, a pool of hackers is encouraged to join a collective of hackers targeting the websites of perceived enemies of Turkey’s political establishment.
Using the DDoS agent “Balyoz” (the Turkish word for “sledgehammer”), members of the collective are tasked with attacking a predefined list of target sites – but can suggest new sites if they so wish. In parallel, a scoreboard tracks participants’ use of the Balyoz attack tool – allocating points for every ten minutes of attack conducted, redeemable against a stand-alone version of the DDoS tool and other revenue-generating cybercrime tools.

As is traditional in the dog-eat-dog world of cybercrime, there are several details that the organizers behind the gamification of the attacks failed to pass on to the participants – such as the backdoor built into the malware they’re using.

Back in 2010 I wrote the detailed paper “Understanding the Modern DDoS Threat” and defined three categories of attacker – Professional, Gamerz, and Opt-in. This new DDoS threat appears to meld the Professional and Opt-in categories into a single political and money-making venture. Not a surprising evolutionary step, but certainly an unwanted one.

If it’s taken six years of DDoS cybercrime evolution to get to this hybrid gamification, what else can we expect?

In that same period of time we’ve seen ad hoc website hacking move from an ignored threat, to forcing a public disclosure discourse, to acknowledgement of discovery and remediation, and on to commercial bug bounty platforms.

The bug bounty platforms (such as Bugcrowd, HackerOne, Vulbox, etc.) have successfully gamified the low-end business of website vulnerability discovery – where bug hunters and security researchers around the world compete for premium rewards. Is it not a logical step that DDoS also make the transition to the commercial world?

Several legitimate organizations provide “DDoS Resilience Testing” services. Typically, using software bots they spin up within public cloud infrastructure, DDoS-like attacks are launched at paying customers. The objectives of such an attack include measuring and verifying the resilience of the target’s infrastructure to DDoS attacks, exercising and testing the company’s “blue team” response, and wargaming business continuity plans.

If we were to apply the principles of bug bounty programs to gamifying the commercial delivery of DDoS attacks, rather than a contrived limited-scope public cloud imitation, we’d likely have much more realistic testing capability – benefiting all participants. I wonder who’ll be the first organization to master scoreboard construction and incentivisation? I think the new bug bounty companies are agile enough and likely have the collective community following needed to reap the financial rewards of the next DDoS evolutionary step.

Thursday, December 1, 2016

NTP: The Most Neglected Core Internet Protocol

The Internet of today is awash with networking protocols, but at its core lies a handful that fundamentally keep the Internet functioning. From my perspective, there is no modern Internet without DNS, HTTP, SSL, BGP, SMTP, and NTP.

Of these most important Internet protocols, NTP (Network Time Protocol) is likely the least understood, and it has received the least attention and support. Until very recently, it was supported (part-time) by just one person – Harlan Stenn – who “had lost the root passwords to the machine where the source code was maintained (so that machine hadn't received security updates in many years), and that machine ran a proprietary source-control system that almost no one had access to, so it was very hard to contribute to”.

Just about all secure communication protocols and server synchronization processes depend on the participating systems having closely synchronized internal clocks. NTP is the protocol that makes this happen.
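To make NTP’s role less abstract, here is a minimal SNTP-style query in Python – hedged: no retries or sanity checks beyond a timeout, and pool.ntp.org is simply a convenient public server pool:

```python
import socket
import struct
import time

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def ntp_time(server="pool.ntp.org"):
    """Send a minimal 48-byte client request (LI=0, VN=3, Mode=3) and read
    the server's Transmit Timestamp from bytes 40-47 of the reply."""
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    seconds, fraction = struct.unpack("!II", data[40:48])
    return seconds - NTP_DELTA + fraction / 2**32

if __name__ == "__main__":
    t = ntp_time()
    print("server time:", time.ctime(t), "| local offset: %.3fs" % (t - time.time()))
```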

ICEI and CACR have gotten involved in supporting NTP, and several related protocol advancements are underway to increase the security of this vital component of the Internet. NTS (Network Time Security), currently a draft with the Internet Engineering Task Force (IETF), aims to give administrators a way to add security to NTP and promote secure time synchronization.

While there have been remarkably few exploitable vulnerabilities in NTP over the years, the recent growth of DDoS botnets (such as Mirai) utilizing NTP Reflection Attacks shone a new light on its frailties and importance.

Some relevant stories on how frail and vital NTP has become – and what’s being done to correct the problem – can be found at:

Tuesday, November 29, 2016

The Purple Team Pentest

It’s not particularly clear whether a marketing intern thought he was being clever or a fatigued pentester thought she was being cynical when the term “Purple Team Pentest” was first thrown around like spaghetti at the fridge door, but it appears we’re now stuck with the term for better or worse.

Just as the definition of penetration testing has broadened, so has the labeling: we now commonly call a full-scope penetration of a target’s systems – with the prospect of lateral compromise and social engineering – a Red Team Pentest, delivered by a “Red Team” operating from a sophisticated hacker’s playbook. Likewise, we acknowledge the client’s vigilant security operations and incident response team as the “Blue Team” – charged with detecting and defending against security threats or intrusions on a 24x7 response cycle.

Requests for penetration tests (Black-box, Gray-box, White-box, etc.) are typically initiated and procured by a core information security team within an organization. This core security team tends to operate at a strategic level within the business – advising business leaders and stakeholders of new threats, reviewing security policies and practices, coordinating critical security responses, evaluating new technologies, and generally being the go-to guys for out-of-the-ordinary security issues. When it comes to penetration testing, the odds are high that some members are proficient with common hacking techniques and understand the technical impact of threats upon the core business systems.

These are the folks that typically scope and eventually review the reports from a penetration test. They are, however, NOT the “Blue Team” – though they may help guide and at times provide third-line support to security operations people. No, the nucleus of a Blue Team is the front-line personnel watching over SIEMs, reviewing logs, initiating and responding to support tickets, and generally swatting down each detected threat as it appears during their shift.

Blue Teams are defensively focused and typically proficient at their operational security tasks. The highly focused nature of their role does, however, often mean that they lack what can best be described as a “hacker’s-eye view” of the environment they’re tasked with defending.

Traditional penetration testing approaches are often adversarial. The Red Team must find flaws, compromise systems, and generally highlight the failures in the target’s security posture. The Blue Team faces the losing proposition of needing to have already secured and remediated every possible flaw before the pentest begins, and must then reactively respond to each vulnerability they missed – typically without comprehension of the tools or techniques the Red Team leveraged in their attack. Is it any wonder that Blue Teams hate traditional pentests? Why aren’t the Red Team consultants surprised that the same tools and attack vectors work a year later against the same targets?

A Purple Team Pentest should be thought of as a dynamic amalgamation of Red Team and Blue Team members with the purpose of overcoming communication hurdles, facilitating knowledge transfer, and generally arming the Blue Team with newly practiced skills against a more sophisticated attacker or series of attack scenarios.

How to Orchestrate a Purple Team Pentest Engagement

Very few organizations have their own internal penetration testing team, and even those that do regularly utilize external consulting companies – augmenting the internal team to ensure the appropriate skills are on hand and to tackle more sophisticated pentesting demands.

A Purple Team Pentest almost always utilizes the services of an external pentest team – ideally one that is accomplished and experienced in Red Team pentesting.

Bringing together two highly skilled security teams – one in attack, the other in defense – and having them not only work together, but to also achieve all the stated goals of a Purple Team pentest, requires planning and leadership.

To facilitate a successful Purple Team Pentest, the client organization should consider the following key elements:

  • Scope & Objectives - Before reaching out and engaging with a Red Team provider, carefully define the scope and objectives of the Purple Team Pentest. Be specific as to what the organization’s primary goals are and which business applications or operational facilities will be within scope. Since a key objective of conducting a Purple Team Pentest is to educate and better arm the internal Blue Team, and to maximize the return on the Red Team’s findings, identify and list the gaps that need to be addressed in order to define success.
  • Blue Team Selection - Be specific in defining which pieces of the organization and which personnel constitute the “Blue Team”. Go beyond merely informing various security operations staff that they are now part of a Blue Team: it is critical that the members feel they are a key component in the company’s new defensive strategy. Educate them about the Blue Team’s roles and responsibilities. Prior to engaging with a Red Team provider and launching a Purple Team Pentest, socialize and refine the scope and objectives of the proposed engagement with the team directly.
  • Red Team Selection - It is important that the client select a Red Team that consists of experienced penetration testers. The greater the skills and experience of the Red Team members, the more they will be able to contribute to the Purple Team Pentest objectives. Often, in pure Red Team Pentest engagements, the consulting team will contain a mix of experienced and junior consultants – with the junior consultants performing much of the tool-based activities under the supervision of the lead consultant. Since a critical component of a Purple Team Pentest lies in the ability to communicate and educate a Blue Team to the attacker’s methodologies and motivations, junior-level consultants add little value to that dialogue. Clients are actively encouraged to review the resumes of the consultants proposed to constitute the Red Team in advance of testing.
  • Playbook Definition - Both sides of the Purple Teaming exercise have unique objectives and methodologies. Creating a playbook in advance of testing is encouraged – as is sharing and agreeing on it between the teams (a toy example follows this list). The playbook loosely defines the rules of the engagement and is largely focused on environment stability (e.g. rules for patch management and rollout during the testing period) and on defining exceptions to standard Blue Team responses (e.g. identifying but not blocking the inbound IP addresses associated with the Red Team’s C&C).
  • Arbitrator or Referee - Someone must be the technical “Referee” for the Purple Team Pentest. They need to be able to speak both Red Team and Blue Team languages, interpret and bridge the gap between them, manage the security workshops that help define and resolve any critical threat discoveries, and generally arbitrate according to the playbook (often adding to the playbook throughout the engagement). Ideally the arbitrator or referee for the engagement is not directly associated with, or a member of, either the Red or Blue teams.
  • Daily Round-table Reviews - Daily round-table discussions and reviews of Red Team findings are the centerpiece of a successful Purple Team Pentest. Best conducted at the start of each day (mitigating the prospect of long, tired days and overflowing working hours that curtail discussion), the Red Team lays out the successes and failures of the previous day’s testing, while the Blue Team responds with what they detected and how they responded. The review covers what the Red Team targeted and why, explains how they proceeded, and allows the Blue Team to query and understand what evidence they may have collected to detect and thwart such attacks. For example, daily discussions should cover: what traffic did the tool or methodology generate, where could that evidence have been captured, how could that evidence be interpreted, and which responses would pose the biggest hurdle to the attacker?
  • Paired Deep Dives - Allowing members of the two teams to pair off after the morning review and dive deeper into the technical details of, and projected responses to, a particular attack vector or exploitation is highly encouraged.
  • Evaluate Attack and Defense Success in Real-time - Throughout the engagement the “Arbitrator” should engage with both teams and be constantly aware of which attacks are in play by the Red Team, and which responses are being undertaken by the Blue Team. In some attack scenarios it may be worthwhile allowing the Red Team to persist in an attack even if it has been detected and countered by the Blue Team, or is known to be unsuccessful and unlikely to lead to compromise. However, overall efficiency can be increased, and the cost of a Purple Team Pentest reduced, by brokering conversations between the teams when attack vectors are stalled, irrelevant, already successful, or known to eventually become successful. For example, suppose the Red Team gains a foothold on a compromised host and proceeds to brute-force the credentials of an accessible internal database server. Once the brute-force attack is underway, it may be opportune to confirm with the Blue Team that they have already been alerted to the attack in progress and are initiating countermeasures. At that point, in order to speed up the testing and progress to another approved attack scenario, a newly created test credential can be passed directly to the Red Team so they can continue from that compromised host.
  • Triage and Finding Review - Most Red Team pentests will identify a number of security vulnerabilities and exploit paths that were missed by the Blue Team and will require vendor software patches or software development time to remediate. In a pure Red Team Pentest engagement, a “Final Report” would be created listing all findings – with brief descriptions of recommended and generic best-practice fixes. In a Purple Team Pentest, rather than producing a vulnerability findings report, an end-of-pentest workshop should be held between the two teams. During this workshop each phase of the Red Team testing is reviewed – discoveries, detection, remediation, and mitigation – with an open Q&A dialogue between the teams; at the conclusion of the workshop, a detailed remediation plan is created and owners are assigned.
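As a toy illustration of the playbook element above – every value here is hypothetical – even a simple structured document forces both teams to agree on scope, stability rules, and response exceptions before testing begins:

```python
# A hypothetical Purple Team playbook, expressed as simple structured data.
PLAYBOOK = {
    "engagement": "Purple Team Pentest - Q1",
    "in_scope": ["10.10.0.0/16", "*.staging.example.com"],
    "out_of_scope": ["production payment systems"],
    "environment_stability": {
        "patch_freeze": True,        # no patch rollouts during the test window
        "maintenance": "pause testing during scheduled maintenance",
    },
    "blue_team_exceptions": [
        # Detect and report - but do not block - the Red Team's C&C addresses
        {"action": "alert_only", "sources": ["203.0.113.10", "203.0.113.11"]},
    ],
    "arbitrator": "referee@example.com",
    "daily_review": "09:00 round-table; Red Team findings first",
}
```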

The Future is Purple

While the methodologies used in Purple Team penetration testing are the same as those of a stand-alone Red Team Pentest, the business objectives and communication methods used are considerably different. Even though the Purple Team Pentest concept is relatively new, it is an increasingly important vehicle for improving an organization’s security posture and reducing overall costs.

The anticipated rewards from conducting a successful Purple Team pentest include increased Blue Team knowledge of threats and adversaries, muscle-memory threat response and mitigation, validation of playbook response to threats in motion, confidence in sophisticated attacker incident response, identification and enumeration of new vulnerabilities or attack vectors, and overall team-building.

As businesses become more aware of Purple Teaming concepts and develop an increased understanding of internal Blue Team capabilities and benefits, it is anticipated that many organizations will update their annual penetration testing requirements to incorporate Purple Team Pentests as a cornerstone of their overall information security and business continuity strategy.

-- Gunter Ollmann

Monday, November 28, 2016

Navigating the "Pentest" World

The demand for penetration testing and security assessment services worldwide has been growing year-on-year. Driven largely by Governance, Risk, and Compliance (GRC) concerns, plus an evolving pressure to be observed taking information security and customer privacy seriously, most CIOs/CSOs/CISOs can expect to conduct regular “pentests” as a means of validating their organization’s or product’s security.

An unfortunate consequence of two decades of professional-services delivery of pentests is that the very term “penetration testing” now covers a broad range of security services and risk attributes – with most consulting firms providing a smorgasbord of differentiated service offerings, intermixing terms such as security assessment and pentest, and constructing hybrid testing methodologies.

For those newly tasked with having to find and retain a team capable of delivering a pentest, the prospect of having to decipher the lingo and identify the right service is often daunting – as failure to get it right is not only financially costly, but may also be career-ending if later proven to be inadequate.

What does today’s landscape of pentesting look like?

All penetration testing methodologies and delivery approaches are designed to factor in and illustrate the threat represented by an attack vector or exploitation. A key differentiator between many testing methodologies lies in whether the scope is to identify the presence of a vulnerability, or to exploit and subsequently propagate an attack through that vulnerability. The former is generally bucketed in the assessment and audit taxonomy, while the latter is more commonly a definition for penetration testing (or an ethical hack).
The penetration testing market and categorization of services is divided by two primary factors – the level of detail that will be provided by the client, and the range of “hacker” tools and techniques that will be allowed as part of the testing. Depending upon the business drivers behind the pentest (e.g. compliance, risk reduction, or attack simulation), there is often a graduated scale of services. Some of the most common terms used are:
  • Vulnerability Scanning
    The use of automated tools to identify hosts, devices, infrastructure, services, applications, and code snippets that may be vulnerable to known attack vectors or have a history of security issues and vulnerabilities (a minimal banner-grab sketch appears after this list).
  • Black-box Pentest
    The application of common attack tools and methodologies against a client-defined target or range of targets in which the pentester is tasked with identifying all the important security vulnerabilities and configuration failures of the scoped engagement. Typically, the penetration scope is limited to approved systems and windows of exploitation to minimize the potential for collateral damage. The client provides little information beyond the scope and expects the consultant to replicate the discovery and attack phases of an attacker who has zero insider knowledge of the environment. 
  • Gray-box Pentest
    Identical methodology to the Black-box Pentest, but with some degree of insider knowledge transfer. When an important vulnerability is uncovered the consultant will typically liaise with the client to obtain additional “insider information” which can be used to either establish an appropriate risk classification for the vulnerability, or initiate a transfer of additional information about the host or the data it contains (that could likely be gained by successfully exploiting the vulnerability), without having to risk collateral damage or downtime during the testing phase.
  • White-box Pentest (also referred to as Crystal-box Pentest)
    Identical tools and methodology to the Black-box Pentest, but the consultants are supplied with all networking documentation and details ahead of time. Often, as part of a White-box Pentest, the client will provide network diagrams and the results of vulnerability scanning tools and past pentest reports. The objective of this type of pentest is to maximize the consultant’s time spent identifying new and previously undocumented security vulnerabilities and issues.
  • Architecture Review
    Armed with an understanding of common attack tools and exploitation vectors, the consultant reviews the underlying architecture of the environment. Methodologies often include active testing phases, such as network mapping and service identification, but may include third-party hosting and delivery capabilities (e.g. domain name registration, DNS, etc.) and resilience to business disruption attacks (e.g. DDoS, Ransomware, etc.). A sizable component of the methodology is often tied to the evaluation and configuration of existing network detection and protection technologies (e.g. firewall rules, network segmentation, etc.) – with configuration files and information being provided directly by the client.
  • Redteam Pentest
    Closely related to the Black-box pentest, the Redteam pentest most closely resembles a real attack. The scope of the engagement (targets and tools that can be used) is often greater than in a Black-box pentest, and testing is typically conducted in a manner that does not alert the client’s security operations and incident response teams. The consultant will try to exploit any vulnerabilities they reasonably believe will provide access to client systems and, from a compromised device, attempt to move laterally within the compromised network – seeking to gain access to a specific (hidden) target, or to deliver proof of control of the entire client network.
  • Code Review
    The consultant is provided access to all source code material and will use a mix of automated and manual code analysis processes to identify security issues, vulnerabilities, and weaknesses. Some methodologies will encompass the creation of proof-of-concept (PoC) exploitation code to manually confirm the exploitability of an uncovered vulnerability.
  • Controls Audit
    Typically delivered on-site, the consultant is provided access to all necessary systems, logs, policy-derived configuration files, reporting infrastructure, and data repositories, and performs an audit of existing security controls against a defined list of attack scenarios. Depending upon the scope of the engagement, this may include validation against multiple compliance standards and use a mix of automated, manual, and questionnaire-based evaluation techniques.
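As a taste of the lowest rung of that ladder, the service enumeration that vulnerability scanners automate can be as simple as a TCP banner grab. A hedged sketch (scanme.nmap.org is a host its owners permit such probing against; scan nothing without authorization):

```python
import socket

def grab_banner(host, port, timeout=3):
    """Connect and read whatever the service volunteers (SSH, FTP, SMTP
    banners) for later matching against known-vulnerable versions."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(1024).decode(errors="replace").strip()
    except OSError:
        return None  # closed, filtered, or silent (e.g. HTTP awaits a request)

for port in (21, 22, 25, 80):
    print(port, grab_banner("scanme.nmap.org", port))
```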
The Hybrid Pentest Landscape

In recent years the pentest landscape has evolved further with the addition of hybrid services and community-sourcing solutions. 
Overlapping the field of pentesting, there are three important additions:
  • Bug Bounty Programs
    Public bug bounty programs seek to crowdsource penetration testing skills and directly incentivize participants to identify vulnerabilities in the client’s online services or consumer products. The approach typically encompasses an amalgamation of Vulnerability Scanning and Black-box Pentest methodologies – but with very specific scope and limitations on exploitation depth. With (ideally) many crowdsourced testers, the majority of testing is repeated by each participant. The hope is that, over time, all low-hanging fruit vulnerabilities will be uncovered and later remediated. 
  • Purple Team Pentest
    This hybrid pentest combines Redteam and Blueteam (i.e. the client’s defense or incident response team) activities into a single coordinated testing effort. The Redteam employs all the tools and tricks of a Redteam Pentest methodology, but each test is watched and responded to in real time by the client’s Blueteam. As a collaborative pentest, there is regular communication between the teams (typically end-of-day calls) and syncing of events. The objectives of Purple Team pentesting are both to assess the capabilities of the Blueteam and to reduce the time typically taken to conduct a Redteam Pentest – by quickly validating the success or failure of various attack and exploitation techniques, and limiting the possibility of downtime failures of targeted and exploited systems.
  • Disaster Recovery Testing
    By combining a Whitebox Pentest with incident response preparedness testing and a scenario-based attack strategy, Disaster Recovery Testing is a hybrid pentest designed to review, assess, and actively test the organization's capability to respond and recover from common hacker-initiated threats and disaster scenarios.
Given the broad category of “pentest” and the different testing methodologies followed by security consulting groups around the globe, prospective clients of these services should ensure that they have a clear understanding of what their primary business objectives are. Compliance, risk reduction, and attack simulation are the most common defining characteristics driving the need for penetration testing – and can typically align with the breakdown of the various pentest service definitions.

[Update: First graph adapted from Patrick Thomas' tweet]

Sunday, July 10, 2016

The Future of Luxury Car Brands in a Self-Driving City

For the automotive manufacturing industry, the next couple of decades are going to be make or break for most of the well known brand names we're familiar with today. 

With the near term prospect of "self-driving" cars and city-level smart traffic routing (and monitoring) infrastructure fundamentally changing the way in which we drive, and the shift in city demographics that promotes a growing move away from wanting (or being able to afford) a personal vehicle, it should be clear to all that the motoring practices of the last century are on a trajectory to disappear pretty quickly.

As self-driving cars eventually negate the "love of driving" and city traffic routing and control systems begin to rigorously enforce variable speed limits, congestion charging, and overall traffic management, the personal car becomes more and more just another impersonal transport system. If that's likely the case (or even partially the case), then what does the future hold for the manufacturers of luxury cars?

Earlier this month I spent a week in Bavaria, Germany, visiting customers and prospects. The economies of cities like Stuttgart and Munich fundamentally revolve around the luxury automotive industry. Companies like BMW, Audi, and Porsche define the standard in personal vehicle luxury and generally lead the world in technical innovation (especially in safety features). Speaking with locals around Bavaria there is a very real fear that the next two decades could see the fall and eventual demise of these brands.

If the act of "driving" is completely replaced with computer control systems and the vehicle itself eventually becomes a commodity (because every vehicle performs the same way, travels at the same speeds, and is carefully governed by city traffic management systems), luxury vehicle "performance" is no longer a perceived value. As the mandated vehicle safety designs are achieved by all manufacturers and there's only a small percentage difference between the best and the worst (yet all getting "five stars"), advanced safety innovation no longer becomes a distinguishing factor. Finally, as Millennials (and the majority of city-bound Generation X and Y) give up the love, desire, and financial capability to own a personal vehicle - and instead seek "on-demand" public transport systems the likes that Uber and its kin will spawn - then "luxury" becomes a style choice without a premium.

Like those Bavarians I spoke with, these luxury car manufacturers are going to have to change dramatically if they are to continue to be the brands they are today. Despite all the technical innovation they've been renowned for over the last century, it does appear that they are late to the party and need to dramatically change their businesses in pretty short order.

As a BMW owner myself, I'm surprised at how far the company appears to be behind the global changes. I'd have thought that such a technically innovative company would have grasped the social and economic effects on luxury vehicle sales to city dwellers for the coming decade or two. While BMW (and other luxury car brands) have doubled down on vehicle performance, emission controls, renewable energy, and environmentally friendly design, it feels like they've been caught flat-footed by the innovation and the desire of people (and city planners) to remove themselves from being the weakness behind the steering wheel... and by the implications for all luxury vehicle brands.

I'm positive that the engineers at BMW and other traditionally innovative vehicle manufacturers have many relevant technologies tested and maybe shelved in their laboratories and around test tracks.

While I have little doubt that "luxury" will form less of a vehicle's buying decision in the future - especially when the trend is towards fleet management of such vehicles (e.g. taxis, delivery, etc.) - I think that, for these companies to survive, they're going to have to become "technology companies".

Although late to the party, present-day luxury vehicle manufacturers can transform into strong technology companies. For example, some opportunities could include:
  • With several decades of technical safety R&D innovation (e.g. collision avoidance, LADAR, automated parking, route guidance systems, lane management, sleepy-driver recognition, etc.) they already have the credentials and respect within the industry (and with consumers) as the research leaders... so why not bundle up these safety features and license them under their brands? For example, the future Google self-driving car... music by Bose, safety by BMW.
  • As designers of engines (combustion, hybrid, and electric) they have decades of experience in design and performance. That could translate into innovating city-wide refueling management platforms and systems.
  • "Smart Cities" are still mostly a desire rather than a reality. There is huge opportunity for proven technology companies to come in and define the rules, criteria, monitoring, and management of city-wide traffic control systems. Detailed knowledge of vehicle performance, capabilities, and safety controls whole be an ideal platform for building upon.
  • Regardless of just how many driver-less cars come to market over the coming decades, there are still going to be hundreds of millions of cars that were never built or designed to be "driver-less". There is an obvious requirement for supplemental or conversion kits for older vehicles - not just their own models.
The list above could be expanded considerably, and I doubt that similar thoughts haven't already been discussed at various points over the last half-decade by the luxury brands themselves. It would seem to me, however, that now is the time for action.

It'll be very interesting to see how these luxury vehicle manufacturers reinvent themselves. If they have the funds now, then not only should they continue to innovate down safety technology paths, but they should probably be looking down the acquisition path... bringing into the fold new tech companies specializing in fleet and city vehicle management, taxi and courier management and control systems, city traffic monitoring and control systems, and maybe even a new generation of refueling station.

Saturday, July 9, 2016

Next Generation Weapons: The Eye Burner Rifle

The fantasy worlds of early 20th Century science fiction writers, in many ways, appear to be "now-ish" in terms of the technologies with which we'll wage war or police the civilian population. Many of the weapons proposed a century ago were nuclear-based... well, perhaps "Atomic" was the more appropriate label at the time. Some authors pursued electric guns or "lightning" throwers, and by the mid-20th century the more common man-portable weapon systems of fiction were based upon high-powered lasers.

When I think of new weapon systems... man-portable... and likely to be developed and employed within the coming quarter century, I think many of them will integrate automatic target acquisition processes and coherent light - for "less lethal" confrontations. The term "less lethal" is of course relative, and doesn't exclude weapon systems that are proficient at maiming and causing great pain or suffering.

One such system that, given current technological advances, lies within the fingertips of today's weapon designers could encompass the use of high-intensity light, automated facial feature recognition, and "high-powered" laser light - and achieve a higher degree of target incapacitation than today's personal small arms.

The concept would be a handheld configuration (of similar size and dimensions to a rifle) that, when manually pointed in the direction of a target, bathes the target in high-intensity "white light" (giving the weapon system a range of, say, 50 meters) for a short period of time, at which point an embedded high-definition video device uses facial recognition processes to identify the physical eyes of the target currently "lit up", and subsequently automatically aligns a built-in high-powered laser with the target's eyes and fires. The laser, depending upon the power of the light source, would either temporarily or permanently blind the target.

A single trigger pull would bathe the target with the main light function (which may temporarily disorient them anyway), but during that trigger pull the automated eye acquisition, eye targeting, and laser firing would happen in a fraction of a second (faster than a bullet could traverse the distance between shooter and target). I'd guess that after the laser has successfully acquired the eyes and fired, the main light function would end... like a half-second burst of white light. To an external observer, the weapon user appears to just fire a burst of white light at the head or torso of the target.

Obviously there are a lot of nuances to a "future" weapon like this. For example, would the target blink or close their eyes if the initial "white light" was directed at them? At night, the answer is likely yes; however, the facial recognition systems would still work, and even a current "off-the-shelf" laser in the 5-20W range is strong enough to "burn through" the eyelids and damage the eyes. During the day it would obviously be easier... in fact, perhaps the "white light" component is not required at all - instead the shooter merely targets the "head" and the rest of the system figures out the eyes and fires at (or fries) the eyes of the target.

There are of course questions of ethics. But, compared to several ounces of hollow-point lead flying at several times the speed of sound, even permanent blindness is still a survivable outcome for the target.

[Wandering thoughts in SciFi]

Friday, January 29, 2016

Watching the Watchers Watching Your Network

It seems that this last holiday season didn’t bring much cheer or goodwill to corporate security teams. With the public disclosure of remotely exploitable vulnerabilities and backdoors in the products of several well-known security vendors, many corporate security teams spent a great deal of time yanking cables, adding new firewall rules, and monitoring their networks with extra vigilance.

It’s not the first time that products from major security vendors have been found wanting.

It feels as though some vendors’ host-based security defenses fail on a monthly basis, while network defense appliances fail less frequently – maybe twice per year. At least that’s what a general perusal of press coverage may lead you to believe. However, the reality is quite different. Most security vendors fix and patch security weaknesses on a monthly basis. Generally, the issues are ones they themselves have identified (through internal SDL processes or the use of third-party code reviews and assessments), or issues identified by customers. And, every so often, a critical security flaw will be “dropped” on the vendor by an independent researcher or security company and need to be fixed quickly.

Two decades ago, the terms “bastion host”, DMZ, and “firewall” pretty much summed up the core concepts of network security, and it was a simpler time for most organizations – both for vendors and their customers. The threat spectrum was relatively narrow, the attacks largely manual, and an organization’s online presence consisted of mostly static material. Yet, even then, if you picked up a book on network security you were instructed in no short order that you needed to keep your networks separate; one for the Internet, one for your backend applications, one for your backups, and a separate one for managing your security technology.

Since that time, many organizations have either forgotten these basic principles or have intentionally opted for riskier (yet cheaper) architectures, simply hoping that their protection technologies are up to the task. Alas, as the events of December 2015 have shown us, every device added to a network introduces a new set of security challenges and weaknesses.

From a network security perspective, when looking at the architecture of critical defenses, there are four core principles:

  1. Devices capable of monitoring or manipulating network traffic should never have their management interfaces directly connected to the Internet. If these security devices need to be managed over the Internet it is critical that only encrypted protocols be used, multi-factor authentication be employed, and that approved in-bound management IP addresses be whitelisted at a minimum. 
  2. The management and alerting interfaces of security appliances must be on a “management” network – separated from other corporate and public networks. It should not be possible for an attacker who may have compromised a security device to leverage the management network to move laterally onto other guest systems or provide a route to the Internet. 
  3. Span ports and network taps that observe Internet and internal corporate traffic should by default only operate in “read-only” mode. A compromised security monitoring appliance should never be capable of modifying network traffic or communicating with the Internet from such an observation port. 
  4. Monitor your security products and their management networks. Security products (especially networking appliances such as core routers, firewalls, and malware defenses) will always be a high-value target to both external and internal attackers. These core devices and their management networks must be continuously monitored for anomalies and audited (see the sketch below). 
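A minimal sketch of that fourth principle – the log format here is sshd-style and the whitelisted subnet is a hypothetical management network – flagging any management-plane login that does not originate from the approved network:

```python
import ipaddress
import re

MGMT_NETS = [ipaddress.ip_network("10.20.30.0/24")]          # hypothetical mgmt subnet
LOGIN_RE = re.compile(r"Accepted \S+ for (\S+) from (\S+)")  # sshd-style lines

def audit_mgmt_logins(log_lines):
    """Alert on any appliance login sourced outside the management network."""
    for line in log_lines:
        match = LOGIN_RE.search(line)
        if not match:
            continue
        user, source = match.groups()
        addr = ipaddress.ip_address(source)
        if not any(addr in net for net in MGMT_NETS):
            print(f"ALERT: {user} reached a security appliance from {source}")

# Usage sketch: audit_mgmt_logins(open("/var/log/auth.log"))
```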

In an age where state-sponsored reverse engineers, security research teams, and online protagonists are actively hunting for flaws and backdoors in the widely deployed products of major security vendors as a means of gaining privileged and secret access to their target’s networks, it is beyond prudent to revisit the core tenets of secure network architecture.

Corporate security teams and network architects should assume not only that new vulnerabilities and backdoors will be disclosed throughout the year, but that those holes may have been accessible and exploited for several months beforehand. As such, they should adopt a robust defense-in-depth strategy including “watchers watching watchers.”

Shodan's Shining Light

The Internet is chock full of really helpful people and autonomous systems that silently probe, test, and evaluate your corporate defenses every second of every minute of every hour of every day. If those helpful souls and systems aren’t probing your network, then they’re diligently recording and cataloguing everything they’ve found so others can quickly enumerate your online business or list systems like yours that are similarly vulnerable to some kind of attack or other.

Back in the dark ages of the Internet (circa the 20th century) everyone had to run their own scans to map the Internet in order to spot vulnerable systems on the network. Today, if you don’t want to risk falling foul of some antiquated hacking law in some country by probing IP addresses and shaking electronic hands with the services you encounter, you can easily find a helpful soul that’s figured it all out on your behalf and turn on the faucet of knowledge for a paltry sum.

One of the most popular services to shine light on and enumerate the darkest corners of the Internet is Shodan. It’s a portal-driven service through which subscribers can query its vast database of IP addresses, online applications and service banners that populate the Internet. Behind the scenes, Shodan’s multiple servers continually scan the Internet, enumerating and probing every device they encounter and recording the latest findings.
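For the curious, querying that database takes only a few lines with Shodan’s official Python library (assuming `pip install shodan` and a valid API key; the query string is just an example):

```python
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

try:
    results = api.search('product:"nginx" country:"US"')
    print("Indexed services matching the query:", results["total"])
    for banner in results["matches"][:5]:
        print(banner["ip_str"], banner["port"], banner["data"].splitlines()[0])
except shodan.APIError as err:
    print("Query failed:", err)
```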

As an online service that diligently catalogues the Internet, Shodan behaves rather nicely. Servers that do the scanning aren’t overly aggressive and provide DNS information that doesn’t obfuscate who and what they are. Additionally, they are little more troublesome than Google in its efforts to map out Web content on the Internet.

In general, most people don’t identify what Google (or Microsoft, Yahoo or any other commercial search engine) does as bad, let alone illegal. But if you are familiar with the advanced search options these sites offer or read any number of books or blogs on “Google Dorks,” you’ll likely be more fearful of them than something with limited scope like Shodan.

Unfortunately, Shodan is increasingly perceived as a threat by many organizations. This might be due to its overwhelming popularity or its frequent citation amongst the infosec community and journalists as a source of embarrassing statistics. Consequently, security companies like Check Point have included alerts and blocking signatures in a vain attempt to thwart Shodan and its ilk.

On one hand, you might empathize with many organizations on the receiving end of a Shodan scan. Their Internet-accessible systems are constantly probed, their services are enumerated, and every embarrassing misconfiguration or unpatched service is catalogued and could be used against them by evil hackers, researchers and journalists.

In some realms, you’ll also hear that the bad guy competitors to Shodan (e.g. cyber criminals mapping the Internet for their own financial gain) are copying the scanning characteristics of Shodan so the target’s security and incident response teams assume it’s actually the good guys and ignore the threat.

On the other hand, with it being so easy to modify the scanning process – changing scan types, modifying handshake processes, using different domain names, and launching scans from a broader range of IP addresses – you’d be forgiven for thinking that it’s all a bit of wasted effort… about as useful as a “keep-off-the-grass” sign in Hyde Park.

Although “robots.txt” in its own way serves as a similarly polite request for commercial Web search scanners to not navigate and cache pages on a site, it is most often ignored by scanning providers. It also serves as a flashing neon arrow that directs hackers and security researchers to the more sensitive content.

It’s a sad indictment of current network security practices that a reputable security vendor felt the need and justification to add detection rules for Shodan scans and that their customer organizations may feel more protected for implementing them.

While the virtual “keep-off-the-grass” warning isn’t going to stop anyone, it does empower the groundskeeper to shout, “Get off my land!” (in the best Cornish accent they can muster) and feel justified in doing so. In the meantime, the plague of ever-helpful souls and automated systems will continue to probe away to their hearts’ content.