Tuesday, December 22, 2020

Attesting to the Security of Data-in-Use

The pace at which new confidential computing solutions are penetrating enterprise security architectures and data protection strategies appears to be catching security leaders off balance. COVID-19-accelerated digital transformation saw years’ worth of cloud migration, “zero trust” management and online collaboration tool rollouts squeezed into a few short months. Solutions engineering and security teams that thought they’d have a couple years to learn and master the next set of security- and privacy-preserving technologies are suddenly playing catch-up in the newly “cloudified” enterprise.

Having already mastered and commodified “data-at-rest” and “data-in-transit” security, security leaders are under pressure to support companies’ adoption of confidential computing technologies and newly enabled trusted execution environment (TEE) services. If 2020 represented a step function for digital transformation and cloud adoption for businesses, 2021 will be the year of rapid, measurable “data-in-use” security and privacy.

As the list of new and pending TEE-enabled products and services from major public cloud providers grows, where should CISOs and security architects begin? For most organizations, the two most influential confidential computing building blocks will be enclave attestation and enclave-enabled relational databases.

Whether the organization plans to utilize in-house or cloud compute built atop Intel SGX or AMD SEV chip architectures (or Arm, NVIDIA, etc. in the future), attestation lies at the heart of confidential computing trust. Enclave provisioning and trust will quickly become as fundamental to enterprise security as identity management, certificate management and key management.

Enclave attestation services are designed to verify and validate that the confidential computing workload is provisioned and executed securely in a TEE environment. Remote attestation services will necessarily vary in environmental specifics, but they generally validate the integrity — the root of trust, hardware status, firmware status, security patch status, etc. — of the TEE (hardware or virtualized) before releasing sensitive data into the enclave. It is therefore the attestation service’s responsibility to cryptographically ensure that the underlying hardware and firmware are not in a vulnerable state before use, that the workload assigned to the enclave is transferred securely and that, upon execution, the enclave remains secure and the workload’s computing functions and output have not been tampered with in any way. In essence, confidential computing attestation tells you whether the results your code generated within the TEE are trustworthy.
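
To make that decision flow concrete, here is a deliberately simplified verifier sketch in Python. The report fields, expected measurement, minimum TCB level and signing scheme are illustrative assumptions, not any vendor’s actual attestation format.

```python
# Simplified sketch of an attestation verifier's decision, not a real attestation API.
# The report fields, expected measurement, and minimum TCB level are invented.
from dataclasses import dataclass
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

EXPECTED_MEASUREMENT = b"\x00" * 32   # placeholder hash of the approved enclave code
MIN_TCB_LEVEL = 7                     # placeholder minimum firmware/patch level

@dataclass
class AttestationReport:
    measurement: bytes   # hash of the code and data loaded into the enclave
    tcb_level: int       # hardware/firmware security patch level
    body: bytes          # canonical serialization of the fields above
    signature: bytes     # produced by the platform's hardware root of trust

def verify_report(report: AttestationReport, hw_root_key: Ed25519PublicKey) -> bool:
    """Accept the enclave only if its identity and platform state check out."""
    try:
        hw_root_key.verify(report.signature, report.body)   # rooted in hardware
    except InvalidSignature:
        return False
    if report.measurement != EXPECTED_MEASUREMENT:          # wrong or modified workload
        return False
    if report.tcb_level < MIN_TCB_LEVEL:                    # vulnerable firmware state
        return False
    return True

def release_secret(report, hw_root_key, wrapped_key: bytes) -> bytes:
    # Sensitive data (or the key that unwraps it) is released only after verification.
    if not verify_report(report, hw_root_key):
        raise PermissionError("enclave failed attestation; data withheld")
    return wrapped_key
```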

It won’t be long before CISSP study materials include attestation mechanics with an “Alice and Bob” explanation, like those used to teach data flows that constitute asymmetric cryptography and key exchange mechanics. For most InfoSec professionals, a basic understanding of enclave attestation will be adequate for traffic routing and troubleshooting. A deeper understanding will be required for architects and DevOps teams tasked with deploying and trusting new confidential compute workloads.

In parallel to incorporating enclave attestation service design into new business application architectures, security and privacy leaders will need to leverage enclave-enabled relational databases — especially if they’re to meet toughening regulatory requirements linked to customer data privacy. 

In the database world, balancing privacy and “data-at-rest” encryption with data utility and business application performance has been a delicate, often compromising, affair. Encrypting column data (for example, customer names, addresses and blood types) logically makes it more difficult to perform searches and match records. Deterministic encryption (encryption that always generates the same encrypted value for a given plaintext value) preserves the ability to match records but is vulnerable to inference and frequency-analysis attacks; randomized encryption is more secure but makes most common query types impossible.
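
The trade-off is easy to demonstrate. The short sketch below, using the Python cryptography package with invented keys and column values, shows why deterministic encryption supports equality matching while randomized encryption does not.

```python
# Illustrative contrast between deterministic and randomized encryption of a column value.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, AESSIV

name = b"Alice Smith"

# Deterministic (AES-SIV, no nonce): identical plaintexts give identical ciphertexts,
# so the database can evaluate WHERE name = ? by comparing ciphertexts -- but equal
# values are revealed as equal, which is what inference/frequency attacks exploit.
det = AESSIV(AESSIV.generate_key(512))          # AES-256-SIV uses a 512-bit key
assert det.encrypt(name, None) == det.encrypt(name, None)

# Randomized (AES-GCM with a fresh nonce): identical plaintexts give different
# ciphertexts, so stored values leak nothing about each other -- and equality
# search, joins, and indexing over the ciphertext no longer work.
rnd = AESGCM(AESGCM.generate_key(256))
c1 = rnd.encrypt(os.urandom(12), name, None)
c2 = rnd.encrypt(os.urandom(12), name, None)
assert c1 != c2
```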

To protect sensitive data from malware and highly privileged but unauthorized users of the database server, traditional non-TEE data encryption processes protect the data by encrypting it on the client side, ensuring that neither the data nor the corresponding cryptographic keys ever appear in plaintext inside the database engine. When deterministic encryption is used, the only operations the database engine can perform are equality comparisons. All other operations, including cryptographic operations (initial data encryption or key rotation) and rich computations (for example, pattern matching), are not supported inside the database; users must move their data out of the database and perform these operations on the client side. In practice, a secure software agent transports the encrypted column data to a trusted host or system, decrypts it, performs the query actions on the plaintext, then re-encrypts any data that needs to be written back to the protected column.
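
Schematically, and with invented helper names since the mechanics vary by product, that client-side roundtrip looks something like this:

```python
# Sketch of the client-side agent pattern: decrypt outside the database engine,
# perform the rich computation, re-encrypt anything written back to the protected column.
import re
from cryptography.hazmat.primitives.ciphers.aead import AESSIV

def pattern_match_outside_db(rows, column_key: bytes, pattern: str):
    """rows: (row_id, ciphertext) pairs pulled from the encrypted column."""
    cipher = AESSIV(column_key)
    matches = []
    for row_id, ciphertext in rows:
        plaintext = cipher.decrypt(ciphertext, None)     # plaintext exists only client-side
        if re.search(pattern, plaintext.decode()):       # the LIKE '%...%' the engine can't do
            matches.append(row_id)
    return matches

def rotate_column_key(rows, old_key: bytes, new_key: bytes):
    """Key rotation also happens here: decrypt with the old key, re-encrypt with the new."""
    old, new = AESSIV(old_key), AESSIV(new_key)
    return [(row_id, new.encrypt(old.decrypt(ct, None), None)) for row_id, ct in rows]
```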

For operations teams tasked with regulatory data discovery, labeling and protection throughout the enterprise, the mechanics of securing client agents and shuttling encrypted data between systems — temporarily duplicating the data in the process — are inefficient and burdensome. TEE-enabled database services keep the encrypted data within the system, allowing computations on plaintext data inside the secure enclave with no way to view data or code inside the enclave from the outside (even with a debugger). In addition, rich computations, such as operations on encrypted columns, become possible, and cryptographic operations on sensitive data, like initial data encryption or rotating a column encryption key, are performed within the enclave and do not require moving the data outside the database.

As confidential compute services become ubiquitous, enclave attestation and enclave-enabled relational database technologies will be fundamental building blocks for post-COVID-19 business application design and delivery. CISOs and their security teams need to quickly master these technologies if they’re to successfully partner with in-house development teams and secure “data-in-use.”

-- Gunter Ollmann

First Published: SecurityWeek - December 22, 2020

Tuesday, November 3, 2020

Securing Data-in-Use With Confidential Computing

Confidential Computing Capabilities and Services Will be a Competitive Differentiator for Cloud Providers

Most organizations should have mastered by now the first two pillars of data protection – securing data-at-rest and data-in-transit – across their hybrid enterprise environments. The third data protection pillar – securing data-in-use (i.e., protecting and encrypting data in use while in memory and during computation) – has been elusive, but is in the process of being addressed through the transformational motion commonly referred to as confidential computing. 

Technology business leaders are ideally pursuing transformation plans that assume ubiquitous confidential computing availability and data-in-use security will be a cloud-native default within five years.

For many organizations, completing their digital transformation journey has been conditional on being able to categorically ensure that absolutely no one – not a trusted system administrator, the OS developer, the cloud provider, law enforcement, malicious insiders, or an attacker armed with powerful zero-day exploits – can ever secretly access or manipulate the data and intellectual property they entrust to the cloud. Consequently, as the third pillar of data security, confidential computing will increasingly be a prerequisite for any cloud-deployed business application.

The technologies, platforms, and architectures that enable confidential computing have evolved at an astounding pace – especially when compared with the decades it has taken for data-at-rest encryption to evolve from password-protected ZIP files in the early 1990s to today’s enabled-by-default hardware-based encryption locked to the physical compute system, or the continued effort to transition data-in-transit defaults from HTTP to secure HTTPS (preferably using TLS v1.3).

The global pandemic has not held back public cloud advancements and new service offerings in confidential computing. Virtualization infrastructure for confidential computing built atop hardware-based trusted execution environments (TEEs) on servers that implement Intel Software Guard Extensions (Intel SGX) is generally available, along with previews of confidential VMs using hardware-based TEEs on servers supporting AMD’s secure encrypted virtualization (AMD SEV) extension. In parallel, confidential computing options have begun extending across cloud services to embrace Kubernetes confidential nodes, always-encrypted SQL databases, confidential machine learning interfaces, HSM key management, and IoT edge compute.

It can be difficult for security leaders to keep pace with underlying hardware advances and their applicability. For example, the memory integrity protection features of Intel SGX are well suited for highly security-sensitive but small workloads, while AMD SEV is useful for “lift and shift” of existing complex or legacy applications and services without necessarily refactoring existing code. Meanwhile, Intel’s trust domain extensions (Intel TDX) will enable hardware-isolated virtual machines (called trust domains), and AMD’s secure encrypted virtualization-encrypted state (SEV-ES) will help ensure that a guest VM’s register state remains encrypted whenever the VM stops running, keeping it beyond the hypervisor’s reach. Many more hardware advances from Intel, AMD, Arm, NVIDIA, etc. that help mitigate new and potentially intrusive memory, compute, and attestation threats will become available via the major cloud providers in the coming year or two.

Clearly, confidential computing is in a transitional period as these new hardware-enabled solutions get adopted and deployed by major cloud providers and their most advanced customers. 

While it is easy to get lost in the hardware-based security feature advances the silicon providers are delivering, the long-term assumption security leaders should be planning for is that the physical infrastructure will verifiably guarantee that enclaved processes, memory, and the data they hold or manipulate will be secure from any and all prying eyes – in particular, the cloud and software stack providers – and that all cloud services (from leading public cloud providers) will operate in a secure data-in-use mode by default. It is reasonable to assume that within five years the term “confidential compute” will become superfluous and an assumed native component of all cloud services.

In the meantime, confidential computing capabilities and services will be a competitive differentiator for the large cloud providers. 

As the underlying hardware advances and public cloud providers extend data-in-use security, integrity, and attestation capabilities across their customer-available services, business technology leaders will need to assess each cloud service individually and assume some short period of cloud-specific lock-in for the custom applications their own business will engineer.

Opportunities ranging from anti-money-laundering and customer analytics in financial services, to privacy-preserving collaborative disease diagnostics and drug development in healthcare, to joint intelligence analysis and anti-corruption efforts across government agencies are just a sampling of the privacy- and confidentiality-preserving solutions newly achievable by organizations adopting cutting-edge confidential computing-enabled cloud solutions.

Some enterprises may be content to wait until data-in-use security becomes ubiquitous to complete their digital transformation. While confidential compute capabilities are expanding and new secure cloud services are “fluid” in their evolution, there is a clear window for technology-agile businesses to innovate and partner with their cloud provider to bring new categories of secure products to market ahead of both competitors and regulators.

-- Gunter Ollmann

First Published: SecurityWeek - November 3, 2020

Thursday, September 17, 2020

Enterprise Threat Visibility Versus Real-World Operational Constraints

The phrase “assume breach” has been transformational to enterprise security investment and defensive strategy for a few years but may now be close to retirement. 

When the vast majority of information security expenditure was focused on impermeable perimeter defenses and reactive response to evidence-based compromise, it served as a valuable rallying cry for organizations to tool their enterprise for insider-threat detection, adopt zero-trust network segmentation, and pursue widespread deployment of multifactor authentication systems and conditional access controls.

Sizable investments in enterprise-wide visibility should have reversed the much older adage “a defender needs to be right all the time, while the attacker needs to be right only once” into something like “an attacker needs to be invisible all the time, while the defender needs them to slip up only once.” Unfortunately, security operations and threat-hunting teams have found that instead of automatically spotting needles in a haystack, they must now manage haystacks of needles—if they’re properly equipped. For under-resourced security teams (which appear to be the majority), advances in enterprise-wide visibility have in the best case added hundreds of daily alerts to their never-completed to-do lists.

As security budgets have morphed, a higher percentage of spend has been allocated to increasing visibility on the premise that more threats will be preemptively detected, blocked, and mitigated.

An appropriate analogy for the situation would be installing dozens of video cameras in and around your home with overlapping fields of view and relying on that as the primary alerting mechanism for preventing break-ins. The underlying assumption is that someone will be continually monitoring all those video feeds, will recognize the build-up and execution of the break-in, and can initiate a response to stop the thief.

The consequences of such a strategy (by way of continuing the analogy) are pretty obvious:

  1. Because 24/7 monitoring is expensive, automated detection is required. Automatic detection comes at the cost of high false-positive rates and baseline tuning; in home CCTV terms, ignoring the rabbits, golf balls, and delivery men that cross a field of vision, while desensitizing movement thresholds and setting up hot zones for alerting. Even relatively rare false-positive events, such as lightning strikes during a storm or the shadow of a passing airplane, are enough to fill an inbox or message tray, breeding alert weariness, delays, and wasted investigative cycles. To counter the problem, use at least two disparate and independent detection technologies to detect and confirm the threat (for example, CCTV movement zones and a break-glass sensor); a minimal sketch of this two-signal, reversible-response pattern, translated into SecOps terms, follows this list.
  2. Automatic detection without an automatic response limits value to post-break-in cleanup and triage—not prevention. Because of potential false positives, automatic responses also need to be reversible throughout the period of alert response. If CCTV movement and break-glass sensors are triggered, perhaps an automatic request for a patrol car visit is initiated. Meanwhile the original alert recipient can review footage and cancel the callout if it was clearly a false positive (e.g., the neighbor’s kids kicked a ball over the fence and broke a window).
  3. Balance between detection and prevention is critical and will change over time. 24/7 CCTV monitoring may serve as a key detection capability, but locking all external doors with deadbolts shouldn’t be neglected. Deadbolted doors won’t stop the future threat of a $50 miniature drone flying down the chimney and retrieving the spare front-door key lying on the kitchen table. Prevention investments tend to be threat reactive, while modern detection technologies tend to be increasingly successful in identifying behavioral anomalies.
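
Translated out of the home-security analogy and into generic SecOps terms, the first two points reduce to a small pattern. The sketch below is illustrative only; the detectors, thresholds and actions are invented.

```python
# Two independent detectors must agree before an incident is declared, and the
# automated response stays reversible while an analyst reviews it.
import time

REVIEW_WINDOW_SECONDS = 300   # invented: how long the analyst has to cancel

def confirmed(signal_a: bool, signal_b: bool) -> bool:
    """Require two disparate, independent detections (e.g., EDR anomaly + network alert)."""
    return signal_a and signal_b

def reversible_response(action, rollback):
    """Act immediately, but return a cancel handle valid during the review window."""
    action()                                       # e.g., isolate the host, request a patrol car
    deadline = time.monotonic() + REVIEW_WINDOW_SECONDS
    def cancel() -> bool:
        if time.monotonic() < deadline:            # analyst confirms it was a false positive
            rollback()                             # e.g., un-isolate the host, cancel the callout
            return True
        return False                               # window closed; the response stands
    return cancel

if confirmed(signal_a=True, signal_b=True):
    cancel = reversible_response(
        action=lambda: print("host isolated; responder paged"),
        rollback=lambda: print("isolation rolled back"),
    )
    # cancel() is what the alert recipient invokes after reviewing the footage/logs
```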

“Assume breach” served its purpose in changing the ways organizations thought about and invested in their security technologies (and operational programs). As with many well-intentioned initiatives, the security pendulum may have swung a little too far and now needs to be rebalanced.

Although I think cloud-SIEM and the advanced machine intelligence platforms being wedded to it will eventually meet most organizations’ 24/7 visibility and detection needs, SecOps teams will continue to battle against both alert fatigue and posture fatigue. The phrase I’d like to see the industry focus on for the next five years is “automatically mitigated.”

-- Gunter Ollmann

First Published: SecurityWeek - September 17, 2020

Tuesday, July 21, 2020

Security Posture Fatigue

As SecOps Teams Increasingly Take on Proactive Risk Reduction, Posture Fatigue Will Grow 

Security operations teams are once again feeling overwhelmed and under pressure. Although advances in cloud SIEM and the AI-driven fusion of alerts, events, and logs have enabled SecOps teams to finally get ahead of common threats and automate much of the day-to-day repetitive investigative work, the rapidly expanding footprint of the digital enterprise has opened the door to a new headache—security posture fatigue.

Many high-performing SecOps teams will inform you that their threat hunting has evolved from searching for a needle in a haystack to managing haystacks of needles. Consequently, the desire to add yet another threat detection tool to an environment that generates yet another alert that needs to be investigated and actioned isn’t high on their product purchase wish list. Organizations are driving hard to consolidate threat detection and protection capabilities by reducing the number of vendors and products and pursuing integrated suite solutions where they can—reducing overall alert noise and triage time.

Although threat detection and response are being tamed, SecOps teams continue to battle enterprise sprawl. Business units and departments are adding new workloads in an increasingly diverse range of environments—public cloud, private cloud, corporate WAN, third-party SaaS platforms, manufacturing floors, CI/CD pipelines, etc.—each of which requires a mix of tailored and ad hoc security configuration management, posture monitoring, and policy configuration. As a result, it has become increasingly difficult for SecOps teams and CISO organizations to answer basic questions such as “where are all my assets?” “are we compliant?” and “are we vulnerable to last week’s headline attack?”

To tackle this problem, security policy compliance and posture management is increasingly becoming a centralized function. 

Each environment an enterprise operates and does business in requires tooling for security posture management and risk reduction, and for the past decade, the number of tools that can provide posture metadata, risk assessments, and security policy lapses has grown.

The broad mix of work environments, the wide variety of security posture management products (some of which are decades old), and fragmented tool capabilities have not only resulted in an inundation of security posture alerts but also added new dimensions of complexity to risk-reduction orchestration and policy enforcement—causing posture fatigue as SecOps teams are overwhelmed with new and disparate datasets.

Newer work environments have proved to offer more capable tools for security configuration management and remediation orchestration. For example, Cloud Security Posture Management (CSPM) has become the poster child for managing modern enterprise production environments—showcasing what is technically possible in day-to-day security posture and risk management.
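
As a deliberately tiny taste of what sits underneath a CSPM-style check, the sketch below flags AWS S3 buckets whose public-access protections are incomplete. Real products run hundreds of such checks across providers; the policy and scope here are only an example.

```python
# One posture check: find S3 buckets without a complete public-access block.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_with_posture_lapses():
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(cfg.values()):      # any of the four protections left disabled
                findings.append(name)
        except ClientError:                # no public-access block configured at all
            findings.append(name)
    return findings

print("posture lapses:", buckets_with_posture_lapses())
```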

There is increased pressure on vendors to modernize many of the products used in older (but still critical) enterprise environments. Integrated Risk Management (IRM), Enterprise Risk Management (ERM), Vulnerability Assessment Management (VAM), Security Configuration Management (SCM), Application Performance Management (APM), etc., are product categories ripe for consolidation across workload environments as the requirement to “sort the wheat from the chaff” of posture lapses grows.

There is a lot to be learned from how CSPM has advanced the visibility and manageability of security posture management and business risk reduction of enterprise workloads within public cloud environments. The challenge ahead is to gain similar capabilities across the full estate of enterprise operating environments.

As SecOps teams increasingly take on proactive risk reduction, their vocabulary expands from security threats to include posture lapses, and posture fatigue will grow.

-- Gunter Ollmann

First Published: SecurityWeek - July 21, 2020

Tuesday, June 9, 2020

Navigating the Rapid Digital Shift: Ticket on the Bus, Not the Whole Bus

Global Companies’ Evaluation and Selection of Cybersecurity Solutions Has Been Steadily Changing

If it wasn’t already obvious to cybersecurity sales teams, there’s been a sea change for large organizations evaluating and buying new security products to protect their businesses. Responding to COVID-19, transformation plans that enable “work from home” such as Zero Trust identity and access management have been greatly accelerated, while technology refreshes and other capital-intensive plans are being pushed back.

Now, several months into this new operations paradigm, there may be added credence to the adage “in for a penny, in for a pound.” 

Many large companies have successfully navigated the digital shift to most of their workforce working remotely, finding the transition less difficult than first envisaged and achieving higher productivity than anticipated. Because such companies have resolved long-held internal conflicts over the security and integrity of cloud-based business operations, many of those postponed capital-intensive projects are being reviewed with a cloud-enabled, subscription-based lens.

This has several ramifications for cybersecurity vendors—particularly the specialized boutiques and innovative startups looking to quickly capitalize on new security opportunities.

Global companies’ evaluation and selection of cybersecurity solutions has been steadily changing over the past couple of years. The rapid digital shift of recent months has reinforced the need for change.

I’d like to offer advice to vendors attempting to reach out and position their new cybersecurity products.

  1. “I’ll buy a ticket, not the whole bus.” For decades, startups have looked to the largest companies as the Golden Goose and focused great energy on selling into them, the premise being that solving a critical problem for them at a very high premium will cover the cost of developing an actual solution that can be sold broadly—i.e., the sale will fund my company’s product development. Although there may be a few cases where only a custom-tuned solution will do, many large businesses now prefer to buy a close-enough solution off the rack and work with the vendor as an advisor—not an investor. CISOs are looking at the sustainable list price of the solution and will purchase at a discount proportional to their deployment’s scale.
  2. “Cost projection is critical.” Although highly versatile and scalable, cloud-based services billing can be difficult to predict—especially if the cybersecurity solution requires multiple third-party and cloud-provider SaaS dependencies. Security owners and budget holders are requiring vendors to provide accurate billing forecasts and tiered discount models for the complete solution—models that include all dependent service costs (e.g., log storage analytics, container management). Vendors need to remove as much calculus from the pricing as possible and be prepared for billed services to be pared back if overly optimistic projections exceed the planned budget. Discussions about total cost have replaced those about cloud solution prices.
  3. “Features must be pre-integrated.” If the product is a feature (which, let’s face it, almost all new startup products are!), recognize it as a feature and don’t position it as a partial solution. As a feature product, integration with the solutions businesses already use is a prerequisite, and sales representatives should lead with integration and interoperability first. CISOs are looking to shrink their attack surface and simplify the portfolio of products and vendors they rely on, and are increasingly reluctant to take on the task of brokering partnerships between vendors as a prerequisite for extracting new protection value. Feature products benefit greatly by being enabled from within a solution provider’s product or marketplace.

On a related note, with the surge to execute day-to-day business operations remotely with a diverse and globally distributed workforce, cybersecurity buying decisions will increasingly factor accessibility, usability, and inclusiveness in solution design and operability. Vendors will be steered toward cloud-standardized accessibility interfaces—enabling visually impaired employees to use screen readers or dexterity-limited users to employ voice-to-text controls—to perform their analysis.

These changes are not unique to the largest enterprise businesses and are trickling down to other educated cybersecurity buyers feeling the same buying pain. Forewarned is forearmed.

-- Gunter Ollmann

First Published: SecurityWeek - June 9, 2020

Tuesday, May 5, 2020

Tackling the SDLC With Machine Learning

Businesses’ digital transformations continue to show that being relevant and competitive is directly tied to the ability to develop and harness software. As Microsoft CEO Satya Nadella often says, “every company is now a software company.”

Software flaws that lead to unintentional data leakage, cause breaches, or jeopardize public health or the environment are not only costly but may be terminal to a company’s future. The integrity and security of software, and of the development processes behind it, have therefore become a critical component of every organization’s success. It is a core reason CISOs are increasingly partnering with DevOps leaders and vigilantly modernizing secure development lifecycle (SDLC) processes to embrace new machine learning (ML) approaches.

Automated application security testing is a key component of modern SDLC practices and can economically uncover many bugs and potential security flaws with relative ease. Application security testing embraces a broad range of complementary techniques and tooling—such as static application security testing (SAST), dynamic application security testing (DAST), interactive application security testing (IAST), and runtime application self-protection (RASP). Current best-practice security advice recommends a mix of tools from this alphabet soup to mechanically flag bugs and vulnerabilities, mitigating the consequences of unresolved flaws that make it into production systems.

A troublesome consequence of this approach lies with the volume of identified software flaws and the development team’s ability to corroborate the flaw’s risk (and subsequent prioritization). It’s also a problem manifest in organizations that operate bug bounty programs and need to triage bug researchers’ voluminous submissions. Even mature, well-oiled SDLC businesses battle automated triage and prioritization of bugs that flow from application security testing workflows—for example, Microsoft’s 47,000 developers generate nearly 30,000 bugs a month.

To better label and prioritize bugs at scale, new ML approaches are being applied and the results have been very promising. In Microsoft’s case, data scientists developed a process and ML model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies critical, high-priority security bugs 97 percent of the time.

For bugs and vulnerabilities outside automated application security testing apparatus and SDLC processes—such as customer- or researcher-reported bugs—additional difficulties in using content-rich submissions for training ML classifier systems can include reports with passwords, personally identifiable information (PII), or other types of sensitive data. A recent publication “Identifying Security Bug Reports Based Solely on Report Titles and Noisy Data” highlights that appropriately trained ML classifiers can be highly accurate even when preserving confidential information and restricted to using only the title of the bug report.
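
For readers who want to experiment, a minimal title-only classifier can be assembled in a few lines of scikit-learn. The CSV file and column names below are hypothetical, and the published research naturally uses far larger, noisier corpora and more rigorous evaluation.

```python
# Hedged sketch: train a simple text classifier to separate security from
# non-security bug reports using only their titles.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("bug_titles.csv")                       # hypothetical columns: title, is_security
X_train, X_test, y_train, y_test = train_test_split(
    df["title"], df["is_security"], test_size=0.2, stratify=df["is_security"], random_state=0
)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),       # titles only -- no report bodies, no PII
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```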

CISOs should stay informed of innovations in this area. According to Coralogix, an average developer creates 70 bugs per 1,000 lines of code and fixing a bug takes 30 times longer than writing a line of code. 

By correctly identifying security bugs from what is increasingly an overwhelming pile of bugs generated by automated application testing tools and customer-reported flaws, businesses can properly prioritize their development teams’ fix workflow and further reduce application risks to their organization, customers, and partners.

Although much research and innovation are underway in training ML classifier systems to triage security bugs and improve processes encapsulated in modern SDLC, it will be a while before organizations can purchase off-the-shelf, integrated solutions. 

CISOs and DevOps security leaders should be alert to new research publications and what “state of the art” is, and press their automated application software testing tool suppliers to advance their solutions to intelligently and correctly label security bugs apart from the daily chaff.

-- Gunter Ollmann

First Published: SecurityWeek - May 5, 2020

Tuesday, March 31, 2020

Retooling Cyber Ranges

Cloud-based Cyber Ranges Will Change the Future of Training and Certifying Security and DevOps Professionals

A half-decade ago, with much fanfare, cyber ranges were touted as a revolutionary pivot for cybersecurity professionals’ training. Many promises and investments were made, yet the revolution has been slow coming. What may have been a slow start appears to be picking up speed and, with the accelerated adoption of work-from-home business practices, may finally come of age.

The educational premise behind almost all cyber range training platforms is largely unchanged from decades-old war-gaming and capture-the-flag exercises—nothing beats hands-on practice in refining attack and defense strategies or building responder muscle memory. Carefully scripted threat scenarios guide the training program—often gamifying the experience with mission scores and leaderboards. Many of the interfaces and much of the scenario scene-setting appear as if they came from the imagination of developers who grew up on a diet of 1990s video games like Command & Conquer; the militaristic adversary overtone is strong yet adds positively to the immersive experience for users.

For many years, gamified security training has required significant infrastructure investment by the provider—investments capable of replicating the complex environments of their customers and the apparatus to generate realistic network traffic. Like the customers that subscribe, cyber-range platforms are undergoing their own digital transformation and moving to the cloud—ephemeral virtual environments, dynamic scaling to the number of participants, global anytime delivery, etc., are all obvious advantages to building and running cyber ranges within the public cloud.

What may be less obvious is how cloud-based cyber ranges will change the future of training and certifying security and DevOps professionals.

Some of the changes underway (and maybe a couple years down the road for mainstream availability) that excite me include:

  • At-home cyber-range training and hands-on mastery of operational security tasks and roles. Past cyber-range infrastructure investments necessitated classroom-based training or regional traveling roadshows. Cloud-based cyber ranges can remove the physical classroom and scheduling constraints—offering greater flexibility for employees to advance practical skills at their own pace and balance time investments against other professional and personal commitments. I’m particularly encouraged with the prospect of delivering a level field for growing and assessing the practical skills and operational experiences of security professionals coming from more diverse backgrounds.
  • Train against destructive scenarios within your own business environment. As businesses run more of their critical systems within the cloud, it becomes much easier to temporarily spin up a clone, mirror, or duplicate of that environment and use it as the basis for potentially destructive training scenarios. Cyber ranges that apply threat scenarios and gamify the training regime for users across the replicated workloads of their customers significantly increase the learning value and response applicability to the business.
  • Shift-left for security mastery within DevOps. Cyber range environments and the scenarios they originally embraced focused on security incident responders and SOC operators—the traditional Blue Team members. With security becoming a distributed responsibility, there is a clear need to advance from security awareness to hands-on experience and confidence for a broader range of cyber-professionals. Just as SIEM operations have been a staple of cyber ranges, a new generation of cyber-range platforms will “shift left” to replicate the complex CI/CD environments of their customers—enabling DevOps teams to practice responding to zero-day bugs in their own code and cascading service interruptions, for example.

It will be interesting to see how enterprise SOC leaders will embrace SecOps teams that trained and certified via cyber ranges at home. I’m sure many CISOs will miss the ability to escort senior executives, investors, and business partners around a room filled with security professionals diligently staring at screens of graphs and logs, and a wall of door-sized screens showing global pew-pew animated traffic flows. 

There is a difference between a knowledge certificate and the confidence that comes with hands-on experience—and that confidence applies not only to the employee, but to their chain of command.

The coming of age for cyber ranges is both important and impactful. It is important that we can arm a greater proportion and more diverse range of cyber-professionals with the hands-on practical experience to tackle real business threats. It is impactful because cyber-range scenarios provide real insights into an organization’s capabilities and resilience against threats, along with the confidence to tackle them when they occur.

-- Gunter Ollmann

First Published: SecurityWeek - March 31, 2020

Tuesday, March 3, 2020

Advancing DevSecOps Into the Future

If DevOps represents the union of people, process, and technology to continually provide value to customers, then DevSecOps represents the fusion of value and security provided to those same customers. The philosophy of integrating security practices within DevOps is obviously sensible (and necessary), but by attaching a different label we are perhaps admitting that, despite best efforts, this “fusion” is more of an emulsification.

DevSecOps incorporates discrete security elements and capabilities throughout the development process; “security as code” is the hymn recited by development and security operations teams alike. But when you look closer, the security elements of DevSecOps are discrete, like the tiny immiscible spheres of oil suspended within a tasty vinaigrette — incorporated rather than invisibly entwined within the fabric of DevOps.

Today’s DevSecOps can largely be divided into two core functions: the automated checking and gated prevention of known and potential security flaws throughout the continuous integration and continuous deployment (CI/CD) workflow, and the operational monitoring of, and response to, security-imbued telemetry generated by the deployment and surrounding protection technologies.
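
The first of those two functions often boils down to a small gate script in the pipeline. The sketch below assumes a generic findings.json produced by an earlier scan step; the schema and severity labels are invented rather than any specific tool’s output.

```python
# Minimal CI/CD security gate: fail the build when static-analysis findings
# exceed a severity threshold. The findings.json schema is an assumption.
import json
import sys

BLOCKING_SEVERITIES = {"critical", "high"}

def gate(findings_path: str = "findings.json") -> int:
    with open(findings_path) as fh:
        findings = json.load(fh)                 # e.g., output of a SAST/DAST scan step
    blocking = [f for f in findings if f.get("severity", "").lower() in BLOCKING_SEVERITIES]
    for f in blocking:
        print(f"BLOCKED: {f.get('rule')} in {f.get('file')} ({f['severity']})")
    return 1 if blocking else 0                  # non-zero exit code fails the pipeline stage

if __name__ == "__main__":
    sys.exit(gate())
```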

Rightly, we cocoon the applications that flow from our CI/CD workflows with further layers of discrete security tooling to monitor, alert, and ideally protect against broad categories of threats — threats that may be more economically and reliably prevented from outside than within the workflows. Those layers of security almost always operate independently from the application they are defending. This needs to change if we’re to “level up” security and roll DevSecOps back into DevOps.

Although security operations (SecOps) teams are becoming vastly more efficient at managing and responding to the alerts generated by their perimeter, server, and behavioral defense systems, there is a need to incorporate this same telemetry, response workflows, and decision-making into both the CI/CD workflow and the application itself if businesses are to successfully battle advancing threats such as Adversarial AI, data lake tainting, and behavioral poisoning. 

Too many DevSecOps workflows depend upon humans being in them. They’re the “bump in the wire,” and when adversaries switch to newer automated or AI-enabled attack and exploitation modes, system compromise and data breaches will (repeatedly) occur before fixes can be created, defenses tweaked, and patches applied.

The future lies in moving beyond the independent operations of “secure the code” and “protect the app,” and into the realm of self-defending applications.

It sounds grandiose, but there are some core elements and opportunities to progress toward applications that can defend themselves.

  • Telemetry from the security technologies that cocoon the application need to be available and consumable to the application and the CI/CD workflow.
  • Applications must know when external security tools and monitors suspect an attack or raise an alert, and be capable of responding when it is advantageous to do so. For example, an application may be capable of natively and securely parsing a fund transfer request, but by knowing that a WAF had identified and blocked the previous 12 HTTP POST submissions in the same session during the past 500 milliseconds due to malicious SQL injection payloads, it could leverage that information in handling this 13th transfer and user session — perhaps by deceiving the attacker with a fake, evidentiarily traceable response (a minimal sketch of this pattern follows this list).
  • Security technologies need to standardize on nomenclatures, severity, and impact for both threats and behaviors. The new generation of cloud-based SIEM, through normalization of data connectors and telemetry, is capable of providing a degree of (vendor-specific) standardization and is primed for being the source of real-time security telemetry for CI/CD and application consumption. Application development frameworks need to understand this nomenclature and, ideally, come pre-armed with libraries and functions to respond with best practices.
  • Increased AI adoption and fusion within the CI/CD workflow can accelerate the pace at which workflows can respond to security telemetry. For example, a server-based security agent identifies a memory overflow and subsequent unwanted process startup, while the SIEM is able to reconstruct the session sequence to highlight the transaction string (0-day exploit). An intelligent and automated CI/CD process should be able to use that information to identify the vulnerable code and correct the logic flaw or bug, and proceed with an update to the live application with a fix — without developer involvement.
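
Returning to the fund-transfer example in the second bullet, a hypothetical sketch of an application consulting recent WAF telemetry before deciding how to respond might look like the following. The telemetry interface, thresholds and response shapes are all invented stand-ins.

```python
# Hypothetical: the application asks recent WAF telemetry about the session before
# processing a transfer, and switches to a deceptive, traceable response if the
# session looks hostile. TelemetryStub stands in for a real SIEM/WAF query interface.
from datetime import datetime, timedelta, timezone

SUSPICION_THRESHOLD = 12                       # blocked submissions in the look-back window
LOOKBACK = timedelta(milliseconds=500)

class TelemetryStub:
    def __init__(self, blocks):                # blocks: [{"session", "time", "category"}, ...]
        self._blocks = blocks
    def recent_waf_blocks(self, session_id, since, category):
        return [b for b in self._blocks
                if b["session"] == session_id and b["time"] >= since and b["category"] == category]

def handle_transfer(session_id, transfer_request, telemetry):
    since = datetime.now(timezone.utc) - LOOKBACK
    blocked = telemetry.recent_waf_blocks(session_id, since, "sql_injection")
    if len(blocked) >= SUSPICION_THRESHOLD:
        # Almost certainly hostile: return a fake but evidentiarily traceable result
        return {"status": "ok", "reference": f"DECOY-{session_id}", "decoy": True}
    return {"status": "ok", "reference": process(transfer_request), "decoy": False}

def process(transfer_request):
    return "TXN-0001"                          # stand-in for the real, securely parsed path
```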

Security responsibility must, and will continue to, “shift left.” To enable that, security telemetry needs to be both accessible and incorporated into the application and the DevOps workflow, and the developers themselves must be comfortable and knowledgeable in integrating the information. Better developer tooling — such as secure coding languages and frameworks, accessible best-practice libraries and functions, and smart in-line developer guidance and correctors — will help close the gap.

Rapid advancement of AI and ML technologies and incorporation into the CI/CD workstream will be able to increase the pace of security integration and secure deployment. There is still much work to be done, and subsequently there are great opportunities for innovative companies to add significant value to the process. 

In the meantime, CISOs and DevOps leaders should press hard on technologies and processes that remove the human speed bumps from the CI/CD workflow. Adversaries are advancing at a fast pace in their development of fully automated and autonomous attack engines. Soon, defense and response will be measured in milliseconds, not in days and weeks as it is now.

-- Gunter Ollmann

First Published: SecurityWeek - March 3, 2020

Tuesday, February 4, 2020

Changing the Disclosure Shame Culture

For Cyber-defense to Progress, We Must Break Through the Cultural Barrier of Breach Disclosure Shame

Although we repeatedly hear that cyber adversaries have an upper hand due to the sharing and rapid dissemination of tools, techniques, and intelligence among like-minded attackers, the hard-earned lessons gained by defenders are tightly closeted — most often under a shroud of shame and reluctantly disclosed, if ever. For cyber-defense to progress, we must break through the cultural barrier of disclosure shame.

Despite most enterprises adopting an “assumed breach” approach to securing their business, the successes and investments that lead to uncovering breaches are too often thoroughly undermined by the perception of having failed to preemptively protect the environment.

Multiple longstanding movements aid the sharing of selective artifacts of an attack – most often those that were successfully thwarted or captured using generic blocking technologies. These artifacts (e.g. malware and phishing samples) and their associated telemetry (e.g. detonation logs) are useful from a threat intelligence perspective and are increasingly consumed with greater agility by both investigative and blocking protection systems, but they can’t communicate the important dimensions needed to help prevent the next novel threat or attack vector. Missing is the technical biopsy of the entire chain of events that resulted in a system compromise – in particular, what defensive or detection apparatus worked and what didn’t.

Security teams gain snippets of insight from defensive failures through public breach disclosures or the investigative reporting that follows large-scale and brand-name hacks. The stigma of past public disclosures causes most companies to go dark when a breach is detected and to resurface months later only after satisfying themselves that similar weaknesses have been internally dealt with – through technology or leadership change. That shroud of darkness is arguably a critical time in which disseminating details is the most valuable to other defenders around the globe.

In closed-door, invite-only forums, there is more willingness to share additional information about security failures – in more detail and in a timelier manner – but they are infrequent and highly localized. In fact, there are many parallels with how TV portrays an Alcoholics Anonymous meeting – e.g., “My name is Beth and I’ve been breached for 6 months …” – with an aura of shame, acknowledgement of past missteps, and hope for future well-being.

New scoring systems coming to market make it easier for organizations to both understand and monitor changes in their own enterprise security ecosystem. At the moment there are as many defense scoring systems as there are vendors that include them, but I believe that they’ll consolidate rapidly this year – most likely following the lead of the largest public cloud providers. It is exciting to meet with CISOs and other security leaders, openly comparing their scores and sharing tips on how they’re looking to improve them. I had not realized that gamification could be such a blessing to defenders.

Although defense scoring lowers the barrier to sharing defensive success insights, it does not yet address the insights gained from learning from others’ failures and the stigma of a breach.

Upon “going dark” after a breach detection, the security product vendors used within the compromised environment are similarly shut out – at precisely the time they can potentially add the most value to both the victim and the wider defensive ecosystem. It is in vendors’ best interest to leverage both their engineering and security research teams to promptly dissect and understand failures in their detection apparatus or missed capabilities in defending against any chained or sequenced attack – and CISOs should leverage that deep expertise to complement their internal efforts as soon as they can.

With today’s complex and rapidly changing ecosystem of layered defenses, suite integrations, data connectors, automated response orchestration, policy configurations, and hybrid environments, breach response to a new threat or attack technique is rarely distilled down to adding a new detection signature or firewall rule. 

I thoroughly recommend a war room approach, with technical representatives from the vendors of the security products the organization deployed and had anticipated would directly or indirectly discover and protect against the overall threat. Those vendors should be charged with both optimizing existing product capabilities (that may have been misconfigured, new, or poorly understood) within the compromised environment and, if needed, the coordination and acceleration of engineered updates or feature capabilities to prevent any repeated and related attack. Leverage the R&D expertise of your security vendors – you’ve probably already paid for it!

It should not be a blame game (unless product inadequacies really are to blame!) – rather, the collective team should identify optimal routes to earlier detection and prevention, both short term and long term.

Bringing trusted vendors into the breach equation early on should accelerate a stable and robust threat response. 

The stigma of a breach can be shared with vendors and any associated public shame lessens with rapid threat response. The story of how a CISO and her vendors collectively and dynamically responded to a new threat, and how that knowledge was timely shared and incorporated into their products for all to benefit, is an incredibly strong one.

-- Gunter Ollmann

First Published: SecurityWeek - February 4, 2020

Tuesday, January 14, 2020

The Changing Face of Cloud Threat Intelligence

As public cloud providers continue to elevate their platforms’ default enterprise protection and compliance capabilities to close gaps in their portfolio or suites of in-house integrated security products, CISOs are increasingly looking to the use and integration of threat intelligence as the next differentiator within cloud security platforms.

Whether thinking in terms of proactive or retroactive security, the incorporation (and production) of timely and trusted threat intelligence has been a core tenet of information security strategy for multiple decades — and is finally undergoing its own transformation for the cloud.

What began as lists of shared intelligence covering infectious domains, phishing URLs, organized crime IP blocks, malware CRCs and site classifications, etc., has broadened and become much richer —  encompassing inputs such as streaming telemetry and trained detection classifiers, through to contributing communities of detection signatures and incident response playbooks. 

Cloud-native security suites from the major public cloud providers are striving to use threat intelligence in ways that have been elusive to traditional security product regimes. Although the cloud can, has and will continue to collect and make sense out of this growing sea of raw and semi-processed threat intelligence, newer advances lie in the progression and application of actionable intelligence. 

The elastic nature of public cloud obviously provides huge advancements in terms of handling “internet-scale” datasets — making short work of correlation between all the industry-standard intelligence feeds and lists as they are streamed. For example, identifying new phishing sites without any user being the first victim, by correlating streams of new domain name registrations (from domain registrars) with authoritative DNS queries (from global DNS providers), together with IP reputation lists, past link and malware detonation logs, and continuous search engine crawler logs, in near real time.
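
A toy version of that correlation, with invented weights and brand names, might look like the following; the point is simply that several individually weak signals become decisive in combination.

```python
# Score a newly registered domain by combining weak signals from several feeds.
from difflib import SequenceMatcher

PROTECTED_BRANDS = ["examplebank", "examplepay"]             # hypothetical brand watch list

def brand_similarity(domain: str) -> float:
    label = domain.split(".")[0]
    return max(SequenceMatcher(None, label, b).ratio() for b in PROTECTED_BRANDS)

def phishing_score(domain, hours_since_registration, seen_in_dns_stream, on_bad_ip_block):
    score = 0.0
    score += 0.4 if hours_since_registration < 24 else 0.0   # freshly registered
    score += 0.2 if seen_in_dns_stream else 0.0              # already being resolved
    score += 0.2 if on_bad_ip_block else 0.0                 # known-bad hosting infrastructure
    score += 0.2 * brand_similarity(domain)                  # look-alike of a protected brand
    return score                                             # e.g., block/preempt above ~0.7

# Example: a look-alike registered an hour ago, already resolving, on a dirty IP block
print(phishing_score("examp1ebank-login.com", 1, True, True))
```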

Although the cloud facilitates the speed in which correlation can be made and the degree of confidence placed in each intelligence nugget, differentiation lies in the ability to take action. CISOs have grown to expect the mechanics of enterprise security products to guarantee protection against known and previously reported threats. Going forward, those same CISOs anticipate cloud providers to differentiate their protection capabilities through their ability to turn “actionable” into “actioned” and, preferably, into “preemptively protected and remedied.”

Some of the more innovative ways in which “threat intelligence” is materializing and being transformed for cloud protection include:

  • Fully integrated protection suites. In many ways the term “suite” has become archaic as the loose binding of vendor-branded and discrete threat-specific products has transformed into tightly coupled and interdependent protection engines that span the entire spectrum of both threats and user interaction — continually communicating and sharing metadata — to arrive at shared protection decisions through a collective intelligence platform.
  • Conditional controls. Through an understanding of historical threat vectors, detailed attack sequencing and anomaly statistics, new cloud protection systems continually calculate the probability that an observed sequence of apparently nonhostile user and machine interactions is in fact an attack and automatically direct actions across the protection platform to determine intent. As confidence of intent grows, the platform takes conditional and disruptive steps to thwart the attack without disrupting the ongoing workflow of the targeted user, application or system.
  • Step back from threat normalization. Almost all traditional protection technologies and security management and reporting tools require threat data to be highly structured through normalization (i.e., enforcing a data structure typically restricted to the most common labeled attributes). By dropping the harsh confines of threat data normalization, richer context and conclusions can be drawn from the data — enabling deep learning systems to identify and classify new threats within the environments they may watch over.
  • Multidimensional reputations. Blacklists and whitelists may have been the original reputational sources for threat determination, but the newest systems not only determine the relative reputational score of any potential device or connection, they may also predict the nature and timing of threat potential in the near future — preemptively enabling time-sensitive switching of context and protection actions.
  • Threat actor asset tracking. Correlating between hundreds or thousands of continually updated datasets and combined with years of historical insight, new systems allow security analysts to track the digital assets of known threat actors in near real time — labeling dangerous corners of the internet and preemptively disarming crime sites.

With the immense pressure to move from detection to protection and into the realm of preemptive response, threat intelligence is fast becoming a differentiator for cloud operators — but one that doesn’t naturally fit previous sharing models — as they become built-in capabilities of the cloud protection platforms themselves.

As the mechanics of threat protection continue to be commoditized, higher value is being placed on standards such as timeliness of response and economics of disruption. In a compute world where each action can be viewed and each compute cycle is billed in fractions of a cent, CISOs are increasingly cognizant of the value deep integration of threat intelligence can bring to cloud protection platforms and bottom-line operational budgets.

-- Gunter Ollmann

First Published: SecurityWeek - January 14, 2020