Tuesday, April 30, 2019

To Reach SIEM’s Promise, Take a Lesson From World War II

With two of the largest public cloud providers having launched cloud Security Information and Event Management (SIEM) products, and the remaining top-five cloud providers all but certain to follow with their own permutations sometime this year, 2019 is clearly the year of the cloud SIEM.

For an on-premises technology that has been cursed with a couple of decades of over-promising, under-achieving, and eye-watering cost escalation, modernizing SIEM into a cloud-native security technology is a watershed moment for the InfoSec community.

The promise of finally being able to analyze all of an enterprise’s logs, intelligence, and security data in real time opens the door to many great things, and we can let the SIEM vendors shout about the obvious defensive value cloud SIEM brings. Instead, I’d like to focus on a less obvious but arguably more valuable long-term contribution that a fully capable cloud SIEM makes to enterprise defense.

Assuming an enterprise invests in bringing all of its network logs, system events, flow telemetry, and security events and alerts together into the SIEM, the business will finally be able to track threats as they propagate through the environment. Most importantly, it will be able to easily identify and map the “hotspots” of penetration and compromise, and remediate accordingly.

A unified view will also allow analysts and security professionals to pinpoint the spots where compromises remain hidden from prying eyes. As enterprises strive to deploy and manage an arsenal of threat detection, configuration management, and incident response tools in increasingly dynamic environments, visibility and coverage wax and wane with each employee addition, wireless router hook-up, application installation, or SaaS business connection. Those gaps, whether temporary or permanent, tend to attract an unfair share of compromise and harm.
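
As a rough, hypothetical sketch of what that unified view makes possible (invented host names and data, Python), the snippet below counts alerts per asset to surface the hotspots and diffs the asset inventory against observed telemetry to surface the cold spots:

```python
from collections import Counter

# Hypothetical, simplified records as they might be exported from a cloud SIEM.
alerts = [
    {"host": "web-01", "severity": "high"},
    {"host": "web-01", "severity": "medium"},
    {"host": "db-03", "severity": "high"},
]
asset_inventory = {"web-01", "db-03", "hr-laptop-17", "branch-router-2"}
hosts_with_telemetry = {a["host"] for a in alerts}

# "Hotspots": assets attracting the most alerts, ranked for remediation.
hotspots = Counter(a["host"] for a in alerts).most_common()

# "Cold spots": assets we own but never hear from, the gaps that tend to
# attract an unfair share of compromise.
cold_spots = asset_inventory - hosts_with_telemetry

print("Hotspots:", hotspots)
print("Cold spots:", sorted(cold_spots))
```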

In World War II, a gentleman by the name of Abraham Wald was a member of Columbia University’s Statistical Research Group (SRG). One problem the SRG was tasked with was examining the distribution of damage to returning aircraft and advising on how to minimize bomber losses to enemy fire. A premise of the research was that the areas of the bombers that were most damaged, and therefore most susceptible to flak, should be redesigned and made more robust. Wald noted that such a study was biased toward only the aircraft that survived their missions and that, if you were to assume damage was more uniformly distributed across all aircraft, those that returned had actually been hit in the less vulnerable parts. By mapping the damage done to the surviving aircraft, the “undamaged” areas represented the most vulnerable parts of the aircraft that didn’t survive to return.


Wald’s revelations and work were seminal in the early days of Operational Research – a discipline that applies advanced analytical methods to help make better decisions. I expect cloud SIEM and the integration of AI systems to usher Operational Research and its associated disciplines into the information security sector. Securing an enterprise is a highly complex and dynamic problem and, because Operational Research focuses on optimizing solutions to complex decision-making problems, it is well suited to finding solutions that balance the multi-faceted aspects of business continuity and risk.

Because we are still in the early days of cloud SIEM, I’ve yet to see much in the way of employing native AI to address the cold spots in enterprise threat visibility. The focus to date has been on applying AI to threat hunting, automating the reconstruction of the kill chain associated with an in-progress attack, and supplementing that visualization with related threat intelligence and historical data artifacts.

Putting on a forecasting hat, I expect much of the immediate adoption and growth of cloud SIEM will be driven by the desire to finally realize the promises of on-premises SIEM – in particular, using supervised-learning systems to automate the detection and mitigation of the threats that have pestered security operations teams for twenty-plus years. Infusing SIEM natively into the cloud provider’s platform also creates end-to-end visibility into security-related events inside a business’s environment and folds in valuable intelligence from the cloud provider’s own operations – thereby harnessing the “cloud effects” of collective intelligence and removing the classic requirement for a “patient zero” to initiate an informed response.

What I hope is that, once engineering teams have matured those hunting and mitigation capabilities by weaving in AI decision systems and real-time data processing, the “science” of information security can finally come up for air and move forward.

Leveraging the inherent power and scale of the public cloud for real-time analytics of enterprise security data at streaming rates means that we’re on the cusp of finally being able to calculate the ROI of each security technology deployed inside an enterprise. That alone should have many CISOs and CFOs jumping for joy. With all the enterprise security data flowing to one place, the cloud SIEM also becomes the anchor for IT operations – such as tracking the “mean time between failures” (MTBF) of protected systems, providing robustness metrics for software assets and system updates, and surfacing the latent risks of the environments being monitored.
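
As a hedged illustration of how straightforward such an operational metric becomes once all the data sits in one queryable place, here is a minimal MTBF calculation over hypothetical failure timestamps (invented data, Python):

```python
from datetime import datetime

# Hypothetical failure timestamps for one protected system, as they
# might be returned by a cloud SIEM query.
failures = [
    datetime(2019, 1, 4, 2, 15),
    datetime(2019, 2, 11, 18, 40),
    datetime(2019, 3, 30, 7, 5),
]

# Mean time between failures: the average gap between consecutive failures.
gaps_hours = [
    (later - earlier).total_seconds() / 3600.0
    for earlier, later in zip(failures, failures[1:])
]
mtbf_hours = sum(gaps_hours) / len(gaps_hours)
print(f"MTBF: {mtbf_hours:.1f} hours")
```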

Seventy-five years may separate World War II from cloud SIEM, but we are at last in a position to apply Abraham Wald’s hard-earned lessons to our latest adversarial conflict – the cyberwar.

-- Gunter Ollmann

First Published: SecurityWeek - April 30, 2019

Tuesday, April 9, 2019

Get Ready for the First Wave of AI Malware

While viruses and malware have stubbornly remained on the top-10 list of “things I lose sleep over as a CISO,” the overall threat has been steadily declining for a decade. Unfortunately, WannaCry, NotPetya, and an entourage of related self-propagating ransomware abruptly propelled malware back up the list and highlighted the risks brought by modern inter-networked business systems and the explosive growth of unmanaged devices.

The damage wrought by these autonomous (not yet AI-powered) threats should compel CISOs to contemplate the defenses needed to counter such a sophisticated adversary.


The threat of a HAL-9000 intelligence directing malware from afar is still the realm of fiction, as is the prospect of an uber-elite hacker collective digitized and shrunk down into an email-sized AI package filled with evil and rage. However, over the next two to three years, I see six economically viable, “low-hanging fruit” uses for AI-infused malware – all focused on optimizing the efficiency of harvesting valuable data, targeting specific users, and bypassing detection technologies.

  • Removing the reliance upon frequent C&C communications – Smart automation and basic logic processing could be employed to automatically navigate a compromised network, undertake non-repetitive and selective exploitation of desired target types and, upon identification and collection of the desired data types, perform a one-off data push to a remote service controlled by the malware owner. While not terribly magical, such AI-powered capabilities would undermine not only perimeter blacklisting and enforcement technologies but also sandboxing and behavioral-analysis detection.
  • Use of data labeling and classification capabilities to dynamically identify and capture the most interesting or valuable data – Organizations use these kinds of data classifiers and machine learning (ML) to label and protect valuable data assets. But attackers can exploit the same search efficiencies to find the most valuable business data being touched by real users and systems, and to reduce the size of the files staged for stealthy exfiltration (the defensive counterpart is sketched after this list). This enables attackers to sidestep traffic-anomaly detection technologies as well as common deception and honeypot solutions.
  • Use of cognitive and conversational AI to monitor local host email and chat traffic and to dynamically impersonate the user – The malware’s AI could insert new conversational content into email threads and ongoing chats with the objective of socially engineering other employees into disclosing secrets or prompting them to access malicious content. Since most email and chat security solutions focus on inbound and egress content, internal communication inspection is rare. Additionally, conversational AI is advancing quickly enough that socially engineering IT helpdesk and support staff into disclosing secrets or making temporary configuration changes becomes a high-probability play.
  • Use of speech-to-text translation AI to capture user and work-environment secrets – Through a physical microphone, the AI component could convert all discussions within range of the compromised device to text. In some environments, the AI may even be able to capture the sound of keystrokes on nearby systems and deduce which keys are being pressed. Such an approach also allows attackers to be more selective about which secrets to capture, further minimizing the volume of data that must be egressed from the business and thereby reducing the odds of triggering network-based detection technologies.
  • Use of embedded cognitive AI in applications to selectively trigger malicious payloads – Because cognitive AI systems can not only recognize a specific face or voice but also determine a person’s race, sex, and age, a malware author can be very specific about whom they choose to target. Such malware may behave maliciously only for the CFO of the company, or may only manifest itself if the interactive user is a pre-teen female. Because the trigger mechanism is embedded within complex AI, it becomes almost impossible for automated or manual investigation processes to determine the criteria for initiating the malicious behaviors.
  • Capture of the behavioral characteristics and traits of system users – AI learning systems could observe the unique cadence, timbre, and characteristics of a user’s typing, mouse movements, vocabulary, misspellings, and so on, and create a portable “bio-profile” of that user. Such “bio-profiles” could then be replayed by attackers to bypass the current generation of advanced behavioral-monitoring systems increasingly deployed in high-security zones.
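
To make the data-classification item above more concrete from the defender’s side, here is a minimal, hypothetical sketch of the kind of classifier organizations use to label sensitive documents (toy data and labels, scikit-learn assumed). The concern described above is that the very same model, embedded in a payload, works just as well for an attacker deciding what to steal:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: documents labeled sensitive (1) or routine (0).
docs = [
    "Q3 board deck with unreleased revenue and acquisition targets",
    "Customer SSN and payment card export for the annual audit",
    "Cafeteria menu for next week",
    "Reminder: fire drill on Thursday afternoon",
]
labels = [1, 1, 0, 0]

# A simple TF-IDF plus logistic-regression pipeline stands in for the
# data-labeling and classification capability described above.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(docs, labels)

# Score a document the model has never seen.
print(classifier.predict(["Draft merger agreement and payment schedule"]))
```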

These AI capabilities are commercially available today, and each of them, individually or in combination, can be embedded as code within malicious payloads.

Because deep neural networks, cognitive AI, and trained machine learning classifiers are incredibly difficult to decipher, the trigger mechanism for malicious behaviors may be buried so deeply that it is impossible to uncover through reverse-engineering practices.

The baseline for defending against these attacks will lie in ensuring all parts of the organization are visible and continually monitored. In addition, CISOs need to invest in tooling that brings speed and automation to threat discovery through AI-powered detection and response.
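
As a hedged, toy-scale example of the kind of automated detection that helps here (invented hosts and volumes, Python), the sketch below flags hosts whose latest egress volume deviates sharply from their own baseline, the sort of one-off data push several of the scenarios above rely on:

```python
import statistics

# Hypothetical daily egress volumes (MB) per host from flow telemetry.
egress_mb = {
    "web-01": [120, 135, 118, 122, 910],  # sudden one-off push
    "db-03": [45, 50, 48, 52, 47],
}

# Flag hosts whose latest egress sits far outside their own baseline,
# a crude stand-in for the AI-powered detection discussed above.
for host, volumes in egress_mb.items():
    baseline, latest = volumes[:-1], volumes[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev and (latest - mean) / stdev > 3:
        print(f"{host}: egress of {latest} MB deviates from baseline {mean:.0f} MB")
```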

As malware writers harness AI for cybercrime, the security industry must push forward with a new generation of dissection and detonation technologies to prepare for this coming wave. A couple of promising areas for implementing defensive AI include threat intelligence mining and autonomous response (more on this later).

-- Gunter Ollmann

First published: SecurityWeek - April 9, 2019