Showing posts with label cloud. Show all posts

Thursday, December 9, 2021

You May Not Have Asked, But The SOC Evolution Answered Anyways

Let’s get the obvious out of the way: The attack surface is growing exponentially and diversely.

Bigger shark, same small boat

The environments, platforms, services, regions and time zones that constitute modern enterprise operations and drive digital transformation for business continue to require increasing specialization and expertise beyond current in-house capabilities. Through a security lens, enterprise attack surfaces are expanding beyond the business’ ability to protect.

Meanwhile, global hiring and retention of security expertise continues to be a weak spot, and direct access to specialized security knowledge and experience is becoming increasingly difficult and costly. And while all that is going on, the volume, duration, pace and sophistication of attacks continues to increase and require significant acceleration in SOC response times and durability — and subsequent autonomous response systems.

Saying we’re in a conundrum is vastly understating things.

The security industry is at the gate of a forced SOC evolution, and as you can see, pressure is coming from all directions to drive that change.

The more things change, the more they stay the same

Plenty has happened that has tried to look like an evolution. For the last decade the security industry that powers SOCs has fixated on automation as the key to alleviating some of the pressures. But after a decade, have things really changed?

SOAR was a brief shining light that has come and mostly gone, having been absorbed back into SIEM, as the legacy SIEM vendors acquired dedicated SOAR vendors to make up for their shortcomings in human workflow automation. This didn’t solve much, as analysts were more or less left in the lurch. They faced the same automation integration challenges, only now they’re locked into a single vendor (where previously an “independent” SOAR offered the prospect of multi-vendor connectors and flexibility to operate independently of SIEM lock-in).

And that’s not the end of our automation woes, either. Automation, on its best day, is still too playbook-oriented. To get things done, experts have to essentially write scripts for each new system, connector and application in an enterprise. If we had set out to create librarians out of analysts, that’s an area our industry could say it had actually achieved success in.

But in all seriousness, we’re caught in a linear script development cycle and automation hasn’t yielded the reduction in analyst workloads that we so desperately need.

I’d like to get off the ride, please

So how do we break the cycle? I can identify two major breakthroughs that will move the needle forward for the SOC evolution.

First, the successful implementation and use of AI “smart” orchestration systems within the SOC.

I’m sure many SOC analysts and CISOs are jaded from past promises, but the reality is that AI and ML approaches have matured significantly over the last year and have reached the inflection point of their “hockey stick” usefulness trajectory. I think as an industry it’s time we move past our fear of turning on automated response and protection capabilities powered by this new generation of AI and ML. By embracing it, SOCs will become much more effective at detection, which will lead to a reduction in the number of distinct alerts and false positives (put that in the win column for reducing analyst workloads).

Second breakthrough: The ability to tap a global community of contributors via marketplace ecosystems, or more simply put, sharing is caring.

Detection-as-code, policy-as-code, blah-as-code has redefined content development, breaking the dependence on vendor-proprietary, product-specific content. Platform-independent content (alerts, threat detections, playbooks, and more) is rapidly and readily available from a global array of sources, and availability will continue to increase. The ability to tap a global pool of expertise is more prevalent than ever, and it feels like the gig economy is finally coming to the security world via the SOC. I think this would have surprised many people just a few years ago, but in the wise words of one Jim Carrey — “desperation is a necessary ingredient to learning anything.”

I don’t care how, I want it now

Well, you can’t have it…yet. But you can start. Both “smart” machine-intelligence and content marketplaces directly address the pressure points previously mentioned, but the industry is still in early stages of the SOC evolution. Right now organizations have to take a look at their SOC and decide how they’re going to reorganize and prioritize to discover and implement the people, tools and partners they’ll need to usher in the evolution.

There are some philosophical hurdles to be overcome, but I believe business needs will drive the pace of change. It used to be the case that penetration testing was in-house only, then extended to trusted vendors managed under restrictive agreements, and on to industry-accredited providers, and now businesses can tap broad communities of bug-bounty-based individual contractors and cloud-based automated attack simulators. If we managed those industry changes, I’m pretty sure we can manage the same for incident response and investigation.

-- Gunter Ollmann

First Published: Medium - December 9, 2021

Tuesday, January 14, 2020

The Changing Face of Cloud Threat Intelligence

As public cloud providers continue to elevate their platforms’ default enterprise protection and compliance capabilities, closing gaps in their portfolios of in-house integrated security products, CISOs are increasingly looking to the use and integration of threat intelligence as the next differentiator within cloud security platforms.

Whether thinking in terms of proactive or retroactive security, the incorporation (and production) of timely and trusted threat intelligence has been a core tenet of information security strategy for multiple decades — and is finally undergoing its own transformation for the cloud.

What began as lists of shared intelligence covering infectious domains, phishing URLs, organized crime IP blocks, malware CRCs and site classifications, etc., has broadened and become much richer — encompassing inputs such as streaming telemetry and trained detection classifiers, through to contributing communities of detection signatures and incident response playbooks.


Cloud-native security suites from the major public cloud providers are striving to use threat intelligence in ways that have been elusive to traditional security product regimes. Although the cloud can, does, and will continue to collect and make sense of this growing sea of raw and semi-processed threat intelligence, the newer advances lie in the progression and application of actionable intelligence.

The elastic nature of public cloud obviously provides huge advancements in terms of handling “internet-scale” datasets — making short work of correlation between all the industry-standard intelligence feeds and lists as they are streamed. For example, identifying new phishing sites without any user being the first victim, by correlating streams of new domain name registrations (from domain registrars) with authoritative DNS queries (from global DNS providers), together with IP reputation lists, past link and malware detonation logs, and continuous search engine crawler logs, in near real time.
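A minimal sketch of that kind of feed correlation, with hypothetical feed structures and illustrative scoring thresholds (none of these names or numbers come from a real provider's API):

```python
# Illustrative sketch: flagging a likely phishing domain by correlating
# a new-registration feed, DNS first-seen telemetry, and an IP
# reputation list -- before any user becomes the first victim.
from datetime import datetime, timedelta

def likely_phishing(domain, registrations, dns_first_seen, ip_reputation, resolved_ip):
    """Score a domain: newly registered + queried immediately + bad hosting block."""
    score = 0
    reg_time = registrations.get(domain)
    seen_time = dns_first_seen.get(domain)
    if reg_time and seen_time:
        # Phishing domains are typically weaponized within hours of registration.
        if seen_time - reg_time < timedelta(hours=24):
            score += 2
    if ip_reputation.get(resolved_ip, 0.0) > 0.7:  # known-bad IP block
        score += 2
    return score >= 3

regs = {"paypa1-login.example": datetime(2020, 1, 1, 0, 0)}
seen = {"paypa1-login.example": datetime(2020, 1, 1, 6, 0)}
rep = {"203.0.113.9": 0.9}
suspect = likely_phishing("paypa1-login.example", regs, seen, rep, "203.0.113.9")
```

The real pipelines do this continuously against streamed feeds at internet scale; the dictionaries here stand in for those streams.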

Although the cloud facilitates the speed in which correlation can be made and the degree of confidence placed in each intelligence nugget, differentiation lies in the ability to take action. CISOs have grown to expect the mechanics of enterprise security products to guarantee protection against known and previously reported threats. Going forward, those same CISOs anticipate cloud providers to differentiate their protection capabilities through their ability to turn “actionable” into “actioned” and, preferably, into “preemptively protected and remedied.”

Some of the more innovative ways in which “threat intelligence” is materializing and being transformed for cloud protection include:

  • Fully integrated protection suites. In many ways the term “suite” has become archaic as the loose binding of vendor-branded and discrete threat-specific products has transformed into tightly coupled and interdependent protection engines that span the entire spectrum of both threats and user interaction — continually communicating and sharing metadata — to arrive at shared protection decisions through a collective intelligence platform.
  • Conditional controls. Through an understanding of historical threat vectors, detailed attack sequencing and anomaly statistics, new cloud protection systems continually calculate the probability that an observed sequence of seemingly nonhostile user and machine interactions is in fact an attack and automatically direct actions across the protection platform to determine intent. As confidence of intent grows, the platform takes conditional and disruptive steps to thwart the attack without disrupting the ongoing workflow of the targeted user, application or system.
  • Step back from threat normalization. Almost all traditional protection technologies and security management and reporting tools require threat data to be highly structured through normalization (i.e., enforcing a data structure typically restricted to the most common labeled attributes). By dropping the harsh confines of threat data normalization, richer context and conclusions can be drawn from the data — enabling deep learning systems to identify and classify new threats within the environments they may watch over.
  • Multidimensional reputations. Blacklists and whitelists may have been the original reputational sources for threat determination, but the newest systems not only determine the relative reputational score of any potential device or connection, they may also predict the nature and timing of threat potential in the near future — preemptively enabling time-sensitive switching of context and protection actions.
  • Threat actor asset tracking. Correlating between hundreds or thousands of continually updated datasets and combined with years of historical insight, new systems allow security analysts to track the digital assets of known threat actors in near real time — labeling dangerous corners of the internet and preemptively disarming crime sites.

With the immense pressure to move from detection to protection and into the realm of preemptive response, threat intelligence is fast becoming a differentiator for cloud operators — but one that doesn’t naturally fit previous sharing models, as it becomes a built-in capability of the cloud protection platforms themselves.

As the mechanics of threat protection continue to be commoditized, higher value is being placed on standards such as timeliness of response and economics of disruption. In a compute world where each action can be viewed and each compute cycle is billed in fractions of a cent, CISOs are increasingly cognizant of the value deep integration of threat intelligence can bring to cloud protection platforms and bottom-line operational budgets.

-- Gunter Ollmann

First Published: SecurityWeek - January 14, 2020

Tuesday, October 8, 2019

Cloud is Creating Security and Network Convergence

Network Security Expertise is Needed More Than Ever Inside Security Operations Centers and on DevOps Teams

Digital transformation forces many changes to a business as it migrates to the public cloud. One of the most poorly examined is the convergence of network and security administration tasks and responsibilities in the public cloud.


On premises, the division between roles is pretty clear. The physical nature of networking infrastructure – the switches, routers, firewall appliances, network taps, WiFi hubs, and miles upon miles of cable – makes it easy to separate responsibilities. If it has power, stuff connects to it, it routes packets, and it weighs more than 5 pounds, it probably belongs to the networking team.

In the cloud, where network connectivity features are defined by policies and code, the network is ephemeral. More importantly, the network is a security boundary – protecting services, applications, and data.

For many organizations, an early steppingstone in their digital transformation is virtualizing all their on-premises applications, infrastructure, and administrative and monitoring processes. Operating almost entirely within an Infrastructure-as-a-Service (IaaS) mode, previously favored network vendors provide virtual machine (VM) versions of their on-premises networking and security appliances – effectively making the transition to public cloud the equivalent of shifting to a new co-hosting datacenter.

This early stage takes very little advantage of public cloud. VMs remain implanted in statically defined networking architectures and old-style network monitoring remains largely the same. However, as organizations embrace continuous integration and continuous delivery (CI/CD), DevOps, serverless functions, and other cloud-native services, the roles of network and security administrator converge rapidly. At that point, network topology ceases to be the grid that servers and applications must snap to. Instead, leveraging the software defined network (SDN) nature of the cloud, the network becomes ephemeral – continuously defined, created, and disposed of in code.

With zero trust running core to modern CI/CD and DevOps security practices in the cloud, SDN has become a critical framework for protecting data, identities, and access controls.

Today, a cloud security architect, security analyst, or compliance officer cannot fulfill their security responsibilities without being a cloud network expert too. And, vice versa, a systems architect or network engineer cannot bring value to cloud operations without being comfortable wearing size 15 cloud security shoes.

For networking professionals transitioning to the cloud, I offer the following advice:

  • Partner extensively with your peers on the security team – they too are in the midst of a transformation and are destined to become network experts.
  • Plan to transition from VM-infested IaaS environments as fast as possible to cloud-native services, which are easier to understand, manage, and deploy.
  • Become familiar with the portal management experience of each new network (security) service, but plan on day-to-day management being at the command line.
  • Brush up your scripting language expertise and get comfortable with code management tools. In a CI/CD workplace GitHub and its ilk are where the real action happens.
  • Throw out the old inhibitions about consuming valuable network bandwidth with event logs and streaming service health telemetry. In the age of cloud SIEM, data is king and storage is cheap, and troubleshooting ephemeral network problems requires both in abundance.
  • Forget thumbing through network security books to learn. Training is all online. Watch the cloud provider’s workshop videos and test the lessons in real-time online.

With so many cloud critical controls existing at the network layer, network security expertise is needed more than ever inside security operations centers and on DevOps teams.

The faster in-house network administrators can transition to becoming public cloud network security engineers, architects, or analysts, the faster their organizations can implement digital transformation.

-- Gunter Ollmann

First Published: SecurityWeek - October 8, 2019

Monday, July 22, 2019

Digital Transformation Makes the Case for Log Retention in Cloud SIEMs

As organizations pursue their digital transformation dreams, they’ll migrate from on-premises SIEM to cloud-based SIEM. In the process of doing so, CISOs are taking a closer look at their previous security incident and event log retention policies, and revisiting past assumptions and processes.

For organizations needing to maintain a smorgasbord of industry compliance and regulatory requirements, overall event log retention will range from one year through to seven. Many organizations find that a minimum of one year meets most mandated requirements but err on the side of retaining between three and four years – depending on what their legal counsel advises.

With public cloud, data retention spans many different options, services, and price points. Backups, blob storage, “hot” access, “cold” access, etc. – there are endless ways to store and access security events and logs. With cloud storage dropping in price year-on-year, it’s cheap and easy to just store everything forever – assuming there’s no rush or requirement to inspect the stored data. But hot data, more expensive than the cold option, gives defenders the quick access they need for real-time threat hunting. Keeping data hot for SIEM use is inevitably one of the more expensive data storage options. A balance needs to be struck between having instant access to SIEM for queries and active threat hunting, and long-term regulatory-driven storage of event and log data. Can an optimal storage balance be achieved?
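The balance is ultimately simple arithmetic. A minimal cost model, with entirely made-up per-GB prices (real cloud rates vary by provider, tier, and region, so substitute your own):

```python
# Illustrative hot-vs-cold retention cost model. Both prices below are
# placeholder assumptions, not any provider's actual rates.
HOT_PER_GB_MONTH = 2.50    # assumed: SIEM-queryable "hot" storage + ingestion
COLD_PER_GB_MONTH = 0.01   # assumed: archival "cold" blob storage

def monthly_cost(gb_per_day, hot_days, total_retention_days):
    """Monthly storage bill for a hot window inside a longer retention period."""
    hot_gb = gb_per_day * hot_days
    cold_gb = gb_per_day * max(total_retention_days - hot_days, 0)
    return hot_gb * HOT_PER_GB_MONTH + cold_gb * COLD_PER_GB_MONTH

# Four years of 50 GB/day, entirely hot vs. a one-year hot window:
all_hot = monthly_cost(50, 1460, 1460)
split = monthly_cost(50, 365, 1460)
```

Even with these toy numbers, shrinking the hot window to one year cuts the bill by roughly a factor of four while the compliance archive stays nearly free, which is the trade-off the rest of this post argues for.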


Widely available public threat reports for the last couple of years provide a “mean-time” to breach discovery ranging from 190 to 220 days and a breach containment window of between 60 and 100 days. Therefore, keeping 220 days of security event logs “hot” and available in a cloud SIEM would statistically only help with identifying half of an organization’s breaches. Obviously, a higher retention period makes sense – especially for organizations with less mature or less established security operations capabilities.

However, a sizable majority of SIEM-discoverable threats and correlated events are detectable in a much shorter timeframe – and rapidly detecting these breaches naturally makes it considerably more difficult for an adversary to maintain long-term persistence. For example, automatically piecing together the kill chain for an email phishing attack that led to a malware installation, that phoned home to a malicious C&C, which had then brute-forced the administrative access to a high value server is almost trivial for cloud SIEM (assuming appropriate logging was enabled). Nowadays, such a scenario (or permutation of that scenario) likely accounts for near half of all enterprise network breaches.
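The kill-chain stitching described in that example reduces to ordering related events by time and shared entities. A sketch, where the event schema and stage names are illustrative assumptions rather than any vendor's format:

```python
# Hypothetical sketch: reconstructing a phish -> malware -> C&C -> brute-force
# kill chain from a mixed event stream, keyed on a shared entity (the user).
KILL_CHAIN_ORDER = ["phish_email", "malware_install", "c2_beacon", "brute_force"]

def reconstruct_chain(events, user):
    """Return this user's events that match the kill-chain stages, in order."""
    related = sorted(
        (e for e in events if e["user"] == user),
        key=lambda e: e["time"],
    )
    chain, stage = [], 0
    for e in related:
        if stage < len(KILL_CHAIN_ORDER) and e["type"] == KILL_CHAIN_ORDER[stage]:
            chain.append(e)
            stage += 1
    return chain

events = [
    {"user": "alice", "time": 1, "type": "phish_email"},
    {"user": "alice", "time": 2, "type": "malware_install"},
    {"user": "bob",   "time": 3, "type": "c2_beacon"},
    {"user": "alice", "time": 4, "type": "c2_beacon"},
    {"user": "alice", "time": 5, "type": "brute_force"},
]
chain = reconstruct_chain(events, "alice")
```

A production SIEM correlates on many more entities (hosts, IPs, sessions) and tolerates out-of-order and missing stages, but the principle is the same.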

My advice to organizations new to cloud SIEM is to begin with a rolling window of one year’s worth of event logs while measuring both the frequency of breaches and time to mitigate. All older event logs can be stored using cheaper cloud storage options and needn’t be immediately available for threat hunting.

Depending on the security operations team’s capacity for mitigating the events raised by cloud SIEM, it may be financially beneficial to reduce the rolling window if the team is overwhelmed with unresolvable events. That said, I’d be hesitant to reduce that rolling window. Instead, I would recommend CISOs with under-resourced teams find and engage a managed security services provider to fill that skills gap.

A question then arises as to the value of retaining multiple years of event logs. Is multi-year log retention purely a compliance tick-box?

While day-to-day cloud SIEM operations may focus on a one-year rolling window, it can be beneficial to organize a twice-annual threat hunt against several years of event logs using the latest available threat intelligence and indicator of compromise (IoC) information as seeds for investigation. These periodic events have two objectives: reduce your average monthly cloud SIEM operating costs (by temporarily loading and unloading the historic data) and allow teams to change mode and “deep dive” into a broader set of data while looking for “low and slow” compromises. If an older breach is detected, incrementally older event logs could be included in the quest to uncover the origin point of an intruder’s penetration or full spectrum of records accessed.
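Mechanically, such a retro-hunt is a sweep of temporarily rehydrated archive data against the current IoC set. A bare-bones sketch (the log and indicator shapes are illustrative assumptions):

```python
# Illustrative retro-hunt: sweep archived log lines against the latest
# IoC list, yielding hits to seed a deeper investigation.
def retro_hunt(archived_logs, iocs):
    """Return (line_index, matched_ioc) for every indicator hit."""
    hits = []
    for i, line in enumerate(archived_logs):
        for ioc in iocs:
            if ioc in line:
                hits.append((i, ioc))
    return hits

# Rehydrated cold-storage sample and this quarter's indicators:
logs = [
    "GET http://evil.example/payload.bin",
    "user alice login ok",
    "dns query badguy.test",
]
iocs = ["evil.example", "badguy.test"]
matches = retro_hunt(logs, iocs)
```

At real scale this runs as a batch query over loaded archive partitions rather than a Python loop, and any hit triggers loading incrementally older logs, exactly the widening search the paragraph above describes.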

Caution over infinite event log retention may be warranted, however. If the breached organization only has a couple years of logs, versus being able to trace breach inception to, say, four years earlier, their public disclosure to customers may sound worse to some ears (including regulators). For example, disclosing “we can confirm customers over the last two years are affected” is a weaker disclosure than “customers since July 4th 2015 are affected”. Finding the sweet-spot in log retention needs to be a board-level decision.

Having moved to cloud SIEM, CISOs also need to decide what logs should be included and what log settings should be used.

Ideally, all event logs should be passed to the cloud SIEM, because the AI and log analytics systems powering threat detection and automated response thrive on data. Additionally, including logs from the broadest spectrum of enterprise devices and applications will help reduce detection times and remove potential false positives, which increases overall confidence in the system’s recommendations.

Most applications and networked appliances allow for different levels of logging, scaling from errors only, through warnings and status messages, to full debugging information. In general, the greater the detail in the event logs, the greater the value they bring to cloud SIEM. In this way, upgrading from “normal” to “verbose” log settings can offer several threat response advantages – particularly when it comes to handling misconfigurations and criticality determination.

The symbiotic development of cloud SIEM and cloud AI innovation continues at an astounding pace. While cloud SIEM may be new for most organizations, its ability to harness the innate capabilities of public cloud is transforming security operations. Not only are threats being uncovered quicker and responses managed more efficiently, but continual advancements in the core AI make the technology more valuable while the costs of operating SIEM and storing data in the cloud continue to drop. This makes it possible for companies to make pragmatic use of the intelligent cloud by operating on a one-year window of hot data while getting value out of older data, stored cold, via twice-a-year threat hunts.

-- Gunter Ollmann

First Published: SecurityWeek - July 22, 2019

Tuesday, June 11, 2019

The Symbiosis Between Public Cloud and MSSPs

To the surprise of many, public cloud appears to be driving a renaissance in the adoption and advancement of managed security service providers (MSSPs).

For several years, the major public cloud providers have settled upon a regular rhythm of rolling out new security features for inclusion in their workload management tooling – adding new detections and alerting capabilities that, for want of a better description, are designed to help subscribers clean up an expanding corpus of horrible little mistakes that expose confidential information or make it easy for an attacker to abuse valuable resources and steal intellectual property. To my mind, this incremental rollout of embedded security features represents perhaps the single most valuable advantage of moving to the cloud.


Many of these security features are simple and non-intrusive. For example, they could alert the subscriber that they just created a publicly accessible data storage device that is using a poor administrator password, or that they’re about to spin up a virtual machine (VM) that hasn’t been patched or updated in nine months. Moving beyond alerts, the cloud security tooling could also propose (or force – if enforcing a compliance mandate) that a stronger password be used and that multi-factor authentication be applied by clicking a button or, in the case of a dated VM, auto-patch the OS and install an updated security suite on the image.

Getting these security basics done right and applied consistently across millions of subscribers and tens of millions of workloads has, year over year, proved that businesses operating in the public cloud are more secure than those that are solely on-premises. Combining the cloud’s security benefits with MSSP solutions unlocks even greater value, the most common of which are:

Small and medium businesses (SMBs), prior to moving to the cloud, were lucky to have a couple of IT support staff who between them probably managed three or four security technologies (e.g. anti-virus, firewall, VPN, and an anti-phishing gateway). Upon moving to the cloud, the IT team is presented with 20+ default running security services and another 50+ security product options available within a single click’s reach, and is simply overwhelmed by the volume of technology presented to it and the responsibility of managing such a diverse portfolio of security products.

The move to the cloud is not the flick of a switch, but a journey. The company’s in-house security team must continue to support the legacy on-premises security technology while learning and mastering an even larger set of cloud-based security options and technologies. These teams are stretched too thin and cannot afford the time to “retrain” for the cloud.

Businesses embracing DevOps strive to optimize value and increase the pace of innovation in the cloud. Operationalizing a DevOps culture typically requires the business to re-orient their internal security team and have them master SecDevOps. As in-house security expertise focuses on SecDevOps, daily security operational tasks and incident response require additional resourcing.

Locating, hiring, and retaining security talent is becoming more difficult – especially for SMBs. Companies moving to the cloud typically either hire new security expertise to carry the organization into the cloud or retrain their smartest and most valuable in-house security talent to try to backfill those “legacy” security roles.

Traditionally, an MSSP’s value lay in its ability to manage a portfolio of security products that it sold to and installed into customers’ environments. To ensure service-level quality and depth of knowledge, the most successful MSSPs would be highly selective and optimize the portfolio of security products they could support.

As their customers move workloads to the public cloud, larger MSSPs are retraining their technical teams in the cloud-native security offerings from the top public cloud providers. In tandem, the MSSPs are updating their internally developed SOC, NOC, and incident handling tools to embrace the default public cloud provider’s APIs and security products. 

At the same time, MSSPs appear to be doing better with hiring and retaining security expertise than SMBs. Not only are they able to pay higher salaries but, perhaps more importantly, they’re able to provide the career development paths not present in smaller businesses through a diverse spectrum of security challenges spread over multiple customer environments.

The parallel growth of default public cloud security capabilities and MSSP adoption offers a solution for the dearth of entry-level information security personnel and access to experienced incident responders. Combining cloud efficiencies with MSSP delivery creates advanced capabilities beyond what on-premises-only defense can achieve.

Smart MSSPs are embracing cloud operations for their own optimizations and service delivery. Many are taking advantage of the built-in AI and elastic compute capabilities to provide more advanced and personalized security services to customers – without needing to scale their pool of human experts. In this way businesses embracing the efficiencies of the public cloud and on-demand security expertise gain a critical advantage in working around the shortage of security professionals.

Today we have fewer horses than a century ago and consequently fewer trained farriers, but more qualified welders. As businesses move to the cloud and embrace MSSPs, it will become possible to deliver advanced capabilities that help fill entry-level security requirements, which account for the majority of security vacancies around the world. As a result, existing defenders can work on higher-level problems, enabling companies to cover more ground.

-- Gunter Ollmann

First Published: SecurityWeek - June 11, 2019

Tuesday, April 30, 2019

To Reach SIEM’s Promise, Take a Lesson From World War II

With two of the largest public cloud providers having launched their cloud Security Information and Event Management (SIEM) products and an inevitability that the remainder of the top 5 cloud providers will launch their own permutations some time this year, 2019 is clearly the year of the cloud SIEM.

For an on-premises technology that has been cursed with a couple decades of over-promising, under-achieving, and eye-watering cost escalation, modernizing SIEM into a cloud native security technology is a watershed moment for the InfoSec community.

The promise of finally being able to analyze all the logs, intelligence, and security data of an enterprise in real-time opens the door to many great and obvious things. We can let the SIEM vendors shout about all the obvious defensive value cloud SIEM brings. Instead, I’d like to focus on a less obvious but arguably more valuable long-term contribution that a fully capable cloud SIEM brings to enterprise defense.

Assuming an enterprise invests in bringing all their network logs, system events, flow telemetry, and security events and alerts together into the SIEM, businesses will finally be able to track threats as they propagate in an environment. Most importantly, they’ll be able to easily identify and map the “hotspots” of penetration and compromise, and remedy accordingly.

A unified view will also allow analysts and security professionals to pinpoint the spots where compromises remain hidden from peering eyes. As enterprises strive to deploy and manage an arsenal of threat detection, configuration management, and incident response tools in increasingly dynamic environments, visibility and coverage wax and wane with each employee addition, wireless router hook-up, application installation, or SaaS business connection. Those gaps, whether temporary or permanent, tend to attract an unfair share of compromise and harm.

In World War II, a gentleman by the name of Abraham Wald was a member of Columbia University’s Statistical Research Group (SRG). One problem SRG was tasked with was examining the distribution of damage to returning aircraft and advising on how to minimize bomber losses to enemy fire. A premise of the research was that the areas of the bombers that were most damaged, and therefore most susceptible to flak, should be redesigned and made more robust. Wald noted that such a study was biased toward only the aircraft that survived their missions and that, if you were to assume damage was more uniformly distributed across all aircraft, those that returned had actually been hit in the less vulnerable parts. By mapping the damage done to the surviving aircraft, the “undamaged” areas represented the most vulnerable parts of the aircraft that didn’t survive to return.


Wald’s revelations and work were seminal in the early days of Operational Research – a discipline of applying advanced analytical methods to help make better decisions. I expect cloud SIEM and the integration of AI systems to usher Operational Research and its associated disciplines into the information security sector. Securing an enterprise is a highly complex and dynamic problem and, because Operational Research is focused on optimizing solutions for complex decision-making problems, it is well suited to finding solutions that balance the multi-faceted aspects of business continuity and risk.

As we’re in the early days of cloud SIEM, I’ve yet to see much in the area of employing native AI to address the cold spots in enterprise threat visibility. The focus to date has been on applying AI to threat hunting and automating the reconstruction of the kill chain associated with an in-progress attack, supplementing that visualization with related threat intelligence and historical data artifacts.

Putting on a forecasting hat, I expect much of the immediate adoption and growth of cloud SIEM will be driven by the desire to realize the promises of on-premises SIEM - in particular, using supervised-learning systems to automate the detection and mitigation of the threats that have pestered security operations teams for twenty-plus years. Infusing SIEM natively on the cloud provider’s platform also creates end-to-end visibility into security-related events inside a business’ environment and folds in valuable intelligence from the cloud provider’s own operations - thereby harnessing the “cloud effects” of collective intelligence and removing the classic requirement for a “patient zero” to initiate an informed response.
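To make the supervised-learning idea concrete, here’s a toy sketch - a Naive Bayes classifier trained on analyst-labeled event features. The feature names, counts, and labels are all invented for illustration (this is not any vendor’s actual implementation); a production system would learn from millions of telemetry events, not four.

```python
from collections import Counter, defaultdict
import math

# Toy labeled telemetry: (feature counts, verdict) pairs an analyst has triaged.
TRAINING = [
    ({"failed_logins": 30, "new_process": 1, "off_hours": 1}, "malicious"),
    ({"failed_logins": 25, "new_process": 0, "off_hours": 1}, "malicious"),
    ({"failed_logins": 1,  "new_process": 0, "off_hours": 0}, "benign"),
    ({"failed_logins": 0,  "new_process": 1, "off_hours": 0}, "benign"),
]

def train(samples):
    """Fit a multinomial Naive Bayes model from labeled event counts."""
    counts = defaultdict(Counter)   # label -> feature -> total count
    labels = Counter()              # label -> number of training samples
    for feats, label in samples:
        labels[label] += 1
        for f, n in feats.items():
            counts[label][f] += n
    return counts, labels

def score(model, feats):
    """Return the most likely label for a new event (log-space, +1 smoothing)."""
    counts, labels = model
    vocab = {f for c in counts.values() for f in c}
    best, best_lp = None, float("-inf")
    for label in labels:
        total = sum(counts[label].values())
        lp = math.log(labels[label] / sum(labels.values()))  # class prior
        for f, n in feats.items():
            p = (counts[label][f] + 1) / (total + len(vocab))
            lp += n * math.log(p)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train(TRAINING)
print(score(model, {"failed_logins": 40, "off_hours": 1}))  # "malicious"
```

Primitive as it is, this is the supervised-learning pattern: humans label historical events, the model generalizes, and future events get triaged automatically.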

What I hope is, once engineering teams have matured those hunting and mitigation capabilities by weaving in AI decision systems and real-time data processing, the “science” of information security can finally come up for air and move forward.

Leveraging the inherent power and scale of the public cloud for real-time analytics of enterprise security data at streaming rates means that we’re on the cusp of finally calculating the ROI of each security technology deployed inside an enterprise. That alone should have many CISOs and CFOs jumping for joy. With all the enterprise security data flowing to one place, the cloud SIEM also becomes the anchor for IT operations - such as tracking the “mean time between failures” (MTBF) of protected systems, providing robustness metrics for software assets and system updates, and surfacing the latent risks of the environments being monitored.
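As a small illustration of the MTBF metric, here’s a sketch using invented failure timestamps for a single monitored system - the kind of event series a cloud SIEM could reconstruct from its telemetry.

```python
from datetime import datetime

# Hypothetical failure/compromise timestamps for one monitored system.
failures = [
    datetime(2019, 1, 3, 4, 10),
    datetime(2019, 2, 14, 22, 5),
    datetime(2019, 3, 29, 9, 40),
    datetime(2019, 4, 30, 16, 0),
]

def mtbf_hours(events):
    """Mean time between consecutive failure events, in hours."""
    if len(events) < 2:
        return None  # need at least two events to measure a gap
    gaps = [(b - a).total_seconds() / 3600
            for a, b in zip(events, events[1:])]
    return sum(gaps) / len(gaps)

print(round(mtbf_hours(failures), 1))  # prints 939.9
```

Tracked per system over time, a rising MTBF is a cheap, defensible way to show that a deployed control is actually paying for itself.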

75 years may separate World War II from cloud SIEM, but we’re on the cusp of being able to apply Abraham Wald’s hard-earned lessons to our latest adversarial conflict - the cyberwar.

-- Gunter Ollmann

First Published: SecurityWeek - April 30, 2019

Friday, September 21, 2018

The Security Talent Gap is Misunderstood and AI Changes it All

Despite headlines now at least a couple years old, the InfoSec world is still (largely) paying lip service to the lack of security talent and the growing skills gap.

The community is apt to quote and brandish the dire figures, but unless you're actually a hiring manager striving to fill low- to mid-level security positions, you're not feeling the pain - in fact, there's a high probability many see the problem as a net positive in terms of their own employment potential and compensation.

I see today's Artificial Intelligence (AI) and the AI-based technologies that'll be commercialized over the next 2-3 years as exacerbating the problem - but also offering up a silver-lining.

I've been vocal for decades that much of the professional security industry is, and should be, methodology based. And, by being methodology based, it can be reliably repeatable - whether in bug hunting, vulnerability assessment, threat hunting, or even incident response. If a reliable methodology exists, and the results can be consistently verified correct, then the process can be reliably automated. Nowadays, that automation lies firmly in the realm of AI - and the capabilities of these newly emerged AI security platforms are already reliably out-performing tier-one (e.g. 0-2 years' experience) security professionals.

In some security professions (such as auditing & compliance, penetration testing, and threat hunting), AI-based systems are already capable of performing at tier-two (i.e. 2-8 years' experience) levels for 80%+ of the daily tasks.


On one hand, these AI systems alleviate much of the problem related to the shortage and global availability of security skills at the lower end of the security professional ladder. So perhaps the much-touted and repeated shortage numbers don't matter - and extrapolating current shortages into future open positions is an overestimate.

However, if AI solutions consume the security roles and daily tasks equivalent to those of 8-year industry veterans, have we also created an insurmountable chasm for recent graduates and those who wish to transition and join the InfoSec professional ladder?

While AI is advancing the boundaries of defense and, frankly, an organization's ability to detect and mitigate threats has never been better (and will be even better tomorrow), there are still large swathes of the security landscape that AI has yet to solve. In fact, many of these new swathes have only opened up to security professionals because AI has made them accessible.

What I see in our AI Security future is more of a symbiotic relationship.

AIs will continue to speed up the discovery and mitigation of threats, getting better and more accurate along the way. It is inevitable that tier-two security roles will succumb and eventually be replaced by AI. What will also happen is that security professional roles will shift from the application of tools and techniques to being business risk advisers and supervisors. Understanding the business, communicating with colleagues in other operational facets, and prioritizing risk response are the intangibles that AI systems will struggle with.

In a symbiotic relationship, security professionals will guide and communicate these operations in terms of business needs and risk. Just as Internet search engines have replaced the voluminous Encyclopedia Britannica and Encarta, and the Dewey Decimal system, Security AI is evolving to answer any question a business may raise about defending their organization - assuming you ask the right question, and know how to interpret the answer.

With regard to the skills shortage of today - I truly believe that AI will be the vehicle to close that gap. But I also think we're in for a paradigm change in who we'll be welcoming into our organizations and employing in the future because of it.

I think that the primary beneficiaries of these next-generation AI-powered security professional roles will not be recent graduates. With a newly leveled playing field, I anticipate that more weathered and "life experienced" people will assume more of these roles.

For example, given the choice between a 19-year-old freshly minted graduate in computer science and a 47-year-old woman with 25 years of applied mechanical engineering experience in the "rust belt" of the US, the latter's life skills will inevitably be more applicable to making risk calls and communicating them to the business.

In some ways the silver lining may be for the middle America that has suffered and languished as technology moved on from coal mining and phone-book printing. It's quite probable that it will become the hot-spot for newly minted security professionals - leveraging their past (non-security) professional experiences, along with decades of people or business management and communication skills - and closing the missing security skills gap using AI.

-- Gunter

Saturday, April 21, 2012

Crimeware Immunity via Cloud Virtualization

There's a growing thought recently that perhaps remote terminal emulators and fully virtualized cloud-based desktops are the way to go if we're ever to overcome the crimeware menace.

In essence, what people are saying is that because their normal system can be compromised so easily, and criminals can install malicious software capable of monitoring and manipulating everything done on the victim's computer, perhaps we'd be better off if the computer/laptop/iPad/whatever was more akin to a dumb terminal that simply connected to a remote desktop instance - i.e. all the vulnerable applications and data are kept in the cloud, rather than on the user's computer itself.

It's not a particularly novel innovation - with various vendors having promoted this or related approaches for a couple of decades now - but it is being vocalized more frequently than ever.

Personally, I think it is a useful approach in mitigating much of today's bog-standard malware, and certainly some of the more popular DIY crimeware packs.

Some of the advantages to this approach include:
  1. The user's personal data isn't kept on their local machine. This means that should the device be compromised for whatever reason, this information couldn't be copied because it doesn't exist on the user's personal device.
  2. So many infection vectors target the Web browser. If the Web browser exists in the cloud, then the user's device will be safe - hopefully implying that whoever's hosting the cloud-based browser software is better at patch management than the average Joe.
  3. Security can be centralized in the cloud. All of the host-based and network-based defenses can be run by the cloud provider - meaning that they'll be better managed and offer a more extensive array of cutting-edge protection technologies.
  4. Any files downloaded, opened or executed, are done so within the cloud - not on the local user's device. This means that any malicious content never makes its way down to the user's device, so it could never get infected.
That sounds pretty good, and it would successfully counter the most common flaws that criminals exploit today to target and compromise their victims. However, like all proposed security strategies, it's not a silver bullet to the threat. If anything, it alters the threat landscape in a way that may be more advantageous for the more sophisticated criminals. For example, here are a couple of likely weaknesses with this approach:
  1. The end device is still going to need an operating system and network access. As such it will remain exposed to network-level attacks. While much of the existing cybercrime ecosystem has adopted "come-to-me" infection vectors (e.g. spear phishing, drive-by-download, etc.), the "old" network-based intrusion and automated worm vectors haven't gone away and would likely rear their ugly heads as the criminals make the switch back in response to cloud-based terminal hosting.
    As such, the device would still be compromised and it would be reasonable to expect that the criminal would promote and advance their KVM capabilities (i.e. remote keyboard, video and mouse monitoring). This would allow them to not only observe, but also inject commands as if they were the real user. The net result for the user and the online bank or retailer is that fraud is just as likely and probably quite a bit harder to spot (since they'd lose visibility of what the end device actually is - with everything looking like the amorphous cloud provider).
  2. The bad guys go where the money is. If the data is where they make the money, then they'll go after the data. If the data exists within the systems of the cloud provider, then that's what the bad guys will target. Cloud providers aren't going to be running any more magical application software than the regular home user, so they'll still be vulnerable to new software flaws and 0-day exploitation. This time though, the bad guys would likely be able to access a lot more data from a lot more people in a much shorter period of time.
    Yes, I'd expect the cloud providers to take more care in securing that data and have more robust systems for detecting things that go astray, but I also expect the bad guys to up their game too. And, based upon observing the last 20 years of cybercrime tactics and attack history, I think it's reasonable to assume that the bad guys will retain the upper-hand and be more innovative in their attacks than the defenders will.
I do think that, on average, more people would be more secure if they utilized cloud-based virtual systems. In the short term, that security improvement would be quite good. However, as more people adopted the same approach and shifted to the cloud, more bad guys would be forced to alter their attack tools and vectors.

I suspect that the bad guys would quickly be able to game the cloud systems and eventually obtain a greater advantage than they do today (mostly because of the centralized control of the data and homogeneity of the environment). "United we stand, divided we fall" would inevitably become "united we stand, united we fall."

Monday, September 7, 2009

Ollmann speaking at the ZISC Workshop

This week I'll be in Zurich speaking at the ETH ZISC workshop on Security in Virtualized Environments and Cloud Computing.

The title of my talk is "Not Every Cloud has a Silver Lining" - and it's meant to be a fun (but insightful) look at the biggest and baddest cloud computing environments currently in existence - the botnets.

If you happen to be in Zurich on Thursday morning, by all means, please drop by for the talk. The workshop runs Thursday to Friday.

Need more details on what I'm covering? Below is the abstract...

What’s the largest cloud computing infrastructure in existence today? I’ll give you a hint. It consists of arguably 20 million hosts distributed across more than 100 countries, and your computer may already be part of it whether you like it or not. It’s not under any single entity’s control, its sphere of influence is unregulated, and its operators have no qualms about sharing or selling your deepest cyber secrets.

The answer is botnets. They’re the largest cloud computing infrastructure out there and they’re only getting bigger and more invasive. Their criminal operators have had well over a decade to perfect their cloud management capabilities, and there’s a lot to learn from their mastery.

This session will look at the evolution of globe-spanning botnets. How does their command and control hierarchy really work? How are malicious activities coordinated? How are botnets seeded and nurtured? And how do they make their cloud invulnerable to shutdown?