
Tuesday, November 3, 2020

Securing Data-in-Use With Confidential Computing

Confidential Computing Capabilities and Services Will be a Competitive Differentiator for Cloud Providers

By now, most organizations should have mastered the first two pillars of data protection – securing data-at-rest and data-in-transit – across their hybrid enterprise environments. The third data protection pillar – securing data-in-use (i.e., protecting and encrypting data while it is in memory and during computation) – has proved elusive, but is now being addressed through the transformational motion commonly referred to as confidential computing.

Technology business leaders are ideally pursuing transformation plans that assume ubiquitous confidential computing availability and data-in-use security will be a cloud-native default within five years.

For many organizations, completing their digital transformation journey has been conditional on being able to categorically ensure that absolutely no one – not a trusted system administrator, the OS developer, the cloud provider, law enforcement, malicious insiders, or an attacker armed with powerful zero-day exploits – can ever secretly access or manipulate the data and intellectual property they entrust to the cloud. Consequently, as the third pillar of data security, confidential computing will increasingly be a prerequisite for any cloud-deployed business application.


The technologies, platforms, and architectures that enable confidential computing have evolved at an astounding pace – especially when compared with the decades it has taken for data-at-rest encryption to evolve from password-protected ZIP files in the early 1990s to today’s enabled-by-default hardware-based encryption locked to the physical compute system, or the continued effort to transition data-in-transit defaults from HTTP to secure HTTPS (preferably using TLS v1.3).

The global pandemic has not held back public cloud advancements and new service offerings in confidential computing. Virtualization infrastructure for confidential computing built atop hardware-based trusted execution environments (TEEs) on servers that implement Intel Software Guard Extensions (Intel SGX) is generally available, along with previews of confidential VMs using hardware-based TEEs on servers supporting AMD’s secure encrypted virtualization (AMD SEV) extension. In parallel, confidential computing options have begun extending across cloud services to embrace Kubernetes confidential nodes, always-encrypted SQL databases, confidential machine learning interfaces, HSM key management, and IoT edge compute.
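As a practical aside, on Linux hosts these TEE capabilities surface as CPU feature flags (e.g. `sgx` for Intel SGX, `sev` and `sev_es` for AMD SEV on supported kernels). A minimal illustrative sketch – parsing a sample `/proc/cpuinfo` flags line rather than a live one, since flag availability varies by hardware and kernel version – might look like:

```python
# Illustrative only: on supported Linux kernels, TEE capability appears as
# CPU flags in /proc/cpuinfo ('sgx' for Intel SGX, 'sev'/'sev_es' for AMD SEV).
# A sample flags line stands in for a live read of /proc/cpuinfo.
SAMPLE_FLAGS_LINE = "flags\t\t: fpu vme aes sgx sgx_lc"

def tee_flags(cpuinfo_flags_line):
    """Return which TEE-related CPU flags are present in a cpuinfo flags line."""
    flags = set(cpuinfo_flags_line.split(":", 1)[1].split())
    return sorted(flags & {"sgx", "sev", "sev_es"})

print(tee_flags(SAMPLE_FLAGS_LINE))  # ['sgx']
```

On a real host you would read `/proc/cpuinfo` and feed its `flags` line through the same filter; the presence of a flag indicates hardware support, not that the feature is enabled or exposed to your workload.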

It can be difficult for security leaders to keep pace with underlying hardware advances and their applicability. For example, the memory integrity protection features of Intel SGX are well suited to highly security-sensitive but small workloads, while AMD SEV is useful for “lift and shift” of existing complex or legacy applications and services without necessarily refactoring existing code. Meanwhile, Intel’s trust domain extensions (Intel TDX) will enable hardware-isolated virtual machines (called trust domains), and AMD’s secure encrypted virtualization-encrypted state (SEV-ES) will help ensure that a guest VM’s CPU register state is encrypted whenever the VM stops running, so the hypervisor cannot read or tamper with it. Many more hardware advances from Intel, AMD, Arm, NVIDIA, and others that help mitigate new and potentially intrusive memory, compute, and attestation threats will become available via the major cloud providers in the coming year or two.

Clearly, confidential computing is in a transitional period as these new hardware-enabled solutions get adopted and deployed by major cloud providers and their most advanced customers. 

While it is easy to get lost in the hardware-based security feature advances the silicon providers are delivering, the long-term assumption security leaders should be planning for is that the physical infrastructure will verifiably guarantee that enclaved processes, memory, and the data they hold or manipulate will be secure from any and all prying eyes – in particular, the cloud and software stack providers – and that all cloud services (from leading public cloud providers) will operate in a secure data-in-use mode by default. It is reasonable to assume that within five years the term “confidential compute” will become superfluous and an assumed native component of all cloud services.

In the meantime, confidential computing capabilities and services will be a competitive differentiator for the large cloud providers. 

As the underlying hardware advances and public cloud providers extend data-in-use security, integrity, and attestation capabilities across their customer-available services, business technology leaders will need to assess each cloud service individually and assume some short period of cloud-specific lock-in for the custom applications their own business will engineer.

Opportunities ranging from anti-money-laundering and customer analytics in financial services, to privacy-preserving collaborative disease diagnostics and drug development in healthcare, to joint intelligence analysis and anti-corruption work across government agencies, are a sampling of the newly achievable privacy- and confidentiality-preserving solutions open to organizations adopting cutting-edge confidential computing-enabled cloud solutions.

Some enterprises may be content to wait until data-in-use security becomes ubiquitous to complete their digital transformation. While confidential compute capabilities are expanding and new secure cloud services are “fluid” in their evolution, there is a clear window for technology-agile businesses to innovate and partner with their cloud provider to bring new categories of secure products to market ahead of both competitors and regulators.

-- Gunter Ollmann

First Published: SecurityWeek - November 3, 2020

Friday, November 20, 2015

Battling Cyber Threats Using Lessons Learned 165 Years Ago

When it comes to protecting the end user, the information security community is awash with technologies and options. Yet, despite the near endless array of products and innovation focused on securing that end user from an equally broad and expanding array of threats, the end user remains more exposed and vulnerable than at any other period in the history of personal computing.

Independent of these protection technologies (or possibly because of them), we’ve also tried to educate the user in how best (i.e. more safely) to browse the Internet and take actions to protect themselves. With a cynical eye, it’s almost like a government handing out maps to their citizens and labeling streets, homes, and businesses that are known to be dangerous and shouldn’t be visited – because not even the police or military have been effective there.

Today we instruct our users (and at home, our children) to be careful what they click on, what pages or sites they visit, what information they share, and what files they download. These instructions are not just onerous and confusing; more often than not they’re irrelevant – even after following them to the letter, the user can still fall victim.

The fact that a user can’t click on whatever they want, browse wherever they need to, and open what they’ve received, should be interpreted as a mile-high flashing neon sign saying “infosec has failed and continues to fail” (maybe reworded with a bunch of four-letter expletives for good measure too).
For decades now, thousands of security vendors have brought to market technologies that, in effect, are predominantly tools designed to fill vulnerable and exploited gaps in the operating systems at the core of the devices end users rely upon. If we’re ever to make progress against the threat and reach the utopia of users being able to “carelessly” use the Internet, those operating systems must get substantially better.

In recent years, great progress has been made on the OS front – primarily in smartphone OSes. The operating systems running on our most pocket-friendly devices are considerably more secure than those we rely upon for our PCs, notebooks, or servers at home or work. There are a bunch of reasons why, of course – and I’ll not get into that here – but there’s still so much more that can be done.
I do believe that there are many lessons to be learned from the past; lessons that can help guide future developments and technologies. Reaching back a little further into the past than usual – way before the Internet, and way before computers – there are a couple of related events that could shine a brighter light on newer approaches to protecting the end user.

Back in 1850 a Hungarian doctor named Ignaz Semmelweis was working in the maternity clinic at the General Hospital in Vienna, where he noted that many women in the maternity wards were dying from puerperal fever – commonly known as childbed fever. He studied two medical wards in the hospital – one staffed by all-male doctors and medical students, and the other by female midwives – and counted the number of deaths in each ward. What he found was that death from childbirth was five times higher in the ward with the male doctors.

Dr. Semmelweis tested numerous hypotheses as to the root cause of the deadly difference – ranging from mothers giving birth on their sides versus their backs, through to the route priests traversed the ward and the bells they rang. His Eureka moment appears to have come after the death of a male pathologist who, upon pricking his finger while doing an autopsy on a woman who had died of childbed fever, succumbed to the same fate (apparently being a pathologist in the mid-19th century was not conducive to a long life). Joining the dots, Dr. Semmelweis noted that the male doctors and medical students were doing autopsies while the midwives were not, and that “cadaverous particles” (this being a time before germs were known) were being spread to the birthing mothers.

Dr. Semmelweis’ medical innovation? “Wash your hands!” The net result, after doctors and midwives started washing their hands (in lime water, then later in chlorine), was that the rate of childbed fever dropped considerably.

Now, if you’re in the medical trade, washing your hands multiple times per day in chlorine or (by the late 1800s) carbolic acid, you’ll note that it isn’t so good for your skin or hands.

In 1890, William Stewart Halsted of Johns Hopkins University asked the Goodyear Tire and Rubber Company if they could make a rubber glove that could be dipped in carbolic acid to protect the hands of his nurses – and so the first sterilized medical gloves were born. The first disposable latex medical gloves, manufactured by Ansell, didn’t appear until 1964.

What does this foray into 19th century medical history mean for Internet security, I hear you say? Simple really: every time the end user needs to use a computer to access the Internet and do work, it needs to be clean and pristine. Whether that means a clean new virtual image (e.g. “wash your hands”) or a disposable environment that sits on top of the core OS and authorized application base (e.g. “disposable gloves”), the assumption needs to be that nothing the user encounters over the Internet can persist on the device they’re using after they’ve finished their particular actions.
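The “disposable gloves” idea above can be sketched in a few lines. This is a toy illustration only – a throwaway directory is not a real isolation boundary; production solutions use VMs or containers – but it captures the contract: the session gets a fresh workspace, and nothing in it survives the session:

```python
import os
import shutil
import tempfile

def run_in_disposable_env(task):
    """Run task(workdir) inside a throwaway directory that never persists."""
    workdir = tempfile.mkdtemp(prefix="session-")  # fresh "gloves" per session
    try:
        return task(workdir)
    finally:
        shutil.rmtree(workdir, ignore_errors=True)  # dispose of everything

# Example: anything "downloaded" during the session vanishes afterward.
path_holder = {}

def fake_browse(workdir):
    download = os.path.join(workdir, "download.bin")
    with open(download, "wb") as f:
        f.write(b"untrusted content")
    path_holder["path"] = download
    return os.path.exists(download)

existed_during = run_in_disposable_env(fake_browse)
print(existed_during, os.path.exists(path_holder["path"]))  # True False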

This obviously isn’t a solution for every class of cyber threat out there, but it’s an 80% solution – just as washing your hands and wearing disposable gloves as a triage nurse isn’t going to protect you (or your patient) from every post-surgery ailment.

Operating system providers or security vendors that can seamlessly and automatically provision a clean, pristine environment for the end user every time they need to conduct activities on or related to the Internet will fundamentally change the security game – altering the battlefield for attackers and the tools of their trade.

Exciting times ahead.


-- Gunter

Saturday, April 21, 2012

Crimeware Immunity via Cloud Virtualization

There's a growing school of thought recently that perhaps remote terminal emulators and fully virtualized cloud-based desktops are the way to go if we're ever to overcome the crimeware menace.

In essence, what people are saying is that because their normal system can be compromised so easily, and because criminals can install malicious software capable of monitoring and manipulating everything done on the victim's computer, perhaps we'd be better off if the computer/laptop/iPad/whatever was more akin to a dumb terminal that simply connected to a remote desktop instance - i.e. all the vulnerable applications and data are kept in the cloud, rather than on the user's computer itself.

It's not a particularly novel innovation - with various vendors having promoted this or related approaches for a couple of decades now - but it is being vocalized more frequently than ever.

Personally, I think it is a useful approach for mitigating much of today's bog-standard malware, and certainly some of the more popular DIY crimeware packs.

Some of the advantages to this approach include:
  1. The user's personal data isn't kept on their local machine. This means that should the device be compromised for whatever reason, this information couldn't be copied because it doesn't exist on the user's personal device.
  2. So many infection vectors target the Web browser. If the Web browser exists in the cloud, then the user's device will be safe - hopefully implying that whoever's hosting the cloud-based browser software is better at patch management than the average Joe.
  3. Security can be centralized in the cloud. All of the host-based and network-based defenses can be run by the cloud provider - meaning that they'll be better managed and offer a more extensive array of cutting-edge protection technologies.
  4. Any files downloaded, opened, or executed are done so within the cloud - not on the local user's device. This means that any malicious content never makes its way down to the user's device, so it could never get infected.
That sounds pretty good, and it would successfully counter the most common flaws that criminals exploit today to target and compromise their victims. However, like all proposed security strategies, it's not a silver bullet to the threat. If anything, it alters the threat landscape in a way that may be more advantageous for the more sophisticated criminals. For example, here are a couple of likely weaknesses with this approach:
  1. The end device is still going to need an operating system and network access. As such it will remain exposed to network-level attacks. While much of the existing cybercrime ecosystem has adopted "come-to-me" infection vectors (e.g. spear phishing, drive-by-download, etc.), the "old" network-based intrusion and automated worm vectors haven't gone away and would likely rear their ugly heads as the criminals make the switch back in response to cloud-based terminal hosting.
    As such, the device would still be compromised and it would be reasonable to expect that the criminal would promote and advance their KVM capabilities (i.e. remote keyboard, video and mouse monitoring). This would allow them to not only observe, but also inject commands as if they were the real user. The net result for the user and the online bank or retailer is that fraud is just as likely and probably quite a bit harder to spot (since they'd lose visibility of what the end device actually is - with everything looking like the amorphous cloud provider).
  2. The bad guys go where the money is. If the data is where they make the money, then they'll go after the data. If the data exists within the systems of the cloud provider, then that's what the bad guys will target. Cloud providers aren't going to be running any more magical application software than the regular home user, so they'll still be vulnerable to new software flaws and 0-day exploitation. This time though, the bad guys would likely be able to access a lot more data from a lot more people in a much shorter period of time.
    Yes, I'd expect the cloud providers to take more care in securing that data and have more robust systems for detecting things that go astray, but I also expect the bad guys to up their game too. And, based upon observing the last 20 years of cybercrime tactics and attack history, I think it's reasonable to assume that the bad guys will retain the upper-hand and be more innovative in their attacks than the defenders will.
I do think that, on average, more people would be more secure if they utilized cloud-based virtual systems. In the short term, that security improvement would be quite good. However, as more people adopted the same approach and shifted to the cloud, more bad guys would be forced to alter their attack tools and vectors.

I suspect that the bad guys would quickly be able to game the cloud systems and eventually obtain a greater advantage than they do today (mostly because of the centralized control of the data and homogeneity of the environment). "United we stand, divided we fall" would inevitably become "united we stand, united we fall."