Monday, October 6, 2014

If Compliance were an Olympic Sport

First published on the NCC Group blog - 6th October 2014...

It probably won’t raise any eyebrows to know that for practically every penetration tester, security researcher, or would-be hacker I know, nothing is more likely to make their eyes glaze over and send them to sleep faster than a discussion of Governance, Risk, and Compliance (GRC); yet the dreaded “C-word” (Compliance) is a core tenet of modern enterprise security practice.

Security professionals who come from an “attacker” background often contend that Compliance represents the lowest hurdle – with some vehemently arguing that too many security standards appear to be developed by committee and only reach fruition through consensus on the minimum criteria. Meanwhile, there is continuous pressure on businesses to master their information system security practices and reach an acceptable compliance state.

Compliance against public standards has been the norm for the majority of brand-name businesses for over a decade now, and there’s been a general pull-through elevation of security performance (or should that be appreciation?) for other businesses riding the coat-tails of the big brands. But is it enough?

When I think of big businesses competing against each other in any industry vertical, I tend to draw parallels with international sporting events – particularly the Olympic Games. In my mind, each industry vertical is analogous to a different sporting event. Just as athletes may specialise in the marathon or the javelin, businesses may specialise in financial services or vehicle assembly, with each vertical - each sport - requiring a different level of specialisation and training.

While professional athletes may target the Olympic Games as the ultimate expression of their career, they must first navigate their way through the ranks and win at local events and races. In order to achieve success they must, of course, also train relentlessly. And, as a former sporting coach of mine used to say, “the harder you train, the easier you’ll succeed.”

I see compliance as a training function for businesses. Being fully compliant is like spending three hours a day running circuits around the track in preparation for becoming a marathon runner. Compliance with a security policy or standard isn’t about winning the race; it’s about making sure you’re prepared and ready to run the race when it’s time to do so.

That said, not all compliance policies or standards are equal. For example, I only half-heartedly jest when I say that PCI compliance is the sporting equivalent of being able to tie your shoe-laces. Although it’s not much in the grand scheme of security, and while it’s not going to help you win any races, it’s one less thing to trip you up.

If compliance standards represent the various training regimes that an organisation could choose to follow, then “best practices” may as well be interpreted as the hiring of a professional coach; for it’s the coach’s responsibility to optimise the training, review the latest intelligence and scientific breakthroughs, and to push the athlete on to ever greater success.

In the world of information security, striving to meet (and exceed) industry best practices allows an organisation to counter a much broader range of attacks, to be better prepared for more sophisticated threats and to be more successful and efficient when recovering from the unexpected. It’s like elevating your sporting preparedness from limping in to 64th place in the local high school 5k run due to a cramp in your left leg, to being fit and able to represent your country at the Olympic Games.

My advice to organisations that don’t want to find themselves listed in some future breach report, or to watch their customers migrate to competitors because of yet another embarrassing security incident, or trip over their untied shoe-laces, is to move beyond the C-word and adopt best practices. Constant commitment and adherence to best security practices goes a long way to removing unnecessary risk from a business.

However, take caution. “Best practice” in security isn’t a static goal. The coach’s playbook is always developing. As the threat landscape evolves and a litany of new technologies allows you to interface and interact with clients and customers in novel and productive ways, best practices in security will also evolve and improve over time as new exercises and techniques are added to the roster.

Improve the roster and develop the playbook and you’re sure to beat those looming threats and push your business and customer service through the finish line.

The Pillars of Trust on the Internet

As readers may have seen recently, I've moved on from IOActive and joined NCC Group. Here is my first blog under the new company... first published September 15th 2014...

The Internet of today in many ways resembles the lawless Wild West of yore. There are the land-rushes as corporations and innovators seek new and fertile grounds, over yonder there are the gold-diggers panning for nuggets in the flow of big data, and crunching under foot are the husks of failed businesses and discarded technology.

For many years various star-wielding sheriffs have tried to establish a brand of law and order over the Internet, but for every step forward a menagerie of robbers and scoundrels have found new ways to pick-pocket and harass those trying to earn a legitimate crust. Does it really have to continue this way?

Over the years I’ve seen many technologies invented and embraced with the goal of thwarting the attackers and miscreants that inhabit the Internet.

I’m sure I’m not alone in the feeling that with each new threat (or redefinition of a threat) that comes along someone volunteers another “solution” that’ll provide temporary relief; yet we continue to find ourselves in a never-ending swatting match with the tentacles of cyber crime.

With so many threats to be faced and a slew of jargon to wade through, it shouldn’t be surprising to readers that most organisations (and their customers) often appear baffled and bewildered when they become victims of cyber crime – whether that is directly or indirectly.

While the newspapers and media outlets may discuss the scale of stolen credit cards from the latest batch of mega-breaches and strive to provide common sense (and utterly ignored) advice on password sophistication and how to be mindful of what we’re clicking on, the dynamics of the attack are easily glossed over and subsequently lost to those that are in the best position to mitigate the threat.

The vast majority of successful breaches begin with deception, and depend upon malware. The deception tactics usually take the form of social engineering – such as receiving an email pretending to be an invoice from a trusted supplier – with the primary objective being the installation of a malicious payload.

The dynamics of the trickery and the exploits used to install the malware are ingeniously varied but, all too often, it’s the capabilities of the malware that dictate the scope and persistence of the breach.

While there exists a plethora of technologies that can be layered one atop another like some gargantuan wedding cake to combat each tactic, tool, or subversive technique the cyber criminal may seek to employ in their exploitation of a system, doing so successfully is as difficult as attempting to stack a dozen feral cats – and just as likely to leave you scratched and scarred.

In the past I’ve publicly talked about the paradigm change in the way organisations have begun to approach breaches… to accept that they will happen repeatedly and to prioritise on the rapid (and near instantaneous) detection and automated remediation of the compromised systems, rather than waste valuable cycles analysing yesterday’s malware or exploits, or churning over attribution possibilities.

But I think there’s a second paradigm change underway – one which doesn’t attempt to change the entire Internet, but instead focuses on mitigating the deception tactics used by the attackers at the root and creating a safe and trusted environment to conduct business within.

I think the time has come to build (rather than give lip-service to) a safe corner of the Internet and expand from there. It’s the reason I’ve joined NCC Group. It is my hope and aspiration that the Domain Services division will provide that anchor point, that Rock of Gibraltar, that technical credibility and wherewithal necessary to restore trust in doing business over the Internet.

A core tenet of building a trusted and safe platform for business is to start with the core building blocks of the Internet. The Domain Name System (DNS) and domain registration lie at the very heart of the Internet and yet, from a security perspective, they’ve been largely neglected as a means of neutering the most common and vile social engineering vectors of attack.

Couple tight control of domain registration and DNS with perpetual threat monitoring and scanning, merge it with vigilant policing of secure configuration policies and best practices (not some long-in-the-tooth consensus-strained minimum standards of a decade ago), and you have the pillars necessary to elevate a corner of the Internet beyond the reach of the general lawlessness that’s plaguing business today. And that’s before we get really innovative.
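To make “vigilant policing of secure configuration policies” slightly more concrete: one of the simplest checks a domain-monitoring job can run is verifying that every managed domain publishes a strict SPF policy (one ending in “-all”, which tells receiving mail servers to hard-fail mail from unauthorised senders – a direct counter to spoofed-sender deception). The sketch below only parses a TXT record string; real tooling would query DNS and police DKIM and DMARC as well, and the example records here are invented for illustration.

```python
def spf_policy_is_strict(txt_record: str) -> bool:
    """Return True if a TXT record is an SPF policy that hard-fails
    mail from unauthorised senders (i.e. it ends with '-all')."""
    parts = txt_record.strip().split()
    if not parts or parts[0].lower() != "v=spf1":
        return False  # not an SPF record at all
    return parts[-1] == "-all"

# Example records a domain-monitoring job might evaluate:
print(spf_policy_is_strict("v=spf1 include:_spf.example.net -all"))  # True
print(spf_policy_is_strict("v=spf1 a mx ~all"))                      # False (soft fail only)
print(spf_policy_is_strict("google-site-verification=abc123"))       # False (not SPF)
```

A monitoring service would run checks like this perpetually across every registered domain, flagging any drift away from the agreed secure configuration.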

It wasn’t guns or graves that tamed the West of yore; it was the juggernaut of technology that began with railway lines and the telegraph. The mechanisms for restoring business trust in the Internet are now in play. Exciting times lie ahead.

Thursday, July 31, 2014

Smart homes still not "smarter than a fifth-grader"

Smart Home technologies continue to make their failures headline news. Only yesterday the BBC ran the story "Smart home kit proves easy to hack, says HP study", laying out a litany of vulnerabilities and weaknesses uncovered in popular internet-connected home gadgetry by HP's Fortify security division. If nothing else the story proves that household vulnerabilities are now worthy of attention - no matter how late HP and the BBC are to the party.


As manufacturers try to figure out how to cram internet connectivity into their (formerly) inanimate appliances and turn them into something you can manage from your iPad while flying from Atlanta to Seattle over the in-air WiFi system, you've got to wonder "do we deserve this?"

I remember a study done several years ago about consumer purchasing of Blu-ray players. The question at the time was why some brands of player were outselling others when they were all at the same price point and did the same thing. Was brand loyalty or familiarity a critical factor? The answer turned out to be much simpler. The Blu-ray player with the highest sales simply had a longer list of "functions" than the competitors. If one player's box listed 50 carefully bulleted pieces of techno-jargon and another listed 55 - then obviously the latter had to be better, even if the consumer had no understanding of what more than a dozen of those bullets even meant.

In many ways both the manufacturers and consumers of Smart Home technologies continue to fall into that same trap. Choosing a new LCD HiDef TV is mostly about long lists of word-soup techno-babble, but that babble now extends into all the new features your replacement TV can perform via the Internet. How did we ever survive before we could issue a command via the TV (hidden 5 levels deep in menus and after 3 agonizing minutes of waiting for the various apps to initialize) in order to make the popcorn machine switch from unsalted to salted butter?

For as much thought as goes in to the buying decision over one long list of features against another, the manufacturers of Smart Home devices appear to exhibit about the same effort in securing the features they're trying to cram in. That is to say, very little.

In some ways it's not even the product engineering teams that are at fault. It's more than likely they've been honing their product for 20+ years from an electrical engineering perspective. But now they've been forced to find some way of wedging a TCP/IP stack into the device and construct a mobile Web app for its remote management. They aren't software engineers, they certainly aren't cyber-security engineers, and you can bet they've never had to adhere to a Security Development Lifecycle (SDL).

How do I characterize the state of Smart Home device security today? I think Richard O'Brien summed it up best in his stage musical The Rocky Horror Show - Let's do the Time Warp again! The overall state of Smart Home security today is as if we've jumped back 20 years in time to Windows 95.

Wednesday, March 26, 2014

A Bigger Stick To Reduce Data Breaches

On average I receive a postal letter from a bank or retailer every two months telling me that I’ve become the unfortunate victim of a data theft or that my credit card is being re-issued to protect against future fraud. When I quiz my friends and colleagues on the topic, it would seem that they too suffer the same fate on a recurring schedule. It may not be that surprising to some folks. 2013 saw over 822 million private records exposed according to the folks over at DatalossDB – and that’s just the ones that were disclosed publicly.

It’s clear to me that something is broken and it’s only getting worse. When it comes to the collection of personal data, too many organizations have a finger in the pie and are ill-equipped (or unprepared) to protect it. In fact I’d question why they’re collecting it in the first place. All too often these organizations – of which I’m supposedly a customer – are collecting personal data about “my experience” doing business with them and are hoping to figure out how to use it to their profit (effectively turning me into a product). If these corporations were some bloke visiting a psychologist, they’d be diagnosed with a hoarding disorder. For example, consider the criteria the DSM-5 diagnostic manual uses to identify the disorder:

  • Persistent difficulty discarding or parting with possessions, regardless of the value others may attribute to these possessions.
  • This difficulty is due to strong urges to save items and/or distress associated with discarding.
  • The symptoms result in the accumulation of a large number of possessions that fill up and clutter active living areas of the home or workplace to the extent that their intended use is no longer possible.
  • The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.
  • The hoarding symptoms are not due to a general medical condition.
  • The hoarding symptoms are not restricted to the symptoms of another mental disorder.

Whether or not the organizations hoarding personal data know how to profit from it, it’s clear that even the biggest of them are increasingly inept at protecting it. The criminals that are pilfering the data certainly know what they’re doing. The gray market for identity laundering has expanded phenomenally since I talked about it at Black Hat in 2010.

We can moan all we like about the state of the situation now, but we’ll be crying in the not too distant future when statistically we progress from being victims of data loss to being victims of (unrecoverable) fraud.

The way I see it, there are two core components to dealing with the spiraling problem of data breaches and the disclosure of personal information. We must deal with the “what data are you collecting and why?” questions, and incentivize corporations to take much more care protecting the personal data they’ve been entrusted with.

I feel that the data hoarding problem can be dealt with fairly easily. At the end of the day it’s about transparency and the ability to “opt out”. If I were to choose a role model for making a sizable fraction of this threat go away, I’d look to the basic components of the UK’s Data Protection Act as the cornerstone of a solution – especially here in the US. I believe the key components of personal data collection should encompass the following:

  • Any organization that wants to collect personal data must have a clearly identified “Data Protection Officer” who not only is a member of the executive board, but is personally responsible for any legal consequences of personal data abuse or data breaches.
  • Before data can be collected, the details of the data sought for collection, how that data is to be used, how long it would be retained, and who it is going to be used by, must be submitted for review to a government or legal authority. I.e. some third-party entity capable of saying this is acceptable use – a bit like the ethics boards used for medical research etc.
  • The specifics of what data a corporation collects and what they use that data for must be publicly visible. Something similar to the nutrition labels found on packaged foods would likely be appropriate – so the end consumer can rapidly discern how their private data is being used.
  • Any data being acquired must include a date of when it will be automatically deleted and removed.
  • At any time any person can request a copy of any and all personal data held by a company about themselves.
  • At any time any person can request the immediate deletion and removal of all data held by a company about themselves.
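The governance points above could even be made machine-readable – the “nutrition label” idea taken literally. As a purely illustrative sketch (the class and field names below are my own invention, not drawn from any regulation or standard), a disclosure record might bundle what is collected, why, who sees it, and the mandatory auto-deletion date:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DataCollectionDisclosure:
    """A hypothetical machine-readable 'nutrition label' for personal data."""
    data_items: list        # what is collected, e.g. ["email", "purchase history"]
    purpose: str            # why it is collected
    shared_with: list       # who else receives it
    collected_on: date
    retention_days: int     # governance rule: every record has a deletion date

    @property
    def delete_after(self) -> date:
        return self.collected_on + timedelta(days=self.retention_days)

    def is_expired(self, today: date) -> bool:
        return today >= self.delete_after

label = DataCollectionDisclosure(
    data_items=["email", "purchase history"],
    purpose="order fulfilment",
    shared_with=["payment processor"],
    collected_on=date(2014, 1, 1),
    retention_days=365,
)
print(label.delete_after)                  # 2015-01-01
print(label.is_expired(date(2014, 6, 1))) # False - still within retention
```

A regulator (or the consumer) could then audit the label directly, and automated tooling could enforce the deletion date without relying on corporate goodwill.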

If such governance existed for the collection and use of personal data, then the remaining big item is enforcement. You’d hope that the morality and ethics of corporations would be enough to ensure they protected the data entrusted to them with the vigor necessary to fight off the vast majority of hackers and organized crime, but this is the real world. Apparently the “big stick” approach needs to be reinforced.

A few months ago I delved into how the fines being levied against organizations that had been remiss in doing all they could to protect their customers’ personal data should be bigger and divvied up. Essentially I’d argue that half of the fine should be pumped back into the breached organization and used for increasing their security posture.

Looking at the fines being imposed upon the larger organizations (that could have easily invested more in protecting their customers’ data prior to their breaches), the amounts are laughable. No noticeable financial pain occurs, so why should we be surprised if (and when) it happens again? I’ve become a firm believer that the fines businesses incur should be based upon a percentage of valuation. Why should a twenty-billion-dollar business face the same fine for losing 200,000,000 personal records as a ten-million-dollar business does for losing 50,000 personal records? If the fine were something like two percent of valuation, I can tell you that the leadership of both companies would focus more firmly on the task of keeping your data and mine much safer than they do today.
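To put numbers on that comparison (the two-percent rate is purely a worked example, not a proposal for a specific statute):

```python
def breach_fine(valuation: float, rate: float = 0.02) -> float:
    """Fine computed as a fixed percentage of company valuation."""
    return valuation * rate

# The two hypothetical businesses from the argument above:
big_co = breach_fine(20_000_000_000)   # $20bn business -> $400m fine
small_co = breach_fine(10_000_000)     # $10m business  -> $200k fine
print(f"${big_co:,.0f} vs ${small_co:,.0f}")  # $400,000,000 vs $200,000
```

The penalty scales with what the business can actually absorb, so the deterrent stings equally at both ends of the market instead of being a rounding error for the giant and an extinction event for the minnow.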

-- Gunter Ollmann

First Published: IOActive Blog - March 26, 2014

Thursday, February 6, 2014

An Equity Investor’s Due Diligence

Information technology companies constitute the core of many investment portfolios nowadays. With so many new startups popping up and some highly visible IPOs and acquisitions by public companies egging things on, many investors are clamoring for a piece of the action and looking for new ways to rapidly qualify or disqualify an investment; particularly so when it comes to the hottest of hot investment areas – information security companies.

Over the years I’ve found myself working with a number of private equity investment firms – helping them to review the technical merits and implications of products being brought to the market by new security startups. In most cases it’s not until the B or C investment rounds that the money being sought by the fledgling company starts to get serious for the investors I know. If you’re going to be handing over money in the five to twenty million dollar range, you’re going to want to do your homework on both the company and the product opportunity.

Over the last few years I’ve noted that a sizable number of private equity investment firms have built into their portfolio review the kind of technical due diligence traditionally associated with the formal acquisition processes of Fortune-500 technology companies. It would seem to me that the $20,000 to $50,000 price tag for a quick-turnaround technical due diligence report is proving to be a valuable investment in a somewhat larger investment strategy.

When it comes to performing the technical due diligence on a startup (whether it’s a security or social media company for example), the process tends to require a mix of technical review and tapping past experiences if it’s to be useful, let alone actionable, to the potential investor. Here are some of the due diligence phases I recommend, and why:

  1. Vocabulary Distillation – For some peculiar reason new companies go out of their way to invent their own vocabulary as descriptors of their value proposition, or they go to great lengths to disguise the underlying processes of their technology with what can best be described as word-soup. For example, a “next-generation big-data derived heuristic determination engine” can more than adequately be summed up as “signature-based detection”. Apparently using the word “signature” in your technology description is frowned upon and the product management folks avoid using the word (however applicable it may be). Distilling the word soup is a key component of being able to compare apples with apples.
  2. Overlapping Technology Review – Everyone wants to portray their technology as unique, ground-breaking, or next generation. Unfortunately, when it comes to the world of security, next year’s technology is almost certainly a progression of the last decade’s worth of invention. This isn’t necessarily bad, but it is important to determine the DNA and hereditary path of the “new” technology (and subcomponents of the product the start-up is bringing to market). Being able to filter through the word-soup of the first phase and determine whether the start-up’s approach duplicates functionality from IDS, AV, DLP, NAC, etc. is critical. I’ve found that many start-ups position their technology (i.e. advancements) against antiquated and idealized versions of these prior technologies. For example, simplifying desktop antivirus products down to signature engines – while neglecting things such as heuristic engines, local-host virtualized sandboxes, and dynamic cloud analysis.
  3. Code Language Review – It’s important to look at the languages that have been employed by the company in the development of their product. Popular rapid prototyping technologies like Ruby on Rails or Python are likely acceptable for back-end systems (as employed within a private cloud), but are potential deal killers to future acquirer companies that’ll want to integrate the technology with their own existing product portfolio (i.e. they’re not going to want to rewrite the product). Similarly, a C or C++ implementation may not offer the flexibility needed for rapid evolution or integration into scalable public cloud platforms. Knowing which development technology has been used where and for what purpose can rapidly qualify or disqualify the strength of the company’s product management and engineering teams – and help orientate an investor on future acquisition or IPO paths.
  4. Security Code Review – Depending upon the size of the application and the due diligence period allowed, a partial code review can yield insight in to a number of increasingly critical areas – such as the stability and scalability of the code base (and consequently the maturity of the development processes and engineering team), the number and nature of vulnerabilities (i.e. security flaws that could derail the company publicly), and the effort required to integrate the product or proprietary technology with existing major platforms.
  5. Does it do what it says on the tin? – I hate to say it, but there’s a lot of snake oil being peddled nowadays. This is especially so for new enterprise protection technologies. In a nut-shell, this phase focuses on the claims being made by the marketing literature and product management teams, and tests both the viability and technical merits of each of them. Test harnesses are usually created to monitor how well the technology performs in the face of real threats – ranging from the samples provided by the company’s user acceptance testing (UAT) team (i.e. the stuff they guarantee they can do), through to common hacking tools and tactics, and on to a skilled adversary with key domain knowledge.
  6. Product Penetration Test – Conducting a detailed penetration test against the start-up’s technology, product, or service delivery platform is always thoroughly recommended. These tests tend to unveil important information about the lifecycle-maturity of the product and the potential exposure to negative media attention due to exploitable flaws. This is particularly important to consumer-focused products and services because they are the most likely to be uncovered and exposed by external security researchers and hackers, and any public exploitation can easily set back the start-up a year or more in brand equity alone. For enterprise products (e.g. appliances and cloud services) the hacker threat is different; the focus should be more upon what vulnerabilities could be introduced into the customer’s environment and how much effort would be required to re-engineer the product to meet security standards.
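As an illustration of the kind of harness described in phase 5, the sketch below replays labelled samples through a stand-in detection function and tallies detection and false-positive rates. The “detector” and samples are invented for the example; a real harness would drive the vendor’s actual product interface with live threat samples.

```python
def run_harness(detect, samples):
    """Replay labelled (payload, is_malicious) samples through a
    detection function and report detection / false-positive rates."""
    hits = false_positives = malicious = benign = 0
    for payload, is_malicious in samples:
        flagged = detect(payload)
        if is_malicious:
            malicious += 1
            hits += flagged
        else:
            benign += 1
            false_positives += flagged
    return {
        "detection_rate": hits / malicious if malicious else 0.0,
        "false_positive_rate": false_positives / benign if benign else 0.0,
    }

# Toy stand-in for the product under test: flags anything containing "evil".
naive_detector = lambda payload: "evil" in payload

samples = [
    ("evil-dropper.exe", True),       # vendor-supplied UAT sample
    ("evil-macro.doc", True),         # common hacking tooling
    ("obfuscated.bin", True),         # the skilled-adversary case
    ("quarterly-report.xls", False),  # benign traffic
]
print(run_harness(naive_detector, samples))
# detection_rate ~0.667 (it misses the obfuscated sample); false_positive_rate 0.0
```

Even a harness this simple makes the marketing claims falsifiable: the gap between the rate achieved on vendor-supplied samples and the rate against adversarial ones is usually where the snake oil shows.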

Obviously there’s a lot of variety in the technical capabilities of the various private equity investment firms (and private investors). Some have people capable of sifting through the marketing hype and can discern the actual intellectual property powering the start-up’s technology – but many do not. Regardless, in working with these investment firms and performing the technical due diligence on their potential investments, I’ve yet to encounter a situation where they didn’t “win” in some way or other. A particular favorite of mine is when, following a code review and penetration test that unveiled numerous serious vulnerabilities, the private equity firm was still intent on investing in the start-up but was able to use the report to negotiate much better buy-in terms with the existing investors – gaining a larger percentage of the start-up for the same amount.

-- Gunter Ollmann

First Published: IOActive Blog - February 6, 2014