
Tuesday, September 10, 2019

Stop Using CVSS to Score Risk

The mechanics of prioritizing one vulnerability’s business risk over another have always been fraught with difficulty. What began as securing business applications and infrastructure from full-disclosure bugs a couple of decades ago has grown to encompass vaguely referenced flaws in insulin pumps and fly-by-wire aircraft, with lives potentially hanging in the balance.

The security industry has always struggled to “score” the significance of the threat posed by a newly discovered vulnerability and recent industry practices have increased pressure on how this should be done.

With the growth of bug bounty programs and vertical industry specialization at boutique security consultancies, vulnerability discoveries with higher severity often translate directly into greater financial reward for the discoverers. As such, there is immense pressure to increase both the significance and perceived threat posed by the vulnerability. In a growing number of cases, marketing teams will conduct world-wide campaigns to alert, scare, and drive business to the company.

It’s been close to 25 years since the first commercial vulnerability scanners started labeling findings in terms of high, medium, and low severity. Even back then, security professionals stumbled by confusing severity with “risk.”

At the turn of the millennium, as companies battled millennium bugs, the first generation of professional penetration testing consultancies started to include factors such as “exploitability,” “likelihood of exploitation,” and “impact of exploitation” in their daily and end-of-engagement reports as a way of differentiating between vulnerabilities with identical severity levels. Customers loved the additional detail, yet the system of scoring was highly dependent on the skills and experience of the consultant tabulating and reporting the results. While the penetration testing practices of 20 years ago have been rebranded as Red Teaming and increasingly taken in-house, risk scoring vulnerabilities remains valuable – but continues to be more art than science.

Perhaps the most useful innovation in terms of qualifying the significance of a new vulnerability (or threat) has been the Common Vulnerability Scoring System (CVSS). It’s something I feel lucky to have contributed to and helped drive across products when I led X-Force at Internet Security Systems (acquired by IBM in 2006). As the (then) premier automated scanner and managed vulnerability scanning vendor, the development and inclusion of CVSS v1 scoring back in 2005 changed the industry – and opened up new contentions in the quantitative weighting of vulnerability features that are still wrestled with today in CVSS version 3.1.


CVSS is intended to summarize the severity of vulnerabilities in the context of the software or device – not the systems that are dependent upon the software or device. As a result, it worries me deeply when I hear that CVSS scores are wrongly being used to score the risk a vulnerability poses to an organization, device manufacturer, or end user.

That misconception was captured recently in an article arguing that vulnerability scoring flaws put patients’ lives at risk. On one hand, the researchers point out that though the CVSS score for their newly disclosed vulnerability was only middling (5.8 out of 10), successful exploitation could enable an attacker to adjust medicine dosage levels and potentially kill a patient. And, on the other hand, medical device manufacturers argue that because the score was relatively low, the vulnerability may not require an expedited fix and subsequent regulatory alerting.

As far as CVSS is concerned, both the researchers and the medical device vendor were wrong. CVSS isn’t, and should never be used as, a risk score.

Many bright minds over two decades have refined CVSS scoring elements to make it more accurate and useful as a severity indicator, but have stalled in searching for ways to stretch environmental factors and the knock-on impacts of a vulnerability into quantifiable elements for determining “risk.” Today, CVSS doesn’t natively translate to a risk score – and it may never because every industry assesses risk differently and each business has its own risk factor qualifications that an external party won’t know.
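To make that gap concrete, here is a minimal sketch of how a CVSS base score might be folded into a broader risk calculation alongside business-specific context. The factor names, weights, and formula are invented purely for illustration – every organization would substitute its own – but they show why the same base score can represent very different levels of risk.

```python
# Illustrative only: a hypothetical risk calculation in which the CVSS base
# score is one input among several business-specific factors. The factor
# names, weights, and formula are invented for this sketch, not a standard.

def risk_score(cvss_base: float,
               asset_criticality: float,       # 0.0 (throwaway) .. 1.0 (business critical)
               exposure: float,                # 0.0 (isolated)  .. 1.0 (internet facing)
               safety_impact: float) -> float: # 0.0 (none)      .. 1.0 (life threatening)
    """Blend technical severity with business context into a 0-10 risk figure."""
    severity = cvss_base / 10.0                      # normalize CVSS to 0..1
    context = 0.4 * asset_criticality + 0.2 * exposure + 0.4 * safety_impact
    return round(min(10.0, 10.0 * severity * (0.5 + context)), 1)

# The medical-device scenario: a "middling" CVSS 5.8 vulnerability whose
# exploitation could alter medicine dosage levels.
print(risk_score(5.8, asset_criticality=1.0, exposure=0.3, safety_impact=1.0))  # ~7.9

# The same CVSS 5.8 on an isolated, non-critical lab system barely registers.
print(risk_score(5.8, asset_criticality=0.1, exposure=0.0, safety_impact=0.0))  # ~3.1
```

The identical severity score yields very different risk figures once context is applied – which is exactly why a vendor cannot read 5.8 off the CVSS scale and conclude the overall risk is moderate.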

I would caution any bug hunter, security analyst, software vendor, or device manufacturer to not rely on CVSS as the pointy end of the stick for prioritizing remediation. It is an important variable in the risk calculation – but it is not an adequate risk qualifier by itself.

-- Gunter Ollmann

First Published: SecurityWeek - September 10, 2019

Monday, October 6, 2014

If Compliance were an Olympic Sport

First published on the NCC Group blog - 6th October 2014...

It probably won’t raise any eyebrows to know that for practically every penetration tester, security researcher, or would-be hacker I know, nothing is more likely to make their eyes glaze over and send them to sleep faster than a discussion on Governance, Risk, and Compliance (i.e. GRC); yet the dreaded “C-word” (Compliance) is a core tenet of modern enterprise security practice.

Security professionals that come from an “attacker” background often find that their contention with Compliance is that it represents the lowest hurdle – with some vehemently arguing that too many security standards appear to be developed by committee and only reach fruition through consensus on the minimum criteria. Meanwhile, there is continuous pressure for businesses to master their information system security practices and reach an acceptable compliance state.

Compliance, against public standards, has been the norm for the majority of brand-name businesses for over a decade now, and there’s been a general pull-through elevation of security performance (or should that be appreciation?) for other businesses riding the coat-tails of the big brands. But is it enough?

When I think of big businesses competing against each other in any industry vertical sector, I tend to draw parallels with international sporting events – particularly the Olympic Games. In my mind, each industry vertical is analogous to a different sporting event. Just as athletes may specialise in the marathon or the javelin, businesses may specialise in financial services or vehicle assembly, with each vertical - each sport - requiring a different level of specialisation and training.

While professional athletes may target the Olympic Games as the ultimate expression of their career, they must first navigate their way through the ranks and win at local events and races. In order to achieve success they must, of course, also train relentlessly. And, as a former sporting coach of mine used to say, “the harder you train, the easier you’ll succeed.”

I see compliance as a training function for businesses. Being fully compliant is like spending three hours a day running circuits around the track in preparation for being a marathon runner. Compliance with a security policy or standard isn’t about winning the race, it’s about making sure you’re prepared and ready to run the race when it’s time to do so.

That said, not all compliance policies or standards are equal. For example, I only half-heartedly jest when I say that PCI compliance is the sporting equivalent of being able to tie your shoe-laces. It may not count for much in the grand scheme of security, and it won’t help you win any races, but it’s one less thing to trip you up.

If compliance standards represent the various training regimes that an organisation could choose to follow, then “best practices” may as well be interpreted as the hiring of a professional coach; for it’s the coach’s responsibility to optimise the training, review the latest intelligence and scientific breakthroughs, and to push the athlete on to ever greater success.

In the world of information security, striving to meet (and exceed) industry best practices allows an organisation to counter a much broader range of attacks, to be better prepared for more sophisticated threats and to be more successful and efficient when recovering from the unexpected. It’s like elevating your sporting preparedness from limping in to 64th place in the local high school 5k run due to a cramp in your left leg, to being fit and able to represent your country at the Olympic Games.

My advice to organisations that don’t want to find themselves listed in some future breach report, or to watch their customers migrate to competitors because of yet another embarrassing security incident, or trip over their untied shoe-laces, is to move beyond the C-word and adopt best practices. Constant commitment and adherence to best security practices goes a long way to removing unnecessary risk from a business.

However, take caution. “Best practice” in security isn’t a static goal. The coach’s playbook is always developing. As the threat landscape evolves and a litany of new technologies allows you to interface and interact with clients and customers in novel and productive ways, best practices in security will also evolve and improve over time as new exercises and techniques are added to the roster.

Improve the roster and develop the playbook and you’re sure to beat those looming threats and push your business and customer service across the finish line.

Tuesday, April 30, 2013

Fact or Fiction: Is Huawei a Risk to Critical Infrastructure?

How much of a risk does a company like Huawei or ZTE pose to U.S. national security? It’s a question that’s been on many people’s lips for a good year now. Last year the U.S. House of Representatives Permanent Select Committee on Intelligence warned American companies to “use another vendor”, and earlier that year French senator and former defense secretary Jean-Marie Bockel recommended a “total prohibition in Europe of core routers and other sensitive IT equipment coming from China.” In parallel discussions, the United Kingdom, Australia and New Zealand (to name a few) have restricted how Huawei operates within their borders.

Last week Eric Xu – executive vice-president and one of the triumvirate jointly running Huawei – stunned analysts when he told them that Huawei was “not interested in the U.S. market any more.”

Much of the analysis has previously focused upon Huawei’s sizable and influential position as the world’s second largest manufacturer of network routers and switching technology – a critical ingredient for making the Internet and modern telecommunications work – and the fact that it is unclear as to how much influence (or penetration) the Chinese government has in the company and its products. The fear is that at any point now or in the future, Chinese military leaders could intercept or disrupt critical telecommunications infrastructure – either as a means of aggressive statecraft or as a component of cyber warfare.

As someone who’s spent many years working with the majority of the world’s largest telecommunication companies, ISPs, and cable providers, I’ve been able to observe firsthand the pressure being placed upon these critical infrastructure organizations to seek alternative vendors and/or replace any existing Huawei equipment they may already have deployed. In response, many of the senior technical managers and engineers at these organizations have reached out to me and to IOActive to find out how true these rumors are. I use the term “rumor” because, while reports have been published by various government agencies, they’re woefully lacking in technical details. You may have also seen François Quentin (chairman of the board of Huawei France) claiming that the company is a victim of “rumors”.

In the public technical arena, there are relatively few bugs and vulnerabilities being disclosed in Huawei equipment. For example, if you search for CVE-indexed vulnerabilities you’ll uncover very few (a quick way to run that comparison is sketched after the list below). Compared to the likes of Cisco, Juniper, Nokia, and most of the other major players in routing and switching technology, the number of public disclosures is minuscule. But this is likely due to a few of the following reasons:

  • The important Huawei equipment isn’t generally the kind of stuff that security researchers can purchase off eBay and poke around with at home for a few hours in a quest to uncover new bugs. They’re generally big-ticket items. (This is the same reason why you’ll see very few bugs publicly disclosed in Cisco’s or Nokia’s big ISP-level routers and switches.)
  • Up until recently, despite Huawei being such a big player internationally, they haven’t been perceived as such by the English-speaking security research community – so they have traditionally garnered little interest from bug hunters.
  • Most of the time when bugs are found and exploitable vulnerabilities are discovered, they occur during a paid-for penetration test or security assessment, and therefore those findings belong to the organization that commissioned the consulting work – and are unlikely to be publicly disclosed.
  • Remotely exploitable vulnerabilities that are found in Huawei equipment by independent security researchers are extremely valuable to various (international) government agencies. Any vulnerability that could allow someone to penetrate or eavesdrop at an international telecommunications carrier-level is worth big bucks and will be quickly gobbled up. And of course any vulnerability sold to such a government agency most certainly isn’t going to be disclosed to the vulnerable vendor – whether that be Huawei, Cisco, Juniper, Nokia, or whatever.

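As a quick, reproducible way to make that CVE comparison, the sketch below counts public CVE entries per vendor using the NVD keyword search. The endpoint and parameter names reflect my understanding of the current NVD REST API (which post-dates this article), so treat them as assumptions to verify rather than gospel.

```python
# Rough sketch: compare public CVE counts per vendor via NVD keyword search.
# The endpoint and parameter names below are assumptions based on the current
# NVD REST API (https://services.nvd.nist.gov/rest/json/cves/2.0).
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cve_count(vendor: str) -> int:
    """Return the number of CVE entries whose descriptions mention the vendor."""
    resp = requests.get(NVD_API,
                        params={"keywordSearch": vendor, "resultsPerPage": 1},
                        timeout=30)
    resp.raise_for_status()
    return resp.json().get("totalResults", 0)

for vendor in ("Huawei", "Cisco", "Juniper", "Nokia"):
    print(f"{vendor}: {cve_count(vendor)} public CVE entries")
```

Keyword matching is crude – it will also count unrelated descriptions that merely mention a vendor’s name – but it’s enough to get a reproducible sense of how the public disclosure counts stack up.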
What does IOActive know of bugs and exploitable vulnerabilities within Huawei’s range of equipment? Quite a bit, obviously – since we’ve been working to secure many of the telecommunications companies around the world that have Huawei’s top-end equipment deployed. It’s obviously not for me to disclose vulnerabilities that were uncovered on the dime of an IOActive client; however, many of the vulnerabilities we’ve uncovered during tests have given our clients great pause as remedies are sought.

Interestingly enough, the majority of those vulnerabilities were encountered using standard network discovery techniques – which to my mind is just scratching the surface of things. However, based upon what’s been disclosed in the aforementioned government reports over the last year, that was probably their level of scrutiny too. Digging deeper into the systems reveals more interesting security woes.

Given IOActive’s history and proven capability in hardware hacking, I’m certain that we’d be able to uncover a whole host of different and more significant security weaknesses in these critical infrastructure components for clients that needed that level of work done. To date, IOActive’s focus has been on in-situ analysis – typically assessing the security and integrity of core infrastructure components within live telco environments.

I’ve heard several senior folks talk of their fears that even full access to the source code wouldn’t be enough to verify the integrity of Chinese network infrastructure devices. For a skillful opponent, that is probably so, because they could simply hide the backdoors and secret keys in the microcode of the devices’ semiconductor chips.

Unfortunately for organizations that think they can hide such critical flaws or backdoors at the silicon layer, I’ve got a surprise for you. IOActive already has the capability to strip away the layers of logic within even the most advanced and secure microprocessor technologies out there and recover the code and secrets that have been embedded within the silicon itself.

So, I’d offer a challenge out there to the various critical infrastructure providers, government agencies, and to manufacturers such as Huawei themselves – let IOActive sort out the facts from the multitude of rumors. Everything you’ve probably been reading is hearsay.

Who else but IOActive can assess the security and integrity of a technology down through the layers – from the application, to the drivers, to the OS, to the firmware, to the hardware and finally down to the silicon of the microprocessors themselves? Exciting times!

-- Gunter Ollmann

First Published: IOActive Blog - April 30, 2013