Ample evidence exists to show that shortcomings in a third-party's cyber security posture can have an extremely negative effect on the security integrity of the businesses they connect or partner with. Consequently, for a couple of decades there has been a continuous and frustrated desire for some kind of independent verification or scorecard mechanism that can help primary organizations validate and quantify the overall security posture of the businesses they must electronically engage with.
A couple of decades ago, organizations could host a small clickable logo on their websites – often depicting a tick or some permutation of a "trusted" logo – that would display an independent validation certificate attesting to their trustworthiness. Obviously, such a system was open to abuse. Over the last five or so years, ownership of the trustworthiness verification process has migrated from the third party to the first party.
Today, there are a growing number of brand-spanking-new start-ups adding to the pool of slightly longer-in-the-tooth companies taking on the mission of independently scoring the security and cyber integrity of organizations doing business over the Web.
The general premise of these companies is that they'll undertake a wide (and widening) range of passive and active probing techniques to map out a target organization's online assets, crawl associated sites and hidden crevices (underground, over ground, wandering free… like the Wombles of Wimbledon?) to look for leaks and unintended disclosures, evaluate current security settings against recommended best practices, and even dig up social media dirt that could be useful to an attacker – all as contributors to a dynamic report and ultimate "scorecard" that is effectively sold to interested buyers or service subscribers.
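To make the "evaluate current security settings against recommended best practices" step concrete, here's a minimal sketch of one passive check a provider might run – grading the security headers of an observed HTTP response. The header list and point weights are my own illustrative inventions, not any vendor's actual rubric.

```python
# Hypothetical sketch of one passive scorecard check: grade an observed HTTP
# response's security headers against common best practices.
# Header choices and point weights are illustrative only.

RECOMMENDED_HEADERS = {
    "strict-transport-security": 30,  # enforce HTTPS (HSTS)
    "content-security-policy": 25,    # mitigate XSS/injection
    "x-content-type-options": 15,     # block MIME sniffing
    "x-frame-options": 15,            # clickjacking defense
    "referrer-policy": 15,            # limit referrer leakage
}

def header_score(response_headers):
    """Return (score 0-100, list of missing headers) for one observed response."""
    present = {h.lower() for h in response_headers}
    missing = [h for h in RECOMMENDED_HEADERS if h not in present]
    score = sum(w for h, w in RECOMMENDED_HEADERS.items() if h in present)
    return score, missing

score, missing = header_score({
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
})
print(score, missing)  # 45 [('content-security-policy', 'x-frame-options', 'referrer-policy' missing)]
```

A real provider would aggregate hundreds of signals like this one; the point is simply that each individual check is mechanical and easy to automate.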
I can appreciate the strong desire for first-party organizations to have this kind of scorecard on hand when making decisions on how best to trust a third-party supplier or partner, but I do question a number of aspects of the business model behind providing such security scorecards. And, as someone frequently asked by technology investors looking for guidance on the future of such business ventures, there are additional things to consider.
Are Cyber Scorecarding Services Worth It?
As I gather my thoughts on the business of cyber scorecarding and engage with the purveyors of such services again over the coming weeks (post RSA USA Conference), I'd offer up the following points as to why this technology may still have some business wrinkles and why I'm currently questioning the long-term value of the business model.
1. Lack of scoring standards
There is no standard for the scorecards on offer. Every vendor is vying to make their scoring mechanism the future of the security scorecard business. As vendors add new data sources or encounter new third-party services and configurations that could influence a score, they're effectively making things up as they go along. This isn't necessarily a bad thing, and ideally the scoring will stabilize over time at a per-vendor level, but we're still a long way from having an international standard agreed upon. Bear in mind that, despite two decades of organizations such as OWASP, ISSA, SANS, etc., the industry still doesn't have an agreed-upon mechanism for scoring the overall security of a single web application, let alone the combined Internet presence of a global online business.
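As a toy illustration of why vendor scores can't be compared, consider two fictional providers weighting the identical set of observations differently. All names, sub-scores, and weights below are invented:

```python
# Illustrative only: two fictional vendors scoring the identical organization.
# Each weights the same observations (sub-scores 0-100) differently, so their
# headline numbers disagree and can't be compared across providers.

observations = {"tls_config": 90, "patch_cadence": 50, "leaked_credentials": 20}

vendor_a = {"tls_config": 50, "patch_cadence": 30, "leaked_credentials": 20}  # weights sum to 100
vendor_b = {"tls_config": 20, "patch_cadence": 20, "leaked_credentials": 60}

def headline_score(obs, weights):
    """Weighted average of 0-100 sub-scores; weights must sum to 100."""
    return sum(obs[k] * w for k, w in weights.items()) / 100

print(headline_score(observations, vendor_a))  # 64.0 -- "decent" by A's rubric
print(headline_score(observations, vendor_b))  # 40.0 -- "risky" by B's rubric
```

Same organization, same raw observations, two very different headline numbers – the score only means something inside one vendor's rubric.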
2. Heightened Public Cloud Security
Third-party organizations that have moved to the public cloud, enabled the bulk of the default security features freely available to them, and adopted the automated security alerting and management tools provided are already very secure – much more so than their previous on-premises DIY efforts. As more organizations move to the public cloud, they all begin to share the same security features, so why would a third-party scorecard be necessary? We're rapidly approaching a stage where just having an IP address in a major public cloud puts your organization ahead of the pack from a security perspective. Moreover, I anticipate that the default security of public cloud providers will continue to advance in ways that are not easily discernible externally (e.g. "impossible travel" protection against credential misuse) – and these kinds of ML/AI-led protection technologies may prove more successful than the traditional network-based defense-in-depth strategies the industry has pursued for the last twenty-five years.
3. Score Representations
Not only is there no standard for scoring an organization's security, it's also not clear what you're supposed to do with the scores that are provided. This isn't a problem unique to the scorecard industry – we've observed the same phenomenon with CVSS scoring for 10+ years.
At what threshold should I be worried? Is a 7.3 acceptable, while a 7.6 means I must patch immediately? How much more of a risk to my business is an organization with a score of 55 versus a vendor that scores 61?
The thresholds for action (or inaction) based upon a score are arbitrary, and they will conflict with each new advancement or input the scorecard provider includes as they evolve their service. Is the 88.8 of January the same as the 88.8 of May, after the provider added new features that factor in CDN provider stability and Instagram crawling? Does this month's score of 78.4 represent a newly introduced weakness in the organization's security, or is the downgraded score an artifact of new insights that the score provider hadn't previously accounted for?
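This drift problem can be sketched in a few lines: a hypothetical provider adds a new input and renormalizes its weights, and the same unchanged organization gets a different headline score. All sub-scores and weights are invented:

```python
# Hypothetical sketch of score drift: the provider adds a new input
# ("cdn_stability") in May and renormalizes its weights. The organization
# changed nothing, yet its headline score drops. All numbers are invented.

org = {"tls_config": 95, "patch_cadence": 85, "cdn_stability": 40}

weights_january = {"tls_config": 60, "patch_cadence": 40}                       # sums to 100
weights_may     = {"tls_config": 45, "patch_cadence": 30, "cdn_stability": 25}  # sums to 100

def headline_score(sub_scores, weights):
    """Weighted average of 0-100 sub-scores; weights must sum to 100."""
    return sum(sub_scores[k] * w for k, w in weights.items()) / 100

print(headline_score(org, weights_january))  # 91.0
print(headline_score(org, weights_may))      # 78.25 -- same posture, lower score
```

Nothing about the organization's security changed between the two runs; only the provider's formula did – which is exactly why month-over-month score deltas are so hard to act on.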
4. Historical References and Breaches
Then there’s the question of how much of an organizations past should influence its future ability to conduct business more securely. If a business got hacked three years ago and the responsibly disclosed and managed their response – complete with reevaluating and improving their security, does another organization with the same current security configuration have a better score for not having disclosed a past breach?
Organizations get hacked all the time – it's why modern security now works on the premise of "assume breach". The remotely visible and attestable security of an organization provides no real insight into whether they are currently hacked or have been recently breached.
5. Gaming of Scorecards
Gaming of the scorecard systems is trivial and difficult to defend against. If I know who my competitors are and which scorecard provider (or providers) my target customer is relying upon, I can adversely affect their scores. A few faked "breached password lists" posted to Pastebin and underground sites, a handful of spam and phishing emails sent, a new domain-name registration with a craftily constructed website, a few subtle contributions to IP blacklists, etc., and their score suffers.
I haven’t looked recently, but I wouldn’t be surprised if some blackhat entrepreneurs haven’t already launched such a service line. I’m sure it could pay quite well and requires little effort beyond the number of disinformation services that already exist underground. If scorecarding ever becomes valuable, so too will its deception.
6. Low Barrier to Market Entry
The barrier to entry into the scorecarding industry is incredibly low. Armed with "proprietary" techniques and "specialist" data sources, anyone can get started in the business. If for some reason third-party scorecarding becomes popular and financially lucrative, then I anticipate that any of the popular managed security service providers (MSSPs) or automated vulnerability assessment (VA) providers could launch a competitive service with as little as a month's notice and only a couple of engineers.
At some point in the future, if there ever were to be standardization of scorecard scores and evaluation criteria, that's when the large MSSPs and VA providers would likely add such a service. The problem for all the new start-ups and longer-toothed start-ups is that these MSSPs and VA providers would have no need to acquire the technology or clientele.
7. Defending a Score
Defending the integrity and righteousness of your independent scoring mechanism is difficult and expensive. Practically all the scorecard providers I've met like to explain their efficacy of operation as if it were a credit bureau's credit score – as if that explains the ambiguities of how they score. I don't know all the data sources and calculations that credit bureaus use in their rating systems, but I'm pretty sure they're not port scanning websites, scraping IP blacklists, and enumerating service banners – and I doubt the parties being scored (or their adversaries) could manipulate the underlying data as easily as they can with these security scorecards.
My key point here, though, lies with the repercussions of getting the score wrong, or of providing a score that adversely affects an organization's ability to conduct business online – regardless of the score's righteousness. The affected business will dispute the score, demand that the provider "fix their mistake", and seek compensation for the damage incurred. In many ways it doesn't matter whether the scorecard provider is right or wrong – costs are incurred defending each case (in energy expended, financial resources, lost time, and lost reputation). For cases that eventually make it to court, I think the "look at the financial credit bureaus" defense will fall a little flat.
Final Thoughts
The industry strongly wants a scoring mechanism to help distinguish good from bad, and to help prioritize security responses at all levels. If only it were that simple, it would have been solved quite some time ago.
Organizations are still trying to make red/amber/green tagging work for threat severity, business risk, and response prioritization. Every security product tasked with uncovering vulnerabilities, collating misconfigurations, aggregating logs and alerts, or monitoring for anomalies is equally capable of producing its own scores – and likely already does.
Providing a score isn't the problem in the security world; the problem lies in knowing how to respond to the score you've been presented with!
-- Gunter Ollmann