
Wednesday, December 11, 2019

How Commercial Bug Hunting Changed the Boutique Security Consultancy Landscape

It’s been almost a decade since the first commercial “for-profit” bug bounty companies launched, leveraging crowdsourced intelligence to uncover security vulnerabilities and simultaneously creating uncertainty for boutique security companies around the globe.

Not only could crowdsourced bug hunting drive down their consulting rates or lure their best bug hunters into going solo, it also raised ethics questions, such as whether a consultant previously engaged on a customer security assessment should pursue out-of-hours bug hunting against that same customer. What if she held back findings from the day job to claim bounties at night?

With years of bug bounty programs now behind us, it is interesting to see how the information security sector transformed – or didn’t.


The fears of the boutique security consultancies – particularly those offering penetration testing and reverse engineering expertise – proved unfounded. A handful of consultants did slip away to pursue bug bounties full time, but most didn’t. Nor did those companies feel a pinch on their hourly consulting rates. Instead, a few other things happened.

First, the boutiques upped the ante by repositioning their attack-based services – defining aggressive “red team” methodologies and doubling down on the value of combining black-box with white-box testing (or reverse engineering combined with code reviews) to uncover product and application bugs in a more efficient manner. Customers were (and are) encouraged to use bug bounties as a “first-pass filter” for finding common vulnerabilities – and then turn to dedicated experts to uncover (and help remediate) the truly nasty bugs.

Second, they began using bug bounty leaderboard tables as a recruitment vehicle for junior consultants. It was a subtle, but meaningful change. Previously, a lot of recruitment had been based on evaluating in-bound resumes by how many public disclosures or CVEs a security researcher or would-be consultant had made in the past. By leveraging the public leaderboards, suddenly there was a target list of candidates to go after. An interesting and obvious ramification was (and continues to be) that newly rising stars on public bug bounty leaderboards often disappear as they get hired as full-time consultants.

Third, bug bounty companies struggled with their business model. Taking a slice of vendors’ payments to crowdsourced bug hunters sounded easier and less resource-intensive than it turned out to be. The process of triaging the thousands of bug submissions – removing duplicates, validating proof-of-concept code, classifying criticality, and resolving disparities in hunter expectations – is tough work. It’s also something that tends to require a high degree of security research experience and costly expertise that doesn’t scale as rapidly as a crowdsourced community can. The net result is that many of the bug bounty crowdsource vendors were forced to outsource sizable chunks of the triage work to boutique consultancies – as many in-house bug bounty programs also do.

A fourth (but not final) effect was that some consulting teams found contributing to public bug bounty programs an ideal way of cashing in on consulting “bench time” when a consultant is not directly engaged on a commercial project. Contributing to bug bounties has proven a nice supplement to what was previously lost productivity.

Over the last few years I’ve seen some pentesting companies also turn third-party bug bounty research and contribution into in-house training regimes, marketing campaigns, and an engagement model to secure new customers, e.g., finding and submitting bugs through the bug bounty program and then reaching out directly to the customer with a bag full of more critical bugs.

Given the commercial pressures on third-party bug bounty companies, it was not unexpected that they would seek to stretch their business model toward higher-premium offerings, such as options for customers to engage their best and most trusted bug hunters before opening up to the public, or more traditional report-based “assessments” of the company’s product or website. More recently, some bug bounty vendors have expanded offerings to encompass community-managed penetration testing and red team services.

The lines continue to blur between the boutique security consultancies and crowdsourcing bug bounty providers. It’ll be interesting to see what the landscape looks like in another decade. While there is a lot to be said for, and gained from, crowdsourced security services, I must admit that the commercial realities of operating a business that profits from managing or middle-manning their output strike me as a difficult proposition in the long run.

I think the crowdsourcing of security research will continue to hold value for the businesses owning the product or web application, and I encourage businesses to take advantage of the public resource. But I would balance that with the reliability of engaging a dedicated consultancy for the tougher stuff.

-- Gunter Ollmann

First Published: SecurityWeek - December 11, 2019

Tuesday, September 10, 2019

Stop Using CVSS to Score Risk

The mechanics of prioritizing one vulnerability’s business risk over another have always been fraught with concern. What began as securing business applications and infrastructure from full-disclosure bugs a couple of decades ago has grown to encompass vaguely referenced flaws in insulin pumps and fly-by-wire aircraft, with lives potentially hanging in the balance.

The security industry has always struggled to “score” the significance of the threat posed by a newly discovered vulnerability, and recent industry practices have only increased the pressure to get that scoring right.

With the growth of bug bounty programs and vertical industry specialization at boutique security consultancies, vulnerability discoveries with higher severity often translate directly into greater financial reward for the discoverers. As such, there is immense pressure to increase both the significance and perceived threat posed by the vulnerability. In a growing number of cases, marketing teams will conduct world-wide campaigns to alert, scare, and drive business to the company.

It’s been close to 25 years since the first commercial vulnerability scanners started labeling findings in terms of high, medium, and low severity. Even back then, security professionals stumbled by confusing severity with “risk.”

At the turn of the last century, as companies battled millennium bugs, the first generation of professional penetration testing consultancies started to include factors such as “exploitability,” “likelihood of exploitation,” and “impact of exploitation” into their daily and end-of-engagement reports as a way of differentiating between vulnerabilities with identical severity levels. Customers loved the additional detail, yet the system of scoring was highly dependent on the skills and experience of the consultant tabulating and reporting the results. While the penetration testing practices of 20 years ago have been rebranded Red Teaming and increasingly taken in-house, risk scoring vulnerabilities remains valuable – but continues to be more art than science.
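
To make that art slightly more concrete, here is a purely hypothetical sketch (in Python, and not any particular consultancy’s methodology) of how a report might fold likelihood and business-impact judgments in alongside raw technical severity, so that two findings with identical severity land in different risk bands. The scales and thresholds below are invented for illustration only.

# Hypothetical risk-ranking sketch: all scales and thresholds are illustrative.
SEVERITY   = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}
IMPACT     = {"minor": 1, "moderate": 2, "severe": 3}

def risk_rating(severity, likelihood, impact):
    # Weight a simple likelihood-by-impact matrix by technical severity.
    score = SEVERITY[severity] * LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 18:
        return "critical"
    if score >= 9:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Two findings of identical "high" severity, very different risk:
print(risk_rating("high", "unlikely", "minor"))   # -> low
print(risk_rating("high", "likely", "severe"))    # -> critical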

Perhaps the most useful innovation in terms of qualifying the significance of a new vulnerability (or threat) has been the Common Vulnerability Scoring System (CVSS). It’s something I feel lucky to have contributed to and helped drive across products when I led X-Force at Internet Security Systems (acquired by IBM in 2006). For the (then) premier automated scanner and managed vulnerability scanning vendor, the development and inclusion of CVSS v1 scoring back in 2005 changed the industry – and opened up new contentions in the quantitative weighting of vulnerability features that are still being wrestled with today in CVSS version 3.1.


CVSS is intended to summarize the severity of vulnerabilities in the context of the software or device – not the systems that are dependent upon the software or device. As a result, it worries me deeply when I hear that CVSS scores are wrongly being used to score the risk a vulnerability poses to an organization, device manufacturer, or end user.

That misconception was captured recently in an article arguing that vulnerability scoring flaws put patients’ lives at risk. On one hand, the researchers point out that though the CVSS score for their newly disclosed vulnerability was only middling (5.8 out of 10), successful exploitation could enable an attacker to adjust medicine dosage levels and potentially kill a patient. And, on the other hand, medical device manufacturers argue that because the score was relatively low, the vulnerability may not require an expedited fix and subsequent regulatory alerting.

As far as CVSS is concerned, both the researchers and the medical device vendor were wrong. CVSS isn’t, and should never be used as, a risk score.

Many bright minds over two decades have refined CVSS scoring elements to make it more accurate and useful as a severity indicator, but have stalled in searching for ways to stretch environmental factors and the knock-on impacts of a vulnerability into quantifiable elements for determining “risk.” Today, CVSS doesn’t natively translate to a risk score – and it may never because every industry assesses risk differently and each business has its own risk factor qualifications that an external party won’t know.
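
To see why, it helps to look at what actually goes into the number. The sketch below (in Python) reproduces my reading of the published CVSS v3.1 base-score arithmetic from the FIRST.org specification: every input describes the vulnerability itself – attack vector, complexity, privileges required, user interaction, scope, and the confidentiality/integrity/availability impact on the affected component. Nothing in the formula knows about asset value, deployment environment, patient safety, or any other business context.

import math

# Base-metric weights as published in the CVSS v3.1 specification.
AV   = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC   = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR_U = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (scope unchanged)
PR_C = {"N": 0.85, "L": 0.68, "H": 0.50}               # Privileges Required (scope changed)
UI   = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA  = {"H": 0.56, "L": 0.22, "N": 0.0}                # C / I / A impact

def roundup(value):
    # "Round up to one decimal place" as defined in CVSS v3.1 Appendix A.
    i = int(round(value * 100000))
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, scope, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss if scope == "U" else 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    exploitability = 8.22 * AV[av] * AC[ac] * (PR_U if scope == "U" else PR_C)[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    combined = impact + exploitability if scope == "U" else 1.08 * (impact + exploitability)
    return roundup(min(combined, 10))

# Example: vector AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N scores 6.5 ("Medium").
print(base_score("N", "L", "N", "N", "U", "L", "L", "N"))  # 6.5

A 6.5 attached to a bug in a marketing website and a 6.5 attached to a bug in an insulin pump describe the same technical severity; the difference in risk lives entirely outside the formula.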

I would caution any bug hunter, security analyst, software vendor, or device manufacturer to not rely on CVSS as the pointy end of the stick for prioritizing remediation. It is an important variable in the risk calculation – but it is not an adequate risk qualifier by itself.

-- Gunter Ollmann

First Published: SecurityWeek - September 10, 2019

Wednesday, December 7, 2016

Sledgehammer DDoS Gamification and Future Bugbounty Integration

Monetization of DDoS attacks has been core to online crime since well before the term “cybercrime” was ever coined. For the first half of the Internet’s life, DDoS was primarily a mechanism to extort money from targeted organizations. As with just about every Internet threat over time, it has evolved and broadened in both scope and objectives.

The new report by Forcepoint Security Labs covering their investigation of the Sledgehammer gamification of DDoS attacks is a beautiful example of that evolution. Their analysis paper walks through both the malware agents and the scoreboard/leaderboard mechanics of a Turkish DDoS collaboration program (named Sath-ı Müdafaa, or “Surface Defense”) run by a group that has targeted organizations whose political ties are deemed inconsistent with Turkey’s current government.

In this most recent example of DDoS threat evolution, a pool of hackers is encouraged to join a collective of hackers targeting the websites of perceived enemies of Turkey’s political establishment.
Using the DDoS agent “Balyoz” (the Turkish word for “sledgehammer”), members of the collective are tasked with attacking a predefined list of target sites – but can suggest new sites if they so wish. In parallel, a scoreboard tracks participants’ use of the Balyoz attack tool, allocating points for every ten minutes of attack conducted; the points can be redeemed for a stand-alone version of the DDoS tool and other revenue-generating cybercrime tools.

As is traditional in the dog-eat-dog world of cybercrime, there are several details that the organizers behind the gamification of the attacks failed to pass on to the participants – such as the backdoor built into the malware they’re using.

Back in 2010 I wrote the detailed paper “Understanding the Modern DDoS Threat” and defined three categories of attacker – Professional, Gamerz, and Opt-in. This new DDoS threat appears to meld the Professional and Opt-in categories into a single political and money-making venture. Not a surprising evolutionary step, but certainly an unwanted one.

If it’s taken six years of DDoS cybercrime evolution to get to this hybrid gamification, what else can we expect?

In that same period of time we’ve seen ad hoc website hacking move from an ignored threat, to forcing a public disclosure discourse, to acknowledgement of discovery and remediation, and on to commercial bug bounty platforms.

The bug bounty platforms (such as Bugcrowd, HackerOne, and Vulbox) have successfully gamified the low-end business of website vulnerability discovery – where bug hunters and security researchers around the world compete for premium rewards. Is it not a logical step that DDoS also make the transition to the commercial world?

Several legitimate organizations provide “DDoS Resilience Testing” services. Typically, using software bots spun up within public cloud infrastructure, DDoS-like attacks are launched at paying customers. The objectives of such an attack include measuring and verifying the resilience of the target’s infrastructure to DDoS attacks, exercising and testing the company’s “blue team” response, and wargaming business continuity plans.


If we were to apply the principles of bug bounty programs to gamifying the commercial delivery of DDoS attacks, rather than a contrived limited-scope public cloud imitation, we’d likely have much more realistic testing capability – benefiting all participants. I wonder who’ll be the first organization to master scoreboard construction and incentivisation? I think the new bug bounty companies are agile enough and likely have the collective community following needed to reap the financial rewards of the next DDoS evolutionary step.