
Wednesday, December 11, 2019

How Commercial Bug Hunting Changed the Boutique Security Consultancy Landscape

It’s been almost a decade since the first commercial “for-profit” bug bounty companies launched, leveraging crowdsourced intelligence to uncover security vulnerabilities and simultaneously creating uncertainty for boutique security companies around the globe.

Not only could crowdsourced bug hunting drive down their consulting rates or see their best bug hunters turn solo, it raised ethics questions: should a consultant previously engaged on a customer’s security assessment also pursue out-of-hours bug hunting against that same customer? What if she held back findings from the day job to claim bounties at night?

With years of bug bounty programs now behind us, it is interesting to see how the information security sector transformed – or didn’t.


The fears of the boutique security consultancies – particularly those offering penetration testing and reverse engineering expertise – proved unfounded. A handful of consultants did slip away to pursue bug bounties full-time, but most didn’t. Nor did those companies feel a pinch on their hourly consulting rates. Instead, a few other things happened.

First, the boutiques upped the ante by repositioning their attack-based services – defining aggressive “red team” methodologies and doubling down on the value of combining black-box with white-box testing (or reverse engineering combined with code reviews) to uncover product and application bugs in a more efficient manner. Customers were (and are) encouraged to use bug bounties as a “first-pass filter” for finding common vulnerabilities – and then turn to dedicated experts to uncover (and help remediate) the truly nasty bugs.

Second, they began using bug bounty leaderboard tables as a recruitment vehicle for junior consultants. It was a subtle, but meaningful change. Previously, a lot of recruitment had been based on evaluating in-bound resumes by how many public disclosures or CVEs a security researcher or would-be consultant had made in the past. By leveraging the public leaderboards, suddenly there was a target list of candidates to go after. An interesting and obvious ramification was (and continues to be) that newly rising stars on public bug bounty leaderboards often disappear as they get hired as full-time consultants.

Third, bug bounty companies struggled with their business model. Taking a slice of the vendors’ payments to crowdsourced bug hunters sounded easier and less resource intensive than it turned out to be. The process of triaging the thousands of bug submissions – removing duplicates, validating proof-of-concept code, classifying criticality, and resolving disparities in hunter expectations – is tough work. It’s also something that tends to require a high degree of security research experience and costly expertise that doesn’t scale as rapidly as a crowdsourced community can. The net result is that many of the crowdsourced bug bounty vendors were forced to outsource sizable chunks of the triage work to boutique consultancies – as many in-house bug bounty programs also do.

A fourth (but not final) effect was that some consulting teams found contributing to public bug bounty programs an ideal way of cashing in on consulting “bench time” when a consultant is not directly engaged on a commercial project. Contributing to bug bounties has proven a nice supplement to what was previously lost productivity.

Over the last few years I’ve seen some pentesting companies also turn third-party bug bounty research and contribution into in-house training regimes, marketing campaigns, and an engagement model to secure new customers, e.g., find and submit bugs through the bug bounty program and then reach out directly to the customer with a bag full of more critical bugs.

Given the commercial pressures on third-party bug bounty companies, it was not unexpected that they would seek to stretch their business model towards higher premium offerings, such as options for customers to engage with their best and most trusted bug hunters before opening up to the public, or more traditional report-based “assessments” of the company’s product or website. More recently, some bug bounty vendors have expanded offerings to encompass community-managed penetration testing and red team services.

The lines continue to blur between the boutique security consultancies and crowdsourcing bug bounty providers. It’ll be interesting to see what the landscape looks like in another decade. While there is a lot to be said for and gained from crowdsourced security services, I must admit that the commercial realities of operating businesses that profit from managing or middle-manning their output strike me as a difficult proposition in the long run.

I think the crowdsourcing of security research will continue to hold value for the businesses owning the product or web application, and I encourage businesses to take advantage of the public resource. But I would balance that with the reliability of engaging a dedicated consultancy for the tougher stuff.

-- Gunter Ollmann

First Published: SecurityWeek - December 11, 2019

Tuesday, September 10, 2019

Stop Using CVSS to Score Risk

The mechanics of prioritizing one vulnerability’s business risk over another has always been fraught with concern. What began as securing business applications and infrastructure from full-disclosure bugs a couple of decades ago, has grown to encompass vaguely referenced flaws in insulin-pumps and fly-by-wire aircraft with lives potentially hanging in the balance.

The security industry has always struggled to “score” the significance of the threat posed by a newly discovered vulnerability and recent industry practices have increased pressure on how this should be done.

With the growth of bug bounty programs and vertical industry specialization at boutique security consultancies, vulnerability discoveries with higher severity often translate directly into greater financial reward for the discoverers. As such, there is immense pressure to increase both the significance and perceived threat posed by the vulnerability. In a growing number of cases, marketing teams will conduct world-wide campaigns to alert, scare, and drive business to the company.

It’s been close to 25 years since the first commercial vulnerability scanners started labeling findings in terms of high, medium, and low severity. Even back then, security professionals stumbled by confusing severity with “risk.”

At the turn of the last century, as companies battled millennium bugs, the first generation of professional penetration testing consultancies started to include factors such as “exploitability,” “likelihood of exploitation,” and “impact of exploitation” into their daily reports and end-of-engagement reports as a way of differentiating between vulnerabilities with identical severity levels. Customers loved the additional detail, yet the system of scoring was highly dependent on the skills and experience of the consultant tabulating and reporting the results. While the penetration testing practices of 20 years ago have been rebranded as Red Teaming and increasingly taken in-house, risk scoring vulnerabilities remains valuable – but continues to be more art than science.

Perhaps the most useful innovation in terms of qualifying the significance of a new vulnerability (or threat) has been the Common Vulnerability Scoring System (CVSS). It’s something I feel lucky to have contributed to and helped drive across products when I led X-Force at Internet Security Systems (acquired by IBM in 2006). As the (then) premier automated scanner and managed vulnerability scanning vendor, the development and inclusion of CVSS v1 scoring back in 2005 changed the industry – and opened up new contentions in the quantitative weighting of vulnerability features that are still wrestled with today in CVSS version 3.1.


CVSS is intended to summarize the severity of vulnerabilities in the context of the software or device – not the systems that are dependent upon the software or device. As a result, it worries me deeply when I hear that CVSS scores are wrongly being used to score the risk a vulnerability poses to an organization, device manufacturer, or end user.

That misconception was captured recently in an article arguing that vulnerability scoring flaws put patients’ lives at risk. On one hand, the researchers point out that though the CVSS score for their newly disclosed vulnerability was only middling (5.8 out of 10), successful exploitation could enable an attacker to adjust medicine dosage levels and potentially kill a patient. And, on the other hand, medical device manufacturers argue that because the score was relatively low, the vulnerability may not require an expedited fix and subsequent regulatory alerting.

As far as CVSS is concerned, both the researchers and the medical device vendor were wrong. CVSS isn’t, and should never be used as, a risk score.

Many bright minds over two decades have refined CVSS scoring elements to make it more accurate and useful as a severity indicator, but have stalled in searching for ways to stretch environmental factors and the knock-on impacts of a vulnerability into quantifiable elements for determining “risk.” Today, CVSS doesn’t natively translate to a risk score – and it may never because every industry assesses risk differently and each business has its own risk factor qualifications that an external party won’t know.

I would caution any bug hunter, security analyst, software vendor, or device manufacturer to not rely on CVSS as the pointy end of the stick for prioritizing remediation. It is an important variable in the risk calculation – but it is not an adequate risk qualifier by itself.
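One deliberately simplistic illustration of treating CVSS as a variable rather than the answer: blend the base score with business-context factors that only the affected organization can supply. The factor names and weights below are entirely hypothetical, made up for this sketch, and are not drawn from any standard:

```python
def risk_priority(cvss_base: float,
                  asset_criticality: float,  # 0..1: how vital is the system?
                  exposure: float,           # 0..1: internet-facing vs isolated
                  exploit_available: bool) -> float:
    """Hypothetical 0..10 remediation priority: CVSS is one input of several."""
    context = asset_criticality * exposure
    multiplier = 1.5 if exploit_available else 1.0
    return min(cvss_base * context * multiplier, 10.0)

# The same mid-severity (5.8) vulnerability in two very different contexts:
pump = risk_priority(5.8, asset_criticality=1.0, exposure=0.9, exploit_available=True)
lab = risk_priority(5.8, asset_criticality=0.2, exposure=0.1, exploit_available=False)
print(round(pump, 1), round(lab, 1))  # 7.8 0.1
```

The point of the sketch is not the formula, which is invented, but the shape of the calculation: an identical CVSS score can land at opposite ends of the remediation queue once the organization’s own context is applied.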

-- Gunter Ollmann

First Published: SecurityWeek - September 10, 2019

Friday, December 21, 2012

How much is a zero-day exploit worth?

It's a pretty common question asked by both bug hunters and journalists alike - "How much is a zero-day vulnerability worth?"

There's no simple answer, as I discuss in my first blog post with IOActive. You can find the discussion, "Exploits, Curdled Milk and Nukes (Oh my!)", on the IOActive Labs Blog site.

Thursday, March 15, 2012

A Bug Hunter's Diary (book review)

A couple of months ago I got my hands on Tobias Klein's new book "A Bug Hunter's Diary" and have only recently managed to read through it and, I have to say, I liked it very much.

The book takes the reader through a guided tour of seven vulnerabilities uncovered by Tobias over the last few years. Unlike most books on the topic of bug hunting (which typically focus on walking through the tools of the trade and talking in generalities) this book takes on a pseudo-diary format - revealing the thoughts, assumptions and leaps-of-faith that go into uncovering the kinds of bugs that make the headlines.

As someone who's worked extensively in the commercial bug hunting and vulnerability exploitation business, nothing beats the shoulder-surfing approach to knowledge transfer, and I think this book manages to achieve much of that experience.

Given the span of bugs, platforms and years between discoveries, it provides an interesting perspective on the responses of vendors (and product maintenance engineers) to bugs that come their way and their capability to respond/fix them. My, how times have changed (in a good way - generally).

As a technical book, I think it has legs and I don't think it'll date quickly. Tobias works through the bugs in a logical and well-thought-out way and, as long as the reader has some familiarity with debuggers and some coding prowess, it shouldn't be that technically taxing. The best bug hunters aren't elite coders and assembly gurus - they're folks who explore imaginative "what if?" scenarios within the software or devices they're looking at.

What bugs are covered? Well, there are several, but divided into the following major categories:
  • VideoLAN's VLC media player
  • Sun Solaris kernel
  • FFmpeg multimedia library
  • WebEx ActiveX
  • Avast! AV
  • OSX TTY IOCTL
  • iPhone
Who's going to benefit from this book? I think the book will be well suited to senior engineers charged with debugging glitches in their company's software, and folks looking to make the leap from being tool-only penetration testers and security consultants. The kind of folks that have been to one or two Blackhat Las Vegas conferences in the past, listened to various bug hunters spout their latest findings from the podium, and figured that they'd like to give it a try for real.

Shoulder-surf in the comfort of your own home (or Kindle)!

Sunday, November 15, 2009

"Responsible Disclosure" - Friend or Foe

It's been an interesting weekend on the "responsible disclosure" front. Reactions and tweet threads from several noted vulnerability researchers in response to K8em0's blog post (Behind the ISO Curtain), most notably those of Halvar Flake via his post (Why are most researchers not a fan of standards on "responsible disclosure"?), have been fast and (semi)furious.

On one hand it seems like a typical, dare I say it "annual", flareup on the topic. But then again, the specter of some ill-informed ISO standard being developed as a guide for defining and handling responsible disclosure was sure to escalate things.

To my mind, Halvar makes a pretty good argument for the cause that any kind of "standard" isn't going to be worth the paper it's printed on. I particularly liked the metaphor...
"if I can actually go and surf, why would I discuss with a bunch of people sitting in an office about the right way to come back to the beach ?"
But the discussion isn't going away...

While I haven't seen anything on this ISO project (ISO/IEC NP 29147 Information technology - Security techniques - Responsible Vulnerability Disclosure) I suspect strongly that it has very little to do with the independent vulnerability researchers themselves - and seems more focused on how vendors should aim to disclose (and dare I say "coordinate" disclosures) publicly. Most vendor-initiated vulnerability disclosures have been responsible - but in cases where multiple vendors are involved, coordination often breaks down and slivers of 'ir' appear in front of 'responsible'. The bigger and more important a multi-vendor security vulnerability is, the more likely its disclosure will be screwed up.

Maybe this ISO work could help guide software vendors in dealing with security researchers and better handling disclosure coordination. It would be nice to think so.

Regardless, I think the work of ICASI is probably more useful - in particular the "Common Frameworks for Vulnerability Disclosure and Response (CVRF)" - and would probably bleed over into some ISO work eventually. There are only a handful of vendors participating in the consortium (Cisco, Microsoft, IBM, Intel, Juniper and Nokia), but at least they're getting their acts together and working out a solution for themselves. I may be a little biased though, since I was briefly involved with ICASI when I was with IBM. Coordination and responsible disclosure amongst these vendors is pretty important - eat your own dog food and all that lark.

At the end of the day, trying to impose standards for vulnerability disclosure upon independent researchers hasn't worked and isn't going to - even if these "standards" were ever enshrined into law.

Thursday, March 26, 2009

Reigniting the Bugs for Cash Debate

It's like one of those magic candles people place on birthday cakes that sparkle and relight themselves each time you think they've been blown out. That's how I'd define the most recent ignition of the "bugs for cash" debate.

By now you'll have probably heard that Dino Dai Zovi, Charlie Miller and Alex Sotirov have declared "No more free bugs" (Dai Zovi affirms his position and provides insight to his side of the argument over on his blog titled "No more free bugs").

It's been picked up by several of the security media channels, and Robert Lemos over at Security Focus has a good summary, "No more bugs for free, researchers say" (although I'd debate this being anything like a "new chapter"). And then, this morning, I read Dave Goldsmith's blog posting Vulnerability Research: Times They Are A-Changin'.

Perspective
Since I'm hardly a wall-flower and have been outspoken about the various aspects of the disclosure debate (particularly vulnerability purchase programs) for several years, I figured I'd better provide my perspective on this most recent disclosure storm.

While I respect the technical capabilities of Dino Dai Zovi, Charlie Miller and Alex Sotirov in finding new vulnerabilities and weaponizing them into exploits - I think there's a lot of show-boating going on, and it seems that the popular media is happy to go along for the ride.

Several people have pointed out that security researchers invest a lot of time in finding bugs and, since the "good" vulnerabilities are getting harder to find (i.e. taking more effort), they deserve to be paid for their work. I'd go along with that reasoning but for one simple fact: the software vendors have neither asked nor employed these particular researchers to find bugs in their products.

From a vendor's perspective, their CEO and CFO have defined the company's operational budget and optimized their expenditure processes. Most have invested in secure software development lifecycle programs and already include many security review and QA gates. Most of the major vendors also employ professional (external) vulnerability research teams at the tail of the development lifecycle to "blackhat" their way to any bugs or vulnerabilities that may have been missed. Then, even having followed this process, the odd vulnerability still makes it through.

From the vendor's perspective, vulnerabilities should have been caught within their existing processes. But, as someone with firsthand experience of this, each sub-process operates within time and financial constraints. Take the third-party vulnerability researchers that consult for the vendor - they were probably contracted to provide 100 man-days of effort for $250k (plus expenses) - and may find anywhere between zero and a thousand vulnerabilities within those time/financial limits. The vendor set those budget/time limits. If they were wrong, maybe some external (unaffiliated) security researcher will uncover a vulnerability that was missed. The vendor then needs to decide whether future investments in their security review processes are needed - and budget accordingly.

With a vendor-perspective-hat on, why should they be paying for more bugs? If it's a concern (i.e. affects customer confidence or damages the brand), they'll reprioritize their internal QA spending and increase budgets.

Vulnerability Worth
I've seen many security researchers debate the value of a vulnerability - and most are "dissatisfied" with the compensation paid by the commercial vulnerability purchase programs. As Dave Goldsmith clearly states in his blog - "Defenders Buy Vulns, Attackers Buy Exploits" - and there's a big difference between uncovering a vulnerability and actually turning it into an exploit.

Criminals (and Governments) pay a premium for weaponized vulnerabilities - so comparing the prices they're willing to pay for some new zero-day versus a security vendor who's focused on remediating the vulnerability is naive. And, as for these $5,000 (etc.) contests to be the first to break something - that has nothing to do with improving security, it's a marketing exercise - and the researchers who participate in them are merely associating a small dollar value with their professional reputation.

Getting back to my point about a software vendor's budget for assuring/improving security... What I've found is that many of the best security researchers are already contracting with, or working within, the major software vendors and helping to improve their products' security. From a compensation perspective, those security researchers regularly earn anywhere between $150k and $250k per annum (plus benefits) - which is much more profitable than picking up $5k at a contest here and there.

Then there's the "best of the best" security researchers out there. Not only are they smart enough to find the most important vulnerabilities and figure out how to exploit them, but they're also smart enough to set up their own businesses and really rake in the dollars (and get others to do the tedious research work!).

So, what's a bug worth in that context? That 100 man-day contract may yield 100 bugs - placing each bug's value at $2,500. On the other hand, they may only find one bug - and that single bug is now worth $250k. Take your pick.

In my opinion "No more bugs for free", while headline grabbing, is old ground trodden over many times in the past. Routes already exist for legitimate/ethical security researchers to make a mint from the vulnerabilities they are capable of finding - if they're smart enough to understand the business.

Vulnerability showboating is for amateurs from a past age. The vulnerability research business has moved on.

Tuesday, February 17, 2009

World's Top Vulnerability Discoverer


Who's the world's most frequent discoverer (and discloser) of security vulnerabilities?

It's not a name you're likely to be familiar with (sorry, "best in the world" team ;-)

With a staggering 612 public vulnerability disclosures through to the end of 2008, Luigi Auriemma sits in pole position. Luigi managed to oust r0t (finally) sometime last year. I think the fact that r0t appears to have "retired" from the vulnerability discovery business probably helped.

For full stats and analysis, I've posted a more detailed blog over on Frequency-X -- Top-10 Vulnerability Discoverers of All Time (as well as 2008) - Who's in Pole Position?