Bugcrowd CTO explains crowdsourced security benefits and challenges

In part two of this interview, Bugcrowd founder and CTO Casey Ellis discusses the value of crowdsourced vulnerability research, as well as some of the challenges.

Crowdsourced security can provide enormous value to enterprises today, according to Casey Ellis, but the model isn't without its challenges.

In this Q&A, Ellis, chairman, founder and CTO of San Francisco-based crowdsourced security testing platform Bugcrowd Inc., talks about the growth of bug bounties, the importance of vulnerability research and the evolution of his company's platform. According to the Bugcrowd "2018 State of Bug Bounty Report," reported vulnerabilities have increased 21% to more than 37,000 submissions in the last year, while bug bounty payouts have risen 36%.

In part one of this interview, Ellis expressed his concerns that the good faith that exists between security researchers and enterprises is eroding and discussed the need for better vulnerability disclosure policies and frameworks. In part two, he discusses the benefits of crowdsourced security testing, as well as some of the challenges, including responsible disclosure deadlines and the accurate vetting of thousands of submissions.

Editor's note: This interview has been edited for clarity and length.

When it comes to responsible vulnerability disclosure, do you think companies are at a point now where they generally accept the 90-day disclosure period?

Casey Ellis: No. No, I think technology companies are, but it's very easy working in technology to see adoption by technology companies and assume that it's normal now. I see a lot of people do that and I think it's unwise, frankly.

I think that's where we'll end up eventually, and I think we're moving toward that type of thing. But there are caveats in terms of, for example, complex supply chain products or vehicles or medical devices -- the stuff that takes longer than 90 days to refresh, test, patch and deploy out into the wild. The market is not used to that kind of pressure on public disclosure yet, but I think the pressure is a good thing.

The bigger problem is in terms of general vulnerability disclosure; that's not accepted outside of the tech sector yet -- at all, frankly.

There's been a lot of talk about security automation and machine learning at RSA Conference again this year. Where do you see that going?

Ellis: It depends on your definition of automation at that point. Is it automation of decision-making, or is it automation of the leverage used to reach that decision?

Using Bugcrowd as an example, we're heavy users of machine [learning] and automation within our platform, but we're not doing it to replace the hackers. We're doing it to understand which of the conversations we're having as these submissions come in are most important. And we're trying to get to the point where we can say, 'Okay, this bug is less likely to be important than this other bug. We should focus on that first.'

For the customers, they just want to know what they need to go and fix, but we have to prioritize the submissions. We have to sit in front of that customer and have these conversations at scale with everyone who's submitting, regardless of whether their information is very, very valuable or they're getting points for enthusiasm but not for usefulness. It's actually a fun and valuable problem to solve, but it's difficult.
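
Bugcrowd hasn't published how its triage scoring works, but the idea Ellis describes -- rank incoming reports by how likely they are to be real and how severe they'd be if they are -- can be sketched in a few lines of Python. The field names and scores below are illustrative assumptions, not Bugcrowd's model:

```python
# Hypothetical triage scores; the field names and numbers are illustrative,
# not Bugcrowd's actual model.
submissions = [
    {"title": "Nessus scan output dump",  "p_valid": 0.05, "p_severity": 0.20},
    {"title": "Reflected XSS on login",   "p_valid": 0.90, "p_severity": 0.60},
    {"title": "Auth bypass on admin API", "p_valid": 0.80, "p_severity": 0.95},
]

def triage_priority(sub: dict) -> float:
    # "Which conversation is most important?" -- rank by the model's belief
    # that the bug is real, weighted by how bad it would be if it is.
    return sub["p_valid"] * sub["p_severity"]

for sub in sorted(submissions, key=triage_priority, reverse=True):
    print(f'{triage_priority(sub):.2f}  {sub["title"]}')
```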

How do you prioritize and rank all of the submissions you receive? What's that process like?

Ellis: There's a bunch of different things because the bug bounty economic model is this: The first person to find each unique issue is the one who gets rewarded for it. And then, the more critical it is, the more they get paid. And this is what we've been doing since day one because the premise was these are two groups of people that historically suck at talking to each other.
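
That first-finder economic model is straightforward to express in code. A minimal sketch follows; the payout tiers and issue IDs are made up for illustration, not any real program's reward schedule:

```python
# Illustrative payout tiers -- real programs set their own amounts.
PAYOUT_BY_SEVERITY = {"low": 150, "medium": 600, "high": 2500, "critical": 10000}

rewarded: set[str] = set()  # unique issues that have already been paid out

def process_submission(issue_id: str, severity: str) -> int:
    """First valid report of a unique issue gets the reward; duplicates get nothing."""
    if issue_id in rewarded:
        return 0  # someone else found it first
    rewarded.add(issue_id)
    return PAYOUT_BY_SEVERITY[severity]

print(process_submission("idor-api-v1-users", "high"))  # 2500 -- first finder
print(process_submission("idor-api-v1-users", "high"))  # 0 -- duplicate
```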

So we said we're going to need to pull together a human team to help out, and then what we'll do is we'll learn from that team to build the product and make the product more effective as we go. It's a learning loop that we've got internally, as well. And what they're doing is, basically, understanding what's a duplicate [submission], what's out of scope and things like that. There are simple things that we can do from a filtering standpoint.

Duplicates get interesting because you have pattern matching and Bayesian analysis and different things like that to understand what the likelihood of a duplicate is. Those are the known things. Then there's the heavy stuff -- the critical-importance, wake-up-the-engineering-team stuff.
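
Ellis doesn't detail the math, but a toy version of that duplicate scoring could compare a new report's text against reports already on file. The tokenization and similarity measure here are simplifying assumptions standing in for the pattern matching and Bayesian analysis he mentions:

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two report descriptions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def duplicate_likelihood(new_report: str, known_reports: list[str]) -> float:
    # Toy stand-in for the "pattern matching and Bayesian analysis" step:
    # how closely does the new report resemble anything already triaged?
    return max((jaccard(new_report, old) for old in known_reports), default=0.0)

known = [
    "reflected xss in search parameter q on /search",
    "sql injection in id parameter on /api/orders",
]
new = "xss reflected via the search parameter q on /search endpoint"
print(f"{duplicate_likelihood(new, known):.2f}")  # high score -> route to dedup review
```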

There's also a bunch of stuff we do in terms of analyzing the vulnerability against the corpus [of known vulnerabilities] to understand what that is, as well as who the submitter is. Because if they're a notorious badass who comes in and destroys stuff and has a really high signal-to-noise ratio then, yes, that's probably something that we should pay attention to.
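
The submitter's track record can be folded into the same scoring. In this sketch, a report's base score is weighted by the researcher's historical signal-to-noise ratio; the smoothing formula is an assumption for illustration, not Bugcrowd's method:

```python
def submitter_signal(valid_reports: int, total_reports: int) -> float:
    # Smoothed signal-to-noise ratio, so brand-new researchers start
    # neutral rather than at zero (Laplace-style smoothing -- an assumption).
    return (valid_reports + 1) / (total_reports + 2)

def weighted_priority(base_score: float, valid: int, total: int) -> float:
    # A report from a high-signal researcher gets bumped up the queue.
    return base_score * submitter_signal(valid, total)

print(round(weighted_priority(0.8, valid=95, total=100), 2))  # ~0.75: pay attention
print(round(weighted_priority(0.8, valid=1, total=40), 2))    # ~0.04: probably noise
```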

There's a bunch of really simple stuff, or comparatively simple stuff, that we can do, but then there's a bunch of much more nuanced, complicated stuff that we have to work out. And then we've got the human at the end of [the process] because we can't afford to get it wrong. We can't say no to something that's actually a yes. The whole thing gets basically proofed, and then those learnings go back into the system and it improves over time.

Do you receive a lot of submissions that you look at and say, 'Oh, this is nonsense, someone's trying to mess with us and throw the process off'?

Ellis: Yes. There's a lot of that. As this has grown, there are a bunch of people that are joining in for the first time, and some of them are actively trolling. But then, for every one of those, there are 10 that are just as noisy, but it's because they think they're doing the right thing even though they're not.

If someone runs Nessus and then uploads a scan and says, 'That's a bug!' then what we do at that point is we say, 'No, it's not. By the way, here are some different communities and education initiatives that we've got.'

We try to train them to see if they can get better, because maybe they can. And if they've initiated that contact with us, then they're clearly interested and enthusiastic, which is a great starting point; just because they don't know how to be useful right now doesn't mean they can't be in the future. We give them the benefit of the doubt there, but obviously, we have to protect the customer from having to deal with all of that noise.

When it comes to that noise in crowdsourced bug hunting, do you think those people are looking more at the reward money or the reputation boost?

Ellis: It's usually both. Money is definitely a factor in bug bounties, but reputation is a huge factor, too. And it goes in two directions.

There's reputation for the sake of ego, and they're the ones that can get difficult pretty quickly, but then there's also reputation for the sake of career development. And that's something that we actually want to help them with. That's been an initiative that we've had from day one, and a bunch of our customers actually have people in their security teams that they hired off the platform.

Jason Haddix [Bugcrowd vice president of trust and security] was number one on the platform before we hired him. We think this is actually a good thing in terms of helping address the labor shortage.

But, to your point, if someone comes in and says, 'Oh, this is a quick way to get a high-paying career in cybersecurity,' then we have to obviously temper that. And it does happen.

Last question: What activity on your platform has stood out to you lately?

Ellis: There's a real shift toward people scaling up in IoT. We have more customers coming onboard to test IoT. I think the issue of IoT security and awareness around the fact that it's something that should actually be addressed is in a far better state now than it was when IoT first kicked off years ago.

And the same thing that happened in web and mobile and automotive is happening in IoT. With IoT, it was 'We don't have the people [for security testing]. Okay, where are we going to get them?' I think the crowd is reacting to that opportunity now and starting to dig into the testing for IoT.

And here's the thing with IoT security: For starters, bugs at the silicon level or at a hardcoded level are probably out there, but the cost of finding them, versus the value of having them [reported], hasn't yet justified the effort.

That's usually not what people are talking about when they're talking about IoT bugs. It's usually either bugs that are CVEs [Common Vulnerabilities and Exposures] in the supply chain software that forms the operating system or bugs that are in the bespoke stuff that sits on top. And, usually, both of those things can be flashed and changed.

We're not yet at the point where you've got a common issue that you're never able to change. I assume that will happen at some point, but hopefully, by the time we get there, people will be designing with security in mind in the first place, and all that older stuff will be at end of life anyway.
