
Bugcrowd CTO talks hacker feedback, vulnerability disclosure

Bugcrowd CTO Casey Ellis said the company's new penetration testing service helps establish the company as more than the bug bounty platform it is publicly perceived to be.

SAN FRANCISCO -- Despite being one of the biggest names in the space, Bugcrowd didn't start as a bug bounty platform.

Bugcrowd founder and CTO Casey Ellis told TechTarget Editorial at RSA Conference 2023 that Bugcrowd was originally founded in 2011 not as a bug bounty platform but rather as a crowdsourced penetration testing company. It is perhaps fitting, then, that Bugcrowd announced last week a new "Penetration Testing as a Service" (PTaaS) offering that lets clients purchase and manage crowdsourced penetration tests without the need for a sales call.

The service exists alongside other features of Bugcrowd's Security Knowledge Platform, such as vulnerability disclosure, bug bounties and attack surface management.

"[The PTaaS offering] kind of establishes what we're doing as a true multi-solution platform, which is what we have been this entire time," Ellis said. "It's just that 'bug bounty' is so loud as a concept that we kind of get bracketed into just doing that. Like, what we've actually built out does a whole bunch of different things."

In addition to the new offering, Ellis discussed Bugcrowd's bug bounty platform as well as the broader landscape, including hacker feedback and vulnerability disclosure.

Editor's note: This interview was edited for clarity and length.

What have you been up to at RSA?


Casey Ellis: We're running around, meeting with people, learning stuff. We've got events coming up -- all those different things. It's good to be back into the conference swing of things. It's tiring, obviously, but post-COVID, getting to meet people face to face -- that's cool. We're taking advantage of that.

As for the PTaaS offering, I started Bugcrowd from running a pen test company that was the predecessor to Bugcrowd. Everyone talks about a skill shortage in cybersecurity, which expresses itself in the pen test market. You get this supply-demand imbalance, which means people aren't available or they're overpriced. There's also a lot of friction, I think, around procurement and scoping.

What we wanted to do was eliminate more of those problems. How can we connect through our platform and get the right talent at the right time to whatever requirements the customer has? We've got 300,000 [hackers] signed up on the platform, and the sun's always up somewhere, so our ability to scale into that problem means that we can do it fast. I think with traditional pen testing, oftentimes, you're waiting six weeks for the bench to free up. We don't have that problem.

Has Bugcrowd always been diversified beyond bug bounties?

Ellis: We were doing crowdsourced pen testing before we were ever doing what the market now considers to be "bug bounty." It was our first product. The reason we went so heavy on putting "bug" in the name Bugcrowd is that in 2012, 11 years ago, the market didn't really have a concept of a hacker being a good person. That was the first problem we had to overcome.

From an educational marketing standpoint, the easiest way to do that was to point to someone like Facebook and say, 'Hey, look at that bug bounty program,' because they're already doing that stuff. If they're soliciting the help of these good faith hackers out there on the internet, maybe you can do that too. That was the main tool we used to change the perception of hackers from being inherently evil to recognizing that there are also a lot of people who use hacking for good.

Switching to the bug bounty side of things, researchers have -- for years -- shared their criticisms of the bug bounty space: silent patching, poor communication, bugs being downgraded, and low payouts. That's not specific to Bugcrowd, and there are two sides to every story. But how does Bugcrowd approach these industry-wide criticisms?

Ellis: The first thing is to listen. If a researcher mentions they've got a problem of some sort, and then suddenly other researchers jump in and start [going to your customer support page] asking what the researcher is talking about, it creates overhead. That part's annoying, but also, maybe they're right. The issue might be something that we need to actually listen to and use to adjust how we do what we do to improve the overall platform and make things safer and more predictable for the hackers.

If the hackers are happy and they feel like they can trust us and like the company is on their side, then they're engaged. If they're engaged, then they're doing the things that are valuable for the people that run programs on our platform. The net at the end of the day is the hacker gets paid, the company gets what they want and we get paid to make it all happen.

Security, as an industry, is built on unintended consequences. The fact that no one has fully figured it out yet is inherent to what we all do. The assumption that we've always operated off is, 'Let's absolutely bend over backwards to do the best we can, but also assume there are things that we're going to learn -- things that will get broken in our own model -- because our whole company is built on breaking stuff.'

That might sound like a cop-out answer. But there's evidence that we've got a reputation for that on the customer and the researcher side. If we end up in a position where I think Bugcrowd could have done something better -- improved the process, improved the policy, handled triage in a different way or paid out differently -- we'll own up to that. Then we'll learn and work to not do it again.

One of the issues wrapped up in these researcher criticisms is transparency and non-disclosure agreements [NDAs]. Both Bugcrowd and HackerOne allow clients to establish programs where researchers aren't credited and vulnerabilities are not made public -- in some cases, indefinitely. This is controversial because researchers want to be credited, and they want the vulnerability to be disclosed. How do you approach the issue of transparency in bug disclosure?

Ellis: I think, in context, where this gets wrapped around the axle pretty frequently is what we were talking about before -- how there are expectations around bug bounties that people then apply to everything platforms do. Take the work we do under NDA for the Department of Defense or the Air Force -- high-security places that don't want people talking about what's been done, in the same way as if they had hired a pen test consultancy. The big thing with NDAs is that it's okay as long as that's clearly called out as the nature of the engagement.

It's clear that if you're opting into this program, this is what you've agreed to. If you don't like that, you don't have to opt in -- that's fine. But there are people that are okay with that, and they'll get the work done. That gets conflated with the principles of coordinated vulnerability disclosure [CVD], which I've been an ardent champion of for a long time.

I think the best, most mature, transparent, beneficial practice is for a company to say, 'Hey, humans are awesome, but they're not perfect. Sometimes we make mistakes. Sometimes those mistakes create vulnerabilities. That's not because we suck. It's because we're human. What we've done is taken the step to acknowledge the truth of that and then the additional step of saying we need help. If we miss something, we're going to do everything we can to prevent that and to avoid vulnerabilities. But we're going to also assume that somebody gets through. If you find something, you tell us.' At the end of the day, there's accountability on that process as well with the whole idea of a proactive CVD deadline.

I think not a lot of companies do that. What I'd like to see happen is for disclosure to be treated and thought about in that way. It's not airing dirty laundry. It's not because you've failed in a vital sense. It's because building software is hard. That's a problem everyone has. There are all sorts of things that I think are pushing the conversation in this direction. But in my mind, it's still early on.

I think the researcher and transparency issues speak to a larger push and pull that inevitably occurs between researchers, the clients running the program and Bugcrowd. How do you balance these various forces?

Ellis: You're right, there's a tension. We've got three parties in play: Bugcrowd, the hackers and the customers. From Bugcrowd's perspective, what we do is try to promote and reward best practice, so it's more of a carrot than a stick approach. But there's stick. We have fired customers for mistreatment of researchers in the same way that we will ban researchers from the platform if they act in bad faith. But there's a balancing act to that, and I think it's about encouraging best practice and then trying to drive a desire to do that as opposed to enforcing it.

Alexander Culafi is a writer, journalist and podcaster based in Boston.
