Discovering vulnerabilities in your own software is a full-time task. So, what if you could get crowdsourced help from experienced security researchers? Bug bounty programs do just that: provide organizations a helping hand in discovering bugs and vulnerabilities before bad actors can.
To help organizations develop their own bug bounty program, author and security researcher John Jackson wrote Corporate Cybersecurity: Identifying Risks and the Bug Bounty Program.
"I wanted to make sure everyone understood what goes on in a program and how things should be run," he said. "I was getting frustrated when program managers wouldn't understand bugs and, on the flip side, saw how communication went between hackers and employees."
Here, Jackson provides tips for creating a bug bounty program, explains how it differs from a vulnerability disclosure program (VDP), discusses issues to consider when developing a bug bounty program vs. a VDP, and more.
Editor's note: The following interview has been edited for conciseness and clarity.
Who would benefit from reading Corporate Cybersecurity?
John Jackson: Organizations that either have vulnerability disclosure programs and are looking to get more into the paid disclosure space of bug bounty programs or organizations that are looking to start small with bug bounty programs. Specific roles that would benefit include application security engineers, bug bounty program managers and C-level people who need to understand how a bug bounty fits into the organization. It doesn't hurt to have hackers read the book, either.
Are there more bug bounty programs or VDPs today?
Jackson: I would say there are definitely more vulnerability disclosure programs. Surprisingly, when I was doing research for this book, I learned a lot of wealthy companies out there, such as Ford, a Fortune 10 company, and Adobe, have VDPs but not bug bounty programs.
What is the fallout from having a VDP instead of a bug bounty program?
Jackson: There are pros and cons to running a VDP versus running a bug bounty program. Vulnerability disclosure programs suffer from a couple things -- one being resources. Because you're not responsible for paying a hacker and you expect them to report out of good faith, the expectation on the program side of the house is that anything discovered is put on the back burner. A lot of the times, when you report to a vulnerability disclosure program, there could be a critical bug that goes unresolved for three months, four months, a year. That isn't uncommon; I've seen it before. Additionally, communication between the VDP and hacker is hit or miss. Communication between a bug bounty program and hacker is more efficient because it's operated by a middleman, aka the triager.
However, the one thing you don't get with bug bounty programs is the full disclosure experience because, a lot of the times, what the company is paying for is for you to report under NDA [nondisclosure agreement] so it can fix the bug and move on. A lot of them do not allow public disclosures. The tradeoff between VDPs and bug bounty programs is the difference between money and culture.
What advice do you have for companies considering starting a VDP or bug bounty program?
Jackson: There are a total of four different program types out there: public or private bug bounty and public or private VDP. The difference between them is cost and visibility.
My recommendation would be to start with a small, private bug bounty program and expand the scope quarterly. Always be looking to expand the scope until you have all your major assets covered. If you start with a wild-card scope and you're new to bug bounty programs, you're just going to get clobbered with submissions. Usually, the bug bounty platform providers [e.g., HackerOne or Bugcrowd] allow you to give them a target number of hackers to invite per month or quarter, and you establish a cadence with them.
The reason I didn't suggest a VDP is that a lot of companies get trapped in a loop. They have a VDP, and whether it's public or private, they're getting bug submissions constantly. They start to say, 'Why pay for bugs? Why pay for vulnerabilities?' They miss the bigger picture, which is they're not going to get the thoroughness that comes from hackers motivated to make money through a bug bounty program.
What are some tips for companies launching a bug bounty program?
Jackson: I have a few different immediate tips that come to mind. Make sure all aspects of your security are covered. For instance, if you rely heavily on web applications, have things like load balancers, web application firewalls, and endpoint detection and response [EDR] tooling. With application coverage and EDR out of the way, you also want to have legal involved to ensure you're covered in case something happens in the bug bounty program. Liability is important.
With security controls in place, the next step is to do things like internal and external penetration testing. Cover everything by having some baseline level of security to make sure you don't go broke immediately when you start the bug bounty program.
Also, make sure you have application security engineers ready to handle the workload of submissions a bug bounty program brings. Have a program manager to triage submissions. You need someone who understands the seriousness of submitted vulnerabilities. When I was a program manager, I watched bugs get closed out of reports; I'd look at them and say, 'No, this is legit. The bug seems pretty serious, and we should look at it further.' From there, I'd pull it into an open state and triage it. Program managers need to do their due diligence. Have program managers who at least understand the different types of vulnerabilities, especially in web apps.
Another tip is to launch a private program first. This way, you can start with a smaller amount of target assets that are in scope. From there, expand the scope quarter by quarter until everything is covered.
What are some of the biggest issues that arise when companies create a bug bounty program?
Jackson: Confusion and paranoia. With bug bounty programs, you'll have a higher volume of pen testing traffic because hackers are continuously testing your application. This can be frustrating because you want to determine whether this traffic is criminals attempting to get in or just hackers testing your systems. But that's just overthinking things. Don't try to determine if it's a security researcher or a criminal; just try to stop the hacker. If someone gets onto your network, can you stop them? How are you going to stop them? You have to go through the chain of custody process -- no matter what.
To make things easier, program managers can email security researchers through their bug bounty platform and ask if anyone has been testing something they've seen increased traffic to. Simply ask that they contact you if they've been testing web shells, etc., and then you can identify traffic by source IPs. Another option to know whether it's a security researcher or an attacker is to provide a VPN testers must use.
Having a SIEM is crucial if you're going to run a bug bounty program. It should ingest all the logs of any asset you want tested. You could look at source IPs to see who is doing what.
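The VPN-plus-source-IP approach Jackson describes can be sketched as a simple log-triage filter. This is a minimal illustration, not part of the interview: the VPN range and sample addresses below are hypothetical placeholders (RFC 5737 documentation ranges), and a real program would substitute the CIDR block actually issued to its researchers.

```python
import ipaddress

# Hypothetical VPN range assigned to bug bounty researchers.
# Substitute the range your program actually provisions.
RESEARCHER_VPN = ipaddress.ip_network("203.0.113.0/24")

def classify_source(ip: str) -> str:
    """Label a source IP pulled from SIEM logs as researcher or unknown."""
    addr = ipaddress.ip_address(ip)
    return "researcher" if addr in RESEARCHER_VPN else "unknown"

# Example: triage a batch of source IPs from web server logs.
sources = ["203.0.113.45", "198.51.100.7"]
labels = {ip: classify_source(ip) for ip in sources}
```

Anything labeled "unknown" would then go through the normal incident response process Jackson mentions, rather than being assumed to be friendly testing.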
How do you recommend security researchers start out working on bug bounties?
Jackson: When you're getting started as a researcher, you're trying to figure out everything and work your way around things. Don't be discouraged by bigger companies -- for example, Hulu or Google -- that have a lot of assets. Having more assets means there's a good chance you'll find more vulnerabilities. Researchers too often will target medium or small public bug bounty programs when starting out. That is the worst thing you can do -- you want to find the biggest public programs. With those, a lot of hackers are probably running automated scans and not getting hands-on with the targets. If you want to get more private invites, get hands-on with specific applications and network assets, and test them thoroughly.
The other thing researchers need to think about is the impact of the different bug types. Also, you need to be thorough in reports. You have to understand how to maximize impact and not get frustrated. For example, you may discover remote code execution is possible, but it might just be some straggler developer server that's not really network-connected at all, which means the impact is much lower.
About the author
John Jackson is a senior offensive security consultant and founder of Sakura Samurai 桜の侍, a hacking group dedicated to legal hacking. He is best known for multiple CVE and government/enterprise security research contributions. Jackson has contributed to the threat and vulnerability space, disclosing several pieces of cyber vulnerability research and assisting in resolution for the greater good. He continues to work on several projects and collaborates with other researchers to identify major cyber vulnerabilities.