Confessions of a bug bounty program manager

Adversary Academy Research
Feb 17, 2023

In my previous article I wrote about my experiences as a top ranked bug bounty hunter. In this article I will write about my experiences on the other side of the fence triaging bug bounty program submissions. This article will hopefully serve to highlight some of the traps that exist in the bug bounty space.

Hopefully my dual roles in the bug bounty space will give researchers insight into what a program manager might be looking for when receiving a bug bounty submission. I also hope to highlight some of the challenges I see on the customer or client side of the bug bounty space.

The bug bounty program I supported was managed by Bugcrowd. Other platforms may have their own unique set of challenges, but I would expect the common challenges to be much the same across the majority of the bug bounty space.

Looking back from a value perspective, I think that the highest value we received was from two unique events happening in our program.

The first event was the initial program launch. New programs usually attract quite a bit of attention as researchers rush to find shallow and easily monetized bugs. As a result of this attention, the initial "penetration test" or targeted test phase of the bug bounty program revealed a decent number of low to medium severity findings. If a company were looking to replace a traditional point-in-time penetration test and still be able to provide auditors with a "pentest report," this initial launch phase would be a reasonable substitute for a traditional pentest vendor, albeit at a higher price point.

Quick! find XSS!

However, once the initial burst of activity was over, the program quickly devolved into an extended period of low-value or duplicate bugs. Something a company might want to consider is the amount of time it takes internal staff to triage and review bugs that are essentially useless. During this time, I found myself frustrated with the program. I can't blame the researchers, however; as I mentioned in my previous article, the economics of most bug bounty programs incentivize researchers to target easily automated bugs.

The second event came quite a bit later in the program's runtime, when we finally had researchers submit high-impact bugs. These bugs demonstrated the value of having access to a large researcher pool. As I mentioned before, instead of chasing shallow, easily automated bugs, some researchers focus on a specific class of vulnerability. In our case, we were able to attract the attention of a researcher who was familiar with our tech stack.

Bug Bounty! Great Success!

However, one thing to consider is that the average price of a yearly contract with a bug bounty vendor is about $100,000. So, while the high-impact vulnerabilities were valuable, I still have difficulty matching the value extracted to the price paid per year. As usual, your mileage may vary with a program like this, but I would expect the number of high and critical vulnerabilities to trail off year over year. It may be that the first year or two of a bug bounty program represents great value for your organization, while in the following years the value trails off in correlation with the number of high and critical submissions.

So, what can someone do to ensure their program is successful?

If it were up to me, these are the questions that I would ask a bug bounty vendor.

Do you have a certain number of researchers who are skilled in my tech stack?

A bounty vendor may have statistics on the number of researchers in their program who are skilled at, say, API testing. But do they have details on the percentage of researchers who are skilled at finding Java deserialization vulnerabilities? Those types of metrics are useful for someone who knows their own tech stack.

If they can’t answer this question with explicit detail, you may want to consider other options!

Can I invite or incentivize certain researchers?

If someone is finding bugs in your platform, you may want to ask them and incentivize them to spend more time looking at your software.

Realize that most bug bounty programs do not have exclusive access to researchers. Most researchers are looking at programs across multiple platforms. So, if you can get a better price with one vendor versus another it may be in your best interest to go with that vendor.

Pay an above-average rate for high and critical vulnerabilities.

Really, we are trying to reward researchers for their time and disincentivize low-impact submissions. If it were up to me, I would pay a below-average payout for low and medium severity bugs and an above-average payout for high and critical bugs. In the screenshot below you can see Bugcrowd's payment trends by severity. For P1 severity bugs, the average payout is $1,000 to about $5,000. I would probably start P1 payouts at $5,000 to $10,000 or more, depending on the overall impact to the organization.

Make your P1 and P2 payouts above the average line, P4 and P5 below average

Think about it this way: if you pay Bugcrowd $100,000 for one year of a bug bounty program, and you only get one P1 bug because your payout was at or below the average, that bug essentially cost you $100k, yet the researcher only gets $1,000 or $2,000 for their work. If instead you set the payout to $10,000 and you get ten P1 bugs, you are paying the market rate for experts and significantly driving down your overall attack surface. In my mind that is the definition of a win-win: researchers are getting paid for their time, and you are getting a better return on investment.
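The cost-per-bug arithmetic above can be sketched as a quick back-of-the-envelope calculation. The figures below are illustrative assumptions from this article's example, not actual vendor pricing:

```python
# Back-of-the-envelope cost-per-P1 comparison for two payout strategies.
# All figures are illustrative assumptions, not real Bugcrowd pricing.

def cost_per_p1(platform_fee: int, p1_payout: int, p1_bugs_found: int) -> float:
    """Total program cost (platform fee plus payouts) divided by P1 bugs received."""
    total_cost = platform_fee + p1_payout * p1_bugs_found
    return total_cost / p1_bugs_found

# Scenario 1: at-or-below-average payout attracts a single P1 submission.
low_effort = cost_per_p1(platform_fee=100_000, p1_payout=2_000, p1_bugs_found=1)

# Scenario 2: above-average payout attracts ten P1 submissions.
high_effort = cost_per_p1(platform_fee=100_000, p1_payout=10_000, p1_bugs_found=10)

print(f"Low payout:  ${low_effort:,.0f} per P1 bug")   # $102,000 per bug
print(f"High payout: ${high_effort:,.0f} per P1 bug")  # $20,000 per bug
```

Even though the high-payout scenario costs twice as much in total, the effective price per critical finding drops by roughly a factor of five.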

Are there any other options?

Adversary Academy offers targeted vulnerability research as part of its pentest services. This addresses the traditional limitation of penetration tests being point-in-time assessments and brings in the perspective and capabilities of an advanced, well-funded adversary. In this case you are paying for sustained, focused research targeting your enterprise's technology.

Another option would be a partnership with a responsible vulnerability acquisition platform like ZDI. For some Pwn2Own categories, you can sponsor or support the event and your software or hardware will be on the list of targets for the competition. Tesla does this every year.
