What you need to know before you start your bug bounty program

Tips and tricks on what to expect from a bug bounty program in your organization: how the program will help your security posture, and how to take care of the response team that will be on the front lines.


Organizations use bug bounty programs to pay hackers who discover vulnerabilities in their websites. The idea is to incentivize researchers to disclose vulnerabilities to the organization for payment instead of selling them on the black market.

To me, they are one of the features that make information security unique. Nobody pays individuals who notice a missing fire extinguisher in a shopping mall!

What's happening to warrant this practice, then? Hackers' defining personality trait pushes them to "try" things out. "What if I did this?" Stories of hackers who tried to do "the right thing" by disclosing a flaw, only to get sued, justify the need for bug bounties.

I've been part of a bug bounty program for the past four years, to great success. This week, I share my experience with anyone looking to start their own program: what to consider before and during the program, the benefits, the struggles, and so on.

Fear not, hackers, I will also give you advice on how to communicate with companies successfully!


Bug bounties work

I don't want to ignite the "bug bounty vs. traditional penetration test" debate here. In the end, the two approaches fill different needs. What's certain is that a bug bounty does unleash hundreds of specialists of various skill levels on your online assets. I promise you: they will find things fast.

The main advantage of bug bounties is their continuity. Your developers release a feature, and the next day hackers will flag bugs in it.

When I say they work, I mean it both ways. Not only do hackers find flaws fast, but your organization gets put on the spot. To maintain your reputation, your responders must validate the findings fast, and your developers must address them in a reasonable timeframe. Everybody is on notice.

Including the bosses.

See, it's one thing to get a report from your internal pentest team. Management knows the team rocks. Of course they will leave no stone unturned, and they know the environment! But a bug bounty participant? They're just random guys and gals in their basements, for crying out loud!

Bug bounties work as a proxy for "what's going on in the wild". If your organization's IT management is technical enough, I recommend you add them to the reports feed as watchers.


Restrain your grand ambitions

Bug bounties are cool. That can become a problem. After a few years, cool becomes cringe. I mean, I was told GIFs were "out".

The problem is that a continuous program is tough on your internal teams. The hype fades, but the tasks remain. The manager who ordered the program may leave the organization. Reorgs happen all the time. Suddenly you want to pull the plug, only to realize you promised the program to a customer or a partner. Or maybe your CTO bragged about it in a magazine, so it would look foolish to shut it down.

So, how do you avoid making your bug bounty program irrelevant after a few months?

Most bug bounty platforms offer a "limited" bounty option: 30, 60, 90 days, etc. This works well to test the waters before you commit to a full-blown program.

A limited, point-in-time exercise can therefore give you the advantages of a bug bounty without the stress of a continuous one.

For the same reason, I also recommend a private bug bounty program. A public program allows the whole world to test your systems. While this is cool, it also means a lot more people for your teams to deal with.

A private program can either be invite-only or require a minimum "reputation" score from each hacker. By restricting the program to experienced hunters, you protect your teams from the noise.

Speaking of which...


It's a stressful job for responders

Imagine your bug bounty program in three years. The cool factor is gone. Many flaws took months to get fixed. Management only sees the tens of thousands of dollars you're paying out. Your defence team is drawing straws to decide who gets to deal with the findings each week...

And you may be wondering: "How did we get here?" Well, look at the stressors.

  • Many reports are amateur findings. There is an annoying trend in the hacking world called "thinking you are an ethical hacker because you've watched a few YouTube videos". This is how you get people running scanners, copy-pasting the output, and flagging findings as "high severity" because their online class said it was a major security flaw to "leak" that you run Windows Server.
  • Hackers don't read the rules of engagement or the scope of the program. Half of the reports you'll get will be either out of scope or about something explicitly documented as "a feature, not a bug". This gets super annoying.
  • Most reports are poorly written. Did ChatGPT fix that? Nope. Sadly, a majority of hunters still butcher English grammar. Yes, many finders are not native English speakers. But once again, with generative AI, there is no longer an excuse.
  • Negotiating the price of a bounty will feel pointless. I can't imagine how this must feel at giant corporations like Microsoft. It's common practice to "negotiate" with the hunters over the severity of a bug. Hackers will always argue for a higher tier to maximize their earnings. Your internal teams understand the environment better, so they have more information to put the vulnerability into context. But the back and forth over a medium versus a low can be over, say, $100. For a hacker in Pakistan, that $100 matters a great deal. But for your corporation? Yeah...
  • Some researchers feel entitled. The most controversial opinion on bug bounties is that hackers provide "free quality assurance and security testing" that is only paid for when they uncover a flaw. Some finders keep this in mind and have little patience with your internal teams.
  • Your teams will go nuts chasing developers. I kept the worst for last. Every developer's knee-jerk reaction to a security finding is to try to dismiss it so they won't have to work on it. I dare you to find another reaction anywhere! You won't! Over time, this only gets harder. Inevitably, your teams will bring a developer a finding that turns out to be stupid (the finding, not the developer). And just like that, you've burned your reputation with that team. You best believe the next finding will have to be airtight! Worst of all, your team will be caught between a rock and a hard place: waiting for the devs to validate a flaw while the hackers ask for their bounty... stuck in the middle of two angry parties.

This is what you are getting yourself into! Make sure your team is motivated before you go down this path.


Bug bounties have become weirdly industrialized

You may think of hackers as software engineers who enjoy finding bugs on websites in their spare time, bringing their full skill set to the game.

In reality, a "game" has been built around bug bounties. It goes like this:

  • A hacker finds a flaw in one bug bounty;
  • They automate the discovery and the report creation;
  • They launch the same finding against every other bounty they can reach (a rough sketch of this workflow follows the list).
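
To make that "game" concrete, here is a minimal, hypothetical sketch of what such an industrialized workflow can look like: one shallow check (a missing security header, in this example) run against a list of targets, with a templated report printed for each hit. The target list, header choice, and report wording are all invented for illustration.

    # Hypothetical sketch of an "industrialized" bug bounty workflow:
    # one shallow check, run against every program in a list, with a
    # templated report per hit. Targets and wording are placeholders.
    import urllib.request

    TARGETS = [
        "https://example.com",  # placeholder program #1
        "https://example.org",  # placeholder program #2
    ]

    REPORT_TEMPLATE = (
        "Title: Missing {header} header on {target}\n"
        "Severity: Medium (suggested)\n"
        "Description: The response from {target} does not include the "
        "{header} header.\n"
    )

    def header_missing(target: str, header: str = "Content-Security-Policy") -> bool:
        """Return True if the given response header is absent."""
        with urllib.request.urlopen(target, timeout=10) as resp:
            return resp.headers.get(header) is None

    if __name__ == "__main__":
        for target in TARGETS:
            try:
                if header_missing(target):
                    # In the "volume game", this templated report is submitted
                    # as-is to every program, with no chaining or added context.
                    print(REPORT_TEMPLATE.format(
                        header="Content-Security-Policy", target=target))
            except OSError as exc:
                print(f"Skipping {target}: {exc}")

Notice how little the script needs to know about any individual application; that is exactly why these findings rarely resemble a real attack path.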

Often, when we looked at a hacker's profile, we realized they were simply reporting the same security flaw everywhere.

Don't get me wrong, this "industrialization" is not a problem per se. It does skew the overall picture of your application, though. Since these findings come from automation, they don't require an understanding of your web app or of your environment. I would add that they don't mimic actual attackers.

Often, you end up wishing the hacker had "held on" to the vulnerability and "chained" it with others to create an actual attack path. But the "automators" deal in volume, and they'll just collect the easy money.


Conclusion

Whatever your organization does, it must take into account that a bug bounty carries costs beyond the actual bounties (and the platform you use). It's an essential last line of defence for discovering flaws before the bad guys do, but it may not paint an accurate picture of the attacks you want to prevent. The biggest cost will fall on your internal teams, who will be put under stress, especially in the long term. You must ensure you have buy-in from your teams and give proper recognition to the defence team that deals with the "slush pile" of reports.

Did you take part in a bug bounty? Are you a researcher who wants to share your perspective? Tell us in the comments!



🥳
Thank you for reading!

If you like my content, subscribe to the newsletter with the form below.

Cheers,
Pierre-Paul