The Depressing Effect of Bug Bounties
Why we need to focus on capacity building
Watching the InfoSec industry change over the last few years leaves little doubt in my mind that the majority of players jump onto whatever is hot, whether through a re-branding campaign, an acquisition, or a legitimate shift. At RSA this spring, the buzzword du jour was threat intelligence; of late, however, the hype machine has been focused on bug bounties.
Bug bounties are the latest "solution" to the ever-divisive debate over vulnerability disclosure, a debate hashed and rehashed by everyone from the folks on /r/netsec to the highest levels of government. As a proposed middle ground, bug bounties allow vulnerability researchers to be compensated for their time and effort while giving the software companies responsible for the vulnerabilities both the information needed to fix the weaknesses and a grace period to implement fixes before the reporter may disclose publicly.
While they have been touted as a solution to the disclosure debate and as the "ethical" thing to do (typically by the for-profit firms providing them as a service), there is another perspective I'd like to share. Let me try to make my case via an analogy: the side effects of humanitarian food aid. At first glance, shipping food to poor countries is appealing: if the population is not worried about famine or hunger, it can focus on bettering its infrastructure and education. However, unless this aid is provided with the utmost care and a focus on building capacity, it can have unintended consequences. As found in this FAO report, the depressing effect of aid on the market for food forces out local providers or incentivizes them to downsize; when an interruption in aid then occurs, the local market is worse off than before. Bug bounties can exert a similar depressing effect on the market for secure software and end up harming the end user.
By artificially deflating the cost of finding and fixing bugs in operational or shipped products through monopolistic means, bug bounties remove the economic incentive to develop better software by integrating security-aware architects into the SDLC. Bounty programs hold a monopoly on setting prices (while preaching the evils of selling exploits to other buyers on the market), and usually name the payout only after the vulnerability has already been disclosed to them. The chart below, from Software Engineering Economics, shows why companies are, and should be, motivated to fix bugs before shipping to consumers:
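To make that curve concrete alongside the chart, here is a minimal sketch; the phase multipliers and the $500 base cost are illustrative assumptions in the spirit of Boehm's widely cited figures, not values taken from the book:

```python
# Illustrative cost-to-fix multipliers by lifecycle phase. These numbers are
# assumptions chosen to mimic the shape of Boehm's curve, not his exact data.
COST_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "implementation": 10,
    "testing": 20,
    "production": 100,
}

BASE_COST = 500  # hypothetical cost (USD) to fix a bug caught at requirements time

for phase, multiplier in COST_MULTIPLIER.items():
    print(f"{phase:>15}: ${BASE_COST * multiplier:>7,}")
```

Under these assumed multipliers, a defect that would cost $500 to fix during requirements costs $50,000 once it reaches production; the chart makes the same point graphically.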
Bug bounties let a company set its own value on bugs, harming consumers by deferring security fixes until they can be bought for minimal cost instead of investing in "capacity building" that improves the SDLC and produces higher-quality software from the get-go. According to the statistics published by HackerOne (as of 11/13/15), the average payout for a bug found in operational or shipped software is $341.18. That payout also comes with strings attached, the most glaring of which is the company's ability to set the bug's value only after it already has the information needed to fix it (neutering the bug's market value). I know of one researcher who found a serious security bug in a large organization and was promised a tee-shirt (which was never mailed). Additionally, once a bug is committed to a bounty program, the clauses attached to receiving the payout can border on ludicrous: this vulnerability in Slack was reported almost 18 months before it was allowed to be publicly disclosed!
The weakened software published by organizations that now have less incentive to invest in solid QA and architecture will put consumers at risk. When the prices paid by bug bounty programs are compared with this price list (cheeky, I know), the differences are clear:
This massive price disparity results in part from vendors' techniques for insulating themselves from market forces. Since monopolies end up hurting consumers, I argue we must focus our attention on building safer software and mitigating entire classes of attack if we truly want to protect people online.
To end on a positive note, I do want to highlight the valuable work the bug bounty community has done to change the adversarial responses to disclosure that in the past silenced researchers who wanted to improve security. Those aggressive tactics were, in many cases, an attempt to offload the blame for shipping unsafe software onto the person who found the flaw. Applied correctly, bug bounties can offer a welcome incentive to newer researchers, or at least signal that disclosures will not be prosecuted; however, the hype and the economic imbalances that come with them should be carefully considered.
Addendum: In his blog post, Bruce Schneier asks whether vulnerabilities are sparse or common. Thinking about this a bit more, and after some discussion with Wendy Nather, I believe this question greatly affects the role of bug bounties. If bugs are sparse, then there is a "small" number of them to be discovered by a bounty, and eventually the software is "done", or completely safe (and the bounty is no longer needed). If bugs are common, then a bug bounty has little real impact on the security of end users, since the odds that a bug submitted for a bounty overlaps with one an attacker holds are minor; in that case, bug bounties are a distraction from the security of the product. If bounties "dry up" for a vendor or product, that would indicate bugs are sparse, though the endless parade of bugs seems to indicate otherwise...
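To make the overlap argument concrete, here is a rough simulation. It assumes, purely for illustration, that attackers and bounty hunters each sample bugs uniformly and independently from the same fixed pool of latent vulnerabilities; that is a toy model, not an empirical claim:

```python
import random

def attacker_bugs_neutralized(total_bugs, attacker_finds, bounty_finds, trials=10_000):
    """Estimate the fraction of attacker-held bugs that a bounty also fixes.

    Toy model: attackers and bounty hunters each draw bugs uniformly and
    independently from the same pool of `total_bugs` latent vulnerabilities.
    """
    overlap = 0
    for _ in range(trials):
        attacker = set(random.sample(range(total_bugs), attacker_finds))
        bounty = set(random.sample(range(total_bugs), bounty_finds))
        overlap += len(attacker & bounty)
    return overlap / (trials * attacker_finds)

# Sparse bugs: a small pool means the bounty often fixes what attackers hold.
print(attacker_bugs_neutralized(total_bugs=50, attacker_finds=10, bounty_finds=10))    # ~0.20

# Common bugs: the same effort against a huge pool barely touches attackers.
print(attacker_bugs_neutralized(total_bugs=5000, attacker_finds=10, bounty_finds=10))  # ~0.002
```

In the "common" regime, the bounty's expected contribution to user safety collapses toward zero, which is exactly the distraction described above.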