Given Enough Money, All Bugs Are Shallow

And that, in a nutshell, is why security is so difficult. If you spend too many of your resources on security, especially at the beginning, when you're building the framework your product will be built upon for years, you get killed by competitors who shipped more features and spent less time on security.

You go under, and it's only later that there's a reckoning. Even then, there are lots of insecure programs that never get exploited. The calculus can end up being: do you court near-certain bankruptcy now by prioritizing security over features, or do you risk bankruptcy later by putting security on the back burner and hoping you don't get hit by an exploit?

The only way out of the conundrum is to make security cheaper, and the only way for that to happen is to build best practices into the development process, the language, or the class libraries. It won't solve the problem, but if there are well-publicized, standard ways to approach the areas where most security problems show up, you can cut down the number of problems. Sort of like cutting down the number of sorting mistakes by making writing one's own sorting routine redundant 98% of the time.
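For instance (a minimal sketch; the table and the input are made up for illustration), compare hand-gluing SQL strings with the standard parameterized form that Python's built-in sqlite3 module has shipped for years:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "Robert'); DROP TABLE users;--"

# The hand-rolled way: string concatenation, the classic SQL injection hole.
# conn.execute("INSERT INTO users VALUES ('" + user_input + "')")

# The well-publicized, standard way: a placeholder, so the library does the quoting.
conn.execute("INSERT INTO users VALUES (?)", (user_input,))

print(conn.execute("SELECT name FROM users").fetchone())
```

Nobody has to remember the escaping rules, the same way nobody has to remember how to write a merge sort.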


This also leads to controversy:

"The larger the company, and the bigger the budget, the less defensible it is not to pay something, though."

“Money makes security bugs go underground”

That comment was so funny it hurt my feelings.

So your logic is that if people pay coders to find bugs, coders will hide the bugs for the bad guys, because the bad guys might pay more. So your answer is to… have the good guys pay nothing.

What? Where’s the logic in that? (Hint: There is none.)

In no universe would paying bounties to reveal bugs incentivize coders to hide them more heavily, no matter how much twisted rationalization you try to bring to the table.

Aren’t most security bugs in specific categories?

  1. Risky compromises – servers stayed exposed to attacks like POODLE because we (the host admin, the project manager, the visionary, and the CYA grunts) just couldn't turn SSLv3 off, even knowing it was old and compromised, on the 1-in-10,000 chance a visitor was connecting with an obsolete browser or some mobile app from 2003. The invented notion that nailing down security might result in Someone Important getting an error? It's the same well-intentioned idealism that floods SO with questions on how to get this simple, nice, elegant style or script to work seamlessly in IE8.

  2. Open doors and weak locks – no exotic exploits, just overlooked points of weakness. I don't need to know whether your app has integrity checking or bad separation of concerns; I just need to try a generic buffer overflow attack, or look for potential confused deputies and see if they let me exploit them.

  3. The weird exploits that rely on intimate knowledge of a system, and on how some flaw could wreak havoc if that soft spot cascades into the other components exposed once the wall is breached (e.g., VENOM).

It seems like for the first scenario, the goal should be to ditch the parts we know are weak rather than keep them around “just in case” until someone finally finds the mother of all exploits through the weak doors we left open (something like the sketch below).
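A minimal sketch of that posture using Python's ssl module (recent OpenSSL builds drop SSLv3 entirely, so the exact version floor here is illustrative; the point is refusing known-weak protocols outright):

```python
import ssl

# Build a server context that refuses the protocols we already know are broken,
# rather than keeping them around "just in case" for the 1-in-10,000 visitor.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # SSLv3, TLS 1.0, and TLS 1.1 simply aren't offered

print(ctx.minimum_version)  # TLSVersion.TLSv1_2
```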

For the second case, automated test attacks that emulate real-world thinking are, as mentioned, not likely to catch enough. But basic unit testing is a very reasonable strategy: is this memory buffer at risk of overflow? Let me overflow it and check.
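For example (a sketch; store_name and MAX_NAME are hypothetical stand-ins for whatever fixed-size field your code actually writes into):

```python
import unittest

MAX_NAME = 64  # hypothetical fixed field size


def store_name(buf: bytearray, name: bytes) -> None:
    """Write a name into a fixed-size buffer, rejecting oversized input outright."""
    if len(name) > MAX_NAME:
        raise ValueError("name exceeds fixed field size")
    buf[: len(name)] = name


class OverflowTest(unittest.TestCase):
    def test_oversized_input_is_rejected(self):
        buf = bytearray(MAX_NAME)
        # "Let me overflow it and check": shove in one byte too many on purpose.
        with self.assertRaises(ValueError):
            store_name(buf, b"A" * (MAX_NAME + 1))


if __name__ == "__main__":
    unittest.main()
```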

For the weird ones, you have to subscribe to the adage: “if you want to find your missing baby, call a bounty hunter. If you want to find a donut shop, call a cop.” The general issue I’ve noticed with these vulnerabilities is the time-bomb effect they have: “We discovered that the virtual floppy drive no one ever noticed before has a flaw that allows you to hijack the host machine. Let’s see who’s faster: you and everyone with a VM network trying to patch this, or the guy who figures out the worm that can hijack the entire network with only this tidbit.”

Whether we pay for bug hunts or they get done in Vegas for fun and sport, the net result is a one-to-six-month panic spent trying to fix the issue and auditing any possible existing damage. Paying at least gives you a chance to get the panic over with sooner rather than later.

Though there are already potential (illegal) monetary rewards for finding bugs, creating legitimate monetary bounties for them could easily backfire and encourage buggy patch submissions.

When Vietnam was a French colony, the government offered a bounty on rats as a means of pest control. Rather than reducing the number of rats in Hanoi, it caused enterprising individuals to start farming rats.
( https://en.wikipedia.org/wiki/Cobra_effect )

The same thing could easily happen with bugs.

Interesting argument that the black market for bugs is never going to pay as much as the bug bounties: