Given Enough Money, All Bugs Are Shallow

That’s a very good point. I think an important part of it would be to change the perception of pride, as it relates to software development. People shouldn’t be proud of writing code, but rather proud of maintaining code.

How to change that perception in that way on a large scale… that needs more thinking about.

I don’t quite agree with the bolded text. I don’t get paid (yet) for any of the research I’ve done. :frowning:


If there is a fundamental architectural issue with the first implementation, then yes, building a rewritten implementation can be a perfectly valid thing to do. It means you do not have to deal with backwards compatibility.

I'd like to move in a slightly different direction, if I may, with a slightly different take on peer review. To me, it's more about the reviewee, not so much the reviewer(s). Which is to say, review should involve both roles: the coder should actively explain their code to the reviewer. As a coder, I really have to be part of the review.

The article and several responses note that the reviewing eyeballs aren't necessarily experts in the problem domain, and certainly not in the particulars of the code under review. That expert is, of course, the person who wrote the code. So it's useful to turn the coder into an effective reviewer of their own work.

So I've learned over the years that the critical point of a code review is that I have to provide a coherent explanation of what I've added, changed, or removed to the reviewer, such that I could convince myself that it makes sense. And often enough, I've found that the bit of code I'm explaining just doesn't do what I meant it to ("… and here I'm checking that the input string isn't empty…"), and as I say those words, I realize that bit of code is buggy.
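A tiny, hypothetical illustration (in Kotlin, with a made-up function) of the kind of mismatch that surfaces while explaining code aloud: the explanation says "not empty", the code only checks "not null".

    // Hypothetical example: the stated intent is "reject empty input",
    // but the code only guards against null, so "" slips straight through.
    fun process(input: String?) {
        // "... and here I'm checking that the input string isn't empty ..."
        if (input != null) {              // actually a null check, not an emptiness check
            println("processing '$input'")
        }
    }

    fun main() {
        process("")   // an empty string is happily processed
    }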

Not to say that there’s no role for a completely independent reviewer, especially when lives depend on correct code, but the easiest & fastest way to find most bugs is to have the coder explain assumptions & code, and have both coder and reviewer(s) check assumptions & code interactively.

Not so easy in an open-source context, I agree. I can only suggest that a contributor should actively recruit reviewers, or that someone build a mechanism to facilitate that (e.g. let people register as reviewers, so that a new commit results in a request for review).

First, on OpenSSL: I still believe that LibreSSL is the better solution. The OpenBSD guys have yet to do us wrong, whereas OpenSSL now shows up repeatedly in security bugs. I've personally been wondering whether LibreSSL is vulnerable to the recent SSLv3 exploits. I don't think it is, because I believe they removed SSLv3 support entirely.

"We all have the same goal: more secure software."

No, the NSA has the goal of having more exploited software. Most companies have the goal of making more money; they don't take security problems seriously.

You know what I think the real problem is, though? Education. Most programmers aren't taught how to mitigate security exploits. In job interviews, you wouldn't believe the number of programmers who can't answer this question:

What is SQL injection? How do you prevent it?

More than 50% of the people we interview can't answer both parts correctly (for the second part I'm just looking for parameterized queries or a framework that uses them; heck, I'm not even sure 50% get the first part right). When I ask about other security-related stuff, they've never even heard of it.
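For the record, a minimal Kotlin/JDBC sketch of what I'm looking for with "parameterized queries" (the users table, the database URL, and the SQLite driver on the classpath are assumptions for the example):

    import java.sql.DriverManager

    fun main() {
        // Assumes a SQLite JDBC driver on the classpath and a hypothetical users table.
        val conn = DriverManager.getConnection("jdbc:sqlite:users.db")
        val userInput = "alice'; DROP TABLE users; --"

        // Vulnerable: string concatenation lets the input rewrite the query.
        // conn.createStatement().executeQuery("SELECT name FROM users WHERE name = '$userInput'")

        // Parameterized query: the input is bound as data, never parsed as SQL.
        val stmt = conn.prepareStatement("SELECT name FROM users WHERE name = ?")
        stmt.setString(1, userInput)
        val rs = stmt.executeQuery()
        while (rs.next()) println(rs.getString("name"))
        conn.close()
    }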

My point is, our web development books don't teach how to avoid SQL injection (let alone harder attacks like CSRF or JavaScript injection), and our C books don't teach bounds checking. How can we expect to improve security when we aren't teaching people how to be secure?


As I pointed out originally, making open source robust and secure is a “public good”. It happens with Linux because you have many sponsors with their own paid personnel submitting patches to the kernel, but the kernel is active. It has to get new drivers, new CPUs, new architectures.

I think it might only cost a few million dollars to “fix” the more static but important infrastructure projects like OpenSSL. Then issue grants or fellowships for uberhackers to sit home and clean up the code. But I doubt Redhat, Ubuntu, the Linux Foundation (since it is not part of the kernel), and the rest of the industry will do something like this. Note that GPG had/has the same problem - most package managers use gpg authentication.

I think I indirectly said that the eyeballs need to be skilled, in that not everyone can refactor code up to the quality plateau; but that means more money to hire enough of the people who can.

Excellent discussion so far. I agree with @prasun that better automated tools would be nice, but I think the community doesn’t pay much attention because the task is so immense, the fraction of security flaws automation can discover is small, and nobody has any illusions about the fact that automation will never discover many classes of exploits.

Anyway, this article about a modern OS X exploit highlights the amazing power of reading the code as it leads to better and deeper exploit development…

Even if the code here is assembly.

I don't think the security of open source software (especially infrastructural software) is a matter of money. Rather, I believe it is mostly a matter of establishing the right governance mechanisms for critical open source software projects. I briefly touched on this issue in this answer of mine on Quora: http://qr.ae/diooK.

Money has always been associated with exploits; the issue was that exploiters were either disclosing publicly, for recognition within the security community only, or being enticed by the money in the black market for 0-days.

A lot of security people sold exploits before bug bounties existed; the difference is that nobody bothered to try to squeeze any money out of corporations. It was a joke.

You're right, though: those with bug bounty programs are definitely scrutinized more heavily, but that's basic economics. It isn't as if there is no incentive to find bugs in critical services that lack specific bounty programs. Back in the day, I would have loved to have found one for the recognition alone. Plus, there are non-specific bug bounty programs these days, like HackerOne's "The Internet" bug bounty, which pays out for critical vulnerabilities in a wide variety of critical resources.

Anyway, the point is, money for exploits will exist whether we want it to or not. The difference is that the bugs will be sold on the black market only, rather than disclosed to companies. I've made some bounty money, and it has made me research things I never would have before, and has generally made me better at researching other things in my spare time.

You have a point that they might be drawing attention away from other services, but those services aren’t flat out ignored. Disclosures look good on resumes. Anyone wanting to make a name will do it.

And sure, this all sounds very opportunistic. I have not once mentioned "what about just helping them out for the good of society!" because, realistically, nobody really gives a crap about that. Nobody in the entire world, not enough to forsake their real-life responsibilities, anyway. What about "helping them because you like the project"? Well, that's another matter. A hobbyist security researcher may do just that; however, liking the project and wanting to contribute out of the goodness of your heart is far less enticing than money.

Honestly though, a lot of security people don't really care about open source projects all that much. It's the thrill of the chase, and of breaking things, that is enticing. Not helping, unlike in open source.

Sure, I know a lot of people who want to contribute to open source security tools which aid in breaking stuff, but that’s very different from considering a disclosure a contribution to an open source project in the same way code contribution is. A lot of us simply do not feel that way, and honestly I think it makes us better at what we do.

We don't have a particular drive to share beyond what gets us recognition for being good at security. We like to be recognized for the things we do, but by other researchers, not necessarily by developers.

It is a far different culture, in my opinion, and I really don’t think what you’re suggesting is going to work.

EDIT: Final word, if somebody tries to “ransom” bugs, post their name on a shame list or report them to the police. They fully deserve it.

There’s a balance between how much formal analysis a language supports and how cumbersome it is to use. The more you specify about your code, the more a computer can verify about it. We’ve seen big changes in the last decade, as type inference and null safety have taken off in modern languages.

  • Type inference: In trendy languages like Swift, Scala, and TypeScript, rather than you telling the compiler what type a variable is, the compiler usually figures it out from how it's initialized. When that variable is returned from a function, the return type of the function is inferred. And the inferences continue as other functions return the result of that function. The result is strict type safety with the ease of a late-binding scripting language.
  • Null safety: Many newer languages discourage or disallow nulls. Scala has the Option wrappers (None and Some) to replace nulls. Groovy, C#, Swift and others have operators for short-circuiting nulls or supplying default values. (A short sketch of both ideas follows this list.)
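A minimal Kotlin sketch of both ideas (the function and values are made up for illustration):

    // Type inference: no type annotations, yet everything is statically typed.
    fun parsePort(raw: String) = raw.toIntOrNull()   // return type inferred as Int?

    fun main() {
        val port = parsePort("8080")   // inferred as Int? from the function's return type
        // Null safety: the nullable value must be handled before use.
        val bound = port ?: 8080       // Elvis operator supplies a default
        println("binding to $bound")
        // println(port + 1)           // would not compile: port might be null
    }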

If you want to get a sense of where we're headed, try using Kotlin in the IntelliJ IDEA IDE. IntelliJ's secret sauce for the last decade has been deep static analysis built right into the IDE, so it can flag bugs (e.g. unreachable code) as you type. Kotlin is the language they're developing to replace Java (in their own code), and it's built around static analysis. For example, if the compiler can prove that a variable is a certain type, it is automatically cast to that type (a "smart cast").
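A quick Kotlin sketch of that smart-cast behavior, plus the nullable-type rule mentioned below (the functions are made up):

    // Smart cast: once the compiler has proven the type, no explicit cast is needed.
    fun describe(x: Any): String =
        if (x is String) "a string of length ${x.length}"   // x is used as a String here
        else "something else"

    // Nullable types must be declared, and the compiler enforces the checks.
    fun shout(name: String?): String = name?.uppercase() ?: "NOBODY"

    fun main() {
        println(describe("hello"))   // a string of length 5
        println(shout(null))         // NOBODY
    }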

Preparing for Kotlin is also changing how the IDE treats Java. Kotlin requires variables that can be null to be labeled as such. With every new IDE version, the static-analysis assertions for Java get stricter. At first the IDE just encouraged me to annotate whether a method can return null. Now it effectively marks up the code with inferred assertions like "if given null, return null; otherwise never return null."

Bringing this back to security, languages are getting smarter about providing safety without being too cumbersome. But none of these automated tools will ever fix security. For one thing, computers can’t tell a security hole from an intentional feature, and they can’t tell attackers from regular users. Especially since the attackers are trying to look like regular users. For another thing, attackers will always exploit the weakest link, which may be a buffer overflow, a confusing UI, or the user’s willingness to believe a lie.

First time posting, so I just want to say that this is an excellent blog! I don’t agree with everything on here (I’m not as big a fan of tablets and the like as Atwood) but I can see where you’re coming from on just about everything.

As for this post, I was thinking: you don’t necessarily need to use money as the (only) incentive. I imagine for the vast majority of us, every program we use has certain weaknesses that really affect our work flow. For simplicity, I’ll just talk about features, though the usual “don’t necessarily give the user what they say they want. Instead solve their problem” caveat applies.

So one option to provide incentives would be: If you find a bug, then we’ll move one or more features that you’d like to have higher up on our priority list (or more generally: We’ll make it a higher priority to improve some aspect of the program that you feel is lacking). The bigger the bug, the higher the bump in priority.

Now obviously this would have to be within reason. If used blindly you could easily run into feature creep. Plus some features may not actually be good ideas, or may not mesh well with your program, or it may just not be viable to implement.

While this does have some of the same issues as money (I don’t want to tell you about my bug, because then your feature will go up, not mine!), I don’t think the issues are as severe. For one thing, improving the program helps everyone (unless the feature is really really niche), whereas paying one person only helps that one person. For another, I imagine that it can be easier for people to find common ground on aspects of a program that need improving. For a third, it’s nowhere near as high stakes.

Not a huge fan of this particular post, sad to say. It conflates two things, the first a genuine insight, but the second a misunderstanding.

The bit about not all OSS users having the expertise to contribute is spot on. Speaking as someone who has run a couple of smallish Free Software projects, you will find that only a very small percentage of your users are capable of becoming contributors. It varies wildly by project too. For an API (where users are presumably all programmers), it might be on the order of one in a hundred, while for an application it is more likely to be one in thousands. But of course an application is liable to have far more users, so it balances out (if you ignore requests for free personal support).

However, saying that the presence of a single bug (even a single major one) in an Open Source project disproves Linus' Law is a fundamental misreading of the point of the statement. The idea is that it is far easier to find bugs, and get them fixed, in software you depend on if there is some access to the sources outside of the development house. It was formulated as an absolute (by ESR) probably either to make it more pithy, or because ESR is a guy who likes to think in absolutes.

Put it this way: if we somehow had a magical historical wand that could retroactively make OpenSSL's market position occupied by a proprietary solution, would this identical bug not have been possible? Would the proprietary solution have had fewer such bugs, or in fact more? There's no way to tell with this particular hypothetical, of course, but from what I've seen dealing with other software of both types in my career, my money would be firmly on more (and what's worse, you'd be at someone else's mercy to get the damn bug fixed).

This isn't about producing perfect software. It's about producing better software.

I’d be very careful making such “nobody” statements.

And that, in a nutshell, is why security is so difficult. If you spend too many of your resources on security, especially at the beginning when you're building the framework your product will be built upon for years, you get killed by the competition who had more features and spent less time on security.

You go under, and it’s only later that there’s a reckoning, and even then, there are lots of insecure programs that never get exploited. The calculus can end up being “do you court near certain bankruptcy now by prioritizing security over features, or do you risk bankruptcy later by putting security on the back-burner and hoping that you don’t get hit by a security exploit?”

The only way out of the conundrum is to make security cheaper, and the only way for that to happen is to have many best practices built into the development process, language, or class libraries. It won't solve the problem, but if there are well-publicized, standard ways to approach the areas where most security problems show up, you can cut down the number of problems. Sort of like cutting down the number of sorting mistakes by making writing one's own sorting routine redundant 98% of the time.
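A small Kotlin sketch of that idea: reach for the vetted standard-library primitive instead of hand-rolling one (the token helper is a made-up example):

    import java.security.SecureRandom
    import java.util.Base64

    // java.util.Random would be predictable; SecureRandom is the built-in safe default,
    // just as the standard sort makes hand-written sorting routines redundant.
    fun sessionToken(): String {
        val bytes = ByteArray(32)
        SecureRandom().nextBytes(bytes)   // cryptographically strong randomness
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes)
    }

    fun main() {
        println(sessionToken())
    }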


This also leads to controversy:

The larger the company, and the bigger the budget, the less defensible it is to not pay something though.

“Money makes security bugs go underground”

That comment was so funny it hurt my feelings.

So your logic is that if people pay coders to find bugs, coders will hide the bugs for the bad guys because the bad guys might pay more. So your answer is to… Have the good guys pay nothing.

What? Where’s the logic in that? (Hint: There is none.)

Paying bounties to reveal bugs in no universe would ever more heavily incentivize coders to hide bugs, no matter how much twisted rationalization you try to bring to the table.

Aren’t most security bugs in specific categories?

  1. Risky compromises – OpenSSL was vulnerable to Heartbleed because we (the host admin, the project manager, the visionary, and the CYA grunts) just couldn't turn SSLv3 off, knowing it was old and compromised, on the 1-in-10,000 chance a visitor was connecting with an obsolete browser or some mobile app from 2003. The invented notion that nailing down security might result in Someone Important getting an error? The same well-intended idealism that floods SO with questions on how to get this simple, nice, elegant style or script to work seamlessly in IE8?

  2. Open doors and weak locks - no real exploits beyond the overlooked points of weakness. I don't need to know whether your app has integrity checking or bad separation of concerns; I just need to try a generic buffer overflow attack, or look for potential confused deputies and see if they let me exploit them.

  3. The weird exploits that rely on intimate knowledge of a system, and on how some flaw could wreak havoc if that soft spot cascades into the various other components exposed after the wall is breached (e.g. VENOM).

It seems like for the first scenario, the goal should be to ditch the parts we know are weak, rather than keep them around "just in case" until someone finally finds the mother of all exploits that those weak doors make available.
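On the JVM, for example, dropping the legacy protocols can be a one-line configuration change; a minimal sketch (the port is arbitrary, and the exact protocol names available depend on the runtime):

    import javax.net.ssl.SSLContext
    import javax.net.ssl.SSLServerSocket

    fun main() {
        // Create a TLS server socket and explicitly drop SSLv3 and other legacy protocols.
        val socket = SSLContext.getDefault().serverSocketFactory
            .createServerSocket(8443) as SSLServerSocket
        socket.enabledProtocols = arrayOf("TLSv1.2", "TLSv1.3")
        println("accepting: " + socket.enabledProtocols.joinToString())
        socket.close()
    }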

For the second case, automated test attacks that emulate real world thinking are, as mentioned, not likely to catch enough. But basic unit testing is a very reasonable strategy. Is this memory buffer at risk for overflow? Let me overflow it and check.
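A basic "overflow it and check" test, sketched in Kotlin with a made-up parseHeader routine standing in for any code that copies attacker-controlled input into a fixed-size buffer:

    // Hypothetical routine under test: accepts at most 64 bytes of header.
    fun parseHeader(input: ByteArray): ByteArray {
        require(input.size <= 64) { "header too large" }   // explicit bounds check
        return input.copyOf(64)
    }

    fun main() {
        // Feed a deliberately oversized input and assert it is rejected
        // instead of being silently truncated or corrupting anything.
        val oversized = ByteArray(1024)
        val rejected = runCatching { parseHeader(oversized) }.isFailure
        check(rejected) { "oversized input was accepted" }
        println("oversized input correctly rejected")
    }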

For the weird ones, you have to subscribe to the adage: "if you want to find your missing baby, call a bounty hunter. If you want to find a donut shop, call a cop." The general issue I've noticed with these vulnerabilities is the time-bomb effect they have: "We discovered that the virtual floppy drive no one ever noticed before has a flaw that allows you to hijack the host machine. Let's see who's faster: you and everyone with a VM network trying to patch this, or the guy who figures out the worm that can actually hijack the entire network with only this tidbit."

Whether we pay for bug hunts or they get done in Vegas for fun and sport, the net result is a 1-6 month panic trying to fix the issue and audit any possible existing damage. Paying at least gives you a chance to get the panic over with sooner rather than later.

Though there are potential illegal monetary rewards for finding bugs, creating legitimate monetary bounties for bugs could easily backfire and encourage buggy patch submissions.

When Vietnam was a French colony, the government offered a bounty on rats as a means of pest control. Rather than reducing the number of rats in Hanoi, it caused entrepreneurial individuals to start farming rats.
( https://en.wikipedia.org/wiki/Cobra_effect )

The same thing could easily happen with bugs.

Interesting argument that the black market for bugs is never going to pay as much as the bug bounties: