Sins of Software Security

I see a lot of faults caused by programmers failing to validate user data (or other untrusted data).

Failing to validate user data is a huge problem. It’s the root cause of:

  • Command Injection
  • SQL Injection
  • Cross-Site Scripting (XSS)
  • Format String

And part of Improper File Access, too.

Developers need to recognize that some of their users will be evil, and design their software accordingly. You can’t trust user input, ever.

Interesting article from Schneier that touches on the usability issues of security:

http://www.wired.com/politics/security/commentary/securitymatters/2007/04/securitymatters_0419?currentPage=all

Of course, it’s more expensive to make an actually secure USB drive. Good security design takes time, and necessarily means limiting functionality. Good security testing takes even more time, especially if the product is any good. This means the less-secure product will be cheaper, sooner to market and have more features. In this market, the more-secure USB drive is going to lose out.

I see this kind of thing happening over and over in computer security. In the late 1980s and early 1990s, there were more than a hundred competing firewall products. The few that “won” weren’t the most secure firewalls; they were the ones that were easy to set up, easy to use and didn’t annoy users too much. Because buyers couldn’t base their buying decision on the relative security merits, they based them on these other criteria. The intrusion detection system, or IDS, market evolved the same way, and before that the antivirus market. The few products that succeeded weren’t the most secure, because buyers couldn’t tell the difference.

Via Ned Batchelder:

The security announcement site XSSed has an archive of identified XSS vulnerabilities, ordered by the traffic the page receives: TOP Pagerank List. The listings include an iframe demonstrating the vulnerability. Very slick. It’s sobering to see how many high-profile sites have problems like this.

http://www.xssed.com/pagerank

SANS identifies the three programming errors most frequently responsible for critical security vulnerabilities:

http://www.sans-ssi.org/top_three.pdf

  1. Accepting input from users without validating and sanitizing the input

  2. Allowing data placed in buffers to exceed the buffer

  3. Handling integers incorrectly

I’m a network and firewall administrator and here’s my complains about application people.

  • They don’t know the port their application is using.
  • Some know the port number but can’t tell whether it’s TCP or UDP.
  • They are RUDE and always want to have their way: the STUPID habit of favoring connectivity over security by asking the firewall administrator to open all ports.

Application people either support finished software or are involved in its development. Either way, they should understand the importance of security, which starts with software development, and know their products.

A security problem within the framework immediately affects all software using it (class break).

Sounds like someone needs to re-read Ken Thompson’s “Reflections on Trusting Trust” (http://www.acm.org/classics/sep95/) - C/C++ are in absolutely no sense of the phrase immune to the same class breaks you point out as an Achilles’ Heel when using Java/.Net.
To me, it’s a pretty silly reason to avoid managed frameworks - you’re trading up from a hypothetical risk to a real one while trying to do some mental gymnastics to convince yourself that you’re safer (because you’ve got C Wizards working on it instead of C# Flunkies, maybe?).
Security’s tough and made tougher by the fact that applications sometimes aren’t built with security in mind. You don’t need to subscribe to Bugtraq but bolting security on after the fact sure feels like it’s an order of magnitude more difficult than designing it in to the initial product.

Any insecurities in the OS itself that would affect a C/C++ program will also affect a C# program, plus you have the additional “leaky abstraction” of the .NET Framework.

And as Thompson’s ACM article points out, a compiler’s a leaky abstraction too. I still fail to see how you’re introducing a new class of potential breaks with a managed framework. A security flaw could be discovered in the Java compiler or in gcc or in the Java framework or in static libraries you’re linking against.
It just sucks a bit harder when there’s a .Net security patch because they’re wont to be way bigger files than an updated compiler.

I actually picked this book up a couple months ago for some light night reading. It is really nice the way that they break down the Sins.