Protecting Your Cookies: HttpOnly

HttpOnly should be the default. Making security easily accessible (instead of an obscure feature, as one of the commenters called it) and making secure behaviour the default are essential parts of security-aware applications.

But as is typical with IE, providing safe defaults would require some sites to update their code, so unsafe is the default, and no one updates their code to add safety. (Why should they? It still works, doesn’t it?)

As for sanitising input: Since the input data is supposed to be structured markup, I agree with other commenters that the very first thing should be to parse it with a fault-tolerant parser (not an HTML encoder, as someone else suggested) in order to get a syntactically valid canonical representation. This alone already thwarts lots of tricks, and filtering is so much more robust on a DOM tree than on some text blob. Not easier, but no one said security was easy.

And such a DOM tree nicely serializes to something which has all img src=… attribute values quoted etc., at least if your DOM implementation is worth its salt. (I recommend libxml; bindings are available for practically every language.)
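
Here is a rough sketch of that parse-then-filter approach using lxml (the Python bindings for libxml). The tag and attribute whitelists are only illustrative, not a complete policy:

```python
# Minimal sketch of parse-then-filter using lxml (Python bindings for libxml).
# The whitelists below are illustrative, not a complete policy.
from lxml import html

ALLOWED_TAGS = {"p", "b", "i", "a", "img", "ul", "ol", "li", "pre", "code", "blockquote"}
ALLOWED_ATTRS = {"a": {"href"}, "img": {"src", "alt"}}

def sanitize(fragment: str) -> str:
    # Fault-tolerant parse into a canonical tree; broken markup is repaired here.
    root = html.fragment_fromstring(fragment, create_parent="div")

    for el in list(root.iter()):
        if el is root:
            continue
        if not isinstance(el.tag, str) or el.tag.lower() not in ALLOWED_TAGS:
            # drop_tag() keeps the element's text but removes the tag itself;
            # use drop_tree() instead if you want the contents gone too.
            el.drop_tag()
            continue
        # Default deny on attributes: keep only those explicitly allowed.
        allowed = ALLOWED_ATTRS.get(el.tag.lower(), set())
        for name in list(el.attrib):
            if name.lower() not in allowed:
                del el.attrib[name]

    # Serializing the tree gives back well-formed markup with quoted attributes
    # (wrapped in the helper <div> created above).
    return html.tostring(root, encoding="unicode")
```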

What I do not understand is why the browser is rendering that invalid HTML block.

Also the web application should validate the input and check if it’s valid HTML/XHTML and uses only the allowed tags and attributes. Moe and others seem to be thinking of the same thing.

As mentioned before, the sanitiser is clearly written badly. I’d bet it’s overly complicated in order to fail on this example (something to do with nesting angle brackets? Why do you even care how they are nested if you are just encoding them differently?)

Further, the cookies are being used naively, out of the box. How about encrypting the data you write to them based on the server IP or something similar, so that these tricks can’t work?
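
One way to sketch that idea is to sign the cookie payload with a server-side secret (the “server IP or something similar” generalises to any secret an injected script can’t read). Note that this only detects forgery and tampering; it doesn’t stop a cookie stolen wholesale from being replayed, so it complements HttpOnly rather than replacing it. The names here (SERVER_SECRET, make_cookie, read_cookie) are made up for illustration:

```python
import hashlib
import hmac

# Illustrative only: in practice derive this from server configuration,
# not a constant in source code.
SERVER_SECRET = b"rotate-me-out-of-band"

def make_cookie(value: str) -> str:
    sig = hmac.new(SERVER_SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{sig}"

def read_cookie(cookie: str) -> str | None:
    value, _, sig = cookie.rpartition("|")
    expected = hmac.new(SERVER_SECRET, value.encode(), hashlib.sha256).hexdigest()
    # Reject anything that wasn't produced by this server.
    return value if hmac.compare_digest(sig, expected) else None
```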

HttpOnly by default would still be good though… you have to protect the bad programmers from themselves when it comes to anything as accessible as web scripting.

I’m also in favour of storing the data already sanitised. Doing it on every output is one of those “everything is fast for small n” scenarios, and it removes the risk of forgetting to re-sanitise the content somewhere.

Is there a good existing sanitizer for ASP.NET?

Great post, I totally agree about the need to protect cookies.

I’ve been using NeatHtml by Dean Brettle for protection against XSS for quite a while now, and I think it’s the best available solution, though I admit I have not looked closely at the Html Sanitizer you mentioned.

http://www.brettle.com/neathtml

Another barrier that is frequently used with applications that must accept user-generated HTML is to separate cookie domains: put sensitive pages on a separate origin from the user-generated content. For example, you could have admin.foo.com and comments.foo.com. If sensitive cookies are only set up for domain=admin.foo.com, an XSS on comments.foo.com won’t net anything useful.
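
As a minimal sketch (not framework-specific code), the sensitive session cookie would be issued only from the admin host and scoped so the comment pages never see it:

```python
# Sketch: the sensitive cookie is scoped to the admin host only, so a script
# injected into comments.foo.com never gets a chance to read or receive it.
def admin_session_cookie(session_id: str) -> str:
    return (
        f"session={session_id}; "
        "Domain=admin.foo.com; "    # not sent to comments.foo.com
        "Path=/; Secure; HttpOnly"
    )

# The response from admin.foo.com would then carry something like:
#   Set-Cookie: session=...; Domain=admin.foo.com; Path=/; Secure; HttpOnly
```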

So that’s what you’ve been so busy working on since your last post? Makes me glad I’m wracking my brain with WPF and XAML instead of Web 2.0 stuff.

No, we just improved it. That’s how code evolves. Giving up is lame.

When you find yourself at the bottom of a hole it’s best to stop digging.
Also what Mr Blasdel said.

Uh, couldn’t someone just filter the response from the server to remove the HttpOnly flag? It seems very half-assed to use a feature that is client-side, in SOME browsers. This is a circumstance where it’s important enough to come up with a solution that isn’t just more obfuscation, but that actually increases the security by an order of magnitude.

Just my opinion.

@correct:

Sorry if I didn’t give you sufficient credit :wink:

My point was less about re-auth in general, and more about trying to detect who has a legitimately rotating IP address. If detected, cookies can’t be trusted… so force the user into an auth scheme that uses cookies as secondary to something else. Primary would be SSL certs or (shudder) Basic Auth over HTTPS.

Thoughts?

@Tom

Here was the list I initially had:

That’s probably good enough for anonymous comments. These ones are also safe and useful for untrusted comments:

That’s 9 tags. If you want to add a video or an image, you could use a bit of DHTML or Flash to pop up a media selector widget for approved sites: Flickr, YouTube, etc. People get to select URLs to pages, but that’s it. On the back end, check the URL to see if it looks hacked. If so, reject it.
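
Something along these lines would do as a first pass at that back-end check (the approved-host list is only an example):

```python
from urllib.parse import urlparse

# Illustrative approved-site list; a real deployment would maintain this elsewhere.
APPROVED_MEDIA_HOSTS = {"flickr.com", "www.flickr.com", "youtube.com", "www.youtube.com"}

def is_approved_media_url(url: str) -> bool:
    parts = urlparse(url)
    # Default deny: plain http(s) URLs on explicitly approved hosts only.
    return parts.scheme in {"http", "https"} and parts.hostname in APPROVED_MEDIA_HOSTS
```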

For trusted contributors, you could open it up even more and use tables, headers, links, etc… in which case you’re looking at closer to 20 tags.

For very trusted contributors, you get to use attributes like SRC for IMG, and maybe even SCRIPT nodes.

Of course, @dood mcdoogle summed it up quite well when he said that input filtering cannot ever be sufficient… so you always need an output filtering step. However, there’s no harm in pre-parsing your data and teaching your audience what will and what will not be tolerated.

@Tom

My tags got gobbled… I think these are critical for anonymous comments:

B, I, UL, OL, LI, PRE, CODE, STRIKE, and BLOCKQUOTE

Anything else, and you probably want to be a verified or trusted user.

Quite an eye opener; thanks Jeff. Also, WTF, when are you going to accept me as a beta user?!

I’m not sure why you people are being so hard-headed. He didn’t say that he didn’t ALSO fix the sanitizer. But like all things in web security, adding the HttpOnly flag raises the bar. Why not do it? He isn’t advocating using HttpOnly in lieu of other good security measures.

As for sanitizing input versus output, I prefer to sanitize output. There are too many other systems downstream that are impacted by sanitizing the input. I write enterprise systems, not forums. There is a big difference. I can’t pass a company name of Smith%32s%20Dairy to some back end COBOL system. They wouldn’t know what to do with it.

For those of you that decide to sanitize your input, it must be nice to write web applications that live in a vacuum…
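
To make the escape-on-output point concrete, here is a minimal sketch, assuming the stored value stays raw and encoding happens only at render time. Whether you HTML-encode everything, as here, or run a whitelist filter like the sketches above, the key is that it happens on the way out:

```python
import html

def render_comment(raw: str) -> str:
    # Encode at the last moment, on the way out to the page; the stored value
    # stays untouched for whatever downstream systems also consume it.
    return html.escape(raw)
```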

The Web needs an architectural do-over.

With recent vulnerabilities like the Gmail vulnerability, I’m really starting to question whether it is possible to write a secure web app that people will still want to use. Even if it is, the result seems like little more than a swarm of technologies that interact in far more ways than are immediately obvious.

Why not keep a dictionary that maps the cookie credential to the IP used when the credential was granted, and make sure that the IP matches the dictionary entry on every page access?
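
A quick sketch of that scheme (the names SESSION_IPS, issue_session and validate_session are made up for illustration):

```python
import secrets

SESSION_IPS: dict[str, str] = {}   # session token -> IP it was granted to

def issue_session(client_ip: str) -> str:
    token = secrets.token_urlsafe(32)
    SESSION_IPS[token] = client_ip
    return token

def validate_session(token: str, client_ip: str) -> bool:
    # Note: this fails for legitimate users whose IP changes (DHCP, mobile,
    # proxies), which is exactly the objection raised in the next comment.
    return SESSION_IPS.get(token) == client_ip
```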

Most of us get our IP addresses through DHCP, which means they can change whenever our system (or router) is rebooted.

I’m still quite leery of your sanitiser, for the reasons I described on RefactorMyCode: you’re doing blacklisting even if you think you’re doing whitelisting. Your blacklist is more or less anything that looks like BLAH BLAH X BLAH, where X isn’t on the whitelist. As you can see, it’s very hard to write that rule correctly. Your bouncer is still kicking bad guys out of the queue. Instead your bouncer should be picking up good guys and carrying them through the door. If the bouncer messes up, the default behaviour should be that nobody gets in, not that everybody gets in!

As an interesting side note to those who say you should sanitize late rather than early:

I have run into all kinds of XSS when opening tables in my database. Yes, I learned that opening said tables in PHPMyAdmin might not be a good idea.

That was an interesting experience to be sure.

I have to agree with what most people are saying. Allowing direct HTML posting that other users can see is sure to cause at least headaches, if not major problems. You’re better off using some kind of wiki system, or some kind of subset of HTML, where only the tags you are interested in are allowed.

Hey, but how do I set the HttpOnly flag on cookies? I certainly did not find it in the preferences/options dialog.
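
It isn’t a browser preference at all; the server sets it on the cookies it issues (in ASP.NET it’s the HttpOnly property on the HttpCookie class). A quick sketch of the same idea with Python’s standard library:

```python
from http import cookies

jar = cookies.SimpleCookie()
jar["session"] = "opaque-session-id"
jar["session"]["httponly"] = True   # not readable from document.cookie
jar["session"]["secure"] = True     # only sent over HTTPS

print(jar.output())
# Prints something like:
#   Set-Cookie: session=opaque-session-id; HttpOnly; Secure
```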

IP spoofing over UDP = easy, IP spoofing over TCP = hard

The biggest problem in security is that a lot of people think that hard is the same as impossible. It is not. We can patch this and that hole after we’ve finished implementing our design and make it harder to attack our system, but we’ll never really know if we’re 100% safe.

In that regard, giving up is not lame. Playing catch-up is better than not. It’s also better than going back to the drawing board when you’re well into beta (aka scope creep), unless you have infinite budget. I do believe, though, that in the design stage, as Schneier says, security is about trade-offs. If a feature introduces security risks that are absolutely not tolerable, then it might indeed be a good idea to drop it altogether, if designing built-in protection against that class of attacks is not feasible.

IP spoofing over UDP = easy, IP spoofing over TCP = hard

As someone who has written an IP stack, I’m not really sure what about TCP makes it particularly hard. I’m not saying it isn’t, I just don’t see why it would be offhand.

It might (might) be tough to push aside the rightful IP holder from an established connection. However, initiating a connection with a spoofed IP should be just as easy as spoofing your IP in UDP and getting the victim to respond to you.