Protecting Your Cookies: HttpOnly

@correct, it all depends, and I don’t see the problem with storing escaped data. It’s a space tradeoff I’m willing to make, but where I feel the penalty for failure is less severe. I’m pessimistic and nowhere near perfect, so I will forget now and again despite best efforts.

If you don’t allow unsafe characters, then just strip them from the input completely. Done.

If you do allow unsafe characters, there are two scenarios:

  1. You store user input verbatim, always remember to escape when displaying output, and hope your input cleaning works 100% of the time.
  2. You store user input escaped, and you need to remember to unescape when your user is editing.

Penalty for failing in (1), where you forget to escape: you expose your users to XSS, etc.
Penalty for failing in (2): the edit box shows your escaped content (&lt; &gt; &amp; etc.). Looks silly, but still safe.

Performance penalty in (1) is continual escaping on every view.
Performance penalty in (2) is only escaping / unescaping when edited.
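The tradeoff between the two strategies can be sketched with Python’s standard-library `html` module (a rough illustration only — the commenters’ actual stacks are ASP.NET and J2EE, and the variable names here are invented):

```python
import html

raw = "<b>Hi</b> & bye"

# Strategy (1): store raw, escape on EVERY view.
stored_raw = raw                            # stored verbatim
view_1 = html.escape(stored_raw)            # must remember this on every view

# Strategy (2): store escaped, unescape only when the user edits.
stored_escaped = html.escape(raw)           # escaped once, at save time
view_2 = stored_escaped                     # views can emit it as-is
edit_form = html.unescape(stored_escaped)   # unescape only for the edit box

assert view_1 == view_2       # both render the same safe output
assert edit_form == raw       # (2) round-trips cleanly for editing
```

Forgetting the `escape` in (1) emits live markup; forgetting the `unescape` in (2) just shows `&lt;b&gt;` in a text box.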

(2) isn’t perfect, but I’ll take the hit, trusting the team to get the few edit scenarios correct versus the hundreds of view scenarios. I call it erring on the side of caution.

Is there something I’ve overlooked? What is your objection to storing escaped data?

Restricting your cookie based on IP address is a bad idea, for two reasons:

First, an IP address can potentially have a LOT of users behind it, through NAT and the like. I’ve even known a few ISPs to do this.

And secondly, your site breaks horribly for users behind load-balancing proxies (larger organisations, or even the Tor anonymising proxy).

Anyone remember when Yahoo! Messenger used to allow JavaScript (through IE, I guess)? I used to type simple little bits like instant unlimited new windows or alerts to mess with my friends… thinking back on that, that’s just scary!

On the same subject from a J2EE perspective

Well done, Jeff. XSS, yeah, yeah, yeah. Not my site. HA. Just found a hole and implemented HttpOnly. Thanks for the reminder!

So, basically, HttpOnly cookies protect you from this specific exploit and force the attacker to just redirect users to a fake login on a page he controls, or something similar.
If you allow arbitrary JavaScript on your site, it’s not your site anymore. HttpOnly cookies don’t change that.
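For what it’s worth, the flag itself is trivial to set. A rough sketch using Python’s standard-library `http.cookies` (the cookie name and value are invented for illustration):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["httponly"] = True   # document.cookie can no longer read it
cookie["session"]["secure"] = True     # only sent over HTTPS

header = cookie["session"].OutputString()
# e.g. "session=abc123; Secure; HttpOnly"
```

In ASP.NET the equivalent is setting `HttpCookie.HttpOnly = true` before adding the cookie to the response.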

Live and learn. I made a simple comment system for a website that basically just removes every character that could be used in a script attack when the data is posted back to the page. Essentially it doesn’t let you use any HTML or other fancy stuff in the comment area, so it’s a bit limited in what you can display in a comment, but at the same time it’s generally pretty secure (crosses fingers), and while comment spam happens at times, whatever links are posted are never active.
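That strip-everything approach can be sketched in a few lines of Python (the exact character set chosen here is an assumption, not the commenter’s actual code):

```python
import re

def strip_unsafe(text: str) -> str:
    # Remove anything that could start markup: < > & kill HTML,
    # and quotes kill attribute injection. Crude, but hard to get wrong.
    return re.sub(r'[<>&"\']', "", text)

# The payload survives only as inert text:
assert strip_unsafe('<script>alert("xss")</script>') == "scriptalert(xss)/script"
```

The cost, as the commenter notes, is that legitimate HTML in comments is impossible.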

Unfortunately, as with any other browser-specific feature, or one that is too recent, we might as well acknowledge HttpOnly for a second and then completely forget about it, because it’s practically useless… the usual web development nightmare: we’re stuck with the narrowest common set of features :frowning:

Okay, HttpOnly is an easy temporary fix, but we all know where such tempting temp fixes lead us, right? I’m sure we all agree here that it’s not a substitute for sanitizing, but guess what happens in the real world…

If you’re allowing people to use the image tag to link to untrusted URLs, you are already OWNED.

For starters it allows a malcontent to cause people’s browsers to GET any arbitrary URL, fucking with non-idempotent websites, doing DDoS, whatever.

On top of that, for both IE and Opera, if they GET a URL in an img tag and find it to be JavaScript, THEY EXECUTE IT. The script tag was totally unnecessary in that hack for targeting IE and Opera.

&lt;script&gt;alert(‘hello XSS!’);&lt;/script&gt;

Jeff, what sites did you use to guide you through making Stack Overflow XSS-resistant?
I am about to embark on a side project and would like to make the site XSS-hardy.

Assume the IP address changes. This means either malice, or an ISP with a rotating pool of proxy IP addresses. Either way, you need something stronger to fix this.

You should re-challenge for non-password information (secondary password, favorite color, SSN, phone call, whatever). Then walk them through secondary authorization with SSL certificates… like myopenid does.

And if the requirements of your application include the ability to accept such input… then what do you suggest? I just love how programmers think that they get the final say when it comes to functional requirements.

You love odd things… and I already took that into account. Read this article about what Jeff is doing, and you’ll see my proposal fits in fine with the functional requirements:

Offhand… I can think of no good reason why a non-trusted user should be allowed to use more than 5-10 safe HTML tags. If I’m wrong, I’d like to see what you think the requirements are.

No, we just improved it. That’s how code evolves. Giving up is lame.

Giving up on an idiotic idea is generally considered wise.

@bex: Offhand… I can think of no good reason why a non-trusted user should be allowed to use more than 5-10 safe HTML tags. If I’m wrong, I’d like to see what you think the requirements are.

Name them. I will bet you a contrite apology that someone will add an 11th that they’d want within 5 minutes.


Did you just tell me exactly what I told you, but as if you thought of it yourself? Yeah, you did.

HttpOnly should be the default. Making security easily accessible (instead of an obscure feature, as one of the commenters called it) and secure behaviour the default is an essential part of security-aware applications.

But as is typical with IE, providing safe defaults would need some sites to update their code, so unsafe is default, and no one updates their code to add safety. (Why should they? It still works, doesn’t it?)

As for sanitising input: since the input data is supposed to be structured markup, I agree with other commenters that the very first thing should be to parse it with a fault-tolerant parser (not an HTML encoder, as someone else suggested) in order to get a syntactically valid canonical representation. This alone already thwarts lots of tricks, and filtering is much more robust on a DOM tree than on some text blob. Not easier, but no one said security was easy.

And such a DOM tree nicely serializes to something which has all img src=… attribute values quoted, etc., at least if your DOM implementation is worth its salt. (I recommend libxml; bindings are available for practically every language.)
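A minimal sketch of this parse-filter-serialize idea, using Python’s standard-library `html.parser` instead of libxml (the tag whitelist here is invented for illustration, and a production sanitizer would need far more care):

```python
from html import escape
from html.parser import HTMLParser

ALLOWED = {"b", "i", "em", "strong", "p", "code"}  # hypothetical whitelist

class WhitelistFilter(HTMLParser):
    """Re-emit only whitelisted tags, escape all text, drop script/style bodies."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip = 0  # depth inside <script>/<style> elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
        elif tag in ALLOWED:
            self.out.append(f"<{tag}>")     # attributes are dropped entirely

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = max(0, self.skip - 1)
        elif tag in ALLOWED:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip:
            self.out.append(escape(data))   # text nodes are always re-escaped

def sanitize(markup: str) -> str:
    f = WhitelistFilter()
    f.feed(markup)
    f.close()
    return "".join(f.out)
```

Because the markup is re-serialized from parse events rather than patched as a string, nested-bracket tricks never reach the output.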

What I do not understand is why the browser is rendering that invalid HTML block.

Also the web application should validate the input and check if it’s valid HTML/XHTML and uses only the allowed tags and attributes. Moe and others seem to be thinking of the same thing.

As mentioned before, the sanitiser is clearly written badly. I’d bet it’s overly complicated, which is why it fails on this example (something to do with nesting angle brackets? Why do you even care how they are nested if you are just encoding them differently?)

Further, the cookies are being used naively, out of the box. How about encrypting the data you write to them, based on the server IP or something similar, so that these tricks can’t work?
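Keying anything to the server IP is unusual; the more common version of this idea is signing the cookie value with a server-side secret so tampering is detectable. A rough Python sketch (the secret and the cookie format are invented for illustration):

```python
import hashlib
import hmac

SECRET = b"server-side secret, never sent to the client"  # hypothetical key

def sign(value: str) -> str:
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{mac}"

def verify(cookie: str):
    """Return the value if the signature checks out, else None."""
    value, _, mac = cookie.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None
```

Note this only stops forgery; it does nothing for a stolen-but-valid cookie, which is the scenario HttpOnly addresses.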

HttpOnly by default would still be good though… you have to protect the bad programmers from themselves when it comes to anything as accessible as web scripting.

I’m also in favour of storing the data already sanitised. Doing it on every output is one of those everything is fast for small n scenarios, and storing it sanitised removes the risk of forgetting to re-sanitise the content somewhere.

Is there a good existing sanitizer for ASP.NET?

Great post, I totally agree about the need to protect cookies.

I’ve been using NeatHtml by Dean Brettle for protection against XSS for quite a while now, and I think it’s the best available solution, though I admit I have not looked closely at the HTML sanitizer you mentioned.