Preventing CSRF and XSRF Attacks

Another public service announcement. Cool.

One thing worth being clear on: If you follow the double-submitted cookie method, and the cookie value you place within the HTML form has any kind of meaning beyond preventing CSRF, you’re opening yourself to other problems.

To avoid the complications, use one of these approaches:

  1. Make sure the cookie used in the HTML form is NEVER used for anything else (such as, say, user authentication)
  2. Make what you put in the form a value derived from the cookie. An HMAC of the cookie keyed on some server secret would work great for something like this (see the sketch below).

The latter approach is sane, cheap (no server state, no new cookie), sufficient to avoid the XSRF class of attack, and probably my favorite.
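
A minimal sketch of that derived-token approach in Python (the function and variable names are illustrative, not from any particular framework):

import hashlib
import hmac

def form_token(session_cookie: str, server_secret: bytes) -> str:
    # Derive the form token from the session cookie, keyed on a
    # server-side secret; the browser never learns the secret itself.
    return hmac.new(server_secret, session_cookie.encode(), hashlib.sha256).hexdigest()

def token_is_valid(received_form_key: str, session_cookie: str, server_secret: bytes) -> bool:
    # Recompute the token and compare in constant time to avoid timing leaks.
    expected = form_token(session_cookie, server_secret)
    return hmac.compare_digest(received_form_key, expected)

Embed the output of form_token() in a hidden form field when rendering; on submit, pass the posted value to token_is_valid().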

Sure your solution prevents XSRF information modification (via POST). But how does it prevent XSRF information disclosure via GET requests?

Sure, I could place the random cookie value into every GET request, but then that’s not very RESTful.

@Dave
The random in-page/form-element token solution can be low overhead, something like a hash of:
secret + year (or other time elements) + user's IP

AFAIK it can be used in place of the referrer check, and a token can be recorded once it’s used for more control (so there are fewer tokens to track directly).
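
A minimal Python sketch of that idea, assuming the secret and the client's IP are available server-side (all names are illustrative):

import datetime
import hashlib

def cheap_token(secret: str, user_ip: str) -> str:
    # Bind the token to a coarse time window (the year, per the
    # suggestion above; a shorter window is usually safer) and the IP.
    year = str(datetime.date.today().year)
    return hashlib.sha256((secret + year + user_ip).encode()).hexdigest()

One caveat: binding to the IP can break for legitimate users behind rotating proxies.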

Thank you for the post; hopefully someone is still paying attention to the comments as I have an inquiry on the topic.

It is my understanding that we cannot provide additional HTTP headers for a request made via a form element; is this correct? If so, then in a purely Ajax application that does not use forms (one that talks to a RESTful web service, for example), couldn’t I rely on adding a custom header to the XHR request and checking for that value on the server? This should remove any possibility of GET/POST from img/script tags, as well as forms being submitted via JavaScript. The only way to add the header would be through XHR, which the browser would deny if it were not same-origin.

Is this valid logic?
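
For what it’s worth, the server-side half of that check can be tiny; a Python sketch (X-Requested-With is a common convention, and the headers mapping is an assumption about your framework):

def looks_like_same_origin_xhr(headers: dict) -> bool:
    # Plain HTML forms and img/script tags cannot attach custom headers,
    # and (pre-CORS) only same-origin XHR was allowed to add them, so
    # this header implies the request came from same-origin script.
    return headers.get("X-Requested-With") == "XMLHttpRequest"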

@fool: Oops, I misunderstood how this double-submitted cookie actually works. So ignore the previous reply.

Actually, the specified technique only works when HttpOnly is NOT set. When it is not set, only JavaScript served from the same site can access the cookie, due to the same-origin policy. When HttpOnly is set, JavaScript cannot access the cookie at all.

@fool I think the cookie==form field technique is only strong if you set HttpOnly when setting the cookie, right?

No, let me explain.

Let’s assume we have a bank that has implemented this double-submitted cookie (cookie == form) technique.

Let’s say you are logged in to your bank account. Then you are tricked into visiting http://badguys/attack.html, which has a button that triggers a CSRF attack against the bank’s site (a money transfer, etc.).

Now, in this case, it does not matter whether HttpOnly is set on the cookie, because JavaScript’s same-origin policy does not allow http://badguys/attack.html to read the bank’s cookie in any case. So the double-submitted cookie technique (or similar) works either way.

However, a hacker might try a different CSRF attack than submitting a form. Maybe the hacker has found an XSS exploit on the bank’s site that can be combined with a CSRF attack. In that case, HttpOnly can protect against cookie hijacking.

Example:
Let’s say the user clicks a link on http://badguys/attack.html with content like <a href="http://bank/exploitable_xss_site?param=some_javascript_here">. The user clicks it and lands on the bank page. The attacker’s JavaScript executes because of the XSS hole and sends the session cookie to the attacker’s website. This would not be possible if HttpOnly were used.
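
For reference, setting that flag is a one-liner; a sketch using Python’s standard http.cookies module (cookie name and value are illustrative):

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-session-id"
cookie["session"]["httponly"] = True  # hide the cookie from document.cookie
cookie["session"]["secure"] = True    # only send it over HTTPS
print(cookie.output())  # emits a Set-Cookie header carrying both flags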

However, this does not really protect that much. If a site has an XSS problem, preventing the attacker from getting the cookie does not prevent him from having his way with the website. See the URL below.

To combat this, websites sometimes require the user to authenticate again. For example, if you want to change your shipping address on amazon.com, you have to enter your credit card info again. This is done to prevent still-unknown XSS attacks from ordering stuff from Amazon and shipping it to the attacker’s location.

Ref:

@Metal Hurlant What you put in the form is a value derived from the cookie. an HMAC of the [session] cookie keyed on some server secret would work great for something like this. […] [this] is sane, cheap (no server state, no new cookie) and sufficient to avoid the XSRF class of attack, and is probably my favorite.

I’m using that technique now; I can’t think why using a new unique cookie with a timeout would be better, as suggested in this article.

However, why use HMAC? How is it more secure than, say, plain SHA-1 in this case? For example, let’s say the following happens on the server:

import hashlib

# Before sending the form:
html_form_key = hashlib.sha1((current_session_cookie + server_secret).encode()).hexdigest()

# After receiving the form:
expected_form_key = hashlib.sha1((current_session_cookie + server_secret).encode()).hexdigest()
if received_form_key != expected_form_key: print("attack!")

How does HMAC make this any more secure? (I don’t know much about HMAC.)
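
For what it’s worth: with Merkle-Damgård hashes like SHA-1, the secret-prefix form sha1(secret + message) is vulnerable to length-extension attacks, and the secret-suffix form inherits the hash’s collision weaknesses; HMAC is the construction that was designed and analyzed specifically for keyed hashing. In Python the swap is one line (values below are illustrative):

import hashlib
import hmac

current_session_cookie = "abc123"  # illustrative
server_secret = "s3cret"           # illustrative

# Ad-hoc keyed hash, as in the snippet above:
weak = hashlib.sha1((current_session_cookie + server_secret).encode()).hexdigest()

# HMAC: the standard keyed-hash construction.
strong = hmac.new(server_secret.encode(), current_session_cookie.encode(),
                  hashlib.sha1).hexdigest()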

Amen to mandatory cookies, but there are a whole lot of people out there who have been told that cookies are bad, mmmkay, and just won’t budge. Their brother-in-law, who knows just enough to get their printer unjammed, told them so.

I recently wrote an article showing how to perform CSRF so that developers can duplicate it and learn how to defend their website. You can check it out at blog.runxc.com/post/2009/07/06/CSRF-by-Example-How-to-do-it-How-to-defend-it.aspx


I’m sure I’m missing something, but couldn’t an attacker load the website in question in an iframe on the evil website and then look up the fkey with JavaScript? For that matter, couldn’t he simply use JavaScript to programmatically submit the form in the iframe? There must be some type of security check done by the browser to prevent this?

Also, things will probably get really interesting once cross-site XMLHttpRequests are implemented by more browsers (Firefox 3.1 will have them)…

That’s very right; note that XSRF can be performed through Ajax. And everything’s moving toward Ajax, so be careful about that one.

For those arguing about whether cookies are good or bad, be aware that this problem is larger than just cookies. This vulnerability comes into play with any kind of authentication where

  1. the continuing authentication is automatic (requiring no additional user interaction), and
  2. sensitive actions have well known or predictable URLs.

For the authentication part, the two most common techniques (cookie-based and HTTP auth) are both automatic, which means that subsequent HTTP requests continue with the same authentication without requiring any HUMAN interaction. Lesser-used techniques such as browser-side SSL certificates also have this property. So it’s not just cookie-based logins that are vulnerable.

Note there is another technique for transmitting authentication credentials, URL munging (inserting some secret credential token into all URLs), which may or may not have this automatic property depending on how carefully it’s used. The URL munging approach has many challenges of its own, though, such as leaking credentials via offsite links with referrer headers, interference with browser history, and not being RESTful.

For the second part, the URLs must be predictable or guessable. If you’re building a RESTful site, this is unavoidable. Of course, if you dispose of REST, and bookmarking, and lots of other good webby features, you can randomize your URLs. That will be quite effective at thwarting these attacks.

As mentioned, the double cookie solution works because it effectively randomizes the URL, especially in a GET. Yes, in a POST the randomness is not technically in the URL, but it is effectively still part of the identity of the resource being accessed. But also think of methods like PUT and DELETE (which can be invoked with Ajax); those can’t easily be protected using the double cookie method without some unusual effort, such as URL munging or sending an extra non-standard HTTP header to transport the second validation code/cookie copy (sketched below).
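
A minimal sketch of that header-based variant in Python (the header and cookie names are made up; the point is just to compare the two copies server-side):

import hmac

def double_submit_ok(headers: dict, cookies: dict) -> bool:
    # Same-origin script copies the anti-CSRF cookie into a custom
    # header; the server checks that both copies match.
    sent = headers.get("X-CSRF-Token")
    expected = cookies.get("csrf_token")
    return sent is not None and expected is not None and hmac.compare_digest(sent, expected)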

Another idea, especially for super-sensitive operations, is to use a captcha in your forms. This effectively neutralizes the automatic authentication by forcing the user to interact with the request. It doesn’t even have to be a strong captcha, just as long as its value is random.

This is a very hard problem to solve; it is easy to underestimate all the attack vector variants.

@yp, Jeff covered why the tactic of defending against XSRF by checking the referrer isn’t really viable in his earlier post on XSRF: http://www.codinghorror.com/blog/archives/001171.html

@Hoser:

  1. The internet is global; how are you going to enforce these laws?
  2. I’m sure your country has laws against theft. Does this mean everyone leaves their car/house unlocked?

And ASP.NET has protected against this since its inception.

It’s like SQL injection. If people would just get with the times, it wouldn’t even be an issue.

If possible, stop relying on old, outmoded technologies, and be very careful with new and immature ones. If you must rely on them, demand that the developers build these security measures directly into the framework. Individual website developers should not have to think about this kind of nonsense.