Cross-Site Request Forgeries and You

Scratch that… I just read the HEAD bit… Man do I feel silly.

XSS, CSRF… you just need one more article (session hijacking) to have covered every simple and common attack vector used against websites.

Jeff,
I really love your blog. I appreciate that you are open about the mistakes you make, where other places are so prideful that they try to cover up vulnerabilities.

Just yesterday I was doing XSS attacks on my own company intranet! I never thought of doing XSRF. (We have an away comment field for our status, that does not get scrubbed in any way. Mostly just having fun here to pass time. Nothing malicious.)

Jeff, so was Stack Overflow found to be vulnerable to XSRF attacks by that one user, ‘erratic’ or whatever, who asked to do security testing on the site? Man, I certainly hope not.

You say, “So what can we do to protect our websites from these kinds of cross-site request forgeries?”

Number one (most important) is “Check the referrer.” Then a couple of sentences later you say “Don’t even bother with referrer checks.”

Will you make up your mind please…?

  1. I want to confirm that checking UrlReferrer [hoping to prevent an XSRF attack] is a waste of time. It can be spoofed by a malicious user, for example through a combination of XSS and XSRF: injecting JavaScript into the HTML output (XSS) on one page and producing a forged request (XSRF) pointing to another page of the same site.
    It is also true that legitimate users may have an empty UrlReferrer. Refusing to do business with such users is a mistake.

  2. I agree that introducing parameters cuts off the most obvious XSRF attacks.
    But if one of your pages is XSS-vulnerable (allows JavaScript injection), then even if you have dynamic parameters to prevent XSRF (on another page), injected JavaScript can still read those dynamic parameters and re-submit them, so the forged request would succeed.
    That is how Gmail was hacked: the attacker used an XSS vulnerability on an obscure Google site in order to exploit an XSRF vulnerability in Gmail.
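To make point 2 concrete, here is a minimal sketch (the HTML, field names, and URLs are all hypothetical) of how script injected via XSS on the same origin could scrape a hidden anti-XSRF token out of a page and echo it back in a forged request body:

```javascript
// Simulated HTML of the victim's form page, as injected script could
// fetch it via XMLHttpRequest from the same origin:
const formPage = `
  <form action="/transfer" method="post">
    <input type="hidden" name="csrf_token" value="a1b2c3d4">
    <input name="amount" value="100">
  </form>`;

// Step 1: scrape the anti-XSRF token out of the page markup.
function extractToken(html) {
  const m = html.match(/name="csrf_token"\s+value="([^"]+)"/);
  return m ? m[1] : null;
}

// Step 2: build a forged POST body that includes the stolen token,
// so the server-side token check passes.
function forgeRequestBody(html, overrides) {
  const token = extractToken(html);
  return new URLSearchParams({ csrf_token: token, ...overrides }).toString();
}

console.log(forgeRequestBody(formPage, { amount: '9999' }));
// csrf_token=a1b2c3d4&amount=9999
```

The point of the sketch is only that once script runs on your origin, any secret the page itself can see is available to the attacker too.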

amn, interesting ideas.

That HEAD thing doesn’t really fix things. While the web server may test a link and get an image, the client’s browser may make the same request and get a redirect. No problem with the first idea (attaching extra data to the POST or querystring, just as Jeff used in point 2), though if it’s externally guessable or discoverable information (like Jeff’s username, or, to be topical, the town where Palin first met her husband), it becomes possible to attack. User ids and usernames would be out.

For example, if a forum was targeted where the currently logged-in browsing users are listed, this list can be automatically checked and used as a basis when serving the image from the attacker’s website. Got a johnsmith45 logged in? Okay, instead of the image we’ll redirect to /logout?confirmuser=johnsmith45

If it is possible to send private messages to individuals, an attack could be individually tailored. Ideally, the correct approach for choosing what to use as the second proof would have to be some data not externally visible, and probably per session - the session id used in the cookie would probably do - if the cookie session id is already externally known, hijacking is already possible. Sigh.

Users are the most vulnerable security flaw I can think of!

Absolutely. That is why I’m designing a site that no user will use.

Some web frameworks provide built in protection for XSRF attacks, usually through unique form tokens.

So what does ASP.NET MVC do?

I am aware that the author probably does not read comments this far down, but this font looks like ASS on my screen.

Users are the most vulnerable security flaw I can think of!

Which is why I teach my less technologically inclined family and friends the basics of user responsibility.

…Or I install Mozilla, AdBlocker, and NoScript to keep them from bad habits, hehe.

Ironically, the double-submitted cookie solution is prevented by your earlier security measure of making your cookies HttpOnly.

If you’re running an ASP.NET site you can set the ViewStateUserKey property during Page_Init to sign the viewstate with a unique per-user value and avoid CSRF. You should also enable ViewStateMac to ensure the viewstate can’t be tampered with, and enable viewstate encryption so it can’t be decrypted (http://msdn.microsoft.com/en-us/library/aa479501.aspx)

thx for the OLDNEWS post

this blog is NOOB CENTRAL

Just a quick check of some of my banking websites shows they are certainly vulnerable to an XSRF logout ‘attack’. I’m not sure how bad that is, though. Most online applications can and should err on the side of logging a person out.

However, I think the deeper problem here is one I’ve struggled with as a website developer. When a request comes in to the server, one can’t tell whether it came from an IMG tag, an iframe, or a click (or a script tag vs. entering the URL in the location bar). It would be nice to have a browser header that specifies where the URL request came from, or maybe what type of media the browser is expecting. Have any solutions of this type been proposed?

As you have pointed out, the form does not run on the original website; in that case, won’t the browser disallow such a request? As the request originates from JavaScript to a different domain, shouldn’t the browser catch it and throw a JavaScript error?

@Amit: Nope, this request is perfectly valid. Imagine, for example, submitting a query to Google Search from any page, or posting a request to Facebook to ‘Share This’ from any page. The problem, I believe, lies in the underlying concept of the REST architecture, which is why the discussion about tokens above: they supply the state information missing from a typical HTTP request.

@Jono

That’s why it’s generally a bad idea to access information using just Request[SomeVar].

Sleepy Matt:

I dispute your arguments. I can hardly see how a web server checking a link with a HEAD request might get an image while the browser doing a GET request gets a redirect, especially with a /logout type of URL. A web server might compose a custom HEAD request, designed to mimic the more popular web browsers, exactly for the purpose of forward-checking the kind of response users will get, and if they would not get an image MIME type, reject the URL as an image source.

Also, confirmuser=jeff12345 is as bad an idea as no idea at all. When I said authentication, that is what I meant. A query string variable confirmuser=jeff12345 is not a form of authentication. A better variable in that case would be a temporary session id, if any (one example): it is private, and it is temporary.

Jeff’s proposal, even though it looked like mine, had a DIFFERENT PURPOSE and a DIFFERENT METHOD. He proposes using hidden form fields with a unique server key, which essentially disables the API for other websites. I am advocating for a cross-site API: I DO WANT developers to target my site’s API over HTTP. He proposes a solution which disables that altogether, since only the server itself knows the unique key and embeds it in its own pages; nobody else has a chance of knowing what that key is.

In my case, I am proposing to just send a temporary session id, possibly over HTTPS, to a logout script, so that it may authenticate the user it is supposed to log out, instead of assuming it should just log out the default user - the one a cookie points at. A cookie sent by the browser automatically is exactly the pillar of the exploit used by an attacker who wishes people to click on a link. An attacker would have to know the session id to achieve anything of importance, and the session id may only be displayed to the user, or retrieved via an API in exchange for a valid username and password.

I do not expect end users to be aware of their session ids, but a username and password pair, sent over HTTPS, might do too. Passwords are always questionable, though, because they are the weakest link: if HTTPS somehow fails and a password becomes public knowledge, it is a gravely serious security breach.
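The logout-by-explicit-session-id idea described above can be sketched in a few lines (Node.js, all names hypothetical): the endpoint refuses to fall back to whatever user the ambient cookie points at, and acts only on a session the caller names explicitly.

```javascript
// Hypothetical in-memory session store: session id -> user name.
const sessions = new Map([['sess-abc', 'jeff']]);

// Logout endpoint logic: require the caller to name the session.
// No "log out whoever the browser cookie says" fallback, so a bare
// <img src="/logout"> planted on another site does nothing.
function logout(requestedSessionId) {
  if (!requestedSessionId || !sessions.has(requestedSessionId)) {
    return false; // unknown or missing session id: refuse to act
  }
  sessions.delete(requestedSessionId);
  return true;
}
```

The trade-off, as the comment notes, is that the session id becomes a secret the legitimate caller must obtain and transmit safely (e.g. over HTTPS).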

Another thing Jeff proposes, which has little to do with my ideas, is the double-cookie method. I do not see any good reason (over mine) to use more cookies. I am tired of cookies. I do not like cookies so much. The web has diabetes by now. Please stop with obscure non-solutions to simple non-problems. Thank you.

Excellent post! Thanks for sharing!!

A fourth method to prevent XSRF: disable JavaScript for all sites except the ones you opt in to, e.g. using NoScript on Firefox. A pity this solution is currently undermined by the fact that just about every site moronically insists on using JS, even for such basic things as links :-(( Thank you, infantile web designers and PR experts!