Cross-Site Request Forgeries and You

@o.s. - That’s why it’s still in beta; this is the time to work these things out.

Also, to the people suggesting a referrer check: unless I’m misunderstanding the post, since the page is being injected with a fake URL in an IMG tag, wouldn’t the referrer still be the correct site? The content is, in fact, coming off the page you are viewing and targeting a logout URL on the same site…?
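
For reference, here is roughly what such a Referer check looks like (a minimal sketch in Python/Flask, which is not the stack under discussion; the domain and route are placeholders). It illustrates the point above: an IMG tag injected into one of the site's own pages sends the site itself as the Referer, so the check passes anyway.

    # Hypothetical Referer check; passes for same-site injected <img> tags.
    from urllib.parse import urlparse

    from flask import Flask, abort, request, session

    app = Flask(__name__)
    app.secret_key = "change-me"  # assumes a session-cookie-based login

    @app.route("/logout")
    def logout():
        referer = request.headers.get("Referer", "")
        if urlparse(referer).netloc != "example.com":
            abort(403)  # blocks requests launched from other sites...
        session.clear()  # ...but not an <img src="/logout"> on our own pages
        return "logged out"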

@amn:
I dispute your arguments. I can hardly see how a web server checking a link with a HEAD request might get an image while the browser doing a GET request gets a redirect, especially with a /logout type of URL. A web server might compose a custom HEAD request, designed to mimic the more popular web browsers, exactly for the purpose of forward-checking the kind of response users will get, and if it does not get an image MIME type, reject the URL as an image source.

Yeah, but the server does have a specific IP (or range of IPs). Serving different content based on the IP wouldn’t be hard. Similarly, the server makes HEAD requests, and browsers don’t. Differentiating content based on the request type is easy as well.

That’s not to say that your approach is impossible… just that it ends up exploding in complexity before it becomes a strong technique.
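
To make that concrete, here is a rough sketch (Python/Flask, a purely hypothetical hostile image host, not anything from the post) of serving different content based on the request method; the same trick can be keyed on the checking server's IP via request.remote_addr instead.

    # Show the forum's HEAD probe an image, and real browsers a redirect.
    from flask import Flask, Response, redirect, request

    app = Flask(__name__)

    @app.route("/innocent.png", methods=["GET", "HEAD"])
    def innocent_png():
        if request.method == "HEAD":
            # What the forward-checking server sees: a harmless PNG.
            return Response(status=200, mimetype="image/png")
        # What a victim's browser gets when the page renders the <img> tag.
        return redirect("https://victim-forum.example/logout")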

Quite a few people seem to say that using POSTs instead of GETs will solve your problems, but I’d like to illustrate some examples where it wouldn’t:

  1. Many frameworks abstract the retrieval of request parameters, whether they come from the query string or the form. In ASP.NET, for example, the developer may have written his code like:
    String myVar = Request["somevar"];

expecting the request to be POSTed, but sending this variable via a GET will work just as well.

  2. Not all actions require parameters, e.g. the running logout example, so it actually doesn’t matter whether the request was sent with a GET or a POST unless specific parameters are expected from the request. Frameworks like ASP.NET may help here, as things like button events are implicitly tied to POSTs.

So the point is, if you want to be really careful, you need to actually check that the request method is a POST and not a GET.
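
As a rough illustration of both points (in Python/Flask rather than ASP.NET; the route is just the running logout example), declaring the handler POST-only and reading only form data is the kind of explicit check meant here:

    from flask import Flask, session

    app = Flask(__name__)
    app.secret_key = "change-me"

    # Only POST is declared, so a GET smuggled in via an <img> tag or a link
    # gets an automatic 405 Method Not Allowed and never reaches the handler.
    @app.route("/logout", methods=["POST"])
    def logout():
        session.clear()
        return "logged out"

    # The risky pattern from point 1 is the equivalent of reading
    # flask.request.values, which merges query string and form data:
    #     somevar = request.values.get("somevar")  # satisfied by GET or POST
    # whereas flask.request.form only contains POSTed fields:
    #     somevar = request.form.get("somevar")    # POST only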

So basically, don’t allow any user input to end up in request-making attributes like href or src, and encode everything else. Problem solved.
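
A minimal sketch of the "encode everything else" half, using nothing but the Python standard library (the hostile comment text is made up):

    import html

    # A comment that would otherwise render as a live, request-firing tag.
    comment = '<img src="http://example.com/account/logout">'

    # Escaped, it is displayed as text instead of being fetched by the browser.
    print(html.escape(comment))
    # &lt;img src=&quot;http://example.com/account/logout&quot;&gt;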

I used to log people out years ago on phpBB forums, along with worse exploits.

They have now made it so that any critical action requires the session ID to be sent via GET. It works well, since everyone has a unique logout link, but it wrecked all my fun. :[
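
For what it's worth, that fix looks roughly like this (a Python/Flask sketch, not phpBB's actual code; it assumes a per-login sid value was stored in the session when the user signed in):

    from flask import Flask, abort, request, session

    app = Flask(__name__)
    app.secret_key = "change-me"

    @app.route("/logout")
    def logout():
        # The critical action must carry the session ID in the query string.
        # An attacker injecting an <img> tag doesn't know the victim's sid,
        # so a forged URL fails this check.
        sid = request.args.get("sid", "")
        if sid != session.get("sid"):
            abort(403)
        session.clear()
        return "logged out"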

How about changing your REST URL to include the user’s identity?

http://restfull.url/username/logout

Simple and efficient …

You people who are weak against XSRF do not deserve to be called web designers/developers. It has been a VERY WELL KNOWN security issue since the web began.
Use POST, flag your forms, period.

I’d think you’d be OK if you let users upload their images to your own web server? It’s only the external sources that would have this vulnerability?

That’s holding aside the vulnerabilities attendant upon allowing files to be uploaded to your server.

What I like about solution #2 (the secret hidden form value) is that it also helps handle spam bots, the kind of scripts that flood every unprotected shoutbox with ads, links to various ‘adult’ sites, and so on.
Anyway, nice post there, Jeff.
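
For anyone who hasn't seen solution #2 spelled out, here is a minimal sketch (Python/Flask; the shoutbox form and route names are made up for illustration):

    import secrets

    from flask import Flask, abort, request, session

    app = Flask(__name__)
    app.secret_key = "change-me"

    def csrf_token():
        # One secret per session, embedded as a hidden field in every form.
        if "csrf_token" not in session:
            session["csrf_token"] = secrets.token_hex(16)
        return session["csrf_token"]

    @app.route("/shoutbox")
    def shoutbox():
        return (
            '<form method="post" action="/shout">'
            f'<input type="hidden" name="csrf_token" value="{csrf_token()}">'
            '<input type="text" name="message">'
            '<button type="submit">Shout</button></form>'
        )

    @app.route("/shout", methods=["POST"])
    def shout():
        # A forged cross-site request (or a dumb spam bot) doesn't carry the
        # token that was handed out with the real form.
        token = request.form.get("csrf_token")
        if not token or token != session.get("csrf_token"):
            abort(403)
        return "message accepted"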

Excellent explanation and recommendations. There are more XSS, SQL injection, and XSRF-type threats out there than one can shake a stick at.

There’s a good post at Vaclav’s Blog about the related issue of insecure web browsing. Organizations can try to secure their code against XSRF and XSS all they like, but if a single member of the group starts browsing on infected websites, it puts their operation at risk of a breach. Post: The Web Browser, Security Threat Number One. http://www.pcis.com/web/vvblog.nsf/dx/the-web-browser-security-threat-number-one

Yeah - I looked at stackoverflow and didn’t like the concept or the implementation. But I still hope it’s popular for him.

Look at http://amareswar.blogspot.com/2008/10/interesting-findings-on-csrf-cross-site.html

This is a problem that had me stumped at work for a while!
We have our own closed-source CMS system, and on sites that had comment modules attached, we were getting users complaining of being randomly logged out all the time. I spent hours trying to figure out what was going on!

We used to destroy sessions if the user went to any page with ?logout=true in the URL. That has since changed, and more care is taken over what is allowed in comment boxes!
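
For anyone who has inherited the same pattern, the shape of the fix is roughly this (a Python/Flask sketch, not the CMS in question): the old behaviour amounted to clearing the session whenever any page was requested with ?logout=true, so one injected <img src="/some-page?logout=true"> in a comment logged readers out; moving logout to a dedicated, POST-only (and ideally token-checked) endpoint closes that hole, alongside escaping whatever goes into the comment boxes.

    from flask import Flask, session

    app = Flask(__name__)
    app.secret_key = "change-me"

    # No more "?logout=true" handling on every page; logout is its own
    # endpoint and only answers POST, so an <img> tag can't trigger it.
    @app.route("/logout", methods=["POST"])
    def logout():
        session.clear()
        return "logged out"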

Sure, having a hidden secret value in the form will help against some attack vectors, but what about a piece of JavaScript that first requests the entire HTML page and then parses out the hidden key?

Sure, it makes things slightly more complex, but it’s still not a 100% solution to the problem!

Interesting blog, thank you for sharing. One might want to point out that a lot of CSRF attacks stem from the user’s browser. Some browsers (Chrome, for example) have built-in sandboxes to prevent CSRF… just a thought. Again, thanks for sharing.

“Web development is scary by default.” (Hoffmann on September 23, 2008 04:13 PM)

Absolutely!

It’s not CSRF, it’s clickjacking:

The page loads an iframe that prepopulates the Twitter status form, repositions the frame, and makes it transparent (and positions it in the z-index above the button). When the user clicks the button, they’re actually clicking the iframe, which clicks the button ON Twitter, bypassing the CSRF protection. Nice (-:

Simplest solution: Twitter shouldn’t allow the form to be populated from the URL.

S
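
A rough sketch of that "simplest solution" (a Python/Flask stand-in, obviously not Twitter's actual code): render the status form empty and ignore any prefill passed in the query string, so a hidden, transparent iframe can't arrive with the attacker's text already sitting in the box.

    from flask import Flask

    app = Flask(__name__)

    @app.route("/status/new")
    def new_status():
        # Deliberately NOT doing: prefill = request.args.get("status", "")
        return (
            '<form method="post" action="/status">'
            '<input type="text" name="status" value="">'
            '<button type="submit">Update</button></form>'
        )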

erm… that previous comment was about the Twitter worm going around today. Sorry about getting my wires crossed, there (-:

S

Thank you for the article. I took great pleasure in reading it.

“The trick here is that remote XmlHttpRequest calls can’t read cookies.”

That’s not entirely correct. It depends.
What about Cross Site Tracing (XST)?
http://www.owasp.org/index.php/Cross_Site_Tracing

Let us not forget that cookies STILL get sent along with ALL requests, even when they are flagged as httpOnly.
httpOnly only prevents an attacker from using XSS flaws to perform attacks such as session hijacking through the use of document.cookie, etc.

Whereas document.cookie cannot read cookies belonging to another domain (I am thinking of a malicious page which, as in some of your examples, could execute some JavaScript code without the user noticing), it is still possible in some circumstances to access that information even with an XmlHttpRequest, by performing a TRACE request instead.

The response to a TRACE request will return in its body (because of the debugging purpose of TRACE requests) the same text the server received with the request. Therefore, in the response text you will find the contents of the cookie, regardless of the httpOnly flag, and regardless of whatever controls you perform on GET/POST requests.
This is because, as mentioned, browsers still send all cookies with all requests.

Luckily, nowadays this only works with certain browsers, but my point is that in some cases it is possible to read cookies with remote XMLHttpRequests. It is enough to think of how many companies still use older versions of Internet Explorer (which are affected) and do not allow staff to update their systems…
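
A quick way to check whether a given server still answers TRACE (Python standard library only; the host name is a placeholder): if the response body echoes the Cookie header back, the XST scenario above applies.

    import http.client

    conn = http.client.HTTPConnection("www.example.com", timeout=10)
    conn.request("TRACE", "/", headers={"Cookie": "probe=1"})
    response = conn.getresponse()
    print(response.status, response.reason)
    # If TRACE is enabled, the body echoes the request headers we sent,
    # including the Cookie header.
    print(response.read().decode("latin-1", errors="replace"))
    conn.close()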

I certainly don’t usually forget, among other things (some of which you have also mentioned in this great post), to disable the TRACE method on my webservers.

But how many do as well? :)

I think this is still something we can’t completely ignore, even today.