justan0ob: #1 way to prevent this sort of thing if you use sites that fall victim to it is to LOG OUT of websites you identify yourself on when you’re done with them. If you close a tab, the cookie/session will hang around. If you log out, the site should refuse to acknowledge your CSRF request.
@David: Using cookies for authorization is a necessity. The only other solution is using the IP address, which (aside from a whole lot of other problems) would still have the same problem. Maybe you are thinking about persistent cookies, where you are still logged in after restarting the browser. But even if you don’t want that, if you want to save state between one page and the next you need cookies, because they are the only way the server can identify you.
And to answer your other question, the PHP session is identified by a cookie.
Jeff, why did you choose to have a unique form variable with a timeout? I can’t see any advantage, but I can see disadvantages (using a form after lunch, the next day, having returned to work on Monday).
Matt (October 15, 2008 05:38 PM): because the cookie isn’t the problem. If I had a link to Google here: http://www.google.com/ and you clicked it, it should return your Google page, the one with ..@gmail.com in the corner, etc. In order to do that, cookies are needed. If you have Gmail as your homepage, the browser needs to send the cookie so that you’re logged in immediately. The same goes for the query string, or form posts. They aren’t the problem; they make the web more flexible. The assumptions the web app makes are the problem.
@Andres: are cookies for auth a necessity? What about HTTP auth? That’s what it was made for.
Just curious, but hasn’t ASP.NET been protecting us against this type of attack since 1.1? I was under the impression that there was a unique key embedded into the ViewState that is checked upon every single post.
This XSRF theoretical attack is like having an engine off a 747 land on your head. Possible, but I’m not staying indoors for the rest of my life because of it. Only 2 actual XSRF exploits have ever been documented that I could find, yes, only 2. (Google for yourself.)
A vulnerability in GMail was discovered in January 2007 which allowed an attacker to steal a GMail user’s contact list. Great, some teen’s list of other teens’ addresses that they e-mail all day long with nothing to say. I can get all the e-mail addresses I want in 2 seconds; it’s called www.google.com . A different issue was discovered in Netflix which allowed an attacker to change the name and address on the account, as well as add movies to the rental queue etc… Ooooo, I bet this hacker wasn’t hard to track down.
Both of these could have been accomplished with a simple keystroke logger. Hackers, like water, take the easiest path.
Show me this being used to actually get something useful, where an easier hack cannot be used, and I’ll gladly eat my words.
I’ve seen it happen to friends at 4 different startups in the valley over the last couple years. Most people don’t share when they get attacked as they are afraid that it will only attract more attempts.
@Dave - there is nothing you can do to prevent DOS attacks. I suppose you can make yourself more vulnerable or easier to DOS, but a determined DOS is unstoppable (heh, except at the switch!)
Another very interesting post. At least to someone like me, with little experience of security issues.
One question that keeps occurring to me, whenever this type of attack is mentioned, is what this means for OpenID?
Does using OpenID reduce the risk, or make it worse? Or does it make no difference?
Hmmmm… security advice from someone that made the eminently hackable stackoverflow, and also relies on the notoriously insecure OpenID. Well, I guess you’re trying. When are you going to change that captcha?
Just make sure that http referer is your own site.
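A referer check along those lines can be sketched like this (the host name is a placeholder), though it’s worth noting that proxies and privacy tools often strip the Referer header entirely, so it’s a weak defense on its own:

```python
from urllib.parse import urlparse

TRUSTED_HOST = "example.com"  # placeholder: your own site's host name

def referer_is_trusted(referer_header):
    """Accept a request only if its Referer points at our own host.

    Caveat: the Referer header is optional and frequently stripped,
    so rejecting empty referers also rejects some legitimate users.
    """
    if not referer_header:
        return False
    return urlparse(referer_header).hostname == TRUSTED_HOST
```

Cheap to do, but because the header is optional it is best treated as one extra layer, not the whole defense.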
@Nicolas: Sorry, I forgot about HTTP auth, but it suffers from the same problem.
Where I come from, yes. It’s nearly unheard of to lock your front door (a locked door is usually a sign of foreign visitors), and not uncommon to leave the keys in the ignition 24/7 (and even those who leave the keys inside of their unlocked house nevertheless leave the car doors unlocked).
It took moving to a large city for me to understand this thing about locking your doors, and by extension it opened me up to realizing how horrifically irresponsible it is to be unlocked on the sprawling digital metropolis of the Internet. Excuse the grandiose metaphor.
On the other hand, it’s horrifically easy to break into a locked house. I did it by accident when I was 15 – my cousin was pushing the door closed as I tried to open it, and eventually succeeded in locking it without me noticing. One shove, without really putting a lot of force behind it, and the door ripped off its hinges and fell into the house. In the middle of a cold winter.
Horrified, we tried to cover it up with large amounts of wood glue, to no avail.
The solution by Zeller and Felten can be improved by including a second hidden field in the form that specifies the name of a unique cookie; the cookie contains the same value included on the web form, and that value is unique for every request, even for the same user.
This will allow the form to be used multiple times in different tabs of the same browser, subject only to expiry time of the cookie for a particular instance of the form.
This will support back buttons and minimize the exposure.
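A rough sketch of the scheme described above (field and cookie names are illustrative, not part of the original proposal):

```python
import hmac
import secrets

def issue_form(response_cookies):
    """Render-time: mint a unique token and a uniquely named cookie.

    The form carries two hidden fields: one naming the cookie,
    one holding the token the cookie must match.
    """
    token = secrets.token_urlsafe(32)              # unique per form instance
    cookie_name = "csrf_" + secrets.token_hex(8)   # unique cookie per form
    response_cookies[cookie_name] = token          # i.e. Set-Cookie
    return {"csrf_cookie": cookie_name, "csrf_token": token}

def validate_submission(form_fields, request_cookies):
    """Submit-time: the hidden token must match the named cookie."""
    cookie_name = form_fields.get("csrf_cookie", "")
    submitted = form_fields.get("csrf_token", "")
    expected = request_cookies.get(cookie_name, "")
    return bool(expected) and hmac.compare_digest(submitted, expected)
```

Because each form instance gets its own cookie name, two tabs holding two forms don’t clobber each other; each form stays valid until its own cookie expires.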
The fact that there’s jail time involved in break enter lets me sleep far more soundly than the shiny new locks and bulletproof doors I just installed in my home.
In that case, you know nothing about crime.
A lot of people still seem to be missing how CSRF works.
It’s not about gaining access to information, it’s about tricking you, or your browser, to access URLs that perform actions.
Let’s say I’m a member of the cheese of the month club, at omgcheez.com. I’m logged into that site so I have some session cookie that is only readable by omgcheez.com. Let’s say the site is designed such that most activities require being logged in but use simple URLs (e.g. deactivating your membership would require nothing more than navigating to the url: http://omgcheez.com/membership/deactivate).
Now, let’s say Jeff hates cheese and wants me to drop my cheese of the month subscription. All he has to do is trick me, or my browser, into accessing the http://omgcheez.com/membership/deactivate URL, and there are many ways to do that. He could put a link on some page or in an email and try to dupe me into clicking on it. Even easier, he could get me to visit some page of his that does nothing more than redirect to the deactivation URL. Easier yet, he could get me to visit a page containing a broken image link to that URL. Or, if Jeff and I both participate in the same online forum somewhere that allows posting inline images, Jeff could post a reply to some thread I read containing the subscription deactivation URL as an image link; boom, I’m unsubscribed as soon as my browser tries to retrieve the image.
When retrieving the URL my browser will automatically send my session cookie to the omgcheez.com site, which will detect that I’m logged in, and it will deactivate my subscription. The fact that the referrer is some other site is irrelevant to the browser.
Note that at no time would Jeff ever have, or need, access to my account or session details, and yet he got me to perform an action on my account without my consent.
The key here is sites which use predictable URLs (and inputs) to perform actions. If malicious parties are able to predict those URLs then they can get you to access them and perform actions.
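To make the omgcheez.com example concrete: if the deactivation endpoint demanded a POST carrying an unguessable token, the broken-image trick would fail, since an img tag can only issue a token-less GET. A minimal sketch, with hypothetical handler names:

```python
import hmac
import secrets

SESSION_TOKENS = {}  # session id -> CSRF token, issued at render time

def render_deactivate_form(session_id):
    """Embed an unguessable token in the legitimate form."""
    token = secrets.token_urlsafe(32)
    SESSION_TOKENS[session_id] = token
    return (f'<form method="POST" action="/membership/deactivate">'
            f'<input type="hidden" name="token" value="{token}">'
            f'<input type="submit" value="Deactivate"></form>')

def handle_deactivate(session_id, method, form):
    """An <img src=...> can only produce a token-less GET, so both
    checks below defeat the forged-request trick."""
    if method != "POST":
        return 405  # Method Not Allowed
    expected = SESSION_TOKENS.get(session_id, "")
    if not expected or not hmac.compare_digest(form.get("token", ""), expected):
        return 403  # Forbidden: missing or wrong token
    return 200  # OK: actually deactivate the membership here
```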
I’m waiting for a CSRF flaw on an ADSL (or similar) router to lead to DNS hijacking and the like.
Just think: at least in the UK, there are only a few predominant ISPs (Sky, TalkTalk, BT) with significant market share. They all provide ADSL modems, and nearly all ADSL modems trust requests from 192.168.x.x; how long before someone manages to change the primary/secondary DNS servers the ADSL router uses through a GET request?
What protects you is not jail time; it’s the small number of thieves compared to the large population. If your argument were true, there would be NO theft or murder in your country.
Low probability and easier pickings are what protect most people from theft.
I’m talking more about a root problem in the distinction between webpage, image, JS, and data requests. I’m talking specifically about GET requests, which are supposed to perform no action but end up doing work anyway. My focus is on reducing invalid and bogus requests, and on whether there could be some verification as to what type of request it is. The browser knows what kind of thing it’s looking for when it fetches an image tag, but it doesn’t tell the server that it’s requesting an img. That is the ROOT of this problem.
An even stronger, albeit more complex, prevention method is to leverage server state – to generate (and track, with timeout) a unique random key for every single HTML FORM you send down to the client. We use a variant of this method on Stack Overflow with great success.
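The per-form key idea might look something like this sketch (the actual Stack Overflow implementation isn’t shown anywhere in the post; the names and the timeout value here are assumptions):

```python
import secrets
import time

FORM_KEYS = {}       # key -> expiry timestamp (server-side state)
KEY_TTL = 15 * 60    # assumed 15-minute timeout; the real value is a tuning choice

def new_form_key():
    """Render-time: mint a key and embed it as a hidden field in the form."""
    key = secrets.token_urlsafe(32)
    FORM_KEYS[key] = time.time() + KEY_TTL
    return key

def consume_form_key(key):
    """Submit-time: a key is valid exactly once, and only before it expires."""
    expiry = FORM_KEYS.pop(key, 0)
    return time.time() < expiry
```

Single-use keys also stop replay, but as the earlier comment about using a form after lunch points out, the timeout trades security against usability.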
Doesn’t maintaining this kind of state open up your website to denial-of-service attacks? An attacker could cause your site to generate so many of these form-tracking records that either 1) your server falls over from the memory required (if there’s no upper limit on the total number), or 2) the form-tracking records of legitimate users get pushed out of the table (if there is an upper limit).
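A toy sketch of that second failure mode: assuming a hard cap with oldest-first eviction (one plausible policy, not necessarily what any real site does), an attacker who requests forms rapidly flushes legitimate users’ keys out of the table:

```python
from collections import OrderedDict
import secrets

MAX_KEYS = 3  # tiny cap purely for illustration

form_keys = OrderedDict()  # key -> owner, oldest entry first

def issue_key(owner):
    if len(form_keys) >= MAX_KEYS:
        form_keys.popitem(last=False)  # evict the oldest key
    key = secrets.token_urlsafe(16)
    form_keys[key] = owner
    return key

# A legitimate user renders a form and receives a key...
victim_key = issue_key("victim")
# ...then an attacker requests enough forms to flush the table:
for _ in range(MAX_KEYS):
    issue_key("attacker")

# The victim's still-open form is now invalid through no fault of theirs.
assert victim_key not in form_keys
```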