New Cross-Site Request Forgery Attacks

Interesting:

CSRF vulnerabilities occur when a website allows an authenticated user to perform a sensitive action but does not verify that the user herself is invoking that action. The key to understanding CSRF attacks is to recognize that websites typically don't verify that a request came from an authorized user. Instead they verify only that the request came from the browser of an authorized user. Because browsers run code sent by multiple sites, there is a danger that one site will (unbeknownst to the user) send a request to a second site, and the second site will mistakenly think that the user authorized the request.

If a user visits an attacker's website, the attacker can force the user's browser to send a request to a page that performs a sensitive action on behalf of the user. The target website sees a request coming from an authenticated user and happily performs some action, whether it was invoked by the user or not. CSRF attacks have been confused with Cross-Site Scripting (XSS) attacks, but they are very different. A site completely protected from XSS is still vulnerable to CSRF attacks if no protections are taken.
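The vulnerable pattern described above can be illustrated with a deliberately broken handler (a hypothetical sketch; none of these names come from the paper): the server authorizes the action purely on the session cookie and never checks that the user intended the request.

```python
# Hypothetical sketch of the vulnerable pattern: the server checks only
# that the request carries a valid session cookie, never who initiated it.
SESSIONS = {"abc123": "alice"}           # session cookie -> logged-in user
EMAILS = {"alice": "alice@example.com"}  # account data

def change_email(cookie, new_email):
    user = SESSIONS.get(cookie)
    if user is None:
        return "401 Unauthorized"
    # Nothing here ties the request to a page this site served, so a request
    # triggered by attacker.example from Alice's browser also passes.
    EMAILS[user] = new_email
    return "200 OK"
```

Because the browser attaches the cookie automatically, a forged cross-site request from the victim's browser is indistinguishable from a legitimate one.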

Paper here.

Posted on October 6, 2008 at 5:42 AM • 26 Comments

Comments

Sparky • October 6, 2008 7:57 AM

Interesting how new attack vectors are still found every now and then.

Couldn't this be fixed completely client-side, by having the browser tag the tabs that have been opened from a link on the authenticated site, and not send the authenticating information if the tab hasn't been tagged? The tag should probably be erased as soon as the user types anything in the address bar, or follows a link to another domain (in other words, when the user is re-using the browser tab).

Bahbar • October 6, 2008 8:18 AM

This may fix the problem, but will undoubtedly break some usage patterns.
I use various Google services all the time, and I usually start new tabs from scratch, expecting my authentication to still apply.
Also, "moving to another domain" is something a lot of sites do even when moving between things that, to the user, seem completely internal.

TheSchmurph • October 6, 2008 8:21 AM

Am I missing something? This isn't new. I'd say it's quite old. Consider CSRF in combination with XSS. It's been used a lot.

And no, it can't be solved very well in the browsers. Use a separate browser for, e.g., Facebook and do all your pr0n smurfing in another browser.

Using server-side tokens is the way to go to mitigate CSRF. And make sure all your XSS vulns are taken care of, or your site will be used to CSRF someone else's.
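The server-side token approach recommended here (the synchronizer-token pattern) can be sketched as follows; all names are illustrative, not from any particular framework:

```python
import hmac
import secrets

# Minimal synchronizer-token sketch: a random per-session token is stored
# server-side and embedded in each form; every state-changing request must
# echo it back.
SESSION_TOKENS = {}  # session id -> anti-CSRF token

def issue_token(session_id):
    """Mint a random token, remember it, and embed it in each form served."""
    token = secrets.token_urlsafe(32)
    SESSION_TOKENS[session_id] = token
    return token

def check_token(session_id, submitted):
    """Reject any state-changing request that does not echo the token."""
    expected = SESSION_TOKENS.get(session_id)
    # Constant-time comparison avoids leaking the token through timing.
    return expected is not None and hmac.compare_digest(expected, submitted)
```

The attacker's page can make the victim's browser send the cookie, but cannot read the token out of the victim's pages, so the forged request fails the check.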

Kaukomieli • October 6, 2008 8:31 AM

This ain't a new attack vector, but in times of ubiquitous tabbed browsing and persistent logins it has a far greater reach than it did a couple of years ago.

wally • October 6, 2008 9:04 AM

@sparky, ironically, many of the solutions you suggest are not possible due to the security built into the browser. OWASP suggests a similar solution (including a unique value in each page that is validated on the next request) but it doesn't solve the problem. It just raises the bar.

bex • October 6, 2008 9:31 AM

well... a new random "request-scope" ID would raise the bar quite a bit, wouldn't it?

I don't think one tab in a browser is allowed to read data from another tab. Barring a browser bug, or a way to predict the next "request-scope" ID, I'd think it's a pretty good solution.

Tod • October 6, 2008 9:36 AM

New to academia, not new to the field -- which is pretty normal; academia is terrible at coming up with "new" anything for computer/network security.

Anyway, the major references are 2004 to 2006.


Wicked Lad • October 6, 2008 10:03 AM

I, too, have been reading about this attack vector for a while. For example, I remember a case from last December...
http://www.davidairey.com/...
...where an attacker used an XSRF attack to intercept a victim's e-mail. The attacker used this to hijack the victim's domain name registration and hold it for ransom.

Dmitry • October 6, 2008 10:07 AM

@TheSchmurph
It is very dangerous to think that you are safe as long as you do not surf pr0n. You may want to take a look at Thomas Dübendorfer's presentation "Are Internet users at risk?". The slides can be found here:
http://thomas.duebendorfer.ch/publications/talks/...
Page 12 shows the distribution of malicious URLs by content category. Adult content has twice as many malicious URLs as any other category, but malicious content is everywhere! Web sites often do not pre-screen the ads they serve to users. Some sites poorly validate content provided by users on their forums. Some sites simply get hacked (some 510,000 sites were compromised in April of this year via an ASP SQL injection). It is unrealistic to have a separate browser for each website you may want to visit.

So, it is a serious server-side issue that cannot be solved on the client side without breaking many legitimate usages. There was a long discussion on what can be done about CSRF in November 2006, when the passwords of many MySpace users were stolen. (See Bug 360493 on mozilla.org.)

TheSchmurphster • October 6, 2008 10:24 AM

@Dmitry, Well, of course it's naive to think that no-porn is safe! (it is bad for the psyche, and the warrior-spirit! :)

It's naive to think that "proper" sites are safe, yes... bearing in mind all the ads (now, that's a good point of entry).

It's merely an example. If you don't want your Facebook sending weed ads to your friends, use it in another browser (and don't click on stupid links you receive). Won't mitigate evil flash ads, but it sure as phlegm will raise the bar a considerable bit. On that particular site.

Point being: I don't mind pr0n sites, or other I'm-not-logged-into sites for that matter, CSRFing each other. So, they get their own browser.

All in all, the problem is architectural - and not "fixed" easily, most certainly not on the client side.

It's just exploiting what people tend to forget - the web is stateless.

Kevin • October 6, 2008 10:57 AM

Couldn't this be solved completely by servers always checking the referer header to make sure anything sensitive originates from one of their own pages?
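A Referer check of the kind Kevin suggests is easy to sketch (host list and names are illustrative), but it cannot be the whole answer: the header is optional and is often stripped by proxies and privacy tools, so a strict check also rejects some legitimate users, while a lenient one lets header-less forged requests through.

```python
from urllib.parse import urlparse

# Hosts this site considers its own pages; purely illustrative values.
TRUSTED_HOSTS = {"example.com", "www.example.com"}

def referer_ok(referer):
    # Strict policy: reject requests with no Referer header at all.
    if not referer:
        return False
    return urlparse(referer).hostname in TRUSTED_HOSTS
```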

Neal Lester • October 6, 2008 11:09 AM

This is also a usability issue which can adversely impact users in the absence of a malicious attacker. A simple example would be a user that has two accounts on a single web site:

1) Log in as User1
2) Update phone number for User1
3) Log out.
4) Log in as User2
5) Whoops, I gave the wrong phone number for User1. Click back button, fix and resubmit the User1 phone number.
6) On the server, User2's phone number is now replaced with the value intended for User1.

TheSchmurph • October 6, 2008 11:21 AM

@Kevin No.

@Neal Lester, That'd be a very badly written site to not even kill the session at step 3.

Besides... how could your example happen? If you store the user ID in the POST to change the phone number, maybe. Then you'd probably get what's coming to you anyway. I may be missing something here?

webappster • October 6, 2008 12:03 PM

Hi,

This attack is far from new. It's been known to the web application security community for a couple of years, alongside cross-site scripting and SQL injection. OWASP has been talking about it for a while now. And recently the new PCI-DSS v1.2 included it in requirement 6.

I have noticed that this blog doesn't cover web application security issues, or only rarely. That's surprising, because web apps are one of the primary targets and sources of data breaches. For such a prominent blog to ignore web app security issues is perhaps an indication that most of the security community hasn't woken up to the dangers posed by insecure web applications.

posted a comment • October 6, 2008 12:30 PM

@webappster
This is more of a blog on Schneier's life than anything else. A story won't appear until it comes up on his radar. Hence the bias towards TSA, counter-terrorism and squid.

Kaukomieli • October 6, 2008 3:03 PM

@webappster:

This blog is not a news site. There are plenty of those already out there with varying quality.
As far as I am concerned this is a blog where readers are invited to the thoughts of Bruce Schneier and what crosses his mind or gets his attention - it might even be squids...

jnarvey • October 6, 2008 3:52 PM

Great post. These kinds of threats are popping up all the time. Good that the companies involved, like ING Direct and the NYTimes took action quickly, but the point is to be proactive, not reactive. Ah, well.

Good blog post on the related topic: If security is not built in, it's not there

http://www.pcis.com/web/vvblog.nsf/dx/...

Dmitry • October 6, 2008 4:56 PM

@Wicked Lad
It is still not clear whether David Airey was a victim of the XSRF attack. See the last comments on the page you referred to above. On August 19th, 2008, Josiah Carlson asked David to modify his original post to specify that he was not exploited due to the XSRF hole, to which David replied: "I’m still not sure exactly how I was exploited. It’s in the past now, thankfully."


@TheSchmurphster
HTTP is a stateless protocol, but saying that the web is stateless is not exactly correct, due to cookies. In truly stateless communication the result of any request is unrelated to previous ones. If the web were truly stateless, the XSRF attack would not be possible (you would not be able to fake anything except the source IP). What happens in XSRF is that a malicious site exploits your current session (your authorization token) to send a request on your behalf.

Completely prohibiting cross-site posting would prevent the XSRF attack, but there are many cases where sites need to communicate across domain names. A more reasonable approach is a cross-site policy on the web server that specifies which domains are trusted, rejecting POST requests that carry an untrusted referrer.

BW • October 7, 2008 3:29 AM

The solution to this problem is to require each critical request to carry a session ID that isn't stored in the cookies but only in script space and in a hidden form field. This isn't a browser bug; it's a website design issue.
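BW's hidden-field approach can be sketched like this (all names hypothetical): the secret travels in the page body, which another origin cannot read, rather than in a cookie, which the browser attaches to cross-site requests automatically.

```python
import secrets

def render_transfer_form(token=None):
    # Embed the per-request secret in the page itself, not the cookie jar.
    # A cross-site request carries cookies but cannot supply this value.
    token = token or secrets.token_hex(16)
    form = (
        '<form method="POST" action="/transfer">\n'
        f'  <input type="hidden" name="csrf_token" value="{token}">\n'
        '  <input type="submit" value="Transfer">\n'
        "</form>"
    )
    return token, form
```

The server remembers the token it issued and rejects any POST whose hidden field does not match, as in the token-checking sketch earlier in the thread.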

Clive Robinson • October 7, 2008 4:11 AM

@ Dmitry,

"HTTP is a stateless protocol, but saying that the web is stateless is not exactly correct due to cookies."

And "that is the rub" as they say.

Back in 1995 I gave a short talk at Kingston University about bolting state onto stateless protocols, specifically HTTP, and not checking the 'where from' / 'who by' / 'why' of each state change.

The response was at best a yawn. Only now, well over a decade later, are some (but not most) people waking up to the issues.

The simple fact is that you really have to understand the implications of overlaying a state-based protocol onto a stateless protocol, especially when it is done in an ad hoc way like "cookies" or any of the other mishmash attempts by self-taught "code cutters" in the name of expediency.

If you think about things in a general way you will realise that all of the security problems with state-based protocols on stateless protocols apply virtually across the board.

That is, a generalisation of an attack against, say, TCP on IP will probably work against some other protocol layered on, say, HTTP.

I fully expect to see a lot more of these attacks as time goes on. Eventually people will wake up and realise it's time to replace the "ad hoc" mishmash of "I invented it and it works for me" protocols with ones designed specifically to deal with security issues, and they might even decide that HTTP is no longer fit for purpose.

Shane • October 7, 2008 1:44 PM

Completely old news, as is the incredibly easy fix that most competent developers implemented straight away, a long time ago.

Maneesh • October 9, 2008 7:30 PM

A bank has implemented a feature on its website that prevents double-clicks, back, and refresh, primarily aimed at preventing multiple submits (in cases like fund transfers, etc.). Though this feature was not meant to counter CSRF, its design is robust enough that CSRF attacks are ineffective against the site, because it addresses the fundamental flaw: CSRF succeeds because most URLs are predictable or static and remain the same across "logged in" sessions.

Maneesh • October 9, 2008 7:32 PM

Addressing this fundamental flaw provides protection against CSRF, and also against the recently discovered clickjacking (aimed at cross-domain attacks).
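The property Maneesh describes, action URLs that are unpredictable and differ across sessions, can be sketched as single-use tokens baked into the URL (all names illustrative):

```python
import secrets

PENDING = {}  # one-time token -> (user, action) awaiting confirmation

def mint_action_url(user, action):
    # Each sensitive URL is random and minted fresh for this session.
    token = secrets.token_urlsafe(16)
    PENDING[token] = (user, action)
    return f"/do/{action}?t={token}"

def consume(token):
    # pop() makes each URL single-use: double-clicks, back/refresh
    # resubmits, and forged static URLs all fail.
    return PENDING.pop(token, None)
```

Because an attacker cannot predict the token and a replayed URL is already spent, both multiple-submit bugs and forged cross-site requests are blocked by the same mechanism.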
