Schneier on Security
A blog covering security and security technology.
September 23, 2011
Man-in-the-Middle Attack Against SSL 3.0/TLS 1.0
It's the Browser Exploit Against SSL/TLS Tool, or BEAST:
Using the known text blocks, BEAST can then use information collected to decrypt the target's AES-encrypted requests, including encrypted cookies, and then hijack the no-longer secure connection. That decryption happens slowly, however; BEAST currently needs sessions of at least a half-hour to break cookies using keys over 1,000 characters long.
The attack, according to Duong, is capable of intercepting sessions with PayPal and other services that still use TLS 1.0, which would be most secure sites, since follow-on versions of TLS aren't yet supported in most browsers or Web server implementations.
While Rizzo and Duong believe BEAST is the first attack against SSL 3.0 that decrypts HTTPS requests, the vulnerability that BEAST exploits is well-known; BT chief security technology officer Bruce Schneier and UC Berkeley's David Wagner pointed out in a 1999 analysis of SSL 3.0 that "SSL will provide a lot of known plain-text to the eavesdropper, but there seems to be no better alternative." And TLS's vulnerability to man-in-the-middle attacks was made public in 2009. The IETF's TLS Working Group published a fix for the problem, but the fix is unsupported by SSL.
EDITED TO ADD: Good analysis.
Posted on September 23, 2011 at 1:37 PM
• 34 Comments
Most web servers and several major browsers are open source; the flaw has been known for years, and the fix has been published for over a year.
Forcing people to release source is clearly the solution to our security woes.
SSL 3.0 could have implemented the renegotiation indication extension, and in any case, most implementations of SSL 3.0 are also implementations of TLS 1.0. I am not certain whether any SSL3-only clients implemented the RI extension, but it was deliberately designed the way it was, in part to leave open the possibility of SSL 3.0 implementations. Recall that SSL was not an IETF standard, so the IETF WG could not have required or standardized RI for SSL3.
On the current attack, preferring RC4-SHA should work around the problem, at least according to the early details. I blogged about it at the linked URL above.
The problem discovered by Duong and Rizzo and exploited by their BEAST tool is NOT the same as the one made public in 2009. The fix to that 2009 problem is irrelevant to the attack of Duong and Rizzo. Your comment is pretty much terminally confused here.
Something more constructive than my previous comment:
The vulnerability on which the new attack is based is indeed well known. It's due to the use of predictable IVs for CBC mode encryption in SSL3.0 and TLS1.0. That this can cause problems was first observed by Dai and Rogaway, as noted by Bodo Moeller here:
Also relevant is a 2006 article by Greg Bard, showing how the distinguishing attack of Dai and Rogaway might be turned into a plaintext recovery attack under certain circumstances:
Essentially, Duong and Rizzo have rediscovered Bard's attack and made it work for real against https cookies. To do this, they had to come up with some new twists, and much kudos is due to them for that.
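The predictable-IV property those analyses describe can be shown in a few lines. This is a minimal sketch with a hypothetical SHA-256-based keyed function standing in for the real block cipher (it is not invertible, but only the encryption direction is needed to illustrate the point); all keys and messages here are made up:

```python
import hashlib

BLOCK = 16

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Hypothetical stand-in for AES: a keyed PRF built from SHA-256.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key: bytes, iv: bytes, msg: bytes) -> bytes:
    # Plain CBC: each plaintext block is XORed with the previous
    # ciphertext block (or the IV) before being enciphered.
    out, prev = [], iv
    for i in range(0, len(msg), BLOCK):
        prev = toy_cipher(key, bytes(a ^ b for a, b in zip(msg[i:i + BLOCK], prev)))
        out.append(prev)
    return b"".join(out)

key, iv0 = b"k" * 16, b"\x00" * 16
m1, m2 = b"first record....", b"second record..."  # one block each

# SSL 3.0 / TLS 1.0 "running IV": record 2's IV is the last ciphertext
# block of record 1, so consecutive records chain exactly as if they
# were one long CBC message -- and that IV is visible on the wire
# before record 2 is ever sent.
c1 = cbc_encrypt(key, iv0, m1)
c2 = cbc_encrypt(key, c1[-BLOCK:], m2)
assert c1 + c2 == cbc_encrypt(key, iv0, m1 + m2)
```

The assertion is the whole story: an eavesdropper always knows the IV that will be used for the next record before the sender commits to its plaintext.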
The long-term solution is to switch to TLS 1.1 or 1.2, which mandate per-packet random IVs. A full security analysis of this scheme is to appear in a forthcoming paper at Asiacrypt 2011 (this is a shameless plug for my own work with Tom Ristenpart and Tom Shrimpton).
However, this is not an easy transition to make because of lack of support in clients and servers for these versions of TLS. There are other workarounds, including the sending of a dummy packet for each real packet on the TLS connection. This hack was already implemented in OpenSSH some years ago because of exactly the same issue with predictable IVs there.
Your comment is terminally misdirected: if anything's "confused", it's the quoted article, which talks about one MITM attack and then links to a different MITM attack.
That's irrelevant, of course: "the vulnerability that BEAST exploits is well-known", and yet here we are.
The notion of effectively forcing companies to release their source, or else be ground into the dust by frivolous lawsuits, is absurd.
Just tried logging into my bank account with all ciphers except rc4_128 disabled, and the connection failed. Turned aes_256 back on and I got right in. :-(
@Glenn: my bad, I didn't realise that what was posted was just a quote from some random article. I guess Bruce is too busy these days to check for accuracy himself or actually write his own commentary :-)
Fred Whelps: nope. When you are inside an iFrame of a different origin, you can't simply grab the cookies of the main window.
Browser security does not allow you to read content (including cookies) coming from a different domain.
So if I understand correctly:
That just seems like a really weird choice.
Two things that hit me while reading article and comments:
1. "The IETF's TLS Working Group published a fix for the problem, but the fix is unsupported by SSL." I guess this means just SSL, not TLS, because as far as I know OpenSSL, for example, implemented the RFC in their library quite soon after the publication of the standard. So as long as TLS is used and no fallback to SSL is allowed, this should effectively close the vulnerability from 2009.
Did I get that right?
All this talk about "browsers" is misleading. The exploit is fundamentally about TLS, so SSH would be vulnerable too, say in a wireless context. Correct me if wrong.
So... Time to scrap SSL completely? :)
Maybe we should make a TLS 1.3 right away (or go for using TLS 1.2) and start a campaign to just get everything below that dropped.
SSH uses its own protocol and not TLS. You build OpenSSH with the OpenSSL library only for the crypto support.
While I can appreciate the technical achievement that this represents, I don't believe that it presents any reason to panic (or even change).
One potential application that justifies the trouble is inline encryption or privacy-preserving SSL sessions in apps. In other words, plenty of potentially useful data can be recorded in encrypted form but would normally remain inaccessible after a compromise; compromising the key gives access to that previously recorded data. Additionally, if the key is also used for authentication, the attacker might gain a forgery opportunity they can use without maintaining control of an actual machine (e.g. MITM).
I haven't even gotten started on APTs. There's quite a lot of potential in cracking a protocol which people assume will be secure.
I wonder if this could be the final nail in the coffin for the idea of using a block cipher in CBC-mode for encrypting communications channels. It has no real security advantages over a stream mode (like counter or OFB) or a stream cipher, and is more complicated to implement correctly due to issues like padding and having to implement the decryption side of the cipher. Bruce argued in "Practical Cryptography" back in 2003 that counter mode was probably the way to go, and he wasn't the only one saying so, yet it still feels like an oddity when it does show up in real life.
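To illustrate the point about stream modes: counter mode needs no padding, never uses the cipher's decryption direction, and is a handful of lines. This is a sketch with the same hypothetical SHA-256-based keyed function standing in for a real block cipher; the key and nonce values are illustrative:

```python
import hashlib

BLOCK = 16

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Hypothetical keyed PRF standing in for AES.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ctr_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Counter mode: encipher nonce||counter to get a keystream, then XOR
    # with the data.  Encryption and decryption are the same operation,
    # there is no padding, and no block-cipher decryption is needed.
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        ks = toy_cipher(key, nonce + i.to_bytes(8, "big"))
        out.extend(a ^ b for a, b in zip(data[i:i + BLOCK], ks))
    return bytes(out)

key, nonce = b"k" * 16, b"n" * 8
msg = b"no padding needed, any length"
ct = ctr_crypt(key, nonce, msg)
assert ctr_crypt(key, nonce, ct) == msg  # decrypt == encrypt
```

The usual caveat applies just as it does to CBC: the nonce/counter pair must never repeat under the same key.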
The exploit as presented relies on the WebSocket API. However, the WebSocket API really should be preventing the connection due to the same-origin policy (as the port is different). This wouldn't be the first security flaw in WebSockets.
The paper claims that Java and Silverlight can also be attacked. What these should be doing isn't clear, but they probably should be enforcing the same-origin policy too.
Nice try pseudo-9ac322038b09b254b658252a35c4fae7f9b84b9d!!
Given that TLS 1.1 and 1.2 improved on 1.0, does anyone know why IE9 has them disabled by default? What reason is there to not enable these out-of-the-box?
The simple answer is "compatibility issues" with web servers, nearly none of which run anything other than SSL 3.0 / TLS 1.0.
A couple of comments.
I like Eric Rescorla's writeup about the BEAST attack:
The BEAST attack exploits predictable IVs for future SSL records of the same connection, for TLS cipher suites with block ciphers in CBC mode in SSLv3 and TLSv1.0. This is possible because of the "running IV", i.e. reuse of the last cipher block from the previous SSL record as the initial IV for a new SSL record. And all that BEAST succeeds in doing is recovering blocks of plaintext by lots of guessing (trial and error); BEAST does _not_ recover any encryption keys.
A prerequisite for the BEAST attack is that your Browser will happily execute arbitrary malware of the attacker's choice (Man-in-the-Browser attack), allowing that malware to send arbitrary requests to any attacker-chosen Web Servers, and where the Browser will obediently insert allegedly confidential existing Cookies into the header of each request performed by the malware. Really, this is a "you are p0wnd" prerequisite, so it is just mildly interesting that the Browser did not serve that Cookie to the malware on a golden platter as well. But I hope it helps to remind folks that lightheartedly mixing untrustworthy code with sensitive data in protocols may have unintended consequences for confidentiality, and that it is a bad idea to make bold assumptions about properties of protocols beyond their design limits.
Quoting the TLSv1.2 spec: http://tools.ietf.org/html/rfc5246#page-16
Any protocol designed for use over TLS must be carefully designed to
deal with all possible attacks against it. As a practical matter,
this means that the protocol designer must be aware of what security
properties TLS does and does not provide and cannot safely rely on
SSLv3 and TLSv1.0, with their running IV for cipher suites with block ciphers in CBC mode provide an "encryption oracle" (enabling a "chosen plaintext attack") when supposedly confidential data is mixed with arbitrary data from an attacker, and the attacker is allowed to watch SSL records on the wire before submitting new data that will be transmitted at the beginning of new SSL records on this very connection.
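The "encryption oracle" above reduces to a one-block equation: the attacker who wants to test whether a secret block equals `guess` submits `guess XOR iv_victim XOR iv_next`; if the resulting ciphertext block repeats, the guess was right. A minimal sketch, again with a hypothetical SHA-256-based stand-in for the real block cipher and made-up keys and cookie values:

```python
import hashlib

BLOCK = 16

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Hypothetical keyed PRF standing in for the block cipher; only the
    # encryption direction is needed to show the oracle.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key: bytes, iv: bytes, msg: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(msg), BLOCK):
        prev = toy_cipher(key, bytes(a ^ b for a, b in zip(msg[i:i + BLOCK], prev)))
        out.append(prev)
    return b"".join(out)

key = b"k" * 16

# The victim's record: one secret block, CBC-encrypted under a known IV.
secret = b"Cookie: SID=42.."          # 16 bytes, unknown to the attacker
iv_victim = b"\x11" * 16              # visible on the wire
c_victim = cbc_encrypt(key, iv_victim, secret)

# Running IV: the next record's IV is the last ciphertext block on the wire.
iv_next = c_victim[-BLOCK:]

def confirm(guess: bytes) -> bool:
    probe = bytes(g ^ a ^ b for g, a, b in zip(guess, iv_victim, iv_next))
    # If the guess was right, CBC feeds the same cipher input as before,
    # so the ciphertext block repeats.
    return cbc_encrypt(key, iv_next, probe) == c_victim

assert confirm(b"Cookie: SID=42..")
assert not confirm(b"Cookie: SID=99..")
```

BEAST's contribution was turning this block-at-a-time equality test into byte-at-a-time cookie recovery by controlling where the secret falls within a block.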
Different TLS connections (even when resumed from the same cached TLS session) and even both directions of the same TLS connection use distinct encryption and MAC keys, so they're isolated from each other.
The distortion that TLSv1.1 adds with the use of an explicit IV for each new TLS record should provide a sufficient security margin against this kind of attack, but the security margin depends on the cipher's block size rather than its key size. A non-predictable IV distorts the output of a "CBC encryption oracle" against chosen-plaintext attacks. Fortunately, the SSL records use a MAC for integrity protection inside the CBC encryption as an integral part of the protocol, or the explicit IV prepended to the ciphertext could have created a "CBC decryption oracle" in return.
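The TLSv1.1 fix is small: pick a fresh random IV per record and send it explicitly in front of the ciphertext, so the next record's IV can no longer be predicted from the wire. A sketch under the same toy-cipher assumption as above:

```python
import hashlib
import os

BLOCK = 16

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Hypothetical keyed PRF standing in for the real block cipher.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key: bytes, iv: bytes, msg: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(msg), BLOCK):
        prev = toy_cipher(key, bytes(a ^ b for a, b in zip(msg[i:i + BLOCK], prev)))
        out.append(prev)
    return b"".join(out)

def encrypt_record_tls11(key: bytes, msg: bytes) -> bytes:
    # TLS 1.1+ style: fresh random IV per record, transmitted explicitly.
    # The attacker cannot know it before committing to a chosen plaintext.
    iv = os.urandom(BLOCK)
    return iv + cbc_encrypt(key, iv, msg)

key = b"k" * 16
a = encrypt_record_tls11(key, b"same plaintext..")
b = encrypt_record_tls11(key, b"same plaintext..")
assert a != b  # identical plaintexts no longer yield identical ciphertexts
```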
As far as Authentication cookies are concerned -- these aren't "valuable secrets on their own" but rather a means to a specific purpose (limiting who can submit a request that will be executed under an existing session). So with respect to what BEAST did in the PayPal attack (it uncovered the plaintext of an SSL cookie that was silently assumed not to be accessible to malware running in the browser), there is really *nothing* that it gained from that particular attack that it could not have done directly and without the attack: submitting requests that are executed under the session identified by that SSL cookie. Had BEAST submitted evil requests directly, the server would have had not the slightest indication that it was being attacked. BEAST's guessing attack itself is noticeable heuristically on the server from its volume of network traffic and its traffic characteristics (a high number of strange requests with lots of small, dribbling SSL records containing fairly random binary junk).
What if the user put the TLS connection into a sandbox? E.g. into incognito mode.
btw. this would be one possible mitigation for SSLv3 and TLSv1.0:
OpenSSL has been able to do this for years with empty (instead of one-byte) SSL records, but "empty" is a real border case that might face more interop problems than a prepended fragment of 1 byte.
It is important to do this only for application data records (and it is only necessary for application data records), because a large number of the installed base of TLS implementations will abort the connection when faced with fragmented handshake messages, e.g.:
This solution incurs some additional overhead (slightly more on the receiver) because of the additional TLS record that needs to be processed, and it is only necessary for SSLv3 & TLSv1.0. Although the TLS protocol has always explicitly stated that message boundaries will not be preserved by the record layer, there might exist a few brittle application designs that choke on such newly imposed additional fragmentation of application data.
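The mitigation described above (often called 1/n-1 record splitting) is mechanically trivial; the security comes from the fact that the one-byte record's ciphertext, which becomes the running IV for the rest of the data, is not predictable before the attacker must commit to a guess. A minimal sketch of the splitting step only (the framing shown is illustrative, not the actual TLS record format):

```python
def split_record(app_data: bytes) -> list:
    # 1/n-1 splitting: send the first byte of application data in its own
    # SSL record.  Its ciphertext block then serves as the (now
    # unpredictable) IV for the record carrying the remaining n-1 bytes.
    if len(app_data) < 2:
        return [app_data]
    return [app_data[:1], app_data[1:]]

records = split_record(b"GET /account HTTP/1.1\r\n")
assert records == [b"G", b"ET /account HTTP/1.1\r\n"]
assert b"".join(records) == b"GET /account HTTP/1.1\r\n"
```

As noted, only application-data records should be split; fragmenting handshake messages this way breaks many deployed implementations.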
@jouy: The BEAST attack is all and only about malware in the browser: malware that can be entirely remote-controlled by the attacker. And that malware requires a feedback loop from a cooperating network monitor in order to make the attack adaptive, i.e. trickle out a new SSL record on the established TLS connection where the Cookie had been sent, read the SSL record on the network to learn the last ciphertext block from it, and calculate the next "guess" to make it an adaptive chosen-plaintext attack. If it were just about sending requests, then there would not be any advantage for SSLv3/TLSv1.0 over TLSv1.1.
Have you ever thought about what happens if your first ISP router is captured, and your connection is then routed to another site and back to your ISP?
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.