Collision Attacks Against 64-Bit Block Ciphers

We’ve long known that 64 bits is too small for a block cipher these days. That’s why new block ciphers like AES have 128-bit, or larger, block sizes. The insecurity of the smaller block is nicely illustrated by a new attack called “Sweet32.” It exploits the ability to find block collisions in Internet protocols to decrypt some traffic, even though the attackers never learn the key.

Paper here. Matthew Green has a nice explanation of the attack. And some news articles. Hacker News thread.

Posted on August 26, 2016 at 2:19 PM · 34 Comments


Lisa August 26, 2016 5:04 PM

It is clear that this attack requires a huge number of blocks (~2^(n/2)) to work, which is a concern for high bandwidth data transfers such as video streaming.
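For scale, here is a rough back-of-the-envelope sketch of what that ~2^(n/2) bound means for a 64-bit block; these are estimates of when collisions become likely, not exact attack requirements:

```python
block_bits = 64
block_bytes = block_bits // 8

# Birthday bound: a collision between two ciphertext blocks becomes
# likely after roughly 2^(n/2) blocks encrypted under a single key.
blocks_at_bound = 2 ** (block_bits // 2)       # 2^32 blocks
data_at_bound = blocks_at_bound * block_bytes  # bytes under one key

print(f"{data_at_bound / 2**30:.0f} GiB")      # ~32 GiB for a 64-bit block
```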

But I do not see how this attack would be practical against a banking website using TLS with session-generated 3DES (3TDEA) keys, lightweight webpages of mostly textual content with a few small static images, and a session timeout.

Discussion on the IETF CFRG mailing list has many advocating removal of the 3DES cipher suites from TLS 1.3, but this seems like overkill when it appears that it can still be used safely on low-bandwidth websites.

Jonathan Wilson August 26, 2016 6:28 PM

Is there any reason for the newer TLS versions to support old ciphers like 3DES? Why would anyone use 3DES instead of something newer and more secure?

IMO the proliferation of different cipher suite options is a bad idea. TLS 1.3 should pick the most secure set of options (e.g. only SHA2 and SHA3 for hashing and only AES for block ciphers) and mandate those only as being supported. If you need RC4 or 3DES or SHA1 or some other old algorithm for some reason, you can keep using TLS 1.2 or below and not claim to support TLS 1.3 at all.

Anton August 26, 2016 8:41 PM

Why would anyone use 3DES instead of something newer and more secure?

Because it is hard to install modern software on old hardware. Windows XP is still quite popular in poorer regions of the world. Not everyone has money for frequent hardware upgrades.

Dan August 26, 2016 11:58 PM

Because it is hard to install modern software on old hardware. Windows XP is still quite popular in poor regions of the world.

Doesn’t Opera still support Windows XP? If so, can’t it use modern ciphers or does it still rely on XP’s cipher suite?

Brad August 27, 2016 1:33 AM

  1. Whenever I hear the loudest, most media-repeated, ultra-strident advocates for a particular security concept or technology, sometimes even using reverse psychology, my BS-meter goes off the scale (e.g. "unlock the terrorist iPhone," "ban BlackBerry Messenger, it harms policing," "Bitcoin is anonymous, it hides criminals"). The age-old classic play is to arrange/find/buy some backdoor (of any variety) and then scream "it's securing the enemy, think of the children!" Not only do people buy it, they pass on the info free of charge.
  2. Mandating only the "most secure" ciphers smacks of arrogance. Analysis will continue, vulnerabilities will keep coming to light, and the definition of "most secure" will change. Game theory can at times even disprove the existence of a single most secure choice.

Maintain flexibility; rigidity in this domain leads to brittleness, not strength. A robustly secure world is not a monoculture, so don't assist class breaks. (Kudos for 'Beyond Fear'.)

Jonathan Wilson August 27, 2016 3:58 AM

@Anton If you are on XP, you will either be using an old, no-longer-supported browser (e.g. IE6/7/8, old Firefox, old Chrome, old Opera, whatever else), in which case it will never support TLS 1.3, so removing 3DES from TLS 1.3 doesn't matter there. Or you will be running Firefox (the only browser I can find whose latest, still-being-developed version still supports XP), and Firefox doesn't care that Windows XP only ships old ciphers like 3DES, since Firefox has its own crypto code and can implement whatever set of ciphers the TLS standards people decide should be in TLS 1.3 just as easily on XP as on any other platform it supports.

There is absolutely no downside that I can see (even for users of older OSs or older hardware) to excluding 3DES from the allowed ciphers in TLS 1.3.
I doubt there will ever be a web browser or any other TLS implementation (on any hardware or software combination) that needs TLS 1.3 but can't implement the stronger ciphers and so needs 3DES support in TLS 1.3.

Marc-André Servant August 27, 2016 7:24 AM

This is only one possible attack on 64-bit ciphers. If you have a network full of pooled, dedicated chips (and you pay their owners), they will regularly find SHA-256 hashes beginning with 64 leading zero bits (once every minute or so at current hashrates).
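A rough sanity check of that claim; the aggregate hashrate here is an assumed 2016-era ballpark figure, not a measured value:

```python
# Expected time for a large hashing network to hit a 64-zero-bit prefix
# by brute force. A uniformly random hash starts with 64 zero bits with
# probability 2^-64, so ~2^64 trials are expected per hit.
assumed_hashrate = 1.5e18    # hashes per second (assumption, not measured)
expected_hashes = 2.0 ** 64  # ~1.8e19 trials per 64-zero-bit hash

seconds = expected_hashes / assumed_hashrate
print(f"~{seconds:.0f} s per 64-zero-bit hash")
```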

This proves 64-bit crypto is vulnerable to brute force alone, without even considering fancier attacks. There’s enough money in the hands of most large nation-states to build a custom hardware cracker, and cybercriminals could do it with a medium-to-large botnet.

ab praeceptis August 27, 2016 8:13 AM

I take this to indicate the sad state we’re in. No, I’m not talking about “64-bit algo attacked”. I’m talking about the utterly poor level of analyzing and reporting.

First: Practically this is more to do with SSL/TLS and with the CBC idiocy than with “64-bit algos insecure”.

Why am I picking on those “just pesky details”? Because it’s those that offer the attack vector, that “weaken the door lock”.

Just look at M. Green's text (and I actually do value M. Green for his often excellent explanations). But the matter is so ugly, and there is so much that smells, that even very good explainers have a hard time finding a reasonable compromise between explaining and avoiding shouting "Help! We are providing mental asylums with our work, and some of us might actually be patients."

Example in case: The nice thing about CBC is that … it can be proven … secure if we make various assumptions about the security of the underlying block cipher.

Let me tell you something: the number 42 can be proven to be secure, *if we make various assumptions about …*. Heck, making various assumptions, it could even be proven that being bitten by a cobra can be fun.

In reality, we can only make two reasonable assumptions: a) everyone with any remote chance whatsoever of getting at our bytes must be assumed to be an enemy, and a strong one at that; b) math works.

I'd like to add a third axiom: the most secure door lock is virtually nonsensical if mounted on a house of pumpkins.

CBC is a classic case of a) cryptologists not being immune to attacks of idiocy and b) no matter how good your cryptology, you'll fail if the engineers don't understand it or simply don't care ("committees", anyone?).

Nick P August 27, 2016 9:19 AM

@ ab

I thought this attack was funny when I read about it. Standard practice going way back is to change your keys regularly in CBC mode to counter any such risks. You also have to trust whoever is on the other end of the tunnel, for very obvious reasons. Imagine my surprise seeing on the front page of Hacker News a famous attack that (a) assumes no rekeying and (b) begins with malicious code. I said, "No shit 64-bit CBC would fail…" I'm thinking about submitting an article about how lawnmowers are broken and deadly because you can tape the handle down and drag them across the grass with your feet while lying in front of them.
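That rekeying practice can be sketched roughly like this; the cipher and key-generation callbacks here are hypothetical placeholders, not any real library API:

```python
import os

class RekeyingChannel:
    """Sketch: count blocks encrypted and rotate the key long before the
    2^32-block birthday bound of a 64-bit cipher is reached."""

    REKEY_AFTER_BLOCKS = 2 ** 20  # rekey every 8 MiB, far below 2^32 blocks
    BLOCK_BYTES = 8

    def __init__(self, encrypt_block, new_key):
        self._encrypt = encrypt_block  # hypothetical: (key, block) -> block
        self._new_key = new_key        # hypothetical: () -> fresh key
        self._key = new_key()
        self._blocks = 0

    def send_block(self, block: bytes) -> bytes:
        if self._blocks >= self.REKEY_AFTER_BLOCKS:
            self._key = self._new_key()  # fresh key resets collision exposure
            self._blocks = 0
        self._blocks += 1
        return self._encrypt(self._key, block)

# Toy stand-ins so the sketch runs; a real channel would negotiate keys
# and use a real block cipher, not this XOR placeholder.
chan = RekeyingChannel(
    encrypt_block=lambda k, b: bytes(x ^ k[i % 8] for i, x in enumerate(b)),
    new_key=lambda: os.urandom(8),
)
ct = chan.send_block(b"8bytes!!")
print(len(ct))  # 8
```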

Meanwhile, in the land of people using it as instructed, everything is fine. 🙂

Note: With your background in proof, you might like miTLS project. It’s a verified, reference implementation. Something that should’ve happened a long time ago.

@ All

You might like Not Quite So Broken TLS. They do it in OCaml, trying to keep as many protocol engines pure and testable as possible; the impure stuff is isolated into dedicated modules. Despite the safety checks and runtime, performance is within 73-84% of a C implementation. That, or something similar used in MirageOS, is available here. One small project for formal methodists: run it through the ML version of QuickCheck and convert it to ML to run through the CakeML compiler. You get more-tested source, then verified machine code. That's assuming miTLS isn't available due to licensing. If it is, then use it, but they might lock it up like CompCert.

ab praeceptis August 27, 2016 10:15 AM

Nick P

Thanks for the hint (miTLS). Funny: the very Karthikeyan Bhargavan who is one of the core people behind the problem this blog post is about is also deeply involved in crypto verification. That alone would be reason enough to have a closer look at it.

As I highly value the French in math and math-related fields (such as CS), I happen to know about the diverse INRIA activities and projects anyway.

Actually, you bring up something that is somewhat of a discomfort zone for me, albeit for "political" reasons, namely Microsoft, which plays a certain (important) role in miTLS, too.
Let me put it this way: I have learned to excessively dislike and mistrust Microsoft, and when I first heard about INRIA even thinking about talking to evil corp, I was disillusioned.

However, it's a matter of fairness for me to clearly state that I was wrong and INRIA was right (at least to a major degree). Evil corp really is working hard on getting things done better and right. Not that I would touch one of their "operating systems", but again, we can't but see that Microsoft is pumping serious money and effort into the good and right cause.

That said, will I use a tool in F#? Call me a funny old stick, but there are limits to my flexibility (particularly when evil corp is involved, as in "F# is their language").

F7? Hmmm, not bad, really, but in the end one of those cases where I feel "smart people, did smart things, but academics; as an engineer I will learn from it but not work with it". But I'm ready to say that F7 is among the few reasonable approaches I've seen in that field. And it delivers, it seems.

Apropos TLS: I happen to know of a project concerned with that problem too, from a very pragmatic engineer's angle (but theoretically sound). Unfortunately, I'm not in a position to say anything more about it.

Talking about that: there is a new OpenSSL version out, and it seems a nightmare is coming true. A lot of software using it can not even be rebuilt. Yuck.
(Wirth was spot on when insisting on, and working toward, a proper module concept and proper interfaces. Maybe after another nuclear meltdown we'll finally understand that.)

hawk August 27, 2016 10:40 AM


3DES is used extensively in physical security like building entry systems. When you swipe or prox a card to gain entry it is almost certainly using 3DES.

Nick P August 27, 2016 10:49 AM

@ ab

” evil corp really is working hard on getting things done better and right.”

The one disadvantage of working with Microsoft is they often patent or lock up improvements to prevent commercial competition. Aside from that, it surprises me that you're surprised by Microsoft Research doing a good job here. Microsoft Research, unlike the product teams, regularly does awesome research with smart people that often results in useful tooling, as opposed to the idealistic or worthless bullshit from the majority of CompSci. Their best work was VerveOS: a holistic effort that makes the best of the UNIXen look half-assed in QA. They also have driver, C, and ASM verifiers they're using in Windows products, especially the Hyper-V hypervisor. They also created a secure browser, a plugin architecture, and a SPARK-like language that might end up in their products. They even let you play with their tools here. They kick ass. Again, with the drawback I started with.

“That said, will I use a tool in F#? ”

If they give permission, I imagine it could be ported to SML or Ocaml easily enough. Or you can do a source-to-source translator from F# to them.

“But I’m ready to say that F7 is among the few reasonable approaches I’ve seen in that field. ”

I’m glad you’re being fair. Remember that stuff like this, done by extra-smart people, is supposed to help the average engineer by being reference specification. The Ocaml people mention this in their paper where they’ll implement TLS then compare traces between F7 and theirs. One can also gain understanding of the protocol through the formal implementations because they’re unambiguous. Reading specs takes less training than creating or proving them. The basic ones might also be portable to annotations in languages like SPARK, Dafny, or Eiffel.

“Talking about that: there is a new OpenSSL version … and it seems a nightmare is coming true. ”

You should look up LibreSSL. The OpenBSD team, fed up with OpenSSL BS, ran through the codebase with machetes hacking out bad stuff followed by some real coding. There’s already been several vulnerability announcements for OpenSSL that didn’t affect LibreSSL.

Dave August 27, 2016 11:19 AM

Looks like they would need to capture an enormous amount of traffic, 785 GB. If that can go on undetected, you have bigger problems.
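A quick check of how that figure relates to the birthday bound (treating GB as 10^9 bytes):

```python
import math

captured_bytes = 785e9  # capture size reported for the attack
block_bytes = 8         # 64-bit blocks
blocks = captured_bytes / block_bytes

# ~2^36.5 blocks, comfortably past the 2^32-block birthday bound
print(f"~2^{math.log2(blocks):.1f} blocks")
```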

ab praeceptis August 27, 2016 11:32 AM

Nick P

The one disadvantage of working with Microsoft is they often patent or lock-up improvements to prevent commercial competition.

Strange as that may sound, that’s something I can understand. After all, they sponsor all that research and work not because they are white knight aliens fighting to create a better world. They are a corp and operate in a world driven way too much by business interests.

And let's be clear about what I take to be a very reasonable set of hypotheses: a) It must be, and is, in their very vital interest to avoid reaching a point where IT gets avoided; a happy IT world is also a world they can thrive in. b) If any major player achieved the status of being reasonably trusted and considered someone providing more or less "bullet-proof" (business lingo) products, the reward would be in the billions and billions.

I’m glad you’re being fair.

I'll inevitably sometimes fail, but a) I really try hard and b) I try hard for a reason. One of the very foundations of science is to see what there actually is, and not what we feel, or hate, or like to see. So, in the end, trying to be fair serves oneself quite a lot, too.

supposed to help the average engineer

If you describe intentions, chances are you're right. If you describe reality and results, no. I posit that the vast majority of developers turn away impressed and frightened (if they look at all). And turning away is easy; there is a plethora of reasons and "reasons" (like "we don't do .NET here").


I don't want to hijack this thread, so I'll largely hold back. But let me hint: while OCaml (for a variety of reasons) produces good results (relative to all the crap out there), I do not consider it a mountain or even a major hilltop of wisdom. I take it to be more of an interim bandaid, albeit a quite well-working and properly engineered one.

Similarly, and more generally, I'm not too happy about all the functional languages that are widely regarded almost as some kind of messianic saviour. A very major part of their magic lies in simply declaring, by decision, a world view that is in a way cheating. Maybe that's good and useful in our current situation (like jailing someone away can be considered a working means of, say, keeping him from drug problems, but it's certainly not the best or even a correct solution).

Computers are, I conjecture, very much about global and supralocal state. A real solution would be one that meets this premise. And by the way, stacks, SSA and the rest are state too, in the end, albeit seemingly self-cleaning and more secure kinds. That "secure", however, is not a property of their quality but rather one arising from gross failures and misconceptions in the "traditional" approach (and from the fact that we have meanwhile at least begun to think before we throw up yet another "super-language").

You should look up LibreSSL

I have known about it since about when the very idea entered the world. Sadly, however, very much software is bloodily (and, pardon me, brainlessly) written for and custom-tailored to OpenSSL (including innards one should have kept away from).

Looking over this post again (yes! I discovered the preview button) I note that you got me. I fell for your lure and digressed quite far from the issue of this blog post. You evil smart man! *g*

r August 27, 2016 5:53 PM

@by the rules,

Is your digression a collusion attack?

Many arguments is > no arguments.

Ratio August 28, 2016 7:38 AM

@ab praeceptis,

You quoted this paragraph from Matthew Green’s description of the attack in part:

The nice thing about CBC is that (leaving aside authentication issues) it can be proven (semantically) secure if we make various assumptions about the security of the underlying block cipher. Yet these security proofs have one important requirement. Namely, the attacker must not receive too much data encrypted with a single key.

And you responded:

Let me tell you something: the number 42 can be proven to be secure, *if we make various assumptions about …*. Heck, making various assumptions, it could even be proven that being bitten by a cobra can be fun.

It seems you have completely failed to understand what Matthew Green is saying.

The key point is that while CBC can be provably secure, it won’t be if there is any way for the attacker to get access to too much data that was encrypted with the same key. (This may have something to do with the topic of the post.)

The rest of the paragraph gives an idea of how to get from "can be" to "is". It turns out that for CBC to be provably secure, the security properties of the underlying block cipher producing the chained cipher blocks are somehow relevant. You apparently think that's just absurd, but how could it be any other way?
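The "too much data" requirement Green mentions is the standard birthday estimate; a small illustration, using the usual q(q-1)/2^(n+1) approximation for the chance of a ciphertext-block collision:

```python
# Approximate probability of at least one ciphertext-block collision
# after encrypting q blocks with an n-bit block cipher in CBC mode
# (standard birthday estimate, capped at 1).
def collision_probability(q: int, n: int) -> float:
    return min(1.0, q * (q - 1) / 2 ** (n + 1))

# 64-bit blocks degrade quickly; 128-bit blocks stay negligible.
print(collision_probability(2**30, 64))   # ~0.03 after 8 GiB
print(collision_probability(2**32, 64))   # ~0.5 after 32 GiB
print(collision_probability(2**32, 128))  # vanishingly small
```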

ab praeceptis August 28, 2016 10:14 AM


Up front: I'm absolutely in no way attacking M. Green or his text. In fact, and I clearly wrote that, M. Green quite often does a very good job of explaining crypto. That's why I had but good words for him. But even the best and friendliest experts can't paint smelly, ugly things nice.

In a way you demonstrate one of the (possibly major) problems: "can be secure" is irrelevant. It can also be secure to swim with sharks or to walk in a minefield.

Relevant is "is secure within a reasonable set of assumptions", e.g. that the IV isn't hard-coded and reused for years.

Also keep in mind that security is to a large degree about probability. Hence, we must not introduce any mechanism that by its nature decreases probability on our side and increases it on the attacker's side, potentially up to the point where that mechanism becomes an attack vector or a significant ingredient of one.

CBC boils down to "reusing random", albeit in disguise. I know, I know, you might say the IV is no secret, hence there is no problem. To which I calmly respond: read again about the attack. A classic case of probability working against the defender and for the attacker. And the result is on the table.

Know what? I assume there are even (x)DES users out there who have "their" fixed IV (government agencies, anyone?). For a good laugh, they might even have paid for it ("We don't just pick any random garbage. We had a mathematician select an IV with good properties for our usage").

To end constructively:

I'm involved in a project where non-secret data, which are, however, later ingredients for crypto (think e.g. of nonces/IVs), are exchanged between two parties. The approach I chose was to have these data not used directly but as input/seed for a completely unrelated and sound PRNG, which then creates the data finally used as IV/nonce/etc.

That's ridiculous, one might say. But there's a simple reason: probability and pattern diffusion. The more random any and everything on the wire looks, the better. Additionally, it provides me with some extra cover if one of the main algorithms is later found to have a weakness.
This article here wouldn't exist, for instance. The weakness itself would still be there, but using it as an attack vector wouldn't work.
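A minimal sketch of that idea, with HMAC-SHA256 standing in for whatever sound PRNG/PRF a real design would use; all names here are illustrative:

```python
import hashlib
import hmac

def derive_iv(shared_bytes: bytes, counter: int, iv_len: int = 16) -> bytes:
    """Never use exchanged public bytes directly as an IV; run them
    through an unrelated keyed function first. HMAC-SHA256 is a
    stand-in for the 'completely unrelated and sound PRNG' above."""
    msg = counter.to_bytes(8, "big")
    return hmac.new(shared_bytes, msg, hashlib.sha256).digest()[:iv_len]

# Hypothetical handshake material; each message gets a distinct IV.
iv0 = derive_iv(b"public-handshake-bytes", 0)
iv1 = derive_iv(b"public-handshake-bytes", 1)
print(iv0 != iv1, len(iv0))  # True 16
```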

I also mention this because it’s an example of what I preach: As engineers we must not simply consume security (use crypto algos) but we must understand them at least reasonably well and in the end it’s up to us to create secure software.

To put it in a funny way: everyone knows "Don't you dare fumble with crypto (or makeshift your own)! Crypto is a delicate thing, better left to the experts."
I always remind people of that, and add one phrase: "But neither dare to use it without knowing it quite well enough."

That is the healthy zone for engineers.

Dirk Praet August 28, 2016 11:09 AM

@ ab praeceptis

Sadly, however, very much software is bloodily (and, pardon me, brainlessly) written for and custom-tailored to OpenSSL (including innards one should have kept away from).

That situation is gradually improving. On FreeBSD, for example, everything is now by default compiled against LibreSSL, and the amount of stuff that explicitly depends on OpenSSL is rapidly diminishing. I wish the folks behind some popular Linux distributions would do a similar effort so we can finally get rid of OpenSSL.

ab praeceptis August 28, 2016 11:48 AM

Dirk Praet

As I said: I'm not a Linux fanboy, quite the contrary. But I took the frame of this to be "widely known major OSs", to which Linux belongs but the BSDs don't.

Also, again: I know of LibreSSL (praised be the OpenBSD people) and of some other promising alternatives, too. Unfortunately, however, gazillions of computers and programs still run OpenSSL and can't simply be changed or replaced.

More importantly, quite some of the underlying problems and quirks are bound to end up in [whatever]-SSL again. It’s not their fault but that’s the way it is. Unfortunate.

If anyone wishes to engage in reflections on SSL/TLS, then we'd need a very different approach. One example: does SSL/TLS really answer the relevant questions? I don't think so. After all, it came into life under less-than-ideal circumstances and with one problem in mind; a problem, one might realize, that is actually a non-problem for many cases where SSL/TLS is used, and sometimes even counter to security.

r August 28, 2016 8:00 PM

@by the rules,

At this point, the SSL/TLS discussion should be practically moot minus a few fringe cases considering the larger CA && TLD[R] problems (or is the third-party problem a lesser? (lessor? third-party obv applies to codebase too.)). We have to be very careful with (TP)PKI, trust is a huge problem. As per both Nick’s and your frames of reverence we need something SMALL, RELIABLE and VERIFIED. Not something that’s explicitly unverifiable.

ab praeceptis August 28, 2016 8:25 PM


In theory I wholeheartedly agree. In vivo, however, there are millions of lethargic, careless, ignorant, half-penny-saving corporations and agencies out there, plus gazillions of users, sites, mechanisms, etc., which without SSL/TLS would disastrously stand still.

Had only we seen the pandora’s box factor earlier …
(Or: Had only we activated our brains before putting SSL into the wild …)

r August 28, 2016 9:15 PM

@by the rules,

After reflecting on what happened at the NISTwits, from a global and human-rights oriented perspective it’s obvious what happened: it was a short sighted and erroneous move on the part of the international community and it shouldn’t happen again. Even with the security against ‘hackers’ as you wouldn’t call them that @65535 is happy with concerning “let’s encrypt” it really is only a half-baked solution that only further re-inforces the top-down problems of legislation, enforcement and interoperability.

Will de-US-Eminent-Domain’ing the Wild Wild Web encourage an open market capable of addressing this situation?

Proof of botchulism is easy, even implementations of DNS and TLS in RUST will still be vulnerable to undermining from any international or individual’s standpoint.

If you co-sign a lease on my car and I smash it into someone out of anger, haste or greed and I die – you lose.

So we all co-sign trust? This isn’t working out very well for those of us who “thought we could trust”.

Should we just chalk (chock(?)) that up as another undeliverable?

“Promises, promises.”

I guess the requirement of interoperability is at the crux of the problem, but there’s just so much congestion on this road…

Can we develop the tools required for our traffic engineers to make accurate and timely assessments of both the situation now and any future situations (or accidents) before hand?

Can those assessments be made reliably? responsibly? duplicatable? trust-ably?

Can we develop the tools required to make our roads safe? even from dangerous drivers?

Do we have the capability and resources to offer these tools to those who need it? when they need it? before they need it? where they need it?

The server is where the onus should be in interacting with clients; just like a store, they should have parking available for handicapped people (and projects).

A client (or customer) shouldn’t have to haggle with a store over the expected or accepted exchange – no shirt, no shoes, no service.

If I don’t like your “no blacks allowed” sign you shouldn’t be operating, that’s an antiquated perspective and illegal in most parts of the world.

Why do we mandate that one have unisex parts to enter into our unisex bathrooms?

ab praeceptis August 28, 2016 9:48 PM


There is little in your post I can relate to; most of it sounds weird to me (which may well be a lack on my side and you may be right. It’s just not how I look at things).

One piece I can relate to is SSL/TLS, which Let's Encrypt comes down to. I never got and never will get a certificate from them, because certificates are among the gravest problems in SSL/TLS. Moreover, I don't believe in democracy as a solution tool for technical issues. Whether greedy-corp sells a certificate for organs (or pennies) or people-power distributes them for free is irrelevant to me. Moreover, there are other aspects which I lack the expertise to judge, but an example that comes to mind: if greedy-corp f*cks up big time, I do have some (practically worthless) rights; after all, they've been paid.

My pain stems from insultingly gross non-logic. It actually is desirable to be able to prove/to know with certainty that party A actually is party A.

The way that is done with certificates, however, is insultingly, mind-bogglingly stupid and ridiculous. But (and this, I come to assume more and more, is intentional) there is something that "solution" does very well: trick, betray and f*ck people.

The nsa or kgb etc. could hardly have wished for more.

And I’m missing the experts standing up and warning people that SSL/TLS and particularly certificates are a lousy solution and have shown themselves to be more of a problem than a solution.

It's not that better systems aren't conceivable or feasible. But for one reason or another the experts seem to be in no hurry (or some guys in black suits have told them to look the other way … what do I know).

I have done and know of projects where what certs promise is actually needed. And we've found well-working solutions. All the pieces are available, but for whatever reason the world seems to be happy with snakeoil certificates, especially since they are free. And hey, you get a nice little lock icon.

r August 28, 2016 10:01 PM

OT, I know I’ll stop after this:

Somebody a couple weeks ago made a “carrot and stick” argument for the IMF/USD/global economy.

This is what happens when we allow unrelated (and potentially maligned or subvert[ed|able]) interests to co-sign a contract of trust.

Starting over isn’t a bad idea, we have a pretty good idea of what works and what doesn’t I think.

You're right though, my last post was metaphor-thick. Hopefully some people can draw parallels from it and correct my mistakes. I agree that TPPKI isn't too bad an idea, but the current implementation is where the problem lies (I believe); maybe with 'vouchers' and pre-sharing we can eventually defeat the effective race conditions of providing a legitimate and discovering a falsity, but who knows.

Those were just my attempt at formulating my non-expert position.

Dirk Praet August 29, 2016 5:48 AM

@ ab praeceptis

And I’m missing the experts standing up and warning people that SSL/TLS and particularly certificates are a lousy solution and have shown themselves to be more of a problem than a solution.

I’m not sure what you mean by that. Everybody in infosec knows the CA system and SSL/TLS are badly broken. There’s no discussion whatsoever about that.

Much-needed initiatives like LibreSSL and Let's Encrypt were never meant to solve the underlying problems, but to mitigate the huge mess and security hazard OpenSSL had become and, in the case of Let's Encrypt, the fact that there is still way too much plaintext communication going on. And yes, CAs can be compromised and SSL/TLS can be MITM'd, but implementing other and better solutions is kinda hard when, like you so eloquently say yourself, the entire internet infrastructure depends on them, especially aging stuff no longer supported by its vendors but for which those in charge get no budget to replace.

Putting valid alternatives in place is generally an uphill battle, as it requires time, money and people to support it, both in the boardroom and in the field. Good luck migrating from Windows to a Linux or FreeBSD desktop environment if the only person you can find to support it is an autistic 40-year-old living in a basement and your entire staff is complaining that customers still want their documents in MSFT formats.

Don't get me wrong: you're obviously a person who knows his stuff and your comments are entirely justified, but one thing I have learned over the years is that the best or most secure solution hardly ever prevails unless it doesn't break existing stuff and comes with competitive pricing, reliable long-term technical support, and strong adoption by peers. The only notable exceptions to that rule are high-security environments driven by compliance and regulation requirements.

It’s an unfortunate fact of life every security engineer eventually has to come to terms with, unless he/she has the good fortune of being able to exclusively work for companies that actually understand what he’s talking about because they have a firm business case to implement it.

Woo August 29, 2016 8:00 AM

I've read several articles on this attack by now, but one question remains unanswered for me; perhaps I can gain some enlightenment from the smart people here 🙂
What kind of data can actually be recovered this way that would not be possible to get at using different, easier methods, considering that this collision attack has rather steep requirements, like being able to inject malicious scripts into the user's browser session? Cookies, session IDs and other interesting stuff can surely be snarfed far more easily in other ways once you've got your nose in the client.

ab praeceptis August 29, 2016 10:54 AM

Dirk Praet

I agree to a large extent. But, and that's a fat "but":

a) "Let's Encrypt": offering a disease for free doesn't address the problem (nor does their "innovative" mechanism).

b) Doing that ("Let's Encrypt") and preaching it in pretty much every IT gazette sends a disastrous message, namely that the experts think SSL/TLS with CAs is perfectly fine and should just be 1) free and 2) somewhat tuned.

I consider other approaches (like e.g. the MS/French project introduced by Nick P) far better. Sure, nobody can make SSL/TLS a useful, elegant and nice animal, but one can pull some of its uglier sharp and poisonous teeth.

Trust me, I have thought about it, a lot, and the result is: one can repair that whole festering boil into a useful and largely well-behaved monster. Unfortunately, that would involve "politics" in one of its worst incarnations, namely the "education" of greedy $$$corps to at least not completely ignore people's needs.
The technical side is feasible. More than 90% of users can be pushed to use better algos out of a rigorously trimmed-down assortment; we have the crypto and we have the tools, as well as the know-how (and the pain to keep certain lessons in mind).
Moreover we do have mechanism to really and credibly check identities as well as mechanisms for distribution.
One element we do not yet have but which can be created is a way to attribute trust values to the players and to allow the end user to create his own mask of whom he trust whom he doesn’t. One helpful item in that regard would be to have classes of authority/geo-region. Something like “french state authority” is class 1 (highest) authority class for region “france” objects but only class 3 (medium) for non-french objects; or “greedy corp” can not even be higher than class 3 as it’s lacking proper authority and mechanisms. On the other end the user could specify that they don’t accept certs from any CA with < class 2 for business transactions but that, say, down to class 4 is OK for unimportant stuff.
Finally we’d also need some feedback loop, i.e. a system where incidents and cases of abuse would be able to lead to a lower classification.
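[Editor’s note: the class-based trust scheme sketched above can be modeled as a small policy check. Everything below is a hypothetical illustration – the CA names, class assignments and thresholds are invented for the example, not part of any real PKI:]

```python
# Hypothetical sketch of the class-based CA trust model described above.
# Class 1 is the highest authority level; larger numbers mean less authority.

# Illustrative CA classification: per-region class, with "*" as the default.
CA_CLASSES = {
    "french state authority": {"france": 1, "*": 3},
    "greedy corp": {"*": 3},
}

# Illustrative user policy: minimum acceptable class per purpose
# (a CA's class number must be <= this threshold to be accepted).
USER_POLICY = {
    "business": 2,   # only class 1-2 CAs for business transactions
    "casual": 4,     # anything down to class 4 for unimportant stuff
}

def ca_class(ca: str, region: str) -> int:
    """Look up a CA's authority class for a region, falling back to its default."""
    classes = CA_CLASSES.get(ca, {})
    return classes.get(region, classes.get("*", 5))  # 5 = unknown/untrusted

def accepts(ca: str, region: str, purpose: str) -> bool:
    """True if the user's policy accepts a cert from this CA for this purpose."""
    return ca_class(ca, region) <= USER_POLICY[purpose]
```

Under these invented assignments, `accepts("french state authority", "france", "business")` holds, while the same CA vouching for a non-French object fails the business threshold – matching the example in the comment.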

In summary: we can repair at least major parts of the boil, and we could enhance it into something actually trustworthy. But to do that, if we were serious about it, we would also need to look critically at our own field and upgrade education.

Paeniteo August 29, 2016 2:03 PM

I know it’s not an option for TLS, and far better options exist in TLS, but just out of curiosity: would other cipher modes like CTR, CFB or OFB be similarly affected?

bob mondo October 11, 2016 8:43 PM

Isn’t everyone being unreasonable? Someone is going to have to roll his chair a couple of feet and remove support from their company’s web page for crappy old cipher suites not even supported by the operating systems their products require to operate.

Next, someone else will be required to put up a page on the support gateway with the current order of supported cipher suites for their operating system. Maybe even write a script to divide them into SECURE and INSECURE/BACKWARDS-COMPATIBLE/PLEZ STELE MES DATAS, or a prompt instructing people to please install a free browser that supports better security.

We’ll end up with web pages that say BATTLEFIELD 11 is not supported by WIN 95. OMG!

Every time you type something into your local gov representative’s, or even your president’s, contact page, someone in RUSSIA won’t be able to capture your data. They can always get it from inside the network, but then they have to email some attachment to one of the staff, and the resulting infection will no doubt be discovered a couple of years later. Then they’ll audit how much info has been exfiltrated from the minister of defence or some other old codger who loves hitting OK every time he is prompted so he can read his favourite angry-old-man blog.

And how will they capture your metadata then? All the petty drug crimes will go unsolved!

I am ultima October 11, 2016 8:51 PM

Once all the government departments’ quantum computers come online, my botnet will routinely be using them to crunch your encryption. Why even try? You are just wasting extra CPU cycles. Submit your freedom to consume to me so I can use your bank’s credit write-offs (it’s only a couple of billion each year) to mail some new gear to the old lady down the road who collects my packages for me. I mow her lawn and talk to her – you’d leave her on her own, wouldn’t you!

Thomas Pornin's friend November 4, 2016 3:40 AM

@Nick P: “You should look up LibreSSL. The OpenBSD team, fed up with OpenSSL BS, ran through the codebase with machetes hacking out bad stuff followed by some real coding. There’s already been several vulnerability announcements for OpenSSL that didn’t affect LibreSSL.”

Thomas Pornin, whom I respect very much, released under the MIT license an alpha implementation of TLS 1.{0,1,2} in 2.5 MB of C without malloc() (see the BearSSL site).

Anyone who respects Thomas Pornin or his contributions to, e.g., Stack Exchange should review, or at least try, BearSSL.

ab praeceptis November 4, 2016 4:16 AM

Thomas Pornin’s friend

I also came across that new work of Pornin’s. But alas, bear-ssl is hardly even alpha; it’s at a very early stage.

I agree with you, though, that Pornin’s TLS implementation is one of the extremely few that I take seriously and that I see as a coming alternative.

I haven’t yet looked at the code, but from what I have read so far I’m under the impression that bear-ssl, too, will suffer from the one weakness that plagues virtually all implementations: no verification. I hope I’m wrong, but I don’t think so, because starting development in C (or C++ or Java or …) is a quite reliable sign of a design/development process without verification or a formal model/spec.

But still, Pornin is Pornin and his approach looks like the best I’ve seen in a long time.

Definitely something to keep an eye on. One of these days I’ll run a quick check on what’s there so far.
