Matthew X. Economou February 11, 2013 2:42 PM

I’m somewhat astounded to read that TLS doesn’t include proper authenticated encryption. Or did I misunderstand the paper?

Alan Kaminsky February 11, 2013 3:47 PM

@Matthew: TLS doesn’t include proper authenticated encryption.

I wouldn’t make so broad a statement. TLS with AES-CBC and HMAC-SHA1 leaks some information about the plaintext. As pointed out in the related blog post by Paul Ducklin, TLS version 1.2 with AES-GCM authenticated encryption is not susceptible to this attack.
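To make that concrete, here is a minimal AES-GCM sketch using the third-party Python "cryptography" package (the library choice is my illustration, not anything from the paper). The point is that the authentication tag covers the whole record, so tampering is rejected before any plaintext is released:

```python
# Minimal AES-GCM sketch using the third-party "cryptography" package
# (pip install cryptography) -- an illustration, not code from the paper.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)                        # 96-bit nonce, unique per message
ct = aead.encrypt(nonce, b"secret record", b"header")  # ciphertext || tag

# decrypt() verifies the tag over ciphertext and associated data first;
# any tampering raises InvalidTag before plaintext is released, so there
# is no padding oracle to probe.
pt = aead.decrypt(nonce, ct, b"header")
```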

It’s probably fair to say that TLS has accrued too many options and versions to remain secure overall. Time to throw it out and build a new protocol that avoids all the problems identified with TLS over the years. Who’ll go first?

Clive Robinson February 11, 2013 5:09 PM

@ Alan Kaminsky,

Time to throw it out and build a new protocol that avoids all the problems identified with TLS over the years. Who’ll go first?

Oops, there goes the baby with the bath water…

Two of the things I warn against regularly are,

1, Protocol Attacks.
2, Fall back Attacks.

The likes of the NSA are aware that putting the fix in on standards and protocols is going to make a happy hunting ground for their activities.

But even more devastating is the “Fall Back” effect caused by the legacy issue of trying to fix broken protocols and standards.

Put simply, there are always going to be systems using the old insecure protocols that cannot be upgraded or patched (and the closer to the physical infrastructure, the more likely that is). Thus all new software will need to be “backwards compatible”, and this means you need a working “fallback mechanism”. Unfortunately many will insist that this mechanism be as transparent as possible, so as not to cause users to worry their pretty little heads. And the software developers will thus take it as read that “out of sight is out of mind” for the user.

The problem is that if the user cannot see the protocol fallback happening, they will not be aware that what they are doing is insecure at best. Or, as is increasingly likely, at worst the fallback is being initiated and controlled by a Man In The Middle (MITM) attack which deliberately forces the negotiation down to the weakest combination, which could be (and has been) plain text…

mik February 11, 2013 5:17 PM

I think a good pruning of the default ciphersuites is due, at the very least.

It’s quite frightening how much attack surface a typical TLS handshake has.

Jonathan Wilson February 11, 2013 5:21 PM

I think the biggest problem we have with TLS/SSL is the number of broken clients and servers that say they support a given version or extension but will fail if you actually try to use it. Which means many clients and servers can’t default to enabling the secure versions.

Paul Suhler February 11, 2013 5:49 PM

This inspired me to look for information on upgrading Chrome to TLS 1.2 (which it turns out isn’t supported). Way down one of the paths I followed, I ran across a developer who reported that an unnamed financial institution he’d worked with was not willing to incur the expense of upgrading their security unless there was a guarantee that it would prevent their being hacked. Of course, no such guarantee is possible. The institution would prefer to spend millions of dollars per quarter on insurance to cover themselves against regulatory fines and to fix customer problems resulting from a breach.

That was one voice. Does anyone else have related cost/benefit stories?

Clive Robinson February 11, 2013 6:49 PM

Oh, an “Obligatory Word of Caution”.

For those that read what Bruce describes as the “good plain language” description: in the remedies section it mentions the use of a “stream cipher” to remove the need for the “plain text padding” required with a block cipher.

If you are going to use a stream cipher, be aware that you have to armour the plaintext to stop bit-flipping attacks.

Bruce explained many years ago in his book on cryptography why you should not use a stream cipher without first armouring the plain text.

However, time has moved on (around one sixth of a century… 😉) and people forget. Worse, they are also lazy with what they write. In the easy-to-read version the author talks about “checksums” in a fairly free way and neglects to mention that there is a whole variety of checksums out there, only a few of which are “secure checksums” that could be used to armour plaintext against stream cipher bit-flipping attacks. Whilst those with specialist knowledge correctly expand in their heads the author’s use of “checksum”, others without that knowledge might well see “checksum” and think of CRC or additive checksums…
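To make the difference concrete, here is a toy Python sketch (my own illustration, stdlib only, not from the paper) of why a linear CRC cannot armour a stream cipher: an attacker who knows part of the plaintext can flip ciphertext bits and fix up the CRC without ever knowing the key.

```python
# Toy demo of why a plain CRC cannot armour a stream cipher: flipping
# ciphertext bits flips the same plaintext bits, and CRC-32 is linear,
# so an attacker who knows the plaintext layout can fix up the
# checksum without the key. Illustration only, not a real cipher.
import os, zlib

def crc_linearity_demo():
    p = b"PAY ALICE $100"
    keystream = os.urandom(len(p) + 4)                  # stand-in stream cipher

    body = p + zlib.crc32(p).to_bytes(4, "big")         # plaintext || CRC-32
    ct = bytes(a ^ b for a, b in zip(body, keystream))  # "encrypt" by XOR

    # Attacker knows the plaintext layout and wants ALICE -> MARIA.
    delta = bytes(a ^ b for a, b in zip(b"PAY ALICE $100", b"PAY MARIA $100"))
    # CRC-32 is affine over XOR: crc(p ^ d) == crc(p) ^ crc(d) ^ crc(zeros)
    crc_delta = zlib.crc32(delta) ^ zlib.crc32(bytes(len(p)))
    flip = delta + crc_delta.to_bytes(4, "big")
    forged = bytes(c ^ f for c, f in zip(ct, flip))     # no key needed

    # The honest receiver decrypts the forged packet and checks the CRC.
    out = bytes(a ^ b for a, b in zip(forged, keystream))
    msg, crc = out[:-4], int.from_bytes(out[-4:], "big")
    return msg, crc == zlib.crc32(msg)
```

The receiver sees the altered message and the CRC still verifies. With an HMAC (a keyed, secure checksum) in place of the CRC, that fix-up step is impossible without the key.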

Further, even if the plaintext is correctly armoured, that won’t necessarily stop time-based attacks such as the one described in the Royal Holloway paper, because they are an “implementation dependent” attack chosen inadvertently by the person writing the software.

One of the things I’ve mentioned on this blog quite a few times is “Efficiency -v- Security”; time-based attacks are a direct consequence of this. That is, the software writer does not take the time to ensure that response times are the same regardless of success or failure, because “fail fast” is seen as more efficient than “fail at the end”.
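The “fail fast” versus “fail at the end” point can be sketched in a few lines of Python (stdlib only; an illustration, not TLS code):

```python
# A naive byte-by-byte MAC comparison returns on the first mismatch,
# so response time leaks how many leading bytes were right. A
# constant-time compare touches every byte regardless of the data.
import hmac, hashlib

def naive_check(tag, expected):
    if len(tag) != len(expected):
        return False
    for a, b in zip(tag, expected):
        if a != b:              # "fail fast": timing depends on the data
            return False
    return True

def constant_time_check(tag, expected):
    # hmac.compare_digest takes time independent of where bytes differ
    return hmac.compare_digest(tag, expected)

key, record = b"k" * 16, b"some TLS record"
expected = hmac.new(key, record, hashlib.sha256).digest()
```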

As I’ve also said on this blog a number of times with regard to TEMPEST/EmSec and ComSec in general: “Clock the inputs and clock the outputs” and “Fail hard on error”, as both help remove time-based attacks as well as reducing the information-leak bandwidth of any incidental side channels.

Until code cutters and those who manage them grasp these thorny issues and address them properly, we will continue to read papers such as these.

And don’t be fooled into thinking that the 8 million or so tries makes this an impractical attack.

Because where there is one fault there is almost always another. And the use of one can be leveraged by the use of another.

That is, on a busy site 8 million failures might well be down in the noise floor of the overall number of failures. If you can find a way to spread these failures across all the traffic, they might well not be noticed. Then by correlating the results you may gain sufficient information about another weakness in the system (such as, say, synchronising to the random number generator) which enables you to significantly reduce the number of tries required in other areas, or on a single target, or on all targets.

That is, we tend to think about “our communications being secure”, not “all communications being secure”. Repressive and other nation-state regimes usually don’t care about specific communications, but about any communications they can get/weaken/read. That is, we think “target and scalpel”; they think “hoover and sort”, and the differences in these mindsets give rise to different attack viewpoints and thus attack vectors.

As has been pointed out the sports fisherman dreams of that special catch, but will be happy with a decent catch thus they bait their hook. The commercial fisherman cares only for the total catch thus spreads a net far and wide.

martinr February 12, 2013 3:15 AM

It’s really a pity that the 2006 change of the GenericBlockCipher PDU in TLSv1.1 only added an explicit CBC IV, but did not follow Serge Vaudenay’s explicit recommendation to pad-mac-encrypt rather than the SSLv3/TLSv1.0 mac-pad-encrypt.

Section 5.1 SSL/TLS (2nd paragraph):

TLS v1.0 also provides an optional MAC which failed to thwart the attack:
when the server figures out that the MAC is wrong, it yields the bad_record_mac
error. However, the message padding is performed after the MAC algorithm, so
the MAC does not preclude our attack since it cannot be checked before the
padding in the decryption.

Section 6.5 CBC-PAD with Integrity Check

One can propose to add a cryptographic checkable redundancy code
(crypto-CRC) of the whole padded message (like a hashed value)
in the plaintext and encrypt


This way, any forged ciphertext will have a negligible probability
to be accepted as a valid ciphertext. Basically, attackers are no
longer able to forge valid cipher-texts, so the scheme is virtually
resistant against chosen ciphertext attacks.

Obviously it is important to pad before hashing: padding after
hashing would lead to a similar attack. The right enciphering
sequence is thus

            pad, hash, encrypt

Conversely, the right deciphering sequence consists of decrypting,
checking the hashed value, then checking the padding value.
Invalid hashed value must abort the decipherment.
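For what it’s worth, that pad-hash-encrypt order can be sketched structurally in Python. The toy XOR “cipher” below stands in for CBC, and the unkeyed hash follows Vaudenay’s crypto-CRC description; this is an illustration of the ordering only, not something to deploy:

```python
# Structural sketch of Vaudenay's "pad, hash, encrypt" order, with a
# toy XOR "cipher" standing in for CBC. Illustration only -- the XOR
# layer is not secure, and a real scheme would use a keyed MAC.
import hashlib

BLOCK = 16
key = b"K" * 16

def toy_encrypt(key, data):         # placeholder for a real block cipher
    stream = key * (len(data) // len(key) + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt           # XOR is its own inverse

def seal(key, msg):
    padlen = BLOCK - (len(msg) % BLOCK)          # always pad at least 1 byte
    padded = msg + bytes([padlen]) * padlen      # 1. pad
    digest = hashlib.sha256(padded).digest()     # 2. hash the padded message
    return toy_encrypt(key, padded + digest)     # 3. encrypt everything

def open_(key, ct):
    data = toy_decrypt(key, ct)                  # 1. decrypt
    padded, digest = data[:-32], data[-32:]
    if hashlib.sha256(padded).digest() != digest:
        raise ValueError("bad record")           # 2. check hash FIRST, abort
    padlen = padded[-1]                          # 3. only then check padding
    if not 1 <= padlen <= BLOCK or padded[-padlen:] != bytes([padlen]) * padlen:
        raise ValueError("bad record")           # same error either way
    return padded[:-padlen]
```

Because the hash covers the padding and is checked first, any forged ciphertext aborts at the same step, so there is no padding-dependent behaviour to observe.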

Peter February 12, 2013 3:36 AM

What about ARIA and Camellia? They’re included in the TLS cipher suite options, but not mentioned in the paper: is this because they use a different MAC/padding construction?

RobertT February 12, 2013 5:53 AM

Where is that guy from last week who was so absolutely certain that smart meter communications were safe and secure?

It’s simple timing attacks like this that make MITM attacks so difficult to defend against (even for a wired comms system). When the comms link is RF, as in a Zigbee mesh meter system, you have to expect that attackers will attempt to interfere with the link (bit-flipping / bit-jamming attacks). These bit attacks can be used to jam the mesh link communications and observe the comms packet jitter. Meaning you do not necessarily need to be sitting in the middle (as in MITM); you can safely sit on the sidelines and still implement your attack.

Garrett February 12, 2013 9:22 AM

I’m not an expert in the field, but from looking at the “simplified” explanation, it seems to me that if the padding was before the MAC and thus covered by it, the MAC would be checked before the padding is even evaluated. The result would be then that the tampering would be detected right away and in deterministic time.
Only after the MAC verification has passed would the padding then be evaluated for length and stripped.
Am I missing something here?

Nick P February 12, 2013 11:55 AM

I think I might be missing something here. In the past, the crypto people taught me to encrypt, then MAC. This allows both plaintext and ciphertext integrity, maybe with other benefits. They also talked about padding issues. So, I used counter modes, fixed packet lengths, and total-fail error handling to counter them. My designs seem to still be safe. One perplexing question:

Why don’t they encrypt and then MAC?

Clive Robinson February 13, 2013 6:55 AM

@ Nick P,

Why don’t they encrypt and then MAC?

The short answer is two words “History & Perspective”, the long answer is explaining why…

Depending on how you look at it, the history of “cryptography” is very old; more simply, people have “been keeping secrets since before they could communicate”. It goes back even prior to “hunter gatherer” tribal society and can be seen in the behaviour of animals. In essence it’s a survival trait not to communicate sources of food etc. to competitors.

This continued through tribal life and promoted the ability to “cheat” or “lie” to protect “you and yours”. And it’s still going strong today; we all like society for what it brings us, but when push comes to shove and things come down to basic survival and a choice between “you and yours” and society, you know the preference order.

Whilst the basic rules of the game have not changed technology has repeatedly moved the goal posts and thus the perspective.

Prior to writing, to communicate a secret required a person to go to the person they were going to share the secret with and tell them in person, or… involve a third party as a go-between, who would also be privy to the secret for as long as they lived. Which brought up various issues of “trust”…

These trust issues were partially solved with writing and the invention of tamper-evident ways of hiding messages from the go-between (baked clay tablets that were then put in clay envelopes that had seals impressed and were then baked again).

However, there were still trust issues in that the messenger was party to the fact of communication and thus could reveal it, a problem we would now call “traffic analysis”.

The idea of courier services and multiple envelopes partially resolved this, in a way we would now associate with Onion Routing. However, it introduced other problems, such as “nodes” through which all communications passed and which were thus vulnerable, so we got the various “Black Chambers” where envelopes could be opened and resealed without being (easily) detected.

And as we know, the ciphers were not up to hiding the information except in isolated cases, as Mary Queen of Scots exemplified, resulting in her losing her head. Even code books were fairly easily broken if sufficient traffic was available for examination, and rather less was needed if the communications context was known.

So the idea of “super encryption” came about, where the message was first coded via a code book that only rarely changed, and this was then encrypted via a cipher system. But all the while the special papers and envelopes with seals etc. were retained, which still gave the impression of authenticating the communication.

Thus the idea of what we now refer to as the CIA triad was in place several hundred years ago, with Confidentiality provided by codes and ciphers, Authentication provided by special papers and seals, and Integrity given by the design and use of the envelopes and seals.

Then we had the application of electricity to communications, giving rise firstly to the telegraph, which had the side effect of destroying the existing authentication and integrity methods and also made the passing of traffic obvious. Then the telephone, which did at least allow some (albeit faux) authentication to return. But as WWI showed, the use of the telephone allowed the start of what was later to be called TEMPEST attacks. And there was also the newfangled radio telegraphy using “spark gap transmitters”, where authentication and any kind of integrity were just not possible in anything approaching a reliable way.

But worse, it was also painfully obvious that the existing codes and ciphers were just not up to the job, and this was made quite public in the post-war writings of Winston Churchill, who attributed a lot of the UK Navy’s successes to code breaking in the Admiralty Office.

Thus it was clear to all major governments that the whole security of military communications rested on these inadequate, difficult and thus untimely codes and ciphers.

It had however come to the notice of one or two people that whilst the various military organisations had basically stayed in the past, commercial organisations had been much more forward thinking and in a limited way had embraced machine cryptography, and thus in some cases had more security than the equivalent military communications. And it would appear they had also solved the authentication issue with such machines.

The result, as we know, was the German Enigma, the British Typex and the later US SIGABA systems. But as we now know, all the pre-war mechanical systems could be easily replaced with paper analogues, and worse, the use of mechanics for ciphering had been taken a step further with automated attacks by motorised machines and early electronic devices that were the forerunners of computers.

However, what it did put in way too many minds was the belief that basic crypto alone could give you the CIA triad. And thus emphasis was given to basic crypto as being the magical solution, even where various people (GCHQ, NSA etc.) knew very much otherwise but for operational reasons wanted to keep it very, very hush hush.

But due to government policy and a whole load of other issues, prior to the DES competition civilian/commercial crypto was, to be blunt, a joke.

DES in effect kick-started civilian and academic interest in crypto again, and arguably it has now in some areas beaten the government agencies at their own game and in other areas is progressing more rapidly than such agencies’ worst nightmares.

Back in the 1980’s, I and a very few others were banging on about how to attack systems via what you might call “reverse TEMPEST”, and slowly but surely the academic world is catching up.

The academic world has likewise realised that there is one heck of a lot more to security than just algorithms. Part of this can be seen with the hash competition, where the “so what” attitude has hit home because the game has moved on significantly in other areas. The pigeons are coming home to roost on protocols and the resulting chinks in the armour which we call side channels, which like tiny holes in a bucket drain secrecy away, not instantly, but still very, very effectively.

We are finding out that, contrary to theoretical ideas, in the practical world it really does matter in what order you do things, and how.

But importantly, we are finally waking up to the fact that our preconceptions and assumptions inherited from a previous age are wrong.

Thus the long-held assumption by many that the final crypto envelope gives the CIA triad is wrong; it’s not “magical pixie dust” you sprinkle on so that all is right with the world. Other considerations apply. Just as with real-world envelopes, you can feel through them and under certain conditions even see through them to get a good idea of what’s inside; all you need are the right tools.

We currently talk glibly of “time-based” side channels but forget there are many others that the likes of electronic engineers are only too aware of.

As I’ve indicated in the past, “Efficiency -v- Security” is a very real issue. If you do things the right way, you can make all theoretically secure algorithms used to construct crypto envelopes transparent…

Matt Blaze and some of his students ably demonstrated this several years ago by implementing a bug device in a keyboard that, simply by delaying keystroke information in a predictable way, made the entire computer and its OS and applications transparent and thus leaked information directly onto the network.

If the hardware designers, or even the OS or application software writers, had thought about it, it would not have been so easily possible, because they would have “clocked the input” in the right way.

Even now many theoretical frameworks and the resulting design verification systems don’t take clocking of the inputs and clocking of the outputs into consideration.

Those doing in-depth TEMPEST/EmSec design are only too well aware of this requirement, but many others are not.

Back in the 1980’s I was demonstrating “active fault injection” and how it could be synchronised to the operation of a CPU from outside the case, with no direct electrical connection. The result was that I could make a “pocket gambling device” pay out way, way more than it should. Others have used the same ideas to (supposedly) “defraud” the likes of casinos in Nevada etc. in much more recent times.

As far as I’m aware, the only academic paper on RF fault injection is by a couple of people at the Cambridge lab in the UK, who pointed an unmodulated carrier at a supposedly secure TRNG and reduced its entropy from 32 bits to less than 8 bits. They did not, however, put envelope or phase modulation on the carrier or make any attempt to synchronise it to the device’s operation. The chances are that if they did, they might well reduce the entropy to zero and actually force the TRNG to output whatever thirty-two-bit number they desired, thus being able to defeat any defender’s statistical analysis of the output…

Then what about Quantum Key Distribution? The designers of many of the practical devices appear to be completely unaware that the order you put certain components in is important to stop active fault injection. If you can somehow read Alice’s polariser (or its random generator) then it’s game over for the security. Some designers failed to realise that you could couple out-of-band energy back into Alice’s transmitter and read the state of the very broadband polariser simply from what it reflected, or did not reflect, of the out-of-band signal…

There are a whole load more interesting fault injection attacks you can make on QKD equipment, some of which have been published, none of which so far are (as far as I’m aware) what I would call synchronised fault injection attacks.

Every time, you can trace the root of these failings back to assumptions that are often actually sufficiently well known to be false, but which, like our DNA, appear hard-coded into us by our forebears.

I’m reminded of the meme of a once frequent poster (RSH) here 😉

sparkygsx February 15, 2013 9:21 AM

Encrypting first and adding the checksum afterwards seems quite useless to me, because the attacker CAN calculate the correct checksum for any forged or modified packet anyway. The “plain language” article didn’t mention any secret included in the checksum (and without a secret, it’s not really a MAC).

Wouldn’t it be easier and better in many ways to add the padding at the beginning of the packet, using 16-31 bytes of random data, before encrypting? That would have prevented the transmission of separate packets with identical ciphertext, making this and many other statistical attacks much more difficult or impossible.

I’d think any reasonable amount of packet switching would make such timing attacks very difficult; if the natural timing variation is much larger than the time difference being measured, the signal is way below the noise floor and becomes very difficult to measure.

Jonadab February 16, 2013 6:55 AM

In practice, an attack that relies on perfectly measurable server-response timing over thousands of iterations is effectively meaningless on the public internet.

You know what’s a lot easier to pull off than this attack? Just go down the (entirely too lengthy) list of trusted certificate authorities, find one that doesn’t have adequate security, infiltrate it, and issue yourself whatever fake key you need. The user’s browser will not complain when the certificate is suddenly unexpectedly different from the one presented last time. As long as it’s signed by anybody on the list, the browser is happy and the user suspects nothing.

Guru June 24, 2014 10:44 AM

@sparkygsx: Encrypting first and adding the checksum afterwards seems quite useless to me, because the attacker CAN calculate the correct checksum for any forged or modified packet anyway

An attacker CAN’T calculate the correct HMAC checksum for a VALID packet.

CipherText is calculated:

CipherText = iv || aes(key1, iv, message) // || symbol is concatenation
tag = hmac(key2, CipherText)

You should use a different key for the HMAC. In practice, deriving the HMAC key by hashing your encryption key with a secure hash is good enough.
Of course, there are authenticated modes which do not need an HMAC. Take a look at GCM.
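A runnable version of that sketch, using AES-CTR from the third-party Python “cryptography” package (my choice of library and mode; CBC plus padding would be protected the same way by the outer HMAC):

```python
# Encrypt-then-MAC with separate keys: MAC the ciphertext (including
# the IV) and verify it before any decryption or padding handling.
import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def seal(enc_key, mac_key, message):
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
    ciphertext = iv + enc.update(message) + enc.finalize()  # iv || aes(key1, iv, m)
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_(enc_key, mac_key, blob):
    ciphertext, tag = blob[:-32], blob[-32:]
    expect = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):    # check MAC before decrypting
        raise ValueError("bad record")
    iv, body = ciphertext[:16], ciphertext[16:]
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
    return dec.update(body) + dec.finalize()

k1, k2 = os.urandom(16), os.urandom(32)
```

Any forged packet fails the MAC check up front, so there is nothing padding- or timing-dependent left for an attacker to probe.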

