Comments

bcs September 15, 2014 10:11 AM

What would it take to update the certificate format to allow multiple different signatures using multiple different hashes (and multiple CAs)?

Larry Seltzer September 15, 2014 10:29 AM

SHA-2 is the same algorithm as SHA-1, but with larger hashes, and IIRC SHA-256 is all that’s really being used right now. So I assume that any attacks against SHA-1 would work, in theory, against SHA-2, but would require far more compute power. Is there a sense out there of how far we are from successful attacks against SHA-256?

Nick P September 15, 2014 11:02 AM

@ bcs

My approach in the past was to use multiple hash functions on the same data, then XOR them together and chop off enough to fit the result into the available space. So, if the space was 128 bits for an MD5, mine might use SHA-256 and RIPEMD-160 to produce a 128-bit value. There’s a theoretical security risk here, but I haven’t seen a practical attack on such a scheme yet. I also obfuscated with session-specific secret salts, iteration counts, and algorithm combinations. These, along with the key, can be generated cryptographically from a shared secret.
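[A minimal sketch of that combine-and-truncate idea, using Python’s hashlib; the 128-bit output and the SHA-256/RIPEMD-160 pairing follow the example above, everything else is illustrative.]

```python
# Sketch of the XOR-and-truncate combination described above (illustrative only).
# RIPEMD-160 availability in hashlib depends on the OpenSSL build; newer OpenSSL
# versions may require the legacy provider for it.
import hashlib

def combined_digest(data: bytes, out_len: int = 16) -> bytes:
    """XOR a SHA-256 and a RIPEMD-160 digest, then truncate to out_len bytes."""
    d1 = hashlib.sha256(data).digest()            # 32 bytes
    d2 = hashlib.new("ripemd160", data).digest()  # 20 bytes
    n = min(len(d1), len(d2), out_len)
    return bytes(a ^ b for a, b in zip(d1[:n], d2[:n]))

print(combined_digest(b"example message").hex())  # 128-bit (16-byte) value
```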

bcs September 15, 2014 11:12 AM

@Nick P.

The suggestion is that, by allowing multiple signatures, the client can search them for ones that use things that it knows and trusts. A valid certificate can continue to include broken and breached hashes and CAs (to avoid breaking clients that are only able to use them, and such clients are SOL-until-updated no matter what is done) while also including secure hashes and CAs that are unknown to those old clients.

The ability to check ALL the signatures is just a neat side effect.

Peter Pearson September 15, 2014 11:25 AM

If, as in the example in Eric Mill’s article, the certificate you’re verifying traces back to a COMODO root certificate, the possibility that SHA-1 can be circumvented for a few million dollars is less of a threat than your reliance on a certificate authority that was caught renting its keys to a sloppy intermediary who would sign anything. Keywords: 2008, Eddy Nigg, Comodo, Certstar.

We need better browser tools for a world with incompetent certificate authorities.

Anura September 15, 2014 11:37 AM

It’s worth noting that this website’s certificate uses SHA-1, although I’m not sending anything sensitive to it, so it probably isn’t a big deal.

Anura September 15, 2014 11:52 AM

@Larry Seltzer

There are significant differences between SHA-1 and SHA-256; SHA-256 does have a larger hash and a larger state, but more importantly SHA-256 uses much more complex operations (although SHA-256 has fewer rounds, 64 instead of 80, it does much more in each round).

Personally, I would rather stick with SHA-512, even in its truncated versions like SHA-512/256; the larger internal state improves security even if you truncate the hash (which also prevents length-extension attacks), and it also has more rounds (80). More importantly, it’s faster than SHA-256 on a 64-bit processor (although possibly not for messages smaller than 448 bits).
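[A small hashlib illustration of the truncated-wide-hash point; note this is plain truncation of SHA-512, not the standardized SHA-512/256 (which uses different initial values), so it only shows why hiding part of a 512-bit state blocks length extension.]

```python
# Illustration only: truncating a wide hash hides the full internal state, which
# is what blocks length extension. The real SHA-512/256 uses distinct initial
# values; some OpenSSL builds expose it as hashlib.new("sha512_256").
import hashlib

msg = b"example message"
wide = hashlib.sha512(msg).digest()   # 64 bytes of output / internal state
truncated = wide[:32]                 # keep 256 bits
print(truncated.hex())
```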

Mike Amling September 15, 2014 11:58 AM

@Peter Pearson, I agree.

While finding a relevant SHA-1 collision could be done, it would be a lot cheaper to

Get a pliable CA onto the browsers’ lists.
Fool a CA into issuing a certificate.
Obtain the private key for a genuine certificate. The relevant keys have to be used operationally. They aren’t in a safety deposit box. Some sites keep private keys in plaintext files. Many have them in RAM.
Obtain the private key of an intermediate CA.

Switching to SHA-2 is less like locking the barn door than like locking the barn’s window, or maybe a skylight.

Bob S. September 15, 2014 12:08 PM

The article refers to COMODO certificates, which makes me wonder whether COMODO, or any of the other issuing authorities, can be trusted. Seems to me a self-signed certificate would be better…these days.

Anura September 15, 2014 12:30 PM

@Bob S.

I used to work for a company that resold SSL certificates, and I can assure you that it would have been trivial for me to get a certificate for any minor website. The model itself is flawed because there is a financial incentive to be as lenient as possible in your checks; the more people you deny, the less revenue you take in.

On top of the financial incentive, the fact that they are issuing a certificate for a domain, but have absolutely no authority over the domain means the entire model is flawed from the beginning. Let’s say the certificate authority was very good at verifying authenticity, to the point where they never issued a certificate to someone who didn’t own the domain… Well, that only verifies that they owned the domain when the certificate was issued.

Let’s say I’m a domain squatter living in a country that is not going to prosecute me for internet crimes. I have a shell company under a stolen identity, and I buy domains so that I can sell them. I buy SSL certificates for my squatting page and then when I sell the domain, I also bundle the certificates I already own with the private keys and sell them to hackers. When they catch on and revoke the certificates I purchased, I start up a new business using a new identity.

I get completely valid certificates for domains my shell company owns, but when I sell the domains attackers can perform MITM attacks by using an old, yet apparently completely valid certificate.

Name (required) September 15, 2014 12:58 PM

[off-topic, but human duty calls]

The_Moment_Of_Truth with Edward Snowden (for the 50% of Americans who DON’T think he’s just a traitor to America), Julian Assange, Kim Dotcom, Bob Amsterdam (Cdn. Int. lawyer, defending Kim Dotcom), and host Laila Harre.

Kick back and have loads of fun watching them attempt to affect the outcome of New Zealots, sorry New Zealand’s, current ELECTION (they might even succeed).

In particular, listen to Assange’s statement (intro starts 59:10) – it is the meat of the entire ‘show’. He explains the ultimate (probably penultimate, really) goal of the USA Intelligence Services.

Snowden’s intro starts at 59:10
Assange’s intro starts at 1:17:44
Amsterdam’s intro starts at 1:37:20
Dotcom’s brief comments (interesting to crypto types) are at 1:35:54

Links to other versions on YT, in case the one above gets broked (yeah, be a real shame if anything were to happen to it:)…
https://www.youtube.com/watch?v=Pbps1EwAW-0#t=1300
https://www.youtube.com/watch?v=A6ZbGi-J6Rk
https://www.youtube.com/watch?v=szGkFazYp5I

Another Peter September 15, 2014 2:19 PM

@Peter Pearson

Good comment.

I work with dozens of computing professionals outside the security realm who have no idea how the certificate process works. We can’t expect the average consumer to understand either.

Clive Robinson September 15, 2014 3:17 PM

@ Peter Pearson,

You forgot to mention that performing any type of validity test costs money.

The last time I looked at it, the banks were claiming that validating a simple bank account for an individual, to the level required by proposed anti-money-laundering legislation, would cost approximately 100 USD.

Thus by the time you add the other bits and bobs for admin, advertising, and profit for what would not be a volume business, you would be looking at around 150 USD per certificate…

So it’s fairly safe to say that the level of checking done is not going to be much more than “payment received OK”…

So I would say “self signed” certs currently are probably –just– more trustworthy than paid-for certs, because fraudsters don’t want the dialog box popping up and potentially warning their scam victims.

We need another solution to the CA issue, one that is not so easily “got around” by both crooks and governments.

Anura September 15, 2014 3:33 PM

@Clive Robinson

As I mentioned before, anyone that doesn’t have authority over the domain should not be issuing a certificate for that domain. Therefore, the only place that makes sense is to store a self-signed certificate in DNS and have it secured with DNSSEC; it’s far from ideal, but it’s far and away better than what we have today.
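[What’s described above is essentially the DANE approach (RFC 6698): publish a DNSSEC-signed TLSA record that binds a certificate to the name. A minimal sketch of producing the digest field for such a record, assuming the server certificate is available as DER bytes; the file name and domain below are placeholders.]

```python
# Sketch: compute the digest field for a DANE TLSA record of the form
# "3 0 1 <sha256 of the full certificate>" (usage DANE-EE, selector full cert,
# matching type SHA-256). File name and domain are placeholders.
import hashlib

with open("server.der", "rb") as f:
    cert_der = f.read()

digest = hashlib.sha256(cert_der).hexdigest()
print(f"_443._tcp.example.com. IN TLSA 3 0 1 {digest}")
```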

Alternatively, if we had DNSSEC, and a new internet protocol was released that had a certificate associated with every IP address, then that solves the problem as well (DNSSEC certifies that the IP address is associated with the domain, and the IP certificate authenticates that you are connecting to the right address).

Now, that solves one question, “Am I connecting to the right server for this domain?”, but it doesn’t certify “Is this website trustworthy?”; for that, the best you could really do is to have EV-SSL on top of that, with the certificate still stored in DNS and secured with DNSSEC.

Justin Case September 15, 2014 5:34 PM

So now the whispering campaign starts…

We need a new hash because our existing hash is almost broken!!!

Quick everybody, switch to a NEW hash, and while you are at it, why not consider our nifty new official NIST [NSA] approved hashing function SHA3.

You don’t need to worry, because it was chosen in EXACTLY the same kind of ‘open’ and ‘public’ process, in which the NSA examined all the candidates, then picked their preselected ‘ringer’ SHA3, which, wonder of wonders, uses some clever matrix transforms (and lots of arm waving) just like AES (and undoubtedly has a back-door – just like AES).

The NSA didn’t use to be so concerned about hashing, but has since made it a high-priority target, since certificate forgery is critical to their massive automated man-in-the-middle attack campaign on SSL.

The solution to this is simple (if anyone cares) and that solution is to codify the simple notion of ‘no single point of failure’ in our internet security protocols by adopting a belt-and-suspenders approach to security.

For encryption, that means super-encryption options in the protocols; for hashing, that means multi-hashing; for RNGs, that means XORing or hashing together multiple RNGs.

This is NOT rocket science, folks; many web sites that offer software downloads have already adopted the multi-hash paradigm and offer MD5, SHA1, and SHA256 hashes for their packages.

Though somewhat compromised cryptographically, MD5 still adds strength to a multi-hash bundle, because differences in internal structure make it virtually impossible for an adversary to develop a backtracking attack that simultaneously defeats multiple hashing algorithms. So, MD5 by itself = bad idea … but MD5 plus SHA1, plus SHA256 = VERY good idea.
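[A minimal sketch of that verify-every-published-digest practice; the file name and expected digests below are placeholders, not real project data.]

```python
# Sketch: accept a download only if every published digest matches.
# File name and expected hex digests are placeholders.
import hashlib

EXPECTED = {
    "md5":    "placeholder-md5-hex",
    "sha1":   "placeholder-sha1-hex",
    "sha256": "placeholder-sha256-hex",
}

def verify_all(path: str, expected: dict) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    return all(hashlib.new(alg, data).hexdigest() == digest
               for alg, digest in expected.items())

print(verify_all("package.tar.gz", EXPECTED))
```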

Unfortunately, when it comes to adding things like super-encryption and multi-hashing to our Internet protocols, the ‘security professional’ community has been less than helpful, insisting from a lofty ‘information-theoretic’ perspective that a cipher is either secure or NOT, and if it’s secure, then well fine, and if not, well it shouldn’t be used in the first place!

This sounds reasonable, but ignores the principle of ‘no single point of failure’ that is well established in other engineering disciplines.

So in mechanical and electrical engineering, where the stakes are high and where redundancy CAN be engineered in, it WILL be engineered in, so aircraft have backup hydraulic systems, backup APUs, backup nav instruments, backup radios, etc.

But NOT in the area of cryptography, where the ‘experts’ mumble about information-theoretic principles proving that super-encrypting a secure cipher on top of a secure cipher adds no value (because the cipher was already secure), and that super-encrypting an insecure cipher on top of a secure cipher is obviously a fool’s errand.

So obviously, anyone supporting super-encryption protocols is a snake-oil salesman who doesn’t understand basic cryptographic principles.

These arguments against super-encryption protocols are just bonehead stupid.

First, quite obviously, super-encryption of a strong cipher on top of another, different strong cipher DOES add value, by preventing the cryptosystem from having a single point of failure in the case of a hidden cryptographic flaw in one of the constituent ciphers.

Second, if the ciphers are selected and combined correctly, the combination will be stronger than either component. For example, the current implementation of RC4 in SSL has a small but exploitable bias, and AES has a relatively small internal state of only sixteen bytes per transform, along with possible mathematical weaknesses [or an intentional backdoor] – but if you CBC-encrypt with AES under a hidden IV, beneath a super-encryption layer of RC4, the two ciphers enhance each other’s properties. The HUGE internal state of RC4 (equivalent to more than 1700 bits) makes a mathematics-based attack on AES infeasible, and AES in turn randomly ‘whitens’ the input to RC4, making it impossible to detect the tiny biases needed to attack RC4.

Third, with widespread concern about ubiquitous universal data collection in violation of the law being practiced by the NSA, and with the very real prospect that recently approved cryptographic functions like Dual_EC_DRBG, AES, and SHA3 were cryptographically compromised by the addition of a backdoor, super-encryption with an unrelated RNG, HASH, or CIPHER in a properly constructed protocol will almost certainly throw a monkey wrench into the NSA’s carefully constructed backdoor. This is because it is difficult to create a hide-in-plain-sight exploitable weakness in the first place – and creating one that will survive super-encryption with another strong cryptographic function is probably impossible.

Given the currently available mix of ciphers in the SSL protocol, the super-encryption AES-128-CBC (hidden IV) -> RC4 (128-bit hashed key) should be quite secure. So the end-to-end cryptographic pipeline would look like this:

plaintext -> AES-128-CBC -> RC4 —ciphertext—> RC4->AES-> plaintext

The above order of encryption and modes should be optimal from a security perspective, but even the relatively weak AES CCM counter mode, should be secure with RC4 layered on top.

This should give about the same speed and efficiency in software as AES256 (discounting hardware acceleration) but much greater overall security.
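[A sketch of the layering the comment proposes, using PyCryptodome (an assumption; any AES and RC4 implementation would do), with independent random keys per layer. A real protocol would also authenticate the ciphertext, and keys must not be reused across messages.]

```python
# Sketch of the AES-128-CBC -> RC4 layering described above (PyCryptodome assumed).
# Independent keys per layer; the CBC IV is carried inside the RC4 layer ("hidden IV").
from Crypto.Cipher import AES, ARC4
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

aes_key, rc4_key, iv = get_random_bytes(16), get_random_bytes(16), get_random_bytes(16)

def encrypt(plaintext: bytes) -> bytes:
    inner = AES.new(aes_key, AES.MODE_CBC, iv).encrypt(pad(plaintext, 16))
    return ARC4.new(rc4_key).encrypt(iv + inner)   # keys are single-use in this sketch

def decrypt(blob: bytes) -> bytes:
    data = ARC4.new(rc4_key).decrypt(blob)
    recovered_iv, inner = data[:16], data[16:]
    return unpad(AES.new(aes_key, AES.MODE_CBC, recovered_iv).decrypt(inner), 16)

assert decrypt(encrypt(b"plaintext goes here")) == b"plaintext goes here"
```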

Tom Zych September 15, 2014 6:11 PM

@bcs

If certificates allowed multiple signatures, that would not just be a neat side effect. It would allow the browser maintainers to implement a quick fix by checking all the signatures. MD5 is broken wide open, and SHA-1 is too broken for comfort, but finding a simultaneous collision on both is almost certainly still infeasible, and will remain so for some time.

Alas, I suspect they do not allow multiple signatures; or if they do, I suspect most CAs don’t issue them. So we still have to wait for the CAs to clean up their act (both the hash algorithm they use, and their generally sloppy validation procedures).

Tom Zych September 15, 2014 6:13 PM

The cost estimates are based on Marc Stevens’ 2012 attack, which takes 2^60 hashes. Does anyone want to bet that NSA, the most advanced and best-funded cryptological organization in the world, which designed SHA-1 and has strong motivation to break it, has not come up with even stronger unpublished attacks?

Anura September 15, 2014 6:23 PM

I’m not that suspicious of SHA-2, SHA-3, or AES; however, I do think it’s a good idea to have a new competition. It should involve international governments, especially those that respect their citizens’ privacy, like Iceland, but also those that are irked at the US, like Brazil, as well as private organizations dedicated to protecting the public, like the EFF and ACLU. It should select exactly one of each of the following (potentially with parameters for key/block/output sizes):

Block Cipher (and I would go at least 256-bit block size)
Stream Cipher
Cryptographic Hash Function
Authenticated Block Cipher Encryption Mode
Standalone Message Authentication Code
Key-Stretching Algorithm
Key-Derivation-Function
Asymmetric Key-Exchange Algorithm
Ephemeral Asymmetric Key-Exchange Algorithm (if necessary)
Asymmetric Signing Algorithm
Pseudo-Random Number Generator with Entropy Gathering

Alongside that, there should be a second set of competitions to design cryptographic protocols and standards to replace things like SSL/TLS, PGP, S/MIME, PKCS, ASN.1, etc. as deemed necessary; they should be broken into small, easily verifiable, and reusable modules (e.g. a generic encryption container should be usable by the PGP replacement, the key-protection replacement, and the SSL/TLS replacement).

Tom Zych September 15, 2014 6:56 PM

PGP fingerprints use SHA-1 exclusively. This seems impossible to exploit in the usual case, where a PGP user generates their own key; it would seem to require a preimage attack, and no practical attack has been published.

Can anyone think of a way to use SHA-1 collisions to attack PGP?

Anura September 15, 2014 7:24 PM

@Tom Zych

The only way I can think of is this:

You notice communications between $Journalist and $Leaker.
You intercept $Journalist’s connection as they download GPG
You swap GPG with a modified version that uses a hard-coded key that has a collision
You intercept the message from $Journalist to $Leaker and swap the $Journalists key for your key
$Leaker then sends $Journalist the documents encrypted with your PGP key (the thumbprint of which has been verified over another channel), which $Journalist cannot decrypt but you can
You arrest $Leaker before they can figure out what happened

Eric Mill September 15, 2014 7:44 PM

@Anura

You make a good point about people selling domains after buying certs for them. Keep an eye on Google and Mozilla’s proposal for short-lived (2-3 days) certificates. There’s one discussion here: https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/T11up58JkFc

I like the idea a lot, because it’s essentially like automatic revocation without having to plan it, and means that the domain’s ownership is valid within the last couple of days. There are performance benefits (dropping revocation info) too, making it doubly attractive to browsers.

Tom Zych September 15, 2014 8:23 PM

@Anura

But if you can feed $Journalist a modified GPG, of course there are far easier ways to exploit that. For example, you can restrict the possible keys generated to a set small enough to search. Then you’ll have $Journalist’s secret key, and you can eavesdrop or make a unidirectional MITM attack, without going to the trouble of finding a collision.

Nick P September 15, 2014 8:29 PM

@ Anura

In my days as an operator I’d have done something like that but used covert channels instead. You can have the system write the private key to a file or device. It might try to open it first to see if security blocks it. Additionally, you can leak it to a networked proxy process via covert timing channels. The proxy might be a compromised application or even your own. Deprivileged or not, most system security schemes don’t block (or detect) timing channels. Machines I configured to deal with them in part by flushing (or disabling) cache showed very quickly why: they slowed down so much it made interpreted Java look fast.

Anura September 15, 2014 9:00 PM

@Tom Zych

The method I posted has the advantage of keeping the journalist from receiving the leaked data.

However, I should note that even though collisions are possible, it doesn’t mean you can find two public keys with the same fingerprint without a full brute force, because methods that increase the probability of collisions tend to rely on directly constructing inputs with certain properties, whereas public keys have to be derived from a separate secret. Of course, the fact that it’s a public key doesn’t mean there isn’t a modified method that exploits that, either.

Dodgson September 15, 2014 9:36 PM

Anura, it’s not necessary to replace IPv4/IPv6 to associate a certificate with each IP address. That can be done by attaching DNSSEC-signed records to the reverse lookup data under in-addr.arpa and ip6.arpa. (RFC 4025 documents this for IPSEC.)

Thoth September 15, 2014 10:54 PM

The whole problem could be mitigated if the protocol or probably the browser makers make it easy to deploy alternate non-NIST/non-TLA’ed algorithms that are not broken. What is holding back security is the clumsy deployment and trouble required to make a change. If it is easy to change the algorithms or to provide alternate algorithm lists like a set of different types of hashes (RIPEMD, SHA3, Whirlpool …) and crypto (Twofish, Serpent …) then it makes switching much easier.

Anura September 15, 2014 11:21 PM

@Thoth

Blame Microsoft… Every single major browser has had SHA-256 support in its latest version since about 2006, but on XP you didn’t get SHA-256 support in IE until SP3 in 2008. Everyone on XP SP1 or SP2 could only use SHA-1. Until very recently, there were still a lot of people using pre-SP3 versions of Windows XP with no support for anything but MD5 and SHA-1. The standards don’t matter; it’s the users with unpatched Windows machines that held everything back.

Clive Robinson September 16, 2014 3:12 AM

@ Thoth,

With regards,

The whole problem could be mitigated if the protocol or probably the browser makers make it easy to deploy alternate … algorithms that are not broken.

I have said for some considerable time –here and on other blogs etc– that NIST should come up with a framework that allows for the easy replacement not just of crypto primitives, but of protocols as well.

I’ve further mentioned that building such a framework is rather more important than running primitive competitions; that is, you sometimes need to build the cart you are going to put the horse in front of…

I’ve specifically mentioned that we need this in place before we go much further with Implantable medical devices, Smart Meters and more recently the Internet of Things.

Historically, it’s been seen that neither protocols nor primitives have lasted as long as Implantable Medical Devices and Smart Meters are planned to be in service (25+ years). Thus it’s reasonable to assume that many of our current protocols and primitives will reach a point where they should be replaced long before such embedded devices have reached even close to the end of their service life. Personally, if I had been fitted with “jump leads to my heart” I would not want it to be controllable remotely without having secure protocols for its entire service life… Apparently I’m not alone in this thinking, in that a well known “Bush Buddy” had his cardiac assist device fitted with the remote control disabled at his insistence, after getting “professional advice” from the same people who are responsible for the US President’s life.

The thing is, if you think about it for a few moments, most people will realise that although there might be a small increase in upfront costs on embedded devices, the savings on downstream costs will be immense. So it is actually in the industry’s long-term interest to have such a framework…

Anura September 16, 2014 5:47 AM

@Clive Robinson

I’ve discussed something similar, although my primary goal is simplicity and verifiability. The idea is that you define a set of interfaces, and then modules conform to those interfaces. Now your code can be broken out into those modules and verified separately. For the programmers out there, think interfaces and dependency injection – it’s what we do to make unit testing significantly easier by abstracting away the details. The TLS protocol spec is 100 pages and it’s complex; instead, this should be your TLS protocol replacement:

1) Client Sends Init Request
2) Server Picks a Signed Document Module and uses it to send their key and a list of supported modules
3) Client picks the preferred modules and sends it using a One-Pass Encryption Module
4) Server calls the Handshake Module to exchange keys with the client
5) Client and server transmit data using the Authenticated Encryption module

And that’s it, nothing else, that is your entire TLS-replacement specification. The signed document module, one-pass encryption module (by one-pass I mean you can generate a key and send without a back-and-forth, basically like PGP), handshake module, and Authenticated Encryption Module are all reusable for other purposes. The idea is that each spec should be no more than around 5-10 pages and less than, or not significantly more than about 1000 lines of code, and completely independently verifiable. The fact that you can switch out algorithms easily is just a bonus.
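[A minimal sketch of the interface-and-module idea in Python; every name here is invented for illustration, not an actual specification.]

```python
# Sketch of "well-defined interfaces, independently verifiable modules".
# All names are illustrative; concrete modules (ciphers, handshakes) would be
# plugged in behind these interfaces and verified on their own.
from abc import ABC, abstractmethod

class Handshake(ABC):
    @abstractmethod
    def exchange(self, peer_public_key: bytes) -> bytes:
        """Return a shared session key."""

class AuthenticatedEncryption(ABC):
    @abstractmethod
    def seal(self, key: bytes, nonce: bytes, plaintext: bytes) -> bytes: ...
    @abstractmethod
    def open(self, key: bytes, nonce: bytes, ciphertext: bytes) -> bytes: ...

class SecureChannel:
    """The 'protocol' is just composition; swapping algorithms means swapping modules."""
    def __init__(self, handshake: Handshake, aead: AuthenticatedEncryption):
        self.handshake = handshake
        self.aead = aead

    def send(self, session_key: bytes, nonce: bytes, data: bytes) -> bytes:
        return self.aead.seal(session_key, nonce, data)
```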

Anura September 16, 2014 5:50 AM

I should clarify that by independently verifiable, I don’t mean verifiable by a third party (that depends on other factors), I mean that each module can be verified independent of any modules that call it or that it calls, as it has well-defined interfaces and it only needs to verify that it handles the interfaces correctly.

Tom Zych September 16, 2014 8:25 AM

So, Bruce, I guess you’ll be upgrading your own site Real Soon Now? (Of course, it doesn’t mean very much in practice until your upstream CA does, too. But consider the precedent! “Why should we upgrade when even noted security guru Bruce Schneier hasn’t?”)

Dave September 16, 2014 10:00 AM

@Mike Amling: That’s what annoys me about the Google scaremongering. You don’t need to break SHA-1 (which, despite the Google blog comment that it’s “dangerously weak”, is still a lot stronger than they imply). If I want to sign my malware or set up an SSL-secured phishing site, I go to a commercial CA and buy a fake cert using a stolen credit card, as malware authors and phishers have been doing for years. Total cost to me, approximately zero. So it doesn’t matter whether the CAs use SHA-256, SHA-1, MD5, or even the totally broken (really, not the way Google uses the term) MD4, it’s so easy to get genuine fake certs from commercial CAs that it’s not worth attacking the hash. Saying things like “SHA-1 has got to go, and no one else is taking it as seriously as it deserves” is just scaremongering when criminals are buying their malware/phishing certs regardless of whether they use SHA-1 or SHA-256 (or MD5, or a CRC32). It’s not that SHA-1 doesn’t need upgrading (it does, in time), it’s that it’s such a minor issue compared to other problems like the more or less complete lack of accountability of CAs that it’s just distracting people from the real issues.

nobody@localhost September 16, 2014 12:34 PM

Forcing transition to SHA-2 is good. But around the edges, are some very disturbing discussions.

Short-lived certs with 2-3 day expiration mean any SSL website can be removed from the internet within 2-3 days by the CA cartel. Consider if a politically inconvenient website has a physical server in freedom-loving Elbonia, and is widely known by a hostname in the Elbonian ccTLD. But no CA “trusted” by all major browsers does business only in Elbonia. Revocation lists are not always checked, but expiration time is.

Key word is identity, key issue is control of identity as a strategic matter. Very convenient confusion results from too much focus on tactical matters with obvious solutions such as hash algo, online key rotation, etc. (while major players ignore or drag feet on best solutions).

Smart people here, please fill in the blanks. (I maybe put longer post… but I do not like long posts.) Smart people also mentioned how it doesn’t matter to “certify” with a CRC32, when TLAs and criminals (redundant term) can so easily get a “valid” cert trusted by some trust root in major browsers’ CA bundle. Now take it to the next level!

nobody@localhost September 16, 2014 12:47 PM

@Dave, hah. I honestly skimmed past your post before putting mine. I thought I had best illustrative hyperbole with CRC32. Cheers!

This message has a hidden signature using multiple very strong hashes (but my ownership of the key is “certified” by the NSA, via its human asset or compromised computer in a basement-operation CA reseller somewhere-in-the-world).

Sorry all for double post. I blame the NSA.

Chris Abbott September 16, 2014 8:30 PM

@Anura

I like the idea of another competition to stay ahead of the curve on things. I have a question though:

Do we really need stream ciphers?

They seem inherently less secure than block ciphers, and it seems to me you could use a block cipher for anything you’d use a stream cipher for…

Nick P September 16, 2014 8:53 PM

@ Chris

That they don’t need padding, don’t need IVs, and don’t work like block ciphers are all good things. They’re mentally simpler to use, work over data in tiny pieces, and give us diversity in crypto. They’re typically very fast, as well. eSTREAM gave us plenty of them to toy with, including hardware-optimized ones. Salsa20 in NaCl is probably the best implementation to go with. I often used them in multi-cipher designs to wrap structured data, making it very random looking. Then that was fed into one or more block ciphers. I’ve also used them as CRNGs, for encrypting OTPs, and in a rare case to stretch out what’s left of an OTP in a strong (although not information-theoretically secure) way.
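[For reference, a minimal usage sketch assuming the PyNaCl binding of NaCl; SecretBox is XSalsa20 encryption with a Poly1305 authenticator.]

```python
# Minimal PyNaCl sketch: Salsa20-family encryption via NaCl's SecretBox
# (XSalsa20 + Poly1305). A random nonce is generated and prepended automatically.
import nacl.secret
import nacl.utils

key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
box = nacl.secret.SecretBox(key)
ciphertext = box.encrypt(b"structured data to wrap")
print(box.decrypt(ciphertext))
```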

Anura September 16, 2014 9:08 PM

@Nick P

Careful there, stream ciphers absolutely need IVs. However, I agree with most of what you wrote. The fact is that these days we are moving to block ciphers in CTR mode; there have been recent exploits that took advantage of poor handling of padding, and it’s prudent to find a method that doesn’t have those issues. Stream ciphers have a distinct advantage over block ciphers, as they can have an arbitrary state and are allowed to repeat outputs; block ciphers, on the other hand, have a period that is dictated by the block size and, after enough outputs, fail the random-oracle model because they cannot repeat.

There is nothing inherently less secure about stream ciphers, they just aren’t as simple to cryptanalyze, which is a bit of a double edged sword. That said, I think sponge functions like Keccak are interesting in this regard; it can function as a PRNG with entropy gathering, a hash, a stream cipher, and a KDF, allowing potentially one algorithm to be used for a lot of different things.
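[A small sketch of the block-cipher-in-CTR-mode point, again assuming PyCryptodome: CTR drives the block cipher as a keystream generator, so no padding is needed and the ciphertext is the same length as the plaintext.]

```python
# Sketch: AES in CTR mode behaves like a stream cipher (PyCryptodome assumed).
# The nonce must never repeat under the same key.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
nonce = get_random_bytes(8)

ct = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(b"arbitrary-length message")
pt = AES.new(key, AES.MODE_CTR, nonce=nonce).decrypt(ct)
assert pt == b"arbitrary-length message" and len(ct) == len(pt)
```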

Nick P September 16, 2014 9:19 PM

@ Anura

Oops, my memory problems kicking in on the IV thing. Thanks for the catch. Yeah, there is extra risk. It can be mitigated by combining a fast stream cipher with a fast block cipher, applying the block cipher second as I did above. The Keccak function is very interesting in that it was made so multipurpose. Solid work.

Anura September 16, 2014 9:29 PM

Sorry, I’m home sick and have spent my day drinking hot toddies (as in I’m still sick, but too tipsy to care), yet I caught it. No excuses allowed.

Chris Abbott September 17, 2014 2:30 AM

@Nick

I know stream ciphers are faster and have heard all sorts of awesomeness about Salsa20. This may make me sound like a complete imbecile, but I toyed with something once that required no padding because it would simply stop generating output as soon as the amount of output matched the amount of input (like a stream cipher). For instance, where you would have an unknown amount of data going back and forth in real time, making a stream cipher seem appropriate, you could just have a buffer before/after where everything just stops rather than requiring padding. It seems to me you could use CBC instead of CTR by just “stopping everything” in order to not need padding, hence avoiding problems.

@Anura

I can recall the BEAST attack forcing people to go back to the medieval RC4 from AES-CBC for webservers. Didn’t it have something to do with padding? I can’t remember. I also seem to recall something about Bruce’s Skein being able to work as a hash and stream cipher.

Clive Robinson September 17, 2014 3:52 AM

@ Anura,

Remember “one mans meat is another mans poison”…

The problem with the term IV is that it once had a very specific and limited meaning; now it’s broadened out considerably. Whilst it’s the same problem with other crypto terms like whitening, some, such as nonce, have been thought by implementers to have a broader meaning than they really do, and thus insecurities in implementations have arisen.

Back many moons ago the start point of a stream cipher –at least on the UK side of the puddle– was simply known as the message starting position and was assumed to be mainly irrelevant, as it was separate from the KeyMat, which was in effect the feedback taps on the SR and the nonlinear logic or mapping function. The reason for this was the military habit of using self-syncing stream ciphers on permanently transmitting links designed to stop traffic-analysis issues.

Since those days, stream cipher design has come on in leaps and bounds, and “in theory” some stream ciphers –such as RC4– based on mixing functions rather than shift registers or counters don’t have an IV.

Clive Robinson September 17, 2014 4:14 AM

@ Chris Abbott,

In theory, hash functions, block ciphers, and stream ciphers can all be used as each other. However, in practice it’s not always simple.

The simple case is a hash function: it can be put in a Feistel round to turn it into a block cipher, or be driven by a counter or feedback loop to turn it into a stream cipher.
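[A toy sketch of the first construction, a hash used as a Feistel round function; the parameters are arbitrary and the result is unanalyzed, it only shows the mechanics.]

```python
# Toy Feistel block cipher built from SHA-256 as the round function.
# 32-byte blocks split into 16-byte halves; parameters are illustrative only.
import hashlib

def round_fn(key: bytes, round_no: int, half: bytes) -> bytes:
    return hashlib.sha256(key + bytes([round_no]) + half).digest()[:16]

def feistel_encrypt(key: bytes, block: bytes, rounds: int = 4) -> bytes:
    left, right = block[:16], block[16:]
    for r in range(rounds):
        left, right = right, bytes(a ^ b for a, b in zip(left, round_fn(key, r, right)))
    return left + right

def feistel_decrypt(key: bytes, block: bytes, rounds: int = 4) -> bytes:
    left, right = block[:16], block[16:]
    for r in reversed(range(rounds)):
        left, right = bytes(a ^ b for a, b in zip(right, round_fn(key, r, left))), left
    return left + right

blk = bytes(range(32))
assert feistel_decrypt(b"demo key", feistel_encrypt(b"demo key", blk)) == blk
```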

Whilst converting block ciphers to stream ciphers is usually relatively simple, turning them into one-way functions suitable for hashes is not as simple… Likewise, turning stream ciphers into either block ciphers or hashes is usually not simple.

mike~acker September 17, 2014 7:10 AM

The key is user/customer participation: let us use our copy of PGP to sign those x.509 certificates we have validated and decided to trust. This will drastically reduce the attack surface.

Customer participation: customers and providers need to learn what’s involved here. I should be able to stop at my local credit union and get the public key corresponding to the occasional certificate for which trust is needed.

Sending this stuff out automatically over the net won’t work: that process would get hacked. Customers will need to stop in and get keys in person.
