I was about to offer a small correction — that the idea is not strictly a myth — but I looked at US patent 4,405,829, filed in 1977, and it does indeed specify that the decryption exponent be computed as the inverse of the encryption exponent mod lcm(p-1, q-1), which is identical to lambda(p*q) [also known as the Carmichael function]. So I’m in your debt — I was not aware that RSA was originally specified that way!

That being said, computing the decryption exponent by the usual, weaker criterion of inverting mod phi(p*q) is sufficient to guarantee the property of completeness (every message decrypts correctly); and further, to my knowledge, the potential non-uniqueness of the decryption exponent d does not give rise to any practical attack.
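A quick sketch of why both criteria work (the primes and exponent below are toy, illustrative values, not secure parameters): lambda(n) divides phi(n), so an inverse of e mod phi(n) is automatically also an inverse of e mod lambda(n), and both decrypt correctly.

```python
# Toy demonstration: any d with e*d ≡ 1 (mod lambda(n)) decrypts
# correctly, and the textbook d computed mod phi(n) works too,
# because lambda(n) divides phi(n).
from math import gcd

p, q, e = 61, 53, 17               # illustrative toy parameters
n = p * q
phi = (p - 1) * (q - 1)            # phi(n) = 3120
lam = phi // gcd(p - 1, q - 1)     # lambda(n) = lcm(p-1, q-1) = 780

d_phi = pow(e, -1, phi)            # textbook private exponent (2753)
d_lam = pow(e, -1, lam)            # patent-style private exponent (413)

m = 42
c = pow(m, e, n)
assert pow(c, d_phi, n) == m       # both exponents recover the message
assert pow(c, d_lam, n) == m
assert d_phi % lam == d_lam        # d_phi ≡ d_lam (mod lambda(n))
```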

A purely academic question would be the historical inquiry into how phi came to be substituted for lambda in the many textbooks, websites and other presentations.

In practice, security applications of RSA choose one exponent to be small (that is, representable in a number of bits far less than the length of the modulus n = p*q). With this constraint, the embarrassing collision e = d is not possible: it would require e^2 ≡ 1 (mod lambda(n)), which cannot hold when 1 < e^2 < lambda(n).
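The impossibility argument can be sketched numerically (the exponent 65537 and the 2^1000 lower bound on lambda(n) are illustrative assumptions for a realistically sized modulus):

```python
# Why a small public exponent e cannot equal d: e*d ≡ 1 (mod lambda(n)),
# so e == d would mean e*e ≡ 1 (mod lambda(n)). For the common choice
# e = 65537 and a large modulus, e*e is only a 33-bit number, far below
# lambda(n), so e*e mod lambda(n) is simply e*e, which is not 1.
e = 65537
assert (e * e).bit_length() == 33      # e^2 is tiny...
lam_lower_bound = 2 ** 1000            # ...while lambda(n) is astronomically larger
assert e * e < lam_lower_bound
assert (e * e) % lam_lower_bound == e * e   # no modular reduction occurs
assert (e * e) % lam_lower_bound != 1       # hence e cannot be its own inverse
```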

Yeah, I noticed that, too. I daresay the idea that the RSA private exponent must be an inverse of the public exponent modulo phi(p*q) is the most enduring myth in all cryptography. Of course the truth is that the private exponent must be an inverse of the public exponent modulo lambda(p*q).

Perhaps better key selection would be MAX: 55, PUB: 7, PRIV: 3.
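The suggested parameters check out (a quick verification sketch; the message value is arbitrary): 7 and 3 are inverses mod lambda(55) = lcm(4, 10) = 20, whereas the textbook inverse of 7 mod phi(55) = 40 would instead be 23.

```python
# Verifying the suggested toy key: n = 55, e = 7, d = 3.
from math import gcd

p, q = 5, 11
n = p * q                                      # 55
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lambda(55) = lcm(4, 10) = 20
e, d = 7, 3
assert (e * d) % lam == 1                      # valid pair under the lcm criterion
assert pow(e, -1, (p - 1) * (q - 1)) == 23     # the phi-based exponent differs

m = 9                                          # any message 0 <= m < n works
c = pow(m, e, n)                               # "encrypt"
assert pow(c, d, n) == m                       # "decrypt" recovers m
```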

This again shows how easy it is to weaken the system by mistake in implementation (unless some three letter agency was selecting the particular coefficients ;-).

Vertigo! Vertigo!

I’ve also never gotten over the fact that they chose 521-bit prime curves instead of 512-bit ones (every other prime curve size is a multiple of 32 or, in one case, 16). The only reason I can think of for that is that weak 512-bit curves are significantly less likely than for other prime curves.

I’d rather expect it the other way round (but that’s based on nothing but intuition; I think round numbers are more likely to lead to vulnerabilities). Remember that weakening crypto isn’t the NSA’s only interest; they want codes in use that

a) they can break if needed

b) nobody else can break

So they may well have fixed an obvious (to them) vulnerability they expected to become public knowledge soon, like they did with DES. That doesn’t mean they didn’t introduce another, less obvious, vulnerability…

That said, I am considering using ECDSA (or the EC-based digital signature system Nick spoke of on Friday) for my blogsig project. This is due to the need to fit both a signature and metadata into an 80-character signature (a single standard-length line). Given I have stated that non-repudiation and absolute certainty are not part of the brief, I think it is a reasonable enough choice. A blogsig is designed only to certify that there is a *high* (not absolute or legally provable) probability that the signed post was composed by the keyholder and has not been modified (except for reformatting, a concession we must make with blogs).
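A back-of-envelope check that an EC signature can fit the 80-character budget (the 192-bit curve and base64 encoding here are illustrative assumptions, not blogsig's actual design): a raw ECDSA signature is the pair (r, s), two field-sized integers.

```python
# Size budget for an 80-character signature line, assuming a 192-bit
# curve and base64 encoding (illustrative choices only).
import base64
import os

field_bytes = 24                        # 192-bit curve => 24-byte field elements
raw_sig = os.urandom(2 * field_bytes)   # stand-in for r || s (48 bytes)
encoded = base64.b64encode(raw_sig)     # base64 maps every 3 bytes to 4 chars
assert len(encoded) == 64               # 48 bytes -> 64 chars, leaving 16
                                        # characters of the line for metadata
```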

I was citing a passage from the Ars Technica article. The reason it’s typically thought to be the same is probably that the best

It also says “RC4 Yes NOT DESIRABLE” and “Forward Secrecy No NOT DESIRABLE”

That said, my bank did recently roll out a new online banking upgrade with better password requirements (not only do they actually treat upper and lower case letters differently, but they actually require upper case, lower case AND numbers in the password).

Is there a list of financial institutions using ECC and/or PFS? My bank and CC provider are not.
