Entries Tagged "cryptography"


Snake-Oil Research in Nature

Snake-oil isn’t only in commercial products. Here’s a piece of research published (behind a paywall) in Nature that’s just full of it.

The article suggests using chaos in an electro-optical system to generate a pseudo-random light sequence, which is then added to the message to protect it from interception. Now, the idea of using chaos to build encryption systems has been tried many times in the cryptographic community, and has always failed. But the authors of the Nature article show no signs of familiarity with prior cryptographic work.

The published system has the obvious problem that it does not include any form of message authentication, so it will be trivial to send spoofed messages or tamper with messages while they are in transit.
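
To make that omission concrete, here is a minimal sketch (my own illustration, nothing from the paper) of the kind of keyed message authentication the link would need before spoofing or tampering could even be detected. The key handling is hypothetical; the point is simply that detecting modification requires a shared secret and a MAC, and the published system has neither.

    # Minimal sketch, assuming a shared secret exists -- which is exactly what
    # the published system lacks. Any confidentiality layer, chaotic carrier or
    # otherwise, still needs something like this to resist tampering.
    import hmac, hashlib, os

    mac_key = os.urandom(32)          # hypothetical shared secret

    def protect(ciphertext: bytes) -> bytes:
        tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        return ciphertext + tag

    def verify(blob: bytes) -> bytes:
        ciphertext, tag = blob[:-32], blob[-32:]
        expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("message was spoofed or modified in transit")
        return ciphertext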

But a closer examination of the paper’s figures suggests a far more fundamental problem. There’s no key. Anyone with a valid receiver can decode the ciphertext. No key equals no security, and what you have left is a totally broken system.

I e-mailed Claudio R. Mirasso, the corresponding author, about the lack of any key, and got this reply: “To extract the message from the chaotic carrier you need to replicate the carrier itself. This can only be done by a laser that matches the emitter characteristics within, let’s say, within 2-5%. Semiconductor lasers with such similarity have to be carefully selected from the same wafer. Even though you have to test them because they can still be too different and do not synchronize. We talk abut a hardware key. Also the operating conditions (current, feedback length and coupling strength) are part of the key.”

Let me translate that. He’s saying that there is a hardware key baked into the system at fabrication. (It comes from manufacturing deviations in the lasers.) There’s no way to change the key in the field. There’s no way to recover security if any of the transmitters/receivers are lost or stolen. And they don’t know how hard it would be for an attacker to build a compatible receiver, or even a tunable receiver that could listen to a variety of encodings.

This paper would never get past peer review in any competent cryptography journal or conference. I’m surprised it was accepted in Nature, a fiercely competitive journal. I don’t know why Nature is taking articles on topics that are outside its usual competence, but it looks to me like Nature got burnt here by a lack of expertise in the area.

To be fair, the paper very carefully skirts the issue of security, and claims hardly anything: “Additionally, chaotic carriers offer a certain degree of intrinsic privacy, which could complement (via robust hardware encryption) both classical (software based) and quantum cryptography systems.” Now that “certain degree of intrinsic privacy” is approximately zero. But other than that, they’re very careful how they word their claims.

For instance, the abstract says: “Chaotic signals have been proposed as broadband information carriers with the potential of providing a high level of robustness and privacy in data transmission.” But there’s no disclosure that this proposal is bogus, from a privacy perspective. And the next-to-last paragraph says “Building on this, it should be possible to develop reliable cost-effective secure communication systems that exploit deeper properties of chaotic dynamics.” No disclosure that “chaotic dynamics” is actually irrelevant to the “secure” part. The last paragraph talks about “smart encryption techniques” (referencing a paper that talks about chaos encryption), “developing active eavesdropper-evasion strategies” (whatever that means), and so on. It’s just enough that if you don’t parse their words carefully and don’t already know the area well, you might come away with the impression that this is a major advance in secure communications. It seems as if it would have helped to have a more careful disclaimer.

Communications security was listed as one of the motivations for studying this communications technique. To list this as a motivation, without explaining that their experimental setup is actually useless for communications security, is questionable at best.

Meanwhile, the press has written articles that convey the wrong impression. Science News has an article that lauds this as a big achievement for communications privacy.

It talks about it as a “new encryption strategy,” “chaos-encrypted communication,” “1 gigabyte of chaos-encrypted information per second.” It’s obvious that the communications security aspect is what Science News is writing about. If the authors knew that their scheme is useless for communications security, they didn’t explain that very well.

There is also a New Scientist article titled “Let chaos keep your secrets safe” that characterizes this as a “new cryptographic technique,” but I can’t get a copy of the full article.

Here are two more articles that discuss its security benefits. In the latter, Mirasso says “the main task we have for the future” is to “define, test, and calibrate the security that our system can offer.”

And their project web page says that “the continuous increase of computer speed threatens the safety” of traditional cryptography (which is bogus) and suggests using physical-layer chaos as a way to solve this. That’s listed as the goal of the project.

There’s a lesson here. This is research undertaken by researchers with no prior track record in cryptography, submitted to a journal with no background in cryptography, and reviewed by reviewers with who knows what kind of experience in cryptography. Cryptography is a subtle subject, and trying to design new cryptosystems without the necessary experience and training in the field is a quick route to insecurity.

And what’s up with Nature? Cryptographers with no training in physics know better than to think they are competent to evaluate physics research. If a physics paper were submitted to a cryptography journal, the authors would likely be gently redirected to a physics journal—we wouldn’t want our cryptography conferences to accept a paper on a subject they aren’t competent to evaluate. Why would Nature expect the situation to be any different when physicists try to do cryptography research?

Posted on December 7, 2005 at 6:36 AM

Twofish Cryptanalysis Rumors

Recently I have been hearing some odd “Twofish has been broken” rumors. I thought I’d quell them once and for all.

Rumors of the death of Twofish have been greatly exaggerated.

The analysis in question is by Shiho Moriai and Yiqun Lisa Yin, who published their results in Japan in 2000. Recently, someone either got a copy of the paper or heard about the results, and rumors started spreading.

Here’s the actual paper. It presents no cryptanalytic attacks, only some hypothesized differential characteristics. Moriai and Yin discovered byte-sized truncated differentials for 12- and 16-round Twofish (the full cipher has 16 rounds), but were unable to use them in any sort of attack. They also discovered a larger, 5-round truncated differential. No one has been able to convert these differentials into an attack, and Twofish is nowhere near broken. On the other hand, they are excellent and interesting results—and it’s a really good paper.

In more detail, here are the paper’s three results:

  1. The authors show a 12-round truncated differential characteristic that predicts that the 2nd byte of the ciphertext difference will be 0 when the plaintext difference is all-zeros except for its last byte. They say the characteristic holds with probability 2^-40.9. Note that for an ideal cipher, we expect the 2nd byte of ciphertext to be 0 with probability 2^-8, just by chance. Of course, 2^-8 is much, much larger than 2^-40.9. Therefore, this is not particularly useful in a distinguishing attack.

    One possible interpretation of their result would be to conjecture that the 2nd byte of ciphertext difference will be 0 with probability 2^-8 + 2^-40.9 for Twofish, but only 2^-8 for an ideal cipher. Their characteristic is just one path. If one is lucky, perhaps all other paths behave randomly and contribute an additional 2^-8 term to the total probability of getting a 0 in the 2nd byte of ciphertext difference. Perhaps. One might conjecture that, anyway.

    It is not at all clear whether this conjecture is true, and the authors are careful not to claim it. If it were true, it might lead to a theoretical distinguishing attack using 2^75 chosen plaintexts or so (very rough estimate; see the back-of-the-envelope sketch after this list). But I’m not at all sure that the conjecture is true.

  2. They show a 16-round truncated differential that predicts that the 2nd byte of the ciphertext difference will be 0 (under the same input difference). Their characteristic holds with probability 2^-57.3 (they say). Again, this is not very useful.

    Analogously to the first result, one might conjecture that the 2nd byte of the ciphertext difference will be 0 with probability 2^-8 + 2^-57.3 for Twofish, but probability 2^-8 for an ideal cipher. If this were true, one might be able to mount a distinguishing attack with 2^100 chosen plaintexts or so (another very rough estimate). But I have no idea whether the conjecture is true.

  3. They also show a 5-round truncated differential characteristic that predicts that the input difference that is non-zero everywhere except in its 9th byte will lead to an output difference of the same form. This characteristic has probability 2^-119.988896, they say (but they also say that they have made some approximations, and the actual probabilities can be a little smaller or a little larger). Compared to an ideal cipher, where one would expect this to happen by chance with probability 2^-120, this isn’t very interesting. It’s hard to imagine how this could be useful in a distinguishing attack.
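
For the curious, here is the back-of-the-envelope arithmetic behind the rough estimates in results 1 and 2. It assumes, and it is only an assumption, that all other paths behave randomly; a standard rule of thumb says that distinguishing a bias b on top of a base probability p takes on the order of p(1-p)/b^2 samples.

    # Rough sketch of the data-complexity estimates above; my own arithmetic,
    # not the paper's, and only valid under the "all other paths behave
    # randomly" conjecture discussed in the text.
    from math import log2

    def samples_log2(p: float, bias_log2: float) -> float:
        b = 2.0 ** bias_log2
        return log2(p * (1 - p) / b ** 2)   # log2 of the chosen plaintexts needed

    p = 2.0 ** -8                           # chance of a zero byte for an ideal cipher
    print(samples_log2(p, -40.9))           # ~73.8, i.e. the "2^75 or so" for 12 rounds
    print(samples_log2(p, -57.3))           # ~106.6, the same ballpark as the 16-round estimate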

The paper theorizes that all of these characteristics might be useful in an attack, but I would be very careful about drawing any conclusions. It can be very tricky to go from single-path characteristics whose probability is much smaller than the chances of it happening by chance in an ideal cipher, to a real attack. The problem is in the part where you say “let’s just assume all other paths behave randomly.” Often the other paths do not behave randomly, and attacks that look promising fall flat on their faces.

We simply don’t know whether these truncated differentials would be useful in a distinguishing attack. But what we do know is that even if everything works out perfectly to the cryptanalyst’s benefit, and if an attack is possible, then such an attack is likely to require a totally unrealistic number of chosen plaintexts. 2^100 plaintexts is something like a billion billion DVDs’ worth of data, or a T1 line running for a million times the age of the universe. (Note that these numbers might be off by a factor of 1,000 or so. But honestly, who cares? The numbers are so huge as to be irrelevant.) And even with all that data, a distinguishing attack is not the same as a key recovery attack.
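
If you want to check those magnitudes yourself, here is a quick sanity check (my arithmetic, with the same factor-of-1,000 slop):

    # Rough sanity check of the magnitudes above. Assumes 16-byte Twofish
    # blocks, ~4.7 GB per DVD, and a 1.544 Mbit/s T1 line.
    blocks = 2 ** 100                           # chosen plaintexts
    bytes_total = blocks * 16                   # 128-bit blocks
    dvds = bytes_total / 4.7e9                  # ~4e21: thousands of billions of billions
    t1_seconds = bytes_total / (1.544e6 / 8)
    age_of_universe = 4.3e17                    # seconds
    print(f"{dvds:.1e} DVDs")
    print(f"{t1_seconds / age_of_universe:.1e} ages of the universe on a T1 line")  # ~2e8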

Again, I am not trying to belittle the results. Moriai and Yin did some great work here, and they deserve all kinds of credit for it. But even from a theoretical perspective, Twofish isn’t even remotely broken. There have been no extensions to these results since they were published five years ago. The best Twofish cryptanalysis is still the work we did during the design process: available on the Twofish home page.

Posted on November 23, 2005 at 12:15 PM

Oracle's Password Hashing

Here’s a paper on Oracle’s password hashing algorithm. The algorithm isn’t very good.

In this paper the authors examine the mechanism used in Oracle databases for protecting users’ passwords. We review the algorithm used for generating password hashes, and show that the current mechanism presents a number of weaknesses, making it straightforward for an attacker with limited resources to recover a user’s plaintext password from the hashed value. We also describe how to implement a password recovery tool using off-the-shelf software. We conclude by discussing some possible attack vectors and recommendations to mitigate this risk.
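
To illustrate the general point, and not Oracle’s specific algorithm (read the paper for that), here is a toy sketch of why a hash that folds case and has no random salt is cheap to attack with a dictionary. The scheme, account, and passwords below are hypothetical stand-ins.

    # Toy illustration only; NOT Oracle's actual algorithm.
    import hashlib

    def toy_hash(username: str, password: str) -> str:
        # stands in for any scheme keyed only by username plus a case-folded password
        return hashlib.sha1((username + password).upper().encode()).hexdigest()

    stolen = {"SCOTT": toy_hash("SCOTT", "tiger")}     # hypothetical hash dump

    wordlist = ["password", "oracle", "welcome1", "tiger"]
    for user, digest in stolen.items():
        for guess in wordlist:
            if toy_hash(user, guess) == digest:
                print(user, "->", guess.upper())       # case folding loses the original case
                break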

Posted on November 3, 2005 at 1:20 PM

NIST Hash Workshop Liveblogging (5)

The afternoon started with three brand new hash functions: FORK-256, DHA-256, and VSH. VSH (Very Smooth Hash) was the interesting one; it’s based on factoring and the discrete logarithm problem, like public-key encryption, and not on bit-twiddling like symmetric encryption. I have no idea if it’s any good, but it’s cool to see something so different.

I think we need different. So many of our hash functions look pretty much the same: MD4, MD5, SHA-0, SHA-1, RIPE-MD, HAVAL, SHA-256, SHA-512. And everything is basically a block cipher in Davies-Meyer mode. I want some completely different designs. I want hash functions based on stream ciphers. I want more functions based on number theory.
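
For readers who have not seen it, the Davies-Meyer pattern referred to above looks roughly like this. The sketch uses AES via PyCryptodome purely as a stand-in block cipher; real hash functions build dedicated compression functions the same way.

    # Sketch of the Davies-Meyer construction: each 16-byte message block keys
    # a block cipher that encrypts the running chaining value, and the result
    # is XORed back in. Assumes PyCryptodome is installed.
    from Crypto.Cipher import AES

    def davies_meyer(message_blocks, iv=bytes(16)):
        h = iv
        for m in message_blocks:                      # each block: exactly 16 bytes
            e = AES.new(m, AES.MODE_ECB).encrypt(h)   # E_m(h): the message block is the key
            h = bytes(a ^ b for a, b in zip(e, h))    # H_i = E_m(H_{i-1}) XOR H_{i-1}
        return h

    print(davies_meyer([b"exactly16bytes!!", b"another16bytes!!"]).hex())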

The final session was an open discussion about what to do next. There was much debate about how soon we need a new hash function, how long we should rely on SHA-1 or SHA-256, etc.

Hashing is hard. At an ultra-high, hand-waving level, it takes a lot more clock cycles per message byte to hash than it does to encrypt. No one has any theory to account for this, but it seems like the lack of any secrets in a hash function makes it a harder problem. This may be an artifact of our lack of knowledge, but I think there’s a grain of fundamental truth buried here.

And hash functions are used everywhere. Hash functions are the workhorse of cryptography; they’re sprinkled all over security protocols. They’re used all the time, in all sorts of weird ways, for all sorts of weird purposes. We cryptographers think of them as good hygiene, kind of like condoms.

So we need a fast answer for immediate applications.

We also need “SHA2,” whatever that will look like. And a design competition is the best way to get a SHA2. (Niels Ferguson pointed out that the AES process was the best cryptographic invention of the past decade.)

Unfortunately, we’re in no position to have an AES-like competition to replace SHA right now. We simply don’t know enough about designing hash functions. What we need is research, random research all over the map. Designs beget analyses beget designs beget analyses…. Right now we need a bunch of mediocre hash function designs. We need a posse of hotshot graduate students breaking them and making names for themselves. We need new tricks and new tools. Hash functions are a hot area of research right now, but anything we can do to stoke that will pay off in the future.

NIST is thinking of hosting another hash workshop right after Crypto next year. That would be a good thing.

I need to get to work on a hash function based on Phelix.

Posted on November 1, 2005 at 3:43 PM

NIST Hash Workshop Liveblogging (4)

This morning we heard a variety of talks about hash function design. All are esoteric and interesting, and too subtle to summarize here. Hopefully the papers will be online soon; keep checking the conference website.

Lots of interesting ideas, but no real discussion about trade-offs. But it’s the trade-offs that are important. It’s easy to design a good hash function, given no performance constraints. But we need to trade off performance with security. When confronted with a clever idea, like Ron Rivest’s dithering trick, we need to decide if this is a good use of time. The question is not whether we should use dithering. The question is whether dithering is the best thing we can do with (I’m making these numbers up) a 20% performance degradation. Is dithering better than adding 20% more rounds? This is the kind of analysis we did when designing Twofish, and it’s the correct analysis here as well.
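
For readers unfamiliar with the dithering idea, here is a toy sketch of its general shape: feed a position-dependent value into every compression-function call, so identical message blocks at different positions are processed differently. The compression function and the simple counter below are my own stand-ins, not Rivest’s actual proposal.

    # Toy sketch of dithering; the compression function is a stand-in.
    import hashlib

    def compress(h: bytes, block: bytes, dither: bytes) -> bytes:
        return hashlib.sha256(h + block + dither).digest()

    def dithered_hash(blocks, iv=bytes(32)):
        h = iv
        for i, block in enumerate(blocks):
            h = compress(h, block, i.to_bytes(8, "big"))   # counter as a toy dither value
        return h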

Bart Preneel pointed out the obvious: if SHA-1 had double the number of rounds, this workshop wouldn’t be happening. If MD5 had double the number of rounds, that hash function would still be secure. Maybe we’ve just been too optimistic about how strong hash functions are.

The other thing we need to be doing is providing answers to developers. It’s not enough to express concern about SHA-256, or wonder how much better the attacks on SHA-1 will become. Developers need to know what hash function to use in their designs. They need an answer today. (SHA-256 is what I tell people.) They’ll need an answer in a year. They’ll need an answer in four years. Maybe the answers will be the same, and maybe they’ll be different. But if we don’t give them answers, they’ll make something up. They won’t wait for us.
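
And for what it’s worth, that answer is a one-liner in most environments; here is the standard-library Python version:

    import hashlib

    print(hashlib.sha256(b"the message to protect").hexdigest())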

And while it’s true that we don’t have any real theory of hash functions, and it’s true that anything we choose will be based partly on faith, we have no choice but to choose.

And finally, I think we need to stimulate research more. Whether it’s a competition or a series of conferences, we need new ideas for design and analysis. Designs beget analyses beget designs beget analyses…. We need a whole bunch of new hash functions to beat up; that’s how we’ll learn to design better ones.

Posted on November 1, 2005 at 11:19 AM

NIST Hash Workshop Liveblogging (3)

I continue to be impressed by the turnout at this workshop. There are lots of people here whom I haven’t seen in a long time. It’s like a cryptographers’ family reunion.

The afternoon was devoted to cryptanalysis papers. Nothing earth-shattering; a lot of stuff that’s real interesting to me and not very exciting to summarize.

The list of papers is here. NIST promises to put the actual papers online, but they make no promises as to when.

Right now there is a panel discussing how secure SHA-256 is. “How likely is SHA-256 to resist attack for the next ten years?” Some think it will be secure for that long, others think it will fall in five years or so. One person pointed out that if SHA-256 lasts ten years, it will be a world record for a hash function. The consensus is that any new hash function needs to last twenty years, though. It really seems unlikely that any hash function will last that long.

But the real issue is whether there will be any practical attacks. No one knows. Certainly there will be new cryptanalytic techniques developed, especially now that hash functions are a newly hot area for research. But will SHA-256 ever have an attack that’s faster than 2^80?

Everyone thinks that SHA-1 with 160 rounds is a safer choice than SHA-256 truncated to 160 bits. The devil you know, I guess.

Niels Ferguson, in a comment from the floor, strongly suggested that NIST publish whatever analysis on SHA-256 it has. Since this is most likely by the NSA and classified, it would be a big deal. But I agree that it’s essential for us to fully evaluate the hash function.

Tom Berson, in another comment, suggested that NIST not migrate to a single hash function, but certify multiple alternatives. This has the interesting side effect of forcing the algorithm agility issue. (We had this same debate regarding AES. Negatives are: 1) you’re likely to end up with a system that is only as strong as the weakest choice, and 2) industry will hate it.)

If there’s a moral out of the first day of this workshop, it’s that algorithm agility is an essential feature in any Internet protocol.

Posted on October 31, 2005 at 4:00 PM

NIST Hash Workshop Liveblogging (2)

In the morning we had a series of interesting papers: “Strengthening Digital Signatures via Randomized Hashing,” by Halevi and Krawczyk; “Herding Hash Functions and the Nostradamus Attack,” by Kelsey and Kohno; and “Collision-Resistant usage of MD5 and SHA-1 via Message Preprocessing,” by Szydlo and Yin. The first and third papers are suggestions for modifying SHA-1 to make it more secure. The second paper discusses some fascinating and cool, but still theoretical, attacks on hash functions.

The last session before lunch was a panel discussion: “SHA-1: Practical Security Implications of Continued Use.” The panel stressed that these are collision attacks and not pre-image attacks, and that many protocols simply don’t care. Collision attacks are important for digital signatures, but less so for other uses of hash functions. On the other hand, this difference is only understood by cryptographers; there are issues if the public believes that SHA-1 is “broken.”
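
If you want to see the difference for yourself, here is a toy experiment (mine, not the panel’s): on a hash truncated to 32 bits, a birthday-style collision search succeeds after roughly 2^16 attempts, while inverting one specific 32-bit digest still takes about 2^32. The truncation is only there so the experiment finishes in seconds.

    # Birthday collision search on a 32-bit truncation of SHA-1.
    import hashlib

    def h32(data: bytes) -> bytes:
        return hashlib.sha1(data).digest()[:4]   # 32-bit truncation, for speed only

    seen = {}
    i = 0
    while True:
        msg = i.to_bytes(8, "big")
        d = h32(msg)
        if d in seen:
            print("collision after", i + 1, "hashes:", seen[d].hex(), "and", msg.hex())
            break
        seen[d] = msg
        i += 1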

Niels Ferguson pointed out that the big problem is MD5, which is still used everywhere. (Hell, DES is still everywhere.) It takes much longer to upgrade algorithms on the Internet than most people believe; Steve Bellovin says it takes about one year to get the change through the IETF, and another five to seven years to get it deployed. And that’s after we all figure out which algorithm to use.

Georg Illies gave a perspective from Germany, where there is a digital-signature law in effect. In addition to the technology, there are legal considerations that make it harder to switch.

The panel seemed to agree that it’s still safe to use SHA-1 today, but that we need to start migrating to something better. It’s way easier to change algorithms when you’re not in the middle of a panic.

There was more talk about algorithm agility. This problem is larger than SHA. Our Internet protocols simply don’t have a secure methodology for migrating from one cryptographic algorithm to another.

Bottom line: Don’t use SHA-1 for anything new, and start moving away from it as soon as possible. To SHA-256, probably.

And now it’s lunchtime.

Posted on October 31, 2005 at 11:50 AM

NIST Hash Workshop Liveblogging (1)

I’m in Gaithersburg, MD, at the Cryptographic Hash Workshop hosted by NIST. I’m impressed by the turnout; a lot of the right people are here.

Xiaoyun Wang, the cryptographer who broke SHA-1, spoke about her latest results. They are the same results Adi Shamir presented in her name at Crypto this year: a time complexity of 2^63.

(I first wrote about Wang’s results here, and discussed their implications here. I wrote about results from Crypto here. Here are her two papers from Crypto: “Efficient Collision Search Attacks on SHA-0” and “Finding Collisions in the Full SHA-1.”)

Steve Bellovin is now talking about the problems associated with upgrading hash functions. He and his coauthor Eric Rescorla looked at S/MIME, TLS, IPSec (and IKE), and DNSSEC. Basically, these protocols can’t change algorithms overnight; it has to happen gradually, over the course of years. So the protocols need some secure way to “switch hit”: to use both the new and old hash functions during the transition period. This requires some sort of signaling, which the protocols don’t do very well. (Bellovin and Rescorla’s paper is here.)
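
Here is a rough sketch of what that signaling might look like (my own illustration, not Bellovin and Rescorla’s design): each digest travels with an algorithm label, a sender can carry both the old and new hashes during the transition, and a receiver uses the strongest algorithm it understands.

    # Sketch of hash-algorithm agility; the registry and message format are hypothetical.
    import hashlib

    ALGORITHMS = {"sha1": hashlib.sha1, "sha256": hashlib.sha256}

    def make_digests(message: bytes, algs=("sha1", "sha256")):
        return {name: ALGORITHMS[name](message).hexdigest() for name in algs}

    def verify(message: bytes, digests: dict, preference=("sha256", "sha1")) -> bool:
        for name in preference:                    # prefer the strongest common algorithm
            if name in digests:
                return ALGORITHMS[name](message).hexdigest() == digests[name]
        raise ValueError("no common hash algorithm")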

Posted on October 31, 2005 at 9:02 AM

RFID Car Keys

RFID car keys (subscription required) are becoming more popular. Since these devices broadcast a unique serial number, it’s only a matter of time before a significant percentage of the population can be tracked with them.

Lexus has made what it calls the “SmartAccess” keyless-entry system standard on its new IS sedans, designed to compete with German cars like the BMW 3 series or the Audi A4, as well as rivals such as the Infiniti G35 or the U.S.-made Cadillac CTS. BMW offers what it calls “keyless go” as an option on the new 3 series, and on its higher-priced 5, 6 and 7 series sedans.

Volkswagen AG’s Audi brand offers keyless-start systems on its A6 and A8 sedans, but not yet on U.S.-bound A4s. Cadillac’s new STS sedan, big brother to the CTS, also offers a pushbutton start.

Starter buttons have a racy flair—European sports cars and race cars used them in the past. The proliferation of starter buttons in luxury sedans has its roots in theft protection. An increasing number of cars now come with theft-deterrent systems that rely on a chip in the key fob that broadcasts a code to a receiver in the car. If the codes don’t match, the car won’t start.

Cryptography can be used to make these devices anonymous, but there’s no business reason for automobile manufacturers to field such a system. Once again, the economic barriers to security are far greater than the technical ones.
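
As a sketch of what “anonymous” could mean here (my own illustration, not any manufacturer’s protocol): have the car send a fresh random challenge and the fob answer with a keyed MAC over it, so an eavesdropper never sees a value that repeats from one exchange to the next.

    # Sketch of an anonymous challenge-response unlock; key handling is hypothetical.
    import hmac, hashlib, os

    fob_key = os.urandom(16)        # shared secret established when the fob is paired

    def car_challenge() -> bytes:
        return os.urandom(16)       # fresh random value for every unlock attempt

    def fob_response(challenge: bytes) -> bytes:
        return hmac.new(fob_key, challenge, hashlib.sha256).digest()

    def car_verify(challenge: bytes, response: bytes) -> bool:
        # the car holds the same key, so it can recompute the expected response
        return hmac.compare_digest(fob_response(challenge), response)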

Posted on October 5, 2005 at 8:13 AM

