Entries Tagged "academic papers"


A Model Regime of Privacy Protection

Last year I blogged about an article by Daniel J. Solove and Chris Hoofnagle titled “A Model Regime of Privacy Protection.”

The paper has been revised a few times based on comments—some of them from readers of this blog and Crypto-Gram—and the final version has been published.

Abstract:
A series of major security breaches at companies with sensitive personal information has sparked significant attention to the problems with privacy protection in the United States. Currently, the privacy protections in the United States are riddled with gaps and weak spots. Although most industrialized nations have comprehensive data protection laws, the United States has maintained a sectoral approach where certain industries are covered and others are not. In particular, emerging companies known as “commercial data brokers” have frequently slipped through the cracks of U.S. privacy law. In this article, the authors propose a Model Privacy Regime to address the problems in the privacy protection in the United States, with a particular focus on commercial data brokers. Since the United States is unlikely to shift radically from its sectoral approach to a comprehensive data protection regime, the Model Regime aims to patch up the holes in existing privacy regulation and improve and extend it. In other words, the goal of the Model Regime is to build upon the existing foundation of U.S. privacy law, not to propose an alternative foundation. The authors believe that the sectoral approach in the United States can be improved by applying the Fair Information Practices—principles that require the entities that collect personal data to extend certain rights to data subjects. The Fair Information Practices are very general principles, and they are often spoken about in a rather abstract manner. In contrast, the Model Regime demonstrates specific ways that they can be incorporated into privacy regulation in the United States.

Definitely worth reading.

Posted on February 6, 2006 at 12:21 PM

The Topology of Covert Conflict

Interesting research paper by Shishir Nagaraja and Ross Anderson. Implications for warfare, terrorism, and peer-to-peer file sharing:

Abstract:

Often an attacker tries to disconnect a network by destroying nodes or edges, while the defender counters using various resilience mechanisms. Examples include a music industry body attempting to close down a peer-to-peer file-sharing network; medics attempting to halt the spread of an infectious disease by selective vaccination; and a police agency trying to decapitate a terrorist organisation. Albert, Jeong and Barabási famously analysed the static case, and showed that vertex-order attacks are effective against scale-free networks. We extend this work to the dynamic case by developing a framework based on evolutionary game theory to explore the interaction of attack and defence strategies. We show, first, that naive defences don’t work against vertex-order attack; second, that defences based on simple redundancy don’t work much better, but that defences based on cliques work well; third, that attacks based on centrality work better against clique defences than vertex-order attacks do; and fourth, that defences based on complex strategies such as delegation plus clique resist centrality attacks better than simple clique defences. Our models thus build a bridge between network analysis and evolutionary game theory, and provide a framework for analysing defence and attack in networks where topology matters. They suggest definitions of efficiency of attack and defence, and may even explain the evolution of insurgent organisations from networks of cells to a more virtual leadership that facilitates operations rather than directing them. Finally, we draw some conclusions and present possible directions for future research.

Posted on February 6, 2006 at 7:03 AM

Brian Snow on Security

Good paper (.pdf) by Brian Snow of the NSA on security and assurance.

Abstract: When will we be secure? Nobody knows for sure—but it cannot happen before commercial security products and services possess not only enough functionality to satisfy customers’ stated needs, but also sufficient assurance of quality, reliability, safety, and appropriateness for use. Such assurances are lacking in most of today’s commercial security products and services. I discuss paths to better assurance in Operating Systems, Applications, and Hardware through better development environments, requirements definition, systems engineering, quality certification, and legal/regulatory constraints. I also give some examples.

Posted on December 13, 2005 at 2:15 PM

Snake-Oil Research in Nature

Snake-oil isn’t only in commercial products. Here’s a piece of research published (behind a paywall) in Nature that’s just full of it.

The article suggests using chaos in an electro-optical system to generate a pseudo-random light sequence, which is then added to the message to protect it from interception. Now, the idea of using chaos to build encryption systems has been tried many times in the cryptographic community, and has always failed. But the authors of the Nature article show no signs of familiarity with prior cryptographic work.

The published system has the obvious problem that it does not include any form of message authentication, so it will be trivial to send spoofed messages or tamper with messages while they are in transit.
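For contrast, message integrity is a solved problem in conventional cryptography: you append a message authentication code computed with a shared secret key. Here is a minimal sketch of the kind of protection the system lacks, using HMAC-SHA256 from Python’s standard library (key handling simplified, for illustration only):

```python
import hmac, hashlib, os

key = os.urandom(32)  # shared secret between sender and receiver

def protect(message: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so any tampering is detectable."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message + tag

def verify(packet: bytes) -> bytes:
    """Check and strip the tag; reject spoofed or altered packets."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was spoofed or tampered with")
    return message

assert verify(protect(b"attack at dawn")) == b"attack at dawn"
```

Without something like this, anyone who can inject signals into the channel can forge or modify traffic at will, chaos or no chaos.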

But a closer examination of the paper’s figures suggests a far more fundamental problem. There’s no key. Anyone with a valid receiver can decode the ciphertext. No key equals no security, and what you have left is a totally broken system.

I e-mailed Claudio R. Mirasso, the corresponding author, about the lack of any key, and got this reply: “To extract the message from the chaotic carrier you need to replicate the carrier itself. This can only be done by a laser that matches the emitter characteristics within, let’s say, 2-5%. Semiconductor lasers with such similarity have to be carefully selected from the same wafer. Even though you have to test them because they can still be too different and do not synchronize. We talk about a hardware key. Also the operating conditions (current, feedback length and coupling strength) are part of the key.”

Let me translate that. He’s saying that there is a hardware key baked into the system at fabrication. (It comes from manufacturing deviations in the lasers.) There’s no way to change the key in the field. There’s no way to recover security if any of the transmitters/receivers are lost or stolen. And they don’t know how hard it would be for an attacker to build a compatible receiver, or even a tunable receiver that could listen to a variety of encodings.

This paper would never get past peer review in any competent cryptography journal or conference. I’m surprised it was accepted in Nature, a fiercely competitive journal. I don’t know why Nature is taking articles on topics that are outside its usual competence, but it looks to me like Nature got burnt here by a lack of expertise in the area.

To be fair, the paper very carefully skirts the issue of security, and claims hardly anything: “Additionally, chaotic carriers offer a certain degree of intrinsic privacy, which could complement (via robust hardware encryption) both classical (software based) and quantum cryptography systems.” Now that “certain degree of intrinsic privacy” is approximately zero. But other than that, they’re very careful how they word their claims.

For instance, the abstract says: “Chaotic signals have been proposed as broadband information carriers with the potential of providing a high level of robustness and privacy in data transmission.” But there’s no disclosure that this proposal is bogus, from a privacy perspective. And the next-to-last paragraph says “Building on this, it should be possible to develop reliable cost-effective secure communication systems that exploit deeper properties of chaotic dynamics.” No disclosure that “chaotic dynamics” is actually irrelevant to the “secure” part. The last paragraph talks about “smart encryption techniques” (referencing a paper that talks about chaos encryption), “developing active eavesdropper-evasion strategies” (whatever that means), and so on. It’s just enough that if you don’t parse their words carefully and don’t already know the area well, you might come away with the impression that this is a major advance in secure communications. It seems as if it would have helped to have a more careful disclaimer.

Communications security was listed as one of the motivations for studying this communications technique. To list this as a motivation, without explaining that their experimental setup is actually useless for communications security, is questionable at best.

Meanwhile, the press has written articles that convey the wrong impression. Science News has an article that lauds this as a big achievement for communications privacy.

It talks about it as a “new encryption strategy,” “chaos-encrypted communication,” “1 gigabyte of chaos-encrypted information per second.” It’s obvious that the communications security aspect is what Science News is writing about. If the authors knew that their scheme is useless for communications security, they didn’t explain that very well.

There is also a New Scientist article titled “Let chaos keep your secrets safe” that characterizes this as a “new cryptographic technique,” but I can’t get a copy of the full article.

Here are two more articles that discuss its security benefits. In the latter, Mirasso says “the main task we have for the future” is to “define, test, and calibrate the security that our system can offer.”

And their project web page says that “the continuous increase of computer speed threatens the safety” of traditional cryptography (which is bogus) and suggests using physical-layer chaos as a way to solve this. That’s listed as the goal of the project.

There’s a lesson here. This is research undertaken by researchers with no prior track record in cryptography, submitted to a journal with no background in cryptography, and reviewed by reviewers with who knows what kind of experience in cryptography. Cryptography is a subtle subject, and trying to design new cryptosystems without the necessary experience and training in the field is a quick route to insecurity.

And what’s up with Nature? Cryptographers with no training in physics know better than to think they are competent to evaluate physics research. If a physics paper were submitted to a cryptography journal, the authors would likely be gently redirected to a physics journal—we wouldn’t want our cryptography conferences to accept a paper on a subject they aren’t competent to evaluate. Why would Nature expect the situation to be any different when physicists try to do cryptography research?

Posted on December 7, 2005 at 6:36 AM

Twofish Cryptanalysis Rumors

Recently I have been hearing some odd “Twofish has been broken” rumors. I thought I’d quell them once and for all.

Rumors of the death of Twofish have been greatly exaggerated.

The analysis in question is by Shiho Moriai and Yiqun Lisa Yin, who published their results in Japan in 2000. Recently, someone either got a copy of the paper or heard about the results, and rumors started spreading.

Here’s the actual paper. It presents no cryptanalytic attacks, only some hypothesized differential characteristics. Moriai and Yin discovered byte-sized truncated differentials for 12- and 16-round Twofish (the full cipher has 16 rounds), but were unable to use them in any sort of attack. They also discovered a larger, 5-round truncated differential. No one has been able to convert these differentials into an attack, and Twofish is nowhere near broken. On the other hand, they are excellent and interesting results—and it’s a really good paper.

In more detail, here are the paper’s three results:

  1. The authors show a 12-round truncated differential characteristic that predicts that the 2nd byte of the ciphertext difference will be 0 when the plaintext difference is all-zeros except for its last byte. They say the characteristic holds with probability 2^-40.9. Note that for an ideal cipher, we expect the 2nd byte of the ciphertext difference to be 0 with probability 2^-8, just by chance. Of course, 2^-8 is much, much larger than 2^-40.9. Therefore, this is not particularly useful in a distinguishing attack.

    One possible interpretation of their result would be to conjecture that the 2nd byte of ciphertext difference will be 0 with probability 2^-8 + 2^-40.9 for Twofish, but only 2^-8 for an ideal cipher. Their characteristic is just one path. If one is lucky, perhaps all other paths behave randomly and contribute an additional 2^-8 factor to the total probability of getting a 0 in the 2nd byte of ciphertext difference. Perhaps. One might conjecture that, anyway.

    It is not at all clear whether this conjecture is true, and the authors are careful not to claim it. If it were true, it might lead to a theoretical distinguishing attack using 2^75 chosen plaintexts or so (very rough estimate). But I’m not at all sure that the conjecture is true.

  2. They show a 16-round truncated differential that predicts that the 2nd byte of the ciphertext difference will be 0 (under the same input difference). Their characteristic holds with probability 2^-57.3 (they say). Again, this is not very useful.

    Analogously to the first result, one might conjecture that the 2nd byte of the ciphertext difference will be 0 with probability 2^-8 + 2^-57.3 for Twofish, but probability 2^-8 for an ideal cipher. If this were true, one might be able to mount a distinguishing attack with 2^100 chosen plaintexts or so (another very rough estimate). But I have no idea whether the conjecture is true.

  3. They also show a 5-round truncated differential characteristic that predicts that the input difference that is non-zero everywhere except in its 9th byte will lead to an output difference of the same form. This characteristic has probability 2^-119.988896, they say (but they also say that they have made some approximations, and the actual probabilities can be a little smaller or a little larger). Compared to an ideal cipher, where one would expect this to happen by chance with probability 2^-120, this isn’t very interesting. It’s hard to imagine how this could be useful in a distinguishing attack.

The paper theorizes that all of these characteristics might be useful in an attack, but I would be very careful about drawing any conclusions. It can be very tricky to go from single-path characteristics whose probability is much smaller than the chances of it happening by chance in an ideal cipher, to a real attack. The problem is in the part where you say “let’s just assume all other paths behave randomly.” Often the other paths do not behave randomly, and attacks that look promising fall flat on their faces.

We simply don’t know whether these truncated differentials would be useful in a distinguishing attack. But what we do know is that even if everything works out perfectly to the cryptanalyst’s benefit, and if an attack is possible, then such an attack is likely to require a totally unrealistic number of chosen plaintexts. 2^100 plaintexts is something like a billion billion DVDs’ worth of data, or a T1 line running for a million times the age of the universe. (Note that these numbers might be off by a factor of 1,000 or so. But honestly, who cares? The numbers are so huge as to be irrelevant.) And even with all that data, a distinguishing attack is not the same as a key recovery attack.
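Where do rough estimates like 2^75 and 2^100 come from? Here is a back-of-the-envelope check (my own sanity check, not a calculation from the Moriai-Yin paper), using the standard rule of thumb that distinguishing an event of probability p + ε from one of probability p takes on the order of p(1-p)/ε^2 samples:

```python
import math

def distinguisher_texts(p: float, bias: float) -> float:
    """Rough number of chosen plaintexts needed to distinguish an
    event of probability p + bias from one of probability p, using
    the standard variance estimate N ~ p * (1 - p) / bias**2."""
    return p * (1 - p) / bias**2

# 12-round characteristic: 2nd difference byte is 0 with p = 2^-8 + 2^-40.9
print(math.log2(distinguisher_texts(2**-8, 2**-40.9)))  # ~73.8, i.e. ~2^75

# 16-round characteristic: p = 2^-8 + 2^-57.3
print(math.log2(distinguisher_texts(2**-8, 2**-57.3)))  # ~106.6
```

The 12-round figure lands right around 2^75; the 16-round figure comes out near 2^107, in the same ballpark as the very rough 2^100 above.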

Again, I am not trying to belittle the results. Moriai and Yin did some great work here, and they deserve all kinds of credit for it. But even from a theoretical perspective, Twofish isn’t even remotely broken. There have been no extensions to these results since they were published five years ago. The best Twofish cryptanalysis is still the work we did during the design process: available on the Twofish home page.

Posted on November 23, 2005 at 12:15 PM

NIST Hash Workshop Liveblogging (5)

The afternoon started with three brand new hash functions: FORK-256, DHA-256, and VSH. VSH (Very Smooth Hash) was the interesting one; it’s based on factoring and the discrete logarithm problem, like public-key encryption, and not on bit-twiddling like symmetric encryption. I have no idea if it’s any good, but it’s cool to see something so different.

I think we need different. So many of our hash functions look pretty much the same: MD4, MD5, SHA-0, SHA-1, RIPE-MD, HAVAL, SHA-256, SHA-512. And everything is basically a block cipher in Davies-Meyer mode. I want some completely different designs. I want hash functions based on stream ciphers. I want more functions based on number theory.
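(For readers who haven’t seen it: Davies-Meyer turns a block cipher E into a compression function by keying the cipher with the message block, H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}. Below is a minimal sketch, with AES-128 as an illustrative stand-in for the cipher and the pycryptodome library assumed; a real hash also needs message padding and length encoding.)

```python
from Crypto.Cipher import AES  # pip install pycryptodome

def davies_meyer(blocks, h=bytes(16)):
    """Toy Davies-Meyer chain: H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}.
    Each 16-byte message block keys one AES encryption of the
    running state. Illustration only, not a real hash function."""
    for m in blocks:
        e = AES.new(m, AES.MODE_ECB).encrypt(h)
        h = bytes(a ^ b for a, b in zip(e, h))
    return h

print(davies_meyer([b"sixteen byte msg", b"another 16 bytes"]).hex())
```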

The final session was an open discussion about what to do next. There was much debate about how soon we need a new hash function, how long we should rely on SHA-1 or SHA-256, etc.

Hashing is hard. At the ultra-high-level hand-waving level, it takes a lot more clock cycles per message byte to hash than it does to encrypt. No one has any theory to account for this, but it seems like the lack of any secrets in a hash function makes it a harder problem. This may be an artifact of our lack of knowledge, but I think there’s a grain of fundamental truth buried here.
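A crude way to check that claim on your own machine is to time the two primitives side by side. Results vary wildly with hardware and implementation quality, and this sketch assumes the pycryptodome library for the cipher side:

```python
import time, os, hashlib
from Crypto.Cipher import AES  # pip install pycryptodome

data = os.urandom(64 * 2**20)  # 64 MiB of random input

start = time.perf_counter()
hashlib.sha256(data).digest()
hash_rate = len(data) / (time.perf_counter() - start) / 2**20

cipher = AES.new(os.urandom(16), AES.MODE_CTR, nonce=os.urandom(8))
start = time.perf_counter()
cipher.encrypt(data)
enc_rate = len(data) / (time.perf_counter() - start) / 2**20

print(f"SHA-256 hashing:    {hash_rate:.0f} MiB/s")
print(f"AES-CTR encryption: {enc_rate:.0f} MiB/s")
```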

And hash functions are used everywhere. Hash functions are the workhorse of cryptography; they’re sprinkled all over security protocols. They’re used all the time, in all sorts of weird ways, for all sorts of weird purposes. We cryptographers think of them as good hygiene, kind of like condoms.

So we need a fast answer for immediate applications.

We also need “SHA2,” whatever that will look like. And a design competition is the best way to get a SHA2. (Niels Ferguson pointed out that the AES process was the best cryptographic invention of the past decade.)

Unfortunately, we’re in no position to have an AES-like competition to replace SHA right now. We simply don’t know enough about designing hash functions. What we need is research, random research all over the map. Designs beget analyses beget designs beget analyses…. Right now we need a bunch of mediocre hash function designs. We need a posse of hotshot graduate students breaking them and making names for themselves. We need new tricks and new tools. Hash functions are a hot area of research right now, but anything we can do to stoke that will pay off in the future.

NIST is thinking of hosting another hash workshop right after Crypto next year. That would be a good thing.

I need to get to work on a hash function based on Phelix.

Posted on November 1, 2005 at 3:43 PM

NIST Hash Workshop Liveblogging (4)

This morning we heard a variety of talks about hash function design. All are esoteric and interesting, and too subtle to summarize here. Hopefully the papers will be online soon; keep checking the conference website.

Lots of interesting ideas, but no real discussion about trade-offs. But it’s the trade-offs that are important. It’s easy to design a good hash function, given no performance constraints. But we need to trade off performance with security. When confronted with a clever idea, like Ron Rivest’s dithering trick, we need to decide if this is a good use of time. The question is not whether we should use dithering. The question is whether dithering is the best thing we can do with (I’m making these numbers up) a 20% performance degradation. Is dithering better than adding 20% more rounds? This is the kind of analysis we did when designing Twofish, and it’s the correct analysis here as well.

Bart Preneel pointed out the obvious: if SHA-1 had double the number of rounds, this workshop wouldn’t be happening. If MD5 had double the number of rounds, that hash function would still be secure. Maybe we’ve just been too optimistic about how strong hash functions are.

The other thing we need to be doing is providing answers to developers. It’s not enough to express concern about SHA-256, or wonder how much better the attacks on SHA-1 will become. Developers need to know what hash function to use in their designs. They need an answer today. (SHA-256 is what I tell people.) They’ll need an answer in a year. They’ll need an answer in four years. Maybe the answers will be the same, and maybe they’ll be different. But if we don’t give them answers, they’ll make something up. They won’t wait for us.

And while it’s true that we don’t have any real theory of hash functions, and it’s true that anything we choose will be based partly on faith, we have no choice but to choose.

And finally, I think we need to stimulate research more. Whether it’s a competition or a series of conferences, we need new ideas for design and analysis. Designs beget analyses beget designs beget analyses…. We need a whole bunch of new hash functions to beat up; that’s how we’ll learn to design better ones.

Posted on November 1, 2005 at 11:19 AM

NIST Hash Workshop Liveblogging (3)

I continue to be impressed by the turnout at this workshop. There are lots of people here whom I haven’t seen in a long time. It’s like a cryptographers’ family reunion.

The afternoon was devoted to cryptanalysis papers. Nothing earth-shattering; a lot of stuff that’s real interesting to me and not very exciting to summarize.

The list of papers is here. NIST promises to put the actual papers online, but they make no promises as to when.

Right now there is a panel discussing how secure SHA-256 is. “How likely is SHA-256 to resist attack for the next ten years?” Some think it will be secure for that long, others think it will fall in five years or so. One person pointed out that if SHA-256 lasts ten years, it will be a world record for a hash function. The consensus is that any new hash function needs to last twenty years, though. It really seems unlikely that any hash function will last that long.

But the real issue is whether there will be any practical attacks. No one knows. Certainly there will be new cryptanalytic techniques developed, especially now that hash functions are a newly hot area for research. But will SHA-256 ever have an attack that’s faster than 2^80?

Everyone thinks that SHA-1 with 160 rounds is a safer choice than SHA-256 truncated to 160 bits. The devil you know, I guess.
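(For reference, “SHA-256 truncated to 160 bits” just means keeping the first 160 bits, i.e. 20 bytes, of the SHA-256 output. A minimal sketch:)

```python
import hashlib

def sha256_160(data: bytes) -> bytes:
    """SHA-256 truncated to its first 160 bits (20 bytes)."""
    return hashlib.sha256(data).digest()[:20]

print(sha256_160(b"hello world").hex())
```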

Niels Ferguson, in a comment from the floor, strongly suggested that NIST publish whatever analysis on SHA-256 it has. Since this is most likely by the NSA and classified, it would be a big deal. But I agree that it’s essential for us to fully evaluate the hash function.

Tom Berson, in another comment, suggested that NIST not migrate to a single hash function, but certify multiple alternatives. This has the interesting side effect of forcing the algorithm agility issue. (We had this same debate regarding AES. Negatives are: 1) you’re likely to end up with a system that is only as strong as the weakest choice, and 2) industry will hate it.)

If there’s a moral out of the first day of this workshop, it’s that algorithm agility is an essential feature in any Internet protocol.
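In practice, algorithm agility just means that a digest never travels without an identifier of the algorithm that produced it, so a protocol can migrate to a new hash without breaking old data. A minimal sketch (the tag format and names here are my own, for illustration):

```python
import hashlib

HASHES = {"sha1": hashlib.sha1, "sha256": hashlib.sha256}

def agile_digest(alg: str, data: bytes) -> str:
    """Tag every digest with the algorithm that produced it."""
    return f"{alg}:{HASHES[alg](data).hexdigest()}"

def verify(tagged: str, data: bytes) -> bool:
    """Recompute using whichever algorithm the tag names."""
    alg, hexdigest = tagged.split(":", 1)
    return HASHES[alg](data).hexdigest() == hexdigest

tag = agile_digest("sha256", b"some protocol message")
assert verify(tag, b"some protocol message")
```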

Posted on October 31, 2005 at 4:00 PM
