Entries Tagged "Internet"


Dept of Homeland Security Wants DNSSEC Keys

This is a big deal:

The shortcomings of the present DNS have been known for years but difficulties in devising a system that offers backward compatibility while scaling to millions of nodes on the net have slowed down the implementation of its successor, Domain Name System Security Extensions (DNSSEC). DNSSEC ensures that domain name requests are digitally signed and authenticated, a defence against forged DNS data, a product of attacks such as DNS cache poisoning, used to trick surfers into visiting bogus websites that pose as the real thing.

Obtaining the master key for the DNS root zone would give US authorities the ability to track DNS Security Extensions (DNSSec) “all the way back to the servers that represent the name system’s root zone on the internet”.

Access to the “key-signing key” would give US authorities a supervisory role over DNS lookups, vital for functions ranging from email delivery to surfing the net. At a recent ICANN meeting in Lisbon, Bernard Turcotte, president of the Canadian Internet Registration Authority, said managers of country registries were concerned about the proposal to allow the US to control the master keys, giving it privileged control of internet resources, Heise reports.
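
As a rough illustration of what is at stake: DNSSEC validation means checking a chain of signatures that terminates at the root zone's key-signing key. Here is a minimal sketch of checking whether an answer validated, assuming the dnspython library and an upstream resolver that performs DNSSEC validation (the domain is just an example):

    import dns.flags
    import dns.resolver  # dnspython; assumed available

    resolver = dns.resolver.Resolver()
    # Set the DO (DNSSEC OK) bit so the resolver returns signed answers.
    resolver.use_edns(0, dns.flags.DO, 4096)
    answer = resolver.resolve("example.com", "A")

    # A validating resolver sets the AD (Authenticated Data) flag only when
    # the RRSIG chain verifies all the way up to the root key-signing key.
    print("DNSSEC-validated:", bool(answer.response.flags & dns.flags.AD))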

Another news report.

Posted on April 9, 2007 at 9:45 AM

Changing Generational Notions of Privacy

Interesting article.

And after all, there is another way to look at this shift. Younger people, one could point out, are the only ones for whom it seems to have sunk in that the idea of a truly private life is already an illusion. Every street in New York has a surveillance camera. Each time you swipe your debit card at Duane Reade or use your MetroCard, that transaction is tracked. Your employer owns your e-mails. The NSA owns your phone calls. Your life is being lived in public whether you choose to acknowledge it or not.

So it may be time to consider the possibility that young people who behave as if privacy doesn’t exist are actually the sane people, not the insane ones. For someone like me, who grew up sealing my diary with a literal lock, this may be tough to accept. But under current circumstances, a defiant belief in holding things close to your chest might not be high-minded. It might be an artifact—quaint and naïve, like a determined faith that virginity keeps ladies pure. Or at least that might be true for someone who has grown up “putting themselves out there” and found that the benefits of being transparent make the risks worth it.

Shirky describes this generational shift in terms of pidgin versus Creole. “Do you know that distinction? Pidgin is what gets spoken when people patch things together from different languages, so it serves well enough to communicate. But Creole is what the children speak, the children of pidgin speakers. They impose rules and structure, which makes the Creole language completely coherent and expressive, on par with any language. What we are witnessing is the Creolization of media.”

Posted on March 9, 2007 at 7:28 AM

Hacker-Controlled Computers Hiding Better

If you have control of a network of computers—by infecting them with some sort of malware—the hard part is controlling that network. Traditionally, these computers (called zombies) are controlled via IRC. But IRC can be detected and blocked, so the hackers have adapted:

Instead of connecting to an IRC server, newly compromised PCs connect to one or more Web sites to check in with the hackers and get their commands. These Web sites are typically hosted on hacked servers or computers that have been online for a long time. Attackers upload the instructions for download by their bots.

As a result, protection mechanisms, such as blocking IRC traffic, will fail. This could mean that zombies, which so far have mostly been broadband-connected home computers, will be created using systems on business networks.

The trick here is to not let the computer’s legitimate owner know that someone else is controlling it. It’s an arms race between attacker and defender.
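
For context, the port-based detection that no longer helps here is often nothing fancier than flagging outbound connections to the well-known IRC ports. A minimal sketch, assuming scapy and the conventional 6660-6669 port range (both assumptions of mine, not from the article); bots that poll an ordinary web site on port 80 sail right past it:

    from scapy.all import IP, TCP, sniff  # scapy; assumed available, needs root

    IRC_PORTS = range(6660, 6670)  # conventional IRC ports

    def flag_irc(pkt):
        # Flag outbound TCP SYNs to IRC ports -- exactly the check that misses
        # HTTP-based command and control.
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags & 0x02:
            if pkt[TCP].dport in IRC_PORTS:
                print(f"possible IRC C&C: {pkt[IP].src} -> {pkt[IP].dst}:{pkt[TCP].dport}")

    sniff(filter="tcp", prn=flag_irc, store=False)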

Posted on October 25, 2006 at 12:14 PM

Ignoring the "Great Firewall of China"

Richard Clayton is presenting a paper (blog post here) that discusses how to defeat China’s national firewall:

…the keyword detection is not actually being done in large routers on the borders of the Chinese networks, but in nearby subsidiary machines. When these machines detect the keyword, they do not actually prevent the packet containing the keyword from passing through the main router (this would be horribly complicated to achieve and still allow the router to run at the necessary speed). Instead, these subsidiary machines generate a series of TCP reset packets, which are sent to each end of the connection. When the resets arrive, the end-points assume they are genuine requests from the other end to close the connection—and obey. Hence the censorship occurs.

However, because the original packets are passed through the firewall unscathed, if both of the endpoints were to completely ignore the firewall’s reset packets, then the connection will proceed unhindered! We’ve done some real experiments on this—and it works just fine!! Think of it as the Harry Potter approach to the Great Firewall—just shut your eyes and walk onto Platform 9¾.

Ignoring resets is trivial to achieve by applying simple firewall rules… and has no significant effect on ordinary working. If you want to be a little more clever you can examine the hop count (TTL) in the reset packets and determine whether the values are consistent with them arriving from the far end, or if the value indicates they have come from the intervening censorship device. We would argue that there is much to commend examining TTL values when considering defences against denial-of-service attacks using reset packets. Having operating system vendors provide this new functionality as standard would also be of practical use because Chinese citizens would not need to run special firewall-busting code (which the authorities might attempt to outlaw) but just off-the-shelf software (which they would necessarily tolerate).
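
A minimal sketch of the TTL idea, assuming scapy (the bookkeeping and the threshold are mine, for illustration only): remember the remaining hop count seen on ordinary packets from each remote peer, and treat a reset whose TTL is noticeably different as having been injected somewhere along the path.

    from scapy.all import IP, TCP, sniff  # scapy; assumed available, needs root

    # Remaining TTL observed on normal packets, keyed by (remote address, remote port).
    expected_ttl = {}

    def inspect(pkt):
        if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
            return
        key = (pkt[IP].src, pkt[TCP].sport)
        if pkt[TCP].flags & 0x04:  # RST flag set
            seen = expected_ttl.get(key)
            # A reset injected by a box partway along the path arrives with a
            # different remaining TTL than packets from the genuine endpoint.
            if seen is not None and abs(seen - pkt[IP].ttl) > 3:  # threshold is a guess
                print(f"suspicious RST from {key}: ttl={pkt[IP].ttl}, expected ~{seen}")
        else:
            expected_ttl[key] = pkt[IP].ttl

    sniff(filter="tcp", prn=inspect, store=False)

The cruder defence the paper mentions, simply ignoring inbound resets, needs no code at all, only a simple packet-filter rule on each endpoint.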

Posted on June 27, 2006 at 1:13 PM

NIST Hash Workshop Liveblogging (5)

The afternoon started with three brand new hash functions: FORK-256, DHA-256, and VSH. VSH (Very Smooth Hash) was the interesting one; it’s based on factoring and the discrete logarithm problem, like public-key encryption, and not on bit-twiddling like symmetric encryption. I have no idea if it’s any good, but it’s cool to see something so different.

I think we need different. So many of our hash functions look pretty much the same: MD4, MD5, SHA-0, SHA-1, RIPE-MD, HAVAL, SHA-256, SHA-512. And everything is basically a block cipher in Davies-Meyer mode. I want some completely different designs. I want hash functions based on stream ciphers. I want more functions based on number theory.
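
For readers who haven't seen it, Davies-Meyer turns a block cipher E into a compression function: the running hash state is encrypted under the message block as the key, then XORed back into itself. A toy sketch, using AES from pycryptodome purely as a stand-in for E (real designs like SHA-256 use a dedicated block cipher with 512-bit message blocks and proper Merkle-Damgård strengthening rather than this simplistic padding):

    from Crypto.Cipher import AES  # pycryptodome; assumed available

    BLOCK = 16  # AES block size in bytes; also the hash state size here

    def davies_meyer(message: bytes, iv: bytes = b"\x00" * BLOCK) -> bytes:
        # Toy padding: append 0x80, then zeros, up to a whole number of blocks.
        message += b"\x80" + b"\x00" * (-(len(message) + 1) % BLOCK)
        h = iv
        for i in range(0, len(message), BLOCK):
            m_i = message[i:i + BLOCK]
            e = AES.new(m_i, AES.MODE_ECB).encrypt(h)  # E_{m_i}(h)
            h = bytes(a ^ b for a, b in zip(e, h))     # feed-forward XOR
        return h

    print(davies_meyer(b"hello world").hex())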

The final session was an open discussion about what to do next. There was much debate about how soon we need a new hash function, how long we should rely on SHA-1 or SHA-256, etc.

Hashing is hard. At an ultra-high, hand-waving level, it takes a lot more clock cycles per message byte to hash than it does to encrypt. No one has any theory to account for this, but it seems like the lack of any secrets in a hash function makes it a harder problem. This may be an artifact of our lack of knowledge, but I think there’s a grain of fundamental truth buried here.
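
A back-of-the-envelope way to check that claim on your own hardware, using the standard hashlib for SHA-256 and pycryptodome's AES in CTR mode as the stand-in for encryption (absolute numbers, and even the ratio, will vary a great deal between machines):

    import hashlib
    import os
    import time

    from Crypto.Cipher import AES  # pycryptodome; assumed available

    data = os.urandom(64 * 1024 * 1024)  # 64 MB of random input

    t0 = time.perf_counter()
    hashlib.sha256(data).digest()
    t_hash = time.perf_counter() - t0

    cipher = AES.new(os.urandom(16), AES.MODE_CTR, nonce=os.urandom(8))
    t0 = time.perf_counter()
    cipher.encrypt(data)
    t_enc = time.perf_counter() - t0

    print(f"SHA-256: {t_hash:.3f}s   AES-CTR: {t_enc:.3f}s   ratio: {t_hash / t_enc:.1f}x")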

And hash functions are used everywhere. Hash functions are the workhorse of cryptography; they’re sprinkled all over security protocols. They’re used all the time, in all sorts of weird ways, for all sorts of weird purposes. We cryptographers think of them as good hygiene, kind of like condoms.

So we need a fast answer for immediate applications.

We also need “SHA2,” whatever that will look like. And a design competition is the best way to get a SHA2. (Niels Ferguson pointed out that the AES process was the best cryptographic invention of the past decade.)

Unfortunately, we’re in no position to have an AES-like competition to replace SHA right now. We simply don’t know enough about designing hash functions. What we need is research, random research all over the map. Designs beget analyses beget designs beget analyses…. Right now we need a bunch of mediocre hash function designs. We need a posse of hotshot graduate students breaking them and making names for themselves. We need new tricks and new tools. Hash functions are a hot area of research right now, but anything we can do to stoke that will pay off in the future.

NIST is thinking of hosting another hash workshop right after Crypto next year. That would be a good thing.

I need to get to work on a hash function based on Phelix.

Posted on November 1, 2005 at 3:43 PM

NIST Hash Workshop Liveblogging (4)

This morning we heard a variety of talks about hash function design. All are esoteric and interesting, and too subtle to summarize here. Hopefully the papers will be online soon; keep checking the conference website.

Lots of interesting ideas, but no real discussion about trade-offs. But it’s the trade-offs that are important. It’s easy to design a good hash function, given no performance constraints. But we need to trade off performance with security. When confronted with a clever idea, like Ron Rivest’s dithering trick, we need to decide if this is a good use of time. The question is not whether we should use dithering. The question is whether dithering is the best thing we can do with (I’m making these numbers up) a 20% performance degradation. Is dithering better than adding 20% more rounds? This is the kind of analysis we did when designing Twofish, and it’s the correct analysis here as well.

Bart Preneel pointed out the obvious: if SHA-1 had double the number of rounds, this workshop wouldn’t be happening. If MD5 had double the number of rounds, that hash function would still be secure. Maybe we’ve just been too optimistic about how strong hash functions are.

The other thing we need to be doing is providing answers to developers. It’s not enough to express concern about SHA-256, or wonder how much better the attacks on SHA-1 will become. Developers need to know what hash function to use in their designs. They need an answer today. (SHA-256 is what I tell people.) They’ll need an answer in a year. They’ll need an answer in four years. Maybe the answers will be the same, and maybe they’ll be different. But if we don’t give them answers, they’ll make something up. They won’t wait for us.

And while it’s true that we don’t have any real theory of hash functions, and it’s true that anything we choose will be based partly on faith, we have no choice but to choose.

And finally, I think we need to stimulate research more. Whether it’s a competition or a series of conferences, we need new ideas for design and analysis. Designs beget analyses beget designs beget analyses…. We need a whole bunch of new hash functions to beat up; that’s how we’ll learn to design better ones.

Posted on November 1, 2005 at 11:19 AM

NIST Hash Workshop Liveblogging (3)

I continue to be impressed by the turnout at this workshop. There are lots of people here whom I haven’t seen in a long time. It’s like a cryptographers’ family reunion.

The afternoon was devoted to cryptanalysis papers. Nothing earth-shattering; a lot of stuff that’s real interesting to me and not very exciting to summarize.

The list of papers is here. NIST promises to put the actual papers online, but they make no promises as to when.

Right now there is a panel discussing how secure SHA-256 is. “How likely is SHA-256 to resist attack for the next ten years?” Some think it will be secure for that long, others think it will fall in five years or so. One person pointed out that if SHA-256 lasts ten years, it will be a world record for a hash function. The consensus is that any new hash function needs to last twenty years, though. It really seems unlikely that any hash function will last that long.

But the real issue is whether there will be any practical attacks. No one knows. Certainly there will be new cryptanalytic techniques developed, especially now that hash functions are a newly hot area for research. But will SHA-256 ever have an attack that’s faster than 2^80?

Everyone thinks that SHA-1 with 160 rounds is a safer choice than SHA-256 truncated to 160 bits. The devil you know, I guess.
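
For the record, “SHA-256 truncated to 160 bits” just means keeping the first 20 bytes of the SHA-256 output so that it fits anywhere a SHA-1 digest is expected; with Python’s standard hashlib:

    import hashlib

    msg = b"attack at dawn"
    sha1_digest = hashlib.sha1(msg).digest()        # 20 bytes (160 bits)
    sha256_160 = hashlib.sha256(msg).digest()[:20]  # SHA-256 truncated to 160 bits
    print(len(sha1_digest), len(sha256_160))        # both print 20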

Niels Ferguson, in a comment from the floor, strongly suggested that NIST publish whatever analysis of SHA-256 it has. Since that analysis was most likely done by the NSA and is classified, publishing it would be a big deal. But I agree that it’s essential for us to fully evaluate the hash function.

Tom Berson, in another comment, suggested that NIST not migrate to a single hash function, but certify multiple alternatives. This has the interesting side effect of forcing the algorithm agility issue. (We had this same debate regarding AES. Negatives are: 1) you’re likely to end up with a system only as strong as the weakest choice, and 2) industry will hate it.)

If there’s a moral out of the first day of this workshop, it’s that algorithm agility is an essential feature in any Internet protocol.

Posted on October 31, 2005 at 4:00 PM
