Entries Tagged "cryptography"


AES Timing Attack

Nice timing attack against AES.

For those of you who don’t know, timing attacks are an example of side-channel cryptanalysis: cryptanalysis using additional information about the inner workings of the cryptographic algorithm. I wrote about them here.

What’s the big idea here?

There are two ways to look at a cryptographic primitive: block cipher, digital signature function, whatever. The first is as a chunk of math. The second is as a physical (or software) implementation of that math.

Traditionally, cryptanalysis has been directed solely against the math. Differential and linear cryptanalysis are good examples of this: high-powered mathematical tools that can be used to break different block ciphers.

On the other hand, timing attacks, power analysis, and fault analysis all make assumptions about the implementation, and use additional information garnered from attacking those implementations. Fault analysis assumes one bit of feedback from the implementation—was the message successfully decrypted?—in order to break the underlying cryptographic primitive. Timing attacks assume that an attacker knows how long a particular encryption operation takes.
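To make the idea concrete, here's a minimal sketch of a timing leak, in Python. It's not the AES cache-timing attack itself, just a comparison routine whose early exit leaks, through its running time, how many leading bytes of a guess are correct. The secret value and the timing loop are purely illustrative.

```python
# A minimal illustration of a timing side channel: a comparison that returns
# early on the first mismatching byte leaks, through its running time, how
# many leading bytes of a guess are correct.
import time

SECRET = b"s3cr3t-mac-value"   # illustrative secret, 16 bytes

def leaky_compare(guess, secret=SECRET):
    # Early exit makes running time depend on the number of matching bytes.
    for g, s in zip(guess, secret):
        if g != s:
            return False
    return len(guess) == len(secret)

def time_guess(guess, rounds=100_000):
    start = time.perf_counter()
    for _ in range(rounds):
        leaky_compare(guess)
    return time.perf_counter() - start

if __name__ == "__main__":
    # A guess sharing a longer correct prefix takes measurably longer.
    print("wrong first byte :", time_guess(b"x" * 16))
    print("correct prefix   :", time_guess(b"s3cr3t-" + b"x" * 9))
```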

Posted on May 17, 2005 at 10:05 AM

The Potential for an SSH Worm

SSH, or secure shell, is the standard protocol for remotely accessing UNIX systems. It’s used everywhere: universities, laboratories, and corporations (particularly in data-intensive back office services). Thanks to SSH, administrators can stack hundreds of computers close together into air-conditioned rooms and administer them from the comfort of their desks.

When a user’s SSH client first establishes a connection to a remote server, it stores the name of the server and its public key in a known_hosts database. This database of names and keys allows the client to more easily identify the server in the future.

There are risks to this database, though. If an attacker compromises the user’s account, the database can be used as a hit-list of follow-on targets. And if the attacker knows the username, password, and key credentials of the user, these follow-on targets are likely to accept them as well.

A new paper from MIT explores the potential for a worm to use this infection mechanism to propagate across the Internet. Already attackers are exploiting this database after cracking passwords. The paper also warns that a worm that spreads via SSH is likely to evade detection by the bulk of techniques currently coming out of the worm detection community.

While a worm of this type has not been seen since the first Internet worm of 1988, attacks have been growing in sophistication and most of the tools required are already in use by attackers. It’s only a matter of time before someone writes a worm like this.

One of the countermeasures proposed in the paper is to store hashes of host names in the database, rather than the names themselves. This is similar to the way hashes of passwords are stored in password databases, so that security need not rely entirely on the secrecy of the database.

The authors of the paper have worked with the open source community, and version 4.0 of OpenSSH has the option of hashing the known-hosts database. There is also a patch for OpenSSH 3.9 that does the same thing.
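Here's a minimal sketch of the idea in Python. The entry format loosely mimics OpenSSH's hashed known_hosts lines, but treat the details (salt length, encoding) as illustrative rather than a faithful reimplementation.

```python
# Sketch of the countermeasure: store a keyed hash of each host name instead
# of the name itself. An attacker who steals the file cannot read off a
# hit-list of hosts, but the client can still check whether a name is present.
import base64, hashlib, hmac, os

def hash_hostname(hostname, salt=None):
    salt = salt if salt is not None else os.urandom(20)   # random per-entry salt
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return "|1|%s|%s" % (base64.b64encode(salt).decode(),
                         base64.b64encode(digest).decode())

def matches(entry, hostname):
    _, _, salt_b64, digest_b64 = entry.split("|")
    salt = base64.b64decode(salt_b64)
    expected = base64.b64decode(digest_b64)
    candidate = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return hmac.compare_digest(candidate, expected)

entry = hash_hostname("server.example.edu")
print(entry)
print(matches(entry, "server.example.edu"))   # True
print(matches(entry, "other.example.edu"))    # False
```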

The authors are also looking for more data to judge the extent of the problem. Details about the research, the patch, data collection, and whatever else they have going on can be found here.

Posted on May 10, 2005 at 9:06 AM

RFID Passport Security

According to a Wired article, the State Department is reconsidering a security measure to protect privacy that it previously rejected.

The solution would require an RFID reader to provide a key or password before it could read data embedded on an RFID passport’s chip. It would also encrypt data as it’s transmitted from the chip to a reader so that no one could read the data if they intercepted it in transit.

The devil is in the details, but this is a great idea. It means that only readers that know a secret data string can query the RFID chip inside the passport. Of course, this is a systemwide global secret and will be in the hands of every country, but it’s still a great idea.
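Here's a rough sketch, in Python, of the kind of protocol being described: the chip answers only readers that can prove knowledge of the shared secret, and the data it releases is encrypted under a session key derived from that secret and a fresh challenge. This is not the State Department's actual design; the names and the toy stream cipher are purely illustrative.

```python
# Hedged sketch of the access-control idea: prove knowledge of a shared
# secret, then encrypt the chip's data under a per-session key.
import hashlib, hmac, os

ACCESS_KEY = b"shared-global-secret"      # the system-wide secret mentioned above

def chip_challenge():
    return os.urandom(16)

def reader_response(challenge, access_key=ACCESS_KEY):
    # Reader proves it knows the access key without transmitting it.
    return hmac.new(access_key, challenge, hashlib.sha256).digest()

def session_key(challenge, access_key=ACCESS_KEY):
    return hashlib.sha256(access_key + challenge).digest()

def xor_stream(key, data):
    # Toy keystream cipher for illustration only; a real chip would use a
    # standard block cipher.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Protocol run: challenge, proof of the secret, then encrypted passport data.
c = chip_challenge()
proof = reader_response(c)
assert hmac.compare_digest(proof, hmac.new(ACCESS_KEY, c, hashlib.sha256).digest())
k = session_key(c)
ciphertext = xor_stream(k, b"passport data page")
print(xor_stream(k, ciphertext))          # decryption recovers the data
```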

It’s nice to read that the State Department is taking privacy concerns seriously.

Frank Moss, deputy assistant secretary for passport services, told Wired News on Monday that the government was “taking a very serious look” at the privacy solution in light of the 2,400-plus comments the department received about the e-passport rule and concerns expressed last week in Seattle by participants at the Computers, Freedom and Privacy conference. Moss said recent work on the passports conducted with the National Institute of Standards and Technology had also led him to rethink the issue.

“Basically what changed my mind was a recognition that the read rates may have actually been able to be more than 10 centimeters, and also recognition that we had to do everything possible to protect the security of people,” Moss said.

The next step is for them to actually implement this countermeasure, and not just consider it. And the step after that is for us to get our hands on some test passports to see if they’ve implemented it well.

Posted on April 28, 2005 at 8:30 AM

The Doghouse: ExeShield

Yes, there are companies that believe that keeping cryptographic algorithms secret makes them more secure.

“ExeShield uses the latest advances in software protection and encryption technology, to give your applications even more protection. Of course, for your security and ours, we won’t divulge the encryption scheme to anyone.”

If anyone reading this needs a refresher on exactly why secret cryptography algorithms are invariably snake oil, I wrote about it three years ago.

Posted on April 13, 2005 at 9:19 AM

The Doghouse: Xavety

It’s been a long time since I doghoused any encryption products. CHADSEA (Chaotic Digital Signature, Encryption, and Authentication) isn’t as funny as some of the others, but it’s no less deserving.

Read their “Testing the Encryption Algorithm” section: “In order to test the reliability and statistical independency of the encryption, several different tests were performed, like signal-noise tests, the ENT test suite (Walker, 1998), and the NIST Statistical Test Suite (Ruhkin et al., 2001). These tests are quite comprehensive, so the description of these tests are subject of separate publications, which are also available on this website. Please, see the respective links.”

Yep. All they did to show that their algorithm was secure was a bunch of statistical tests. Snake oil for sure.
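Here's why that proves nothing. The toy construction below uses a publicly known “key,” so it offers zero secrecy, yet its output is statistically flat and would sail through ENT-style tests. (The construction is hypothetical, for illustration only.)

```python
# Statistical randomness of the ciphertext says nothing about secrecy: this
# keystream is derived from a public constant, so anyone can decrypt, yet the
# output looks perfectly random to statistical tests.
import hashlib
from collections import Counter

PUBLIC_KEY = b"everyone knows this string"   # no secrecy at all

def keystream(length, key=PUBLIC_KEY):
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:length])

plaintext = b"attack at dawn " * 1000
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream(len(plaintext))))

# Crude uniformity check: byte frequencies come out close to flat (about 1/256).
counts = Counter(ciphertext)
print(max(counts.values()) / len(ciphertext))

# ...but "breaking" it requires no cryptanalysis whatsoever.
recovered = bytes(c ^ k for c, k in zip(ciphertext, keystream(len(ciphertext))))
assert recovered == plaintext
```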

Posted on March 15, 2005 at 11:00 AM

Flaw in Winkhaus Blue Chip Lock

The Winkhaus Blue Chip Lock is a very popular, and expensive, 128-bit encrypted door lock. When you insert a key, there is a 128-bit challenge/response exchange between the key and the lock, and when the key is authorized it will pull a small pin down through some sort of solenoid switch. This allows you to turn the lock.

Unfortunately, it has a major security flaw. If you put a strong magnet near the lock, you can also pull this pin down, without authorization—without damage or any evidence.
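For context, here's a rough sketch of what a 128-bit challenge/response between key and lock might look like; it is not Winkhaus's actual protocol. The point is that the magnet actuates the pin directly, bypassing this exchange entirely, so the strength of the cryptography never comes into play.

```python
# Hedged sketch of a 128-bit key/lock challenge-response. The magnet attack
# described above skips straight to the "pull pin down" step.
import hashlib, hmac, os

SHARED_KEY = b"\x00" * 16        # 128-bit secret shared by lock and authorized key

def lock_challenge():
    return os.urandom(16)        # 128-bit random challenge

def key_response(challenge, secret=SHARED_KEY):
    # The key proves possession of the secret; response truncated to 128 bits.
    return hmac.new(secret, challenge, hashlib.sha256).digest()[:16]

def lock_verify(challenge, response, secret=SHARED_KEY):
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()[:16]
    if hmac.compare_digest(response, expected):
        return "pull pin down via solenoid"   # the step a strong magnet can force
    return "stay locked"

c = lock_challenge()
print(lock_verify(c, key_response(c)))        # authorized key: lock opens
print(lock_verify(c, os.urandom(16)))         # wrong response: stays locked
```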

The worst part is that Winkhaus is in denial about the problem, and is hoping it will just go away by itself. They’ve known about the flaw for at least six months, and have done nothing. They haven’t told any of their customers. If you ask them, they’ll say things like “it takes a very special magnet.”

From what I’ve heard, the only version that does not have this problem is the model without a built-in battery. In that model, the part with the solenoid switch faces the inside of the door instead of the outside. The internal battery is a weak spot, since you need to lift a small lid to replace it, so that side can never face the “outside” of the door: anyone could simply remove the batteries. With an external power supply you do not have this problem, since one side of the lock is pure metal.

A video demonstration is available here.

Posted on March 2, 2005 at 3:00 PM

Cryptanalysis of SHA-1

On Tuesday, I blogged about a new cryptanalytic result—the first attack faster than brute-force against SHA-1. I wrote about SHA, and the need to replace it, last September. Aside from the details of the new attack, everything I said then still stands. I’ll quote from that article, adding new material where appropriate.

One-way hash functions are a cryptographic construct used in many applications. They are used in conjunction with public-key algorithms for both encryption and digital signatures. They are used in integrity checking. They are used in authentication. They have all sorts of applications in a great many different protocols. Much more than encryption algorithms, one-way hash functions are the workhorses of modern cryptography.

In 1990, Ron Rivest invented the hash function MD4. In 1992, he improved on MD4 and developed another hash function: MD5. In 1993, the National Security Agency published a hash function very similar to MD5, called SHA (Secure Hash Algorithm). Then, in 1995, citing a newly discovered weakness that it refused to elaborate on, the NSA made a change to SHA. The new algorithm was called SHA-1. Today, the most popular hash function is SHA-1, with MD5 still being used in older applications.

One-way hash functions are supposed to have two properties. One, they’re one way. This means that it is easy to take a message and compute the hash value, but it’s impossible to take a hash value and recreate the original message. (By “impossible” I mean “can’t be done in any reasonable amount of time.”) Two, they’re collision free. This means that it is impossible to find two messages that hash to the same hash value. The cryptographic reasoning behind these two properties is subtle, and I invite curious readers to learn more in my book Applied Cryptography.

Breaking a hash function means showing that either—or both—of those properties are not true.

Earlier this week, three Chinese cryptographers showed that SHA-1 is not collision-free. That is, they developed an algorithm for finding collisions faster than brute force.

SHA-1 produces a 160-bit hash. That is, every message hashes down to a 160-bit number. Given that there are an infinite number of messages that hash to each possible value, there are an infinite number of possible collisions. But because the number of possible hashes is so large, the odds of finding one by chance are negligibly small (one in 2^80, to be exact). If you hashed 2^80 random messages, you’d find one pair that hashed to the same value. That’s the “brute force” way of finding collisions, and it depends solely on the length of the hash value. “Breaking” the hash function means being able to find collisions faster than that. And that’s what the Chinese did.
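Here's a small sketch that makes the brute-force argument concrete: searching for a collision on a 32-bit truncation of SHA-1 takes on the order of 2^16 random messages, for exactly the same birthday-paradox reason that a collision on the full 160 bits takes roughly 2^80. The truncation is only so the example finishes in seconds.

```python
# Brute-force collision search depends only on the output length: a collision
# on the first 32 bits of SHA-1 turns up after roughly 2**16 random messages,
# the same way a collision on all 160 bits would take roughly 2**80.
import hashlib, os

def truncated_sha1(msg, bits=32):
    return hashlib.sha1(msg).digest()[:bits // 8]

def find_collision(bits=32):
    seen = {}
    tries = 0
    while True:
        msg = os.urandom(16)
        tag = truncated_sha1(msg, bits)
        tries += 1
        if tag in seen and seen[tag] != msg:
            return seen[tag], msg, tries
        seen[tag] = msg

m1, m2, tries = find_collision()
assert truncated_sha1(m1) == truncated_sha1(m2) and m1 != m2
print("collision after", tries, "tries (on the order of 2**16)")
```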

They can find collisions in SHA-1 in 2^69 calculations, about 2,000 times faster than brute force. Right now, that is just on the far edge of feasibility with current technology. Two comparable massive computations illustrate that point.

In 1999, a group of cryptographers built a DES cracker. It was able to perform 2^56 DES operations in 56 hours. The machine cost $250K to build, although duplicates could be made in the $50K-$75K range. Extrapolating that machine using Moore’s Law, a similar machine built today could perform 2^60 calculations in 56 hours, and 2^69 calculations in three and a quarter years. Or, a machine that cost $25M-$38M could do 2^69 calculations in the same 56 hours.

On the software side, the main comparable is a 2^64 keysearch done by distributed.net that finished in 2002. One article put it this way: “Over the course of the competition, some 331,252 users participated by allowing their unused processor cycles to be used for key discovery. After 1,757 days (4.81 years), a participant in Japan discovered the winning key.” Moore’s Law means that today the calculation would have taken one quarter the time—or have required one quarter the number of computers—so today a 2^69 computation would take eight times as long, or require eight times the computers.
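Both extrapolations are simple doubling arithmetic. Here's the calculation spelled out, assuming computing power doubles every 18 months.

```python
# The extrapolations in the two paragraphs above, as doubling arithmetic.
def doublings(years, period_years=1.5):
    return years / period_years

# Hardware: the 1999 DES cracker did 2**56 operations in 56 hours.
hw_doublings = doublings(2005 - 1999)                 # 4 doublings by 2005
ops_today = 56 + hw_doublings                         # exponent of ops in 56 hours
hours_for_2_69 = 56 * 2 ** (69 - ops_today)           # scale up to 2**69
print("2**%d ops in 56 hours; 2**69 takes %.2f years"
      % (ops_today, hours_for_2_69 / (24 * 365)))     # ~2**60 and ~3.3 years

# Cost version: 512 machines at $50K-$75K each do it in the original 56 hours.
print("parallel cost: $%.1fM-$%.1fM" % (512 * 0.05, 512 * 0.075))

# Software: distributed.net's 2**64 keysearch took 4.81 years, ending in 2002.
sw_speedup = 2 ** doublings(2005 - 2002)              # 4x faster by 2005
work_ratio = 2 ** (69 - 64)                           # 2**69 is 32x more work
print("2**69 today: %.0fx the machines or %.1f years"
      % (work_ratio / sw_speedup, 4.81 * work_ratio / sw_speedup))
```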

The magnitude of these results depends on who you are. If you’re a cryptographer, this is a huge deal. While not revolutionary, these results are substantial advances in the field. The techniques described by the researchers are likely to have other applications, and we’ll be better able to design secure systems as a result. This is how the science of cryptography advances: we learn how to design new algorithms by breaking other algorithms. Additionally, algorithms from the NSA are considered a sort of alien technology: they come from a superior race with no explanations. Any successful cryptanalysis against an NSA algorithm is an interesting data point in the eternal question of how good they really are in there.

For the average Internet user, this news is not a cause for panic. No one is going to be breaking digital signatures or reading encrypted messages anytime soon. The electronic world is no less secure after these announcements than it was before.

But there’s an old saying inside the NSA: “Attacks always get better; they never get worse.” Just as this week’s attack builds on other papers describing attacks against simplified versions of SHA-1, SHA-0, MD4, and MD5, other researchers will build on this result. The attack against SHA-1 will continue to improve, as others read about it and develop faster tricks, optimizations, etc. And Moore’s Law will continue to march forward, making even the existing attack faster and more affordable.

Jon Callas, PGP’s CTO, put it best: “It’s time to walk, but not run, to the fire exits. You don’t see smoke, but the fire alarms have gone off.” That’s basically what I said last August.

It’s time for us all to migrate away from SHA-1.

Luckily, there are alternatives. The National Institute of Standards and Technology already has standards for longer—and harder to break—hash functions: SHA-224, SHA-256, SHA-384, and SHA-512. They’re already government standards, and can already be used. This is a good stopgap, but I’d like to see more.
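Switching is mostly a matter of asking for the longer function. Here's a quick illustration using Python's hashlib, which provides the SHA-2 family named above; the message is just a placeholder.

```python
# Same interface, longer (and harder to break) hashes.
import hashlib

message = b"important document"

print(hashlib.sha1(message).hexdigest())     # 160-bit SHA-1: time to migrate away
print(hashlib.sha256(message).hexdigest())   # 256-bit SHA-256
print(hashlib.sha512(message).hexdigest())   # 512-bit SHA-512
```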

I’d like to see NIST orchestrate a worldwide competition for a new hash function, like they did for the new encryption algorithm, AES, to replace DES. NIST should issue a call for algorithms, and conduct a series of analysis rounds, where the community analyzes the various proposals with the intent of establishing a new standard.

Most of the hash functions we have, and all the ones in widespread use, are based on the general principles of MD4. Clearly we’ve learned a lot about hash functions in the past decade, and I think we can start applying that knowledge to create something even more secure.

Hash functions are the least-well-understood cryptographic primitive, and hashing techniques are much less developed than encryption techniques. Regularly there are surprising cryptographic results in hashing. I have a paper, written with John Kelsey, that describes an algorithm to find second preimages with SHA-1—a technique that generalizes to almost all other hash functions—in 2^106 calculations: much less than the 2^160 calculations for brute force. This attack is completely theoretical and not even remotely practical, but it demonstrates that we still have a lot to learn about hashing.

It is clear from rereading what I wrote last September that I expected this to happen, but not nearly this quickly and not nearly this impressively. The Chinese cryptographers deserve a lot of credit for their work, and we need to get to work replacing SHA.

Posted on February 18, 2005 at 11:24 PM
