Crypto-Gram

March 15, 2005

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

Or you can read this issue on the web at <http://www.schneier.com/crypto-gram-0503.html>.

Schneier also publishes these same essays in his blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      SHA-1 Broken
      The Failure of Two-Factor Authentication
      Crypto-Gram Reprints
      ChoicePoint
      News
      Unicode URL Hack
      GhostBuster
      Counterpane News
      Security Notes from All Over: Identity Theft out of Golf Lockers
      The Doghouse: Xavety
      Sensitive Security Information (SSI)
      Comments from Readers


SHA-1 Broken

SHA-1 has been broken. Not a reduced-round version. Not a simplified version. The real thing.

The research team of Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu (mostly from Shandong University in China) has been quietly circulating a paper describing their results:

– Collisions in the full SHA-1 in 2^69 hash operations, much less than the brute-force attack of 2^80 operations based on the hash length.

– Collisions in SHA-0 in 2^39 operations.

– Collisions in 58-round SHA-1 in 2^33 operations.

This attack builds on previous attacks on SHA-0 and SHA-1, and is a major, major cryptanalytic result: the first attack faster than brute-force against SHA-1.

I wrote about SHA, and the need to replace it, last September. Aside from the details of the new attack, everything I said then still stands. I’ll quote from that article, adding new material where appropriate.

“One-way hash functions are a cryptographic construct used in many applications. They are used in conjunction with public-key algorithms for both encryption and digital signatures. They are used in integrity checking. They are used in authentication. They have all sorts of applications in a great many different protocols. Much more than encryption algorithms, one-way hash functions are the workhorses of modern cryptography.

“In 1990, Ron Rivest invented the hash function MD4. In 1992, he improved on MD4 and developed another hash function: MD5. In 1993, the National Security Agency published a hash function very similar to MD5, called SHA (Secure Hash Algorithm). Then, in 1995, citing a newly discovered weakness that it refused to elaborate on, the NSA made a change to SHA. The new algorithm was called SHA-1. Today, the most popular hash function is SHA-1, with MD5 still being used in older applications.

“One-way hash functions are supposed to have two properties. One, they’re one way. This means that it is easy to take a message and compute the hash value, but it’s impossible to take a hash value and recreate the original message. (By ‘impossible’ I mean ‘can’t be done in any reasonable amount of time.’) Two, they’re collision free. This means that it is impossible to find two messages that hash to the same hash value. The cryptographic reasoning behind these two properties is subtle, and I invite curious readers to learn more in my book Applied Cryptography.

“Breaking a hash function means showing that either—or both—of those properties are not true.”
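
To make those two properties concrete, here is a minimal illustration using Python's hashlib (my sketch, not part of the original essay). Computing a digest is trivial; the two properties say that going the other way should not be:

import hashlib

msg = b"Attack at dawn"
print(hashlib.sha1(msg).hexdigest())  # easy: message -> 160-bit digest
# "One way" means you can't feasibly recover msg from that digest;
# "collision free" means you can't feasibly find two different
# messages that produce the same digest.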

Last month, three Chinese cryptographers showed that SHA-1 is not collision-free. That is, they developed an algorithm for finding collisions faster than brute force.

SHA-1 produces a 160-bit hash. That is, every message hashes down to a 160-bit number. Given that there are an infinite number of messages that hash to each possible value, there are an infinite number of possible collisions. But because the number of possible hashes is so large, the odds of finding one by chance are negligibly small (one in 2^80, to be exact). If you hashed 2^80 random messages, you’d find one pair that hashed to the same value. That’s the “brute force” way of finding collisions, and it depends solely on the length of the hash value. “Breaking” the hash function means being able to find collisions faster than that. And that’s what the Chinese did.
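
You can watch the birthday bound in action on a deliberately weakened hash. The sketch below (illustrative only) truncates SHA-1 to 32 bits, so a collision appears after roughly 2^16 = 65,536 random messages instead of 2^80:

import hashlib, os

seen = {}
tries = 0
while True:
    msg = os.urandom(16)
    tag = hashlib.sha1(msg).digest()[:4]  # keep only 32 of the 160 bits
    tries += 1
    if tag in seen and seen[tag] != msg:
        print("collision after", tries, "tries")  # typically tens of thousands
        break
    seen[tag] = msg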

They can find collisions in SHA-1 in 2^69 calculations, about 2,000 times faster than brute force. Right now, that is just on the far edge of feasibility with current technology. Two comparable massive computations illustrate that point.

In 1999, a group of cryptographers built a DES cracker. It was able to perform 2^56 DES operations in 56 hours. The machine cost $250K to build, although duplicates could be made in the $50K-$75K range. Extrapolating that machine using Moore’s Law, a similar machine built today could perform 2^60 calculations in 56 hours, and 2^69 calculations in three and a quarter years. Or, a machine that cost $25M-$38M could do 2^69 calculations in the same 56 hours.

On the software side, the main comparable is a 2^64 keysearch done by distributed.net that finished in 2002. One article put it this way: “Over the course of the competition, some 331,252 users participated by allowing their unused processor cycles to be used for key discovery. After 1,757 days (4.81 years), a participant in Japan discovered the winning key.” Moore’s Law means that today the calculation would have taken one quarter the time—or have required one quarter the number of computers—so today a 2^69 computation would take eight times as long, or require eight times the computers.
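
For readers who want to check the arithmetic, here is the back-of-the-envelope calculation, assuming Moore’s Law doubles computing speed every 18 months:

# Hardware: the DES cracker did 2^56 operations in 56 hours in 1999.
speedup = 2 ** (6 / 1.5)                     # six years of doublings = 16x
ops_in_56_hours = 2**56 * speedup            # = 2^60 today
years = 56 * (2**69 / ops_in_56_hours) / (24 * 365)
print(round(years, 2))                       # ~3.27 years for 2^69

# Software: distributed.net searched 2^64 keys in 4.81 years (ended 2002);
# a fleet four times as fast would need eight times as long for 2^69.
print(round(4.81 * (2**69 / 2**64) / 4, 1))  # ~38.5 years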

“The magnitude of these results depends on who you are. If you’re a cryptographer, this is a huge deal. While not revolutionary, these results are substantial advances in the field. The techniques described by the researchers are likely to have other applications, and we’ll be better able to design secure systems as a result. This is how the science of cryptography advances: we learn how to design new algorithms by breaking other algorithms. Additionally, algorithms from the NSA are considered a sort of alien technology: they come from a superior race with no explanations. Any successful cryptanalysis against an NSA algorithm is an interesting data point in the eternal question of how good they really are in there.”

For the average Internet user, this news is not a cause for panic. No one is going to be breaking digital signatures or reading encrypted messages anytime soon. The electronic world is no less secure after these announcements than it was before.

But there’s an old saying inside the NSA: “Attacks always get better; they never get worse.” Just as this week’s attack builds on other papers describing attacks against simplified versions of SHA-1, SHA-0, MD4, and MD5, other researchers will build on this result. The attack against SHA-1 will continue to improve, as others read about it and develop faster tricks, optimizations, etc. And Moore’s Law will continue to march forward, making even the existing attack faster and more affordable.

Jon Callas, PGP’s CTO, put it best: “It’s time to walk, but not run, to the fire exits. You don’t see smoke, but the fire alarms have gone off.” That’s basically what I said last August.

“It’s time for us all to migrate away from SHA-1.

“Luckily, there are alternatives. The National Institute of Standards and Technology already has standards for longer—and harder to break—hash functions: SHA-224, SHA-256, SHA-384, and SHA-512. They’re already government standards, and can already be used. This is a good stopgap, but I’d like to see more.

“I’d like to see NIST orchestrate a worldwide competition for a new hash function, like they did for the new encryption algorithm, AES, to replace DES. NIST should issue a call for algorithms, and conduct a series of analysis rounds, where the community analyzes the various proposals with the intent of establishing a new standard.

“Most of the hash functions we have, and all the ones in widespread use, are based on the general principles of MD4. Clearly we’ve learned a lot about hash functions in the past decade, and I think we can start applying that knowledge to create something even more secure.”
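
For what it’s worth, the migration itself is mechanical in most crypto libraries. In Python’s hashlib, for example, it is a one-line change (a sketch of the API swap only; stored digests, protocols, and certificates all need separate attention):

import hashlib

old = hashlib.sha1(b"message").hexdigest()    # 160-bit SHA-1
new = hashlib.sha256(b"message").hexdigest()  # 256-bit SHA-256
print(len(old) * 4, len(new) * 4)             # 160 256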

Hash functions are the least-well-understood cryptographic primitive, and hashing techniques are much less developed than encryption techniques. Regularly there are surprising cryptographic results in hashing. I have a paper, written with John Kelsey, that describes an algorithm to find second preimages with SHA-1—a technique that generalizes to almost all other hash functions—in 2^106 calculations: much less than the 2^160 calculations for brute force. This attack is completely theoretical and not even remotely practical, but it demonstrates that we still have a lot to learn about hashing.

It is clear from rereading what I wrote last September that I expected this to happen, but not nearly this quickly and not nearly this impressively. The Chinese cryptographers deserve a lot of credit for their work, and we need to get to work replacing SHA.

Summary of the paper (the full paper isn’t generally available yet):
<http://theory.csail.mit.edu/~yiqun/shanote.pdf>

My original essay:
<http://www.schneier.com/essay-074.html>

NIST standard for SHA-224, SHA-256, SHA-384, and SHA-512:
<http://csrc.nist.gov/CryptoToolkit/tkhash.html>

My second-preimages paper:
<http://eprint.iacr.org/2004/304>

More hash function news:
Two X.509 certificates with identical MD5 hashes:
<http://www.win.tue.nl/~bdeweger/CollidingCertificates/>
Faster MD5 collisions (eight hours on a 1.6 GHz computer):
<http://cryptography.hyperlink.cz/md5/MD5_collisions.pdf>


The Failure of Two-Factor Authentication

Two-factor authentication isn’t our savior. It won’t defend against phishing. It’s not going to prevent identity theft. It’s not going to secure online accounts from fraudulent transactions. It solves the security problems we had ten years ago, not the security problems we have today.

The problem with passwords is that they’re too easy to lose control of. People give them to other people. People write them down, and other people read them. People send them in e-mail, and that e-mail is intercepted. People use them to log into remote servers, and their communications are eavesdropped on. They’re also easy to guess. And once any of that happens, the password no longer works as an authentication token because you can’t be sure who is typing that password in.

Two-factor authentication mitigates this problem. If your password includes a number that changes every minute, or a unique reply to a random challenge, then it’s harder for someone else to intercept. You can’t write down the ever-changing part. An intercepted password won’t be good the next time it’s needed. And a two-factor password is harder to guess. Sure, someone can always give his password and token to his secretary, but no solution is foolproof.
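
For the curious, here is a minimal sketch of how such an ever-changing code can be derived: a shared secret combined with the current minute. This illustrates the general idea and is not any particular vendor’s algorithm:

import hmac, hashlib, struct, time

def token_code(secret):
    # Both sides compute a six-digit code from the current minute;
    # an intercepted code stops working when the minute rolls over.
    minute = int(time.time() // 60)
    mac = hmac.new(secret, struct.pack(">Q", minute), hashlib.sha1).digest()
    return "%06d" % (int.from_bytes(mac[:4], "big") % 1000000)

print(token_code(b"shared-secret"))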

These tokens have been around for at least two decades, but it’s only recently that they have gotten mass-market attention. AOL is rolling them out. Some banks are issuing them to customers, and even more are talking about doing it. It seems that corporations are finally waking up to the fact that passwords don’t provide adequate security, and are hoping that two-factor authentication will fix their problems.

Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses.

Here are two new active attacks we’re starting to see:

– Man-in-the-Middle Attack. An attacker puts up a fake bank website and entices a user to that website. The user types in his password, and the attacker in turn uses it to access the bank’s real website. Done right, the user will never realize that he isn’t at the bank’s website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user’s banking transactions while making his own transactions at the same time.

– Trojan Attack. The attacker gets a Trojan installed on the user’s computer. When the user logs into his bank’s website, the attacker piggybacks on that session via the Trojan to make any fraudulent transactions he wants.

See how two-factor authentication doesn’t solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.

The real threat is fraud due to impersonation, and the tactics of impersonation will change in response to the defenses. Two-factor authentication will force criminals to modify their tactics, that’s all.

Recently I’ve seen examples of two-factor authentication using two different communications paths: call it “two-channel authentication.” One bank sends a challenge to the user’s cell phone via SMS and expects a reply via SMS. If you assume that all your customers have cell phones, then this results in a two-factor authentication process without extra hardware. And even better, the second authentication piece goes over a different communications channel than the first; eavesdropping is much, much harder.

But in this new world of active attacks, no one cares. An attacker using a man-in-the-middle attack is happy to have the user deal with the SMS portion of the log-in, since he can’t do it himself. And a Trojan attacker doesn’t care, because he’s relying on the user to log in anyway.

Two-factor authentication is not useless. It works for local log-in, and it works within some corporate networks. But it won’t work for remote authentication over the Internet. I predict that banks and other financial institutions will spend millions outfitting their users with two-factor authentication tokens. Early adopters of this technology may very well experience a significant drop in fraud for a while as attackers move to easier targets, but in the end there will be a negligible drop in the amount of fraud and identity theft.

This essay will appear in the April issue of Communications of the ACM.


Crypto-Gram Reprints

Crypto-Gram is currently in its eighth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.

“I am not a Terrorist” Cards:
<http://www.schneier.com/crypto-gram-0403.html#10>

The Security Risks of Centralization:
<http://www.schneier.com/crypto-gram-0403.html#11>

Practical Cryptography:
<http://www.schneier.com/crypto-gram-0303.html#1>

SSL flaw:
<http://www.schneier.com/crypto-gram-0303.html#3>

SSL patent infringement:
<http://www.schneier.com/crypto-gram-0303.html#8>

SNMP vulnerabilities:
<http://www.schneier.com/crypto-gram-0203.html#1>

Bernstein’s factoring breakthrough?
<http://www.schneier.com/crypto-gram-0203.html#6>

Richard Clarke on 9/11’s Lessons:
<http://www.schneier.com/crypto-gram-0203.html#7>

Security patch treadmill:
<http://www.schneier.com/crypto-gram-0103.html#1>

Insurance and the future of network security:
<http://www.schneier.com/crypto-gram-0103.html#3>

The “death” of IDSs:
<http://www.schneier.com/crypto-gram-0103.html#9>

802.11 security:
<http://www.schneier.com/crypto-gram-0103.html#10>

Software complexity and security:
<http://www.schneier.com/…>

Why the worst cryptography is in systems that pass initial cryptanalysis:
<http://www.schneier.com/crypto-gram-9903.html#initial>


ChoicePoint

The ChoicePoint fiasco has been news for over a month now, and there are only a few things I can add. For those who haven’t been following along, ChoicePoint mistakenly sold personal credit reports for about 145,000 Americans to criminals.

This story would have never been made public if it were not for SB 1386, a California law requiring companies to notify California residents if any of a specific set of personal information is leaked.

ChoicePoint’s behavior is a textbook example of how to be a bad corporate citizen. The information leakage occurred in October, but the company didn’t tell any victims until February. At first, ChoicePoint notified 30,000 Californians and said that it would not notify anyone who lived outside California (since the law didn’t require it). Only after public outcry did it announce that it would notify everyone affected.

But actually, according to ChoicePoint’s 8-K SEC filing, there may very well be more than 145,000 victims. “These numbers were determined by conducting searches of our databases that matched searches conducted by customers who we believe may have had unauthorized access to our information products on or after July 1, 2003, the effective date of the California notification law.”

The clear moral here is that first, SB 1386 needs to be a national law, since without it ChoicePoint would have covered up its mistakes forever. And second, the national law needs to force companies to disclose these sorts of privacy breaches immediately, and not allow them to hide for four months behind the “ongoing FBI investigation” shield.

More is required. Compare ChoicePoint’s public marketing slogans with its private reality.

From “Identity Theft Puts Pressure on Data Sellers,” by Evan Perez, in the 18 Feb 2005 Wall Street Journal: “The current investigation involving ChoicePoint began in October when the company found the 50 accounts it said were fraudulent. According to the company and police, criminals opened the accounts, posing as businesses seeking information on potential employees and customers. They paid fees of $100 to $200, and provided fake documentation, gaining access to a trove of personal data including addresses, phone numbers, and social security numbers.”

From ChoicePoint Chairman and CEO Derek V. Smith: “ChoicePoint’s core competency is verifying and authenticating individuals and their credentials.”

The reason there is a difference is purely economic. Identity theft is the fastest-growing crime in the U.S., and an enormous problem elsewhere in the world. It’s expensive—both in money and time—to the victims. And there’s not much people can do to stop it, as much of their personal identifying information is not under their control: it’s in the computers of companies like ChoicePoint.

ChoicePoint protects its data, but only to the extent that it values it. The hundreds of millions of people in ChoicePoint’s databases are not ChoicePoint’s customers. They have no power to switch credit agencies. They have no economic pressure that they can bring to bear on the problem. Maybe they should rename the company “NoChoicePoint.”

The upshot of this is that ChoicePoint doesn’t bear the costs of identity theft, so ChoicePoint doesn’t take those costs into account when figuring out how much money to spend on data security. In economic terms, it’s an “externality.”

The point of regulation is to make externalities internal. SB 1386 did that to some extent, since ChoicePoint must now figure the cost of public humiliation into its decisions about how much money to spend on security. But the actual cost of ChoicePoint’s security failure is much, much greater.

Until ChoicePoint feels those costs—whether through regulation or liability—it has no economic incentive to reduce them. Capitalism works, not through corporate charity, but through the free market. I see no other way of solving the problem.

News stories:
<http://www.msnbc.msn.com/id/6969799/>
<http://www.epic.org/privacy/choicepoint/>
<http://www.latimes.com/business/…>
<http://wired.com/news/privacy/0,1848,66632,00.html>
<http://searchsecurity.techtarget.com/…>

ChoicePoint’s 8-K filing:
<http://phx.corporate-ir.net/phoenix.zhtml?…>

Interesting paper from EPIC that offers proposals for privacy reform in the wake of all the recent privacy breaches: ChoicePoint, Lexis/Nexis, Bank of America, DSW, etc.
<http://papers.ssrn.com/sol3/papers.cfm?…>


News

Microsoft will not patch pirated versions of Windows:
<http://www.msnbc.msn.com/id/6868504/>
I’ve written about this security risk before:
<http://www.schneier.com/crypto-gram-0406.html#4>

False arrest: a security risk of frequent shopper cards:
<http://www.komotv.com/stories/32785.htm>
<http://seattletimes.nwsource.com/html/localnews/…>
<http://www.komotv.com/news/story.asp?ID=35019>

Hacking a German bicycle rental system:
<http://www.ccc.de/hackabike/>

Excellent article on high-tech passports:
<http://www.economist.com/science/displaystory.cfm?…>

Airport screeners cheat to pass tests:
<http://sfgate.com/cgi-bin/article.cgi?file=/c/a/…>

A very nicely written analysis of the recent DMCA-related court decisions in the Lexmark and Chamberlain cases.
<http://www.gesmer.com/newsletter/winter2005.pdf>

An essay that argues that regulation, not liability, is the correct way to solve the underlying economic problems, using the analogy of high-pressure steam engines in the 1800s.
<http://www.safeware-eng.com/index.php/publications/…>

A Pennsylvania judge is caught trying to sneak a knife aboard an aircraft:
<http://www.usatoday.com/travel/news/…>
There are two points worth making here. One: ridiculous rules have a way of turning people into criminals. And two: this is an example of a security failure, not a security success. Security systems fail in one of two ways. They can fail to stop the bad guy, and they can mistakenly stop the good guy. The TSA likes to measure its success by looking at the forbidden items they have prevented from being carried onto aircraft, but that’s wrong. Every time the TSA takes a pocketknife from an innocent person, that’s a security failure. It’s a false alarm. The system has prevented access where no prevention was required. This, coupled with the widespread belief that the bad guys will find a way around the system, demonstrates what a colossal waste of money it is.

Yet another article about sensitive information being found on used hard drives:
<http://www.timesonline.co.uk/article/…>

Interesting links on physical locks, including an interesting break of the Winkhaus Blue Chip lock:
<http://connectmedia.waag.org/toool/21c3.wmv>

A very impressive analysis of the Texas Instruments RFID technology used in a variety of security systems, such as vehicle immobilizers and ExxonMobil’s SpeedPass system. Mistake number 1: The cryptographic algorithm is a proprietary 40-bit cipher.
<http://rfidanalysis.org/>

RFID Washer: wishful thinking.
<http://www.rfidwasher.com/>

Garbage cans that spy on you:
<http://www.guardian.co.uk/online/story/…>
I call this kind of thing “embedded government”: hardware and/or software technology put inside of a device to make sure that we conform to the law. Of course there are security risks as well.

The U.S. government will ban cigarette lighters on airplanes:
<http://www.washingtonpost.com/wp-dyn/articles/…>

The Government Accountability Office (GAO) released a report titled “Aviation Security: Measures for Testing the Impact of Using Commercial Data for the Secure Flight Program.”
<http://www.gao.gov/cgi-bin/getrpt?GAO-05-324>

Very interesting research on remote device fingerprinting. Basically, the paper shows how you can identify individual computer devices from across the Internet by their clock skews.
<http://www.cse.ucsd.edu/users/tkohno/papers/PDF/>

Melbourne’s water supply can be controlled over the Internet:
<http://theage.com.au/articles/2005/03/07/…>

Speech-activated password resets:
<http://www.microsoft.com/speech/solutions/password/…>
The real beauty of this system is that it doesn’t require a customer support person to deal with the user. I’ve seen statistics showing that 25% of all help desk calls are from people who forgot their passwords; the calls cost something like $20 each and take an average of 10 minutes. A system like this provides good security and saves money. It’s not perfect, but neither are passwords.

Non-military satellite data made secret for “security” reasons, really for no good reason:
<http://spaceflightnow.com/news/n0503/02observing/>

Another “movie-plot” security threat: unmanned aircraft:
<http://jef.raskincenter.org/unpublished/…>

Good technical paper on bot networks: how they work, who uses them and how, and how to track them:
<http://www.honeynet.org/papers/bots/>

This is a fascinating—and detailed—analysis of what would be required to destroy the earth: materials, methods, feasibility, schedule. While the DHS might view this as a terrorist manual and get it removed from the Internet, the good news is that obliterating the planet isn’t an easy task.
<http://ned.ucam.org/~sdh31/misc/destroy.html>


Unicode URL Hack

A long time ago I wrote about the security risks of Unicode. This is an example of the problem.

Here’s a demo: it’s a Web page that appears to be www.paypal.com but is not PayPal. Everything from the address bar to the hover-over status on the link says www.paypal.com.

It works by substituting a Unicode character for the second “a” in PayPal. That Unicode character happens to look like an English “a,” but it’s not an “a.” The attack works even under SSL.

Here’s the source code of the link: <http://www.pаypal.com/>
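
You can see the trick programmatically. A quick Python check (my illustration, not from the demo site): the two strings look the same but compare unequal, and the spoofed name maps to an entirely different punycode domain:

import unicodedata

real = "www.paypal.com"
spoof = "www.p\u0430ypal.com"          # the link above
print(real == spoof)                   # False
print(unicodedata.name("\u0430"))      # CYRILLIC SMALL LETTER A
print(spoof.encode("idna"))            # b'www.xn--pypal-4ve.com'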

The Unicode community is working on fixing these problems. They have a draft technical report that they’re looking for comments on. A solution will take some concerted efforts, since there are many different kinds of issues involved. (In some ways, the hack described here is one of the simpler cases.)

Demo website:
<http://www.shmoo.com/idn/>

More information:
<http://secunia.com/multiple_browsers_idn_spoofing_test/>
<http://www.boingboing.net/2005/02/06/…>

My original essay:
<http://www.schneier.com/crypto-gram-0007.html#9>

Draft Unicode report:
<http://unicode.org/reports/tr36/>


GhostBuster

Microsoft Research has developed something called GhostBuster, a prototype program that detects arbitrary persistent and stealthy software, such as rootkits, Trojans, and software keyloggers. It’s a really elegant idea, based on a simple observation: the rootkit must exist on disk to be persistent, but must lie to programs running within the infected OS in order to hide.

Here’s how it works: The user has the GhostBuster program on a CD. He sticks the CD in the drive, and from within the (possibly corrupted) OS, the checker program runs: stopping all other user programs, flushing the caches, and then doing a complete checksum of all files on the disk and a scan of any registry keys that could automatically start programs at boot, writing out the results to a file on the hard drive.

Then the user is instructed to press the reset button, the CD boots its own OS, and the scan is repeated. Any differences indicate a rootkit or other stealth software, without the need for knowing what particular rootkits are or the proper checksums for the programs installed on disk.

Simple. Clever. Elegant.
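
GhostBuster itself isn’t available, but the cross-view diff is easy to sketch. Here is a minimal illustration in Python (my sketch, not Microsoft’s code): run scan() once from inside the suspect OS and once after booting clean media, then compare the two manifests:

import hashlib, os

def scan(root):
    # Hash every file reachable from root. A rootkit that hides or
    # alters a file shows up as a difference between the two scans.
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    manifest[path] = hashlib.sha1(f.read()).hexdigest()
            except OSError:
                pass  # unreadable file; a real tool would record this too
    return manifest

def cross_view_diff(inside_view, clean_view):
    hidden = clean_view.keys() - inside_view.keys()   # files the OS lied about
    changed = {p for p in inside_view.keys() & clean_view.keys()
               if inside_view[p] != clean_view[p]}
    return hidden, changed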

In order to fool GhostBuster, the rootkit must 1) detect that such a checking program is running and either not lie to it or change the output as it’s written to disk (in the limit this becomes the halting problem for the rootkit designer), 2) integrate into the BIOS rather than the OS (tricky, platform specific, and not always possible), or 3) give up on either being persistent or stealthy. Thus this doesn’t eliminate rootkits entirely, but is a pretty mortal blow to persistent rootkits.

Of course, the concept could be adapted to any other operating system as well.

This is a great idea, but there’s a huge problem. GhostBuster is only a research prototype, so you can’t get a copy. And, even worse, Microsoft has no plans to turn it into a commercial tool.

This is too good an idea to abandon. Microsoft, if you’re listening, you should release this tool to the world. Make it public domain. Make it open source, even. It’s a great idea, and you deserve credit for coming up with it.

Any other security companies listening? Make and sell one of these. Anyone out there looking for an open source project? Here’s a really good one.

Note: I have no idea if Microsoft patented this idea. If they did and they don’t release it, shame on them. If they didn’t, good for them.

Technical Report:
<http://research.microsoft.com/research/pubs/…>


Counterpane News

Schneier is speaking at the Computers, Freedom, and Privacy Conference in Seattle on April 13th, on a panel about RFID passports:
<http://www.cfp2005.org/>


Security Notes from All Over: Identity Theft out of Golf Lockers

When someone goes golfing in Japan, he’s given a locker in which to store his valuables. Generally, and at the golf course in question, these are electronic combination locks. The user selects a code himself and locks his valuables. Of course, there’s a back door—a literal one—to the lockers, in case someone forgets his unlock code. Furthermore, the back door allows the administrator of these lockers to read all the codes to all the lockers.

Here’s the scam: A group of thieves worked in conjunction with the locker administrator to open the lockers, copy the golfers’ debit cards, and replace them in their wallets and in their lockers before they were done golfing. In many cases, the golfers used the same code to lock their locker as their bank card PIN, so the thieves got those as well. Then the thieves stole a lot of money from multiple ATMs.

Several factors make this scam even worse. One, unlike in the U.S., ATM cards in Japan have no withdrawal limit: you can literally withdraw everything in the account. Two, the victims don’t know anything is wrong until they next use their cards and find they have no money. Three, the victims, since they play golf at these expensive courses, are usually very rich. And four, unlike banks in the United States, Japanese banks do not cover losses due to theft.

Link:
<http://www.asahi.com/english/opinion/…>


The Doghouse: Xavety

It’s been a long time since I doghoused any encryption products. Xavety’s CHADSEA (Chaotic Digital Signature, Encryption, and Authentication) isn’t as funny as some of the others, but it’s no less deserving.

Read their “Testing the Encryption Algorithm” section: “In order to test the reliability and statistical independency of the encryption, several different tests were performed, like signal-noise tests, the ENT test suite (Walker, 1998), and the NIST Statistical Test Suite (Ruhkin et al., 2001). These tests are quite comprehensive, so the description of these tests are subject of separate publications, which are also available on this website. Please, see the respective links.”

Yep. All they did to show that their algorithm was secure was a bunch of statistical tests. Snake oil for sure.
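
It’s worth spelling out why statistical tests prove nothing about security. Here is an illustration in Python: a keystream from the standard library’s Mersenne Twister generator looks statistically immaculate, yet the generator is completely predictable once an attacker has seen 624 consecutive outputs:

import collections, random

random.seed(42)  # the "key"
keystream = bytes(random.getrandbits(8) for _ in range(100000))

mean = sum(keystream) / len(keystream)
print("byte mean: %.2f (ideal 127.5)" % mean)
print("distinct byte values:", len(collections.Counter(keystream)), "/ 256")
# Both numbers look fine -- and a stream cipher built on this
# generator would still be trivially breakable.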

The algorithm:
<http://www.xavety.com/Method.htm>

Snake oil warning signs:
<http://www.schneier.com/crypto-gram-9902.html#snakeoil>


Sensitive Security Information (SSI)

For decades, the U.S. government has had systems in place for dealing with military secrets. Information is classified as either Confidential, Secret, Top Secret, or one of many “compartments” of information above Top Secret. Procedures for dealing with classified information were rigid: classified topics could not be discussed on unencrypted phone lines, classified information could not be processed on insecure computers, classified documents had to be stored in locked safes, and so on. The procedures were extreme because the assumed adversary was highly motivated, well-funded, and technically adept: the Soviet Union.

You might argue with the government’s decision to classify this and not that, or the length of time information remained classified, but if you assume the information needed to remain secret, then the procedures made sense.

In 1993, the U.S. government created a new classification of information—Sensitive Security Information. The information under this category, as defined by a D.C. court, was limited to information related to the safety of air passengers. This was greatly expanded in 2002, when Congress deleted two words, “air” and “passengers,” and changed “safety” to “security.” Currently, there’s a lot of information covered under this umbrella.

Again, you might argue with what the government chooses to place under this classification, and whether this classification is more designed to hide the inner workings of government from the public, but that’s a separate discussion. You can’t discuss the mechanics of a lock without discussing what the lock is supposed to protect, but you can discuss the lock and what it protects without discussing whether protecting it is a good idea. SSI is designed to protect information against a terrorist threat. Assume for a moment that there is information that needs to be protected, and that terrorists are who it needs to be protected from.

The rules for SSI are much more relaxed than the rules for traditional classified information. Before someone can have access to classified information, he must get a government clearance. Before someone can have access to SSI, he simply must sign an NDA. If someone discloses classified information, he faces criminal penalties. If someone discloses SSI, he faces civil penalties.

SSI can be sent unencrypted in e-mail; a simple password-protected attachment is enough. A person can take SSI home with him, read it on an airplane, and talk about it in public places. People entrusted with SSI shouldn’t disclose it to those unauthorized to know it, but it’s really up to the individual to make sure that doesn’t happen. It’s really more like confidential corporate information than government military secrets.

Of course, SSI is easier to steal than traditional classified information. That’s the security trade-off: the threat is less, so the security countermeasures are less.

The U.S. government really had no choice but to establish this classification level, given the kind of information it needed to work with. For example, the terrorist “watch” list is SSI. If the list falls into the wrong hands, it would be bad for national security. But think about the number of people who need access to the list. Every airline needs a copy, so they can determine if any of their passengers are on the list. That’s not just domestic airlines, but foreign airlines as well—including foreign airlines that may not agree with American foreign policy. Police departments, both within this country and abroad, need access to the list. My guess is that more than 10,000 people have access to this list, and there’s no possible way to give them all security clearances. Either the U.S. government relaxes the rules about who can have access to the list, or the list doesn’t get used in the way the government wants.

On the other hand, the threat is completely different. Military classification levels and procedures were developed during the Cold War, and reflected the Soviet threat. The terrorist adversary is much more diffuse, much less well-funded, much less technologically advanced. SSI rules really make more sense in dealing with this kind of adversary than the military rules.

I’m impressed with the U.S. government SSI rules. You can always argue about whether a particular piece of information needs to be kept secret, and how classifications like SSI can be used to conduct government in secret. Just about everything that the government keeps secret should not be kept secret, and openness actually improves security in most cases. But if you take secrecy as an assumption, SSI defines a reasonable set of secrecy rules against a new threat.

Background on SSI:
<http://www.cjog.net/…>

TSA’s Regulation on the Protection of SSI:
<http://www.fas.org/sgp/news/2004/05/fr051804.html>

Controversies surrounding SSI:
<http://www.fas.org/sgp/crs/RS21727.pdf>

My essay explaining why secrecy is often bad for security:
<http://www.schneier.com/crypto-gram-0205.html#1>

The Director of the National Security Archive at George Washington University on the problems of too much secrecy:
<http://www.gwu.edu/~nsarchiv/news/20050302/index.htm>


Comments from Readers

From: “Marc A.” <marcusi mac.com>
Subject: Priority in GSM

GPS has become a standard timing mechanism for SDH/SONET transport networks. Completely shutting down the GPS system would force the multiplexers in such networks to revert to an internal clock, with accuracy figures that would cause them to be out of sync with the rest of the network within a day or so. This would effectively deny the operation of major telecommunication links (+150 Mb/s) for ALL users—including emergency services—and would, therefore, factor into the decision of any informed emergency response group. Switching the Selective Availability mechanism back on and introducing jitter for non-S/A receivers is, therefore, the more likely option.

Regarding GSM, the system has for years had a call setup and handoff priority mechanism (the same capability is exploited during a 112 emergency call) which would give preference to selected users. This would not only mean the possibility of prioritized availability, but of completely denying access to all but the selected emergency services, police, ambulances etc.

The 3GPP has a plan for implementing such a mechanism in WCDMA/UMTS networks (Multi-Level Precedence and Pre-emption—MLPP), but also includes a facility for preemption that would kick a user off a current call/data connection to make network resources available for a priority user.

With reference to the Spanish rail bombings, while I agree that victims and the general public have a legitimate need for telecommunications services in times of crisis, denying them that capability does not technically have to mean disconnecting the “good guys” at the same time.

Of course, implementing this would simply be a bad idea. It would likely cause additional trauma for the public, while doing little to affect well-planned terrorists who, in anticipation of this response, will make contingencies anyway. However, my point is just one of technical clarification—the GSM system (which Spain primarily uses), well-planned and engineered as it is, offers selective availability as a feature.

From: Matthew Rubenstein <email mattruby.com>
Subject: Controlling Personal Data

The problem of control of one’s data, once it leaves one’s physical control, is central to our Information Age. People keep stealing each other’s data and, inevitably, sharing it with someone else who’s also unauthorized, and who poses the actual threat. This is true of media data, like music albums, and personal data, like recorded identity info.

The legal problem, who has the right to copy data, is addressed by copyright law, and is now enforced with unprecedented aggression by media companies and the governments they operate under, because they recognize that copyright is their main tool for retaining control, and that it addresses the heart of their problem: who can copy the data that is their entire product and business value. Personal data is likewise copyable only by those with the right to do so, a right deriving from the source of the data: the individual. But we haven’t got the same legal documentation of our rights, the same organization to protect them, or the same legal zeal to enforce them, as the corporate data owners do.

The center of the entire problem is exactly there: getting our legal system to protect our copyrights on our personal data with at least the effectiveness with which it protects corporate data copyrights. It should be much easier, as the personal copyright violations are much less frequent, much less valuable per violation, and perpetrated by a much smaller group in a more centralized system, much more like the traditional copyright violations before peer-to-peer file sharing. We should get more protection for less effort, and that protection is more valuable to us in protecting our lives than it is in protecting some transient business models.

Our copyright defaults to a single copy of the personal data transmitted to the recipient. That copy can be recopied and stored only for the duration and scope of the transaction in which it was transmitted. After that transaction is complete, it cannot be copied or retained; it cannot be copied to any other recipient—even within the same organization—unless strictly required to complete the original transaction. It cannot be stored beyond those restrictions, either. The scope of recipients and duration of storage implicit in the definition of that transaction can be requested by the sender of the copy, prior to the transfer and its limited copyright license. Any distribution beyond those limits requires the express permission of the sender, and is non-transferable beyond that new transaction—with the same restrictions as the original transaction, on the new scope and duration. Finite degrees of separation from the owner of the copyright can be specified in that original transaction, or requested subsequently, or an unlimited license can be granted, but the default is the scope and duration only of the completion of the original transaction.

We own this data; it has value when shared, and we are giving away too much value when we share it without constraints—not to mention the vast damage when further sharing, expected or otherwise, is widespread.

The basic rules are in place to protect our personal data with copyright law. European privacy laws already demonstrate how business can thrive without unfettered access to everyone’s personal data. The kinds of identity theft we’re getting, like the T-Mobile crack, as well as the ChoicePoint and SAIC cracks reported this month, are surely contributing to a global groundswell for demanding rights protection. Combined with spam, phishing and other identity thefts, there’s certainly huge popular support for a mandate for government protection that would strike at the heart of all these attacks. Copyright is already a right—we need the government we pay for to start protecting it, and us.

From: Jake Appelbaum <jacob appelbaum.net>
Subject: T-Mobile Hack

Late last year I was in contact with T-Mobile’s CSO because of a major flaw in their voicemail platform. As of early February 2005, it was still vulnerable. Using my own custom PBX system I am able to dump into any of their voicemail boxes by *default*. When I explained this to their CSO, he said that they were aware of this issue, they did not plan to fix it within any reasonable time frame and then he offered me a job. It was half amusing to get a job offer that sounded like a nightmare, but as a customer I was horrified.

So I did a survey of more than 30 cellular and landline companies in the USA and Canada. The results: T-Mobile was the weakest in voicemail security against this type of exploit. And they know it. And they don’t care.

The attack is quite simple, and it’s both a denial of service to the user (they can’t check their voicemail while you’re using it) and a major privacy violation.

Anyway, forge the ANI of your target, call their open tree access number, and *poof*: access. You’re the user; you can change settings, send messages as other users, and basically control their account. Absolutely terrible.

The CSO seemed to think that privacy and security do not go hand in hand; he seemed to write the entire episode off as an issue of mere “privacy only.”

From: charles werbick <heihosha yahoo.com>
Subject: Death of Carnivore

The FBI had another incentive besides cost-effectiveness to kill Carnivore in favor of 3rd party software solutions for Internet surveillance.

Carnivore was subject to congressional oversight. The FBI was required to report to Congress quarterly as to the extent and nature of their net-spying activities. Now that Carnivore is dead, the surveillance continues without oversight. All they really got rid of was the prying eyes of our representatives in Washington.

From: Michael Hammer <MHammer ag.com>
Subject: Curse of the Secret Question

Your hope that things would be more difficult if you forgot the answer to the secret question is a false hope. I recently had this experience when I received my new AMEX card. When I went to the website to activate the card, it asked “the secret question” (mother’s birthday). Having put in junk before and not remembering what I put in, I ended up having to call the 800 number. The stupid voice tree wanted me to put in the card number and then my mother’s birthday. I eventually got a live operator.

Here’s where it gets interesting. She wanted my address for verification. Then she wanted the 4-digit number on the card (just above the CC#). That was it.

So if someone had stolen the card they would have been able to get it activated simply by having the card and knowing my address (which is on the envelope/card carrier).

The process is significantly less secure when someone does not know the answer to the secret question.

From: Anonymous
Subject: Response to Bank Sued for Unauthorized Transaction

Banks cannot be responsible for customers mishandling their login information, any more than IT staff can be responsible for users giving out their corporate login information. We can look for fraud based on many things, but short of personally verifying every transaction I don’t see a reasonable solution to this problem. There are technologies in place to allow customers to verify electronically the transactions that are going to post to their accounts, but the problem with these technologies is that they require customer interaction: if customers don’t stop a transaction, it goes through; if they don’t log in that day or can’t get to a computer, the transactions go through regardless.

A transaction coming from someone’s online account looks legitimate to the bank systems if it is accompanied by valid login credentials. The good news is that these transactions can be traced to IP addresses, which can assist in prosecution. Big banks like BOA are notorious for being unhelpful when a consumer has fraud on his account, but not all banks operate this way. The real solution is for banks to get more involved in educating their customers on security, even if it means holding classes and maybe even linking to a certain security expert’s website from the bank website.

From: “Steven Shaer” <steves videosave.net>
Subject: Secure Flight

The point of Secure Flight is to protect the airline industry from another disaster that will cripple it and thus weaken the nation. If we had another 9/11 mass hijack and ensuing drop in airline traffic, the already fragile airline industry would crash and cease to exist as we know it—not a good thing. Moving the terrorists to shopping malls accomplishes this.


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.

Sidebar photo of Bruce Schneier by Joe MacInnis.