Crypto-Gram

March 15, 2004

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com.

Crypto-Gram is also available as an RSS feed: <http://www.schneier.com/crypto-gram-rss.xml>


In this issue:
     Microsoft Source Code Leak
     A Social Engineering Virus
     News
     Counterpane News
     Port Knocking
     Crypto-Gram Reprints
     Security Notes from All Over: USPTO
     Password Safe Version 2.0
     The Doghouse: Symbiot Security
     “I am Not a Terrorist” Cards
     Security Risks of Centralization

Microsoft Source Code Leak

On 13 February, it became known that Windows 2000 and Windows NT source code was circulating on the Internet. Microsoft soon confirmed the leak, saying that “incomplete portions of Windows 2000 and NT 4.0 source code was illegally made available on the Internet.” Microsoft downplayed the loss, and said it represented approximately 15% of Windows source code. The leak was soon traced to a Microsoft partner, Mainsoft. The Windows NT code that was leaked consisted of all of NT 4.0 Service Pack 3—more than 27,000 files. The Windows 2000 code only contained select portions of the source code, but did include the PKI module.

I am stunned that Microsoft didn’t immediately know exactly who leaked the code. There are easy techniques to give each version of the Microsoft source code files a unique watermark, such that any copy can be traced back to its source. The fact that they didn’t bother doing this says a lot about their own internal security.
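Such a watermark need not be elaborate. Here is a minimal Python sketch of the idea; the token format, registry, partner names, and key below are all invented for illustration, and a real scheme would hide the mark far more subtly (in whitespace, comment wording, or build ordering) so it could not simply be stripped:

```python
import hashlib

# Registry mapping watermark tokens to the partner each copy went to.
registry = {}

def watermark_copy(source_text, partner, secret):
    """Stamp a partner-specific token derived from a secret key."""
    token = hashlib.sha256(secret + partner.encode()).hexdigest()[:16]
    registry[token] = partner
    return "/* build-id: %s */\n" % token + source_text

def trace_leak(leaked_text):
    """Recover the responsible partner from a leaked copy, if marked."""
    for token, partner in registry.items():
        if token in leaked_text:
            return partner
    return None

secret = b"internal-release-key"  # invented for illustration
copy_a = watermark_copy("int main(void) { return 0; }", "PartnerA", secret)
copy_b = watermark_copy("int main(void) { return 0; }", "PartnerB", secret)
print(trace_leak(copy_a))  # PartnerA
```

With a registry like this, identifying the source of a leak is a lookup, not an investigation.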

It is interesting to speculate about who might make use of the code. The obvious group is hackers, who could pore through the code looking for vulnerabilities to exploit. These could be hackers working on their own, in the employ of spammers, or maybe as part of organized crime. I believe that there will be some of this, but not that much. It’s not as if Microsoft vulnerabilities are so hard to find that people need the source code in order to find them.

Another possible group is companies writing compatible software. I doubt there’s much use here. It’s just not worth the money for a team of programmers to pore through the source code looking for hidden system calls and programming tricks, especially since there’s no guarantee that those tricks will still work in the next revision of the software.

A third group is attorneys looking for lawsuits. It has long been rumored that Windows contains shortcuts that only Microsoft software has access to, and that are denied to competing products. It might be worth it for an attorney to hire a team of programmers to look for a smoking gun: code that specifically helps Microsoft Office or hinders StarOffice, for example. But even so, my guess is that it’s too risky a gamble.

National intelligence organizations are a fourth group that might be interested in the code. It’s certainly possible, but I believe that any intelligence organization worth its salt that wants a copy of the code already has it.

Microsoft’s reaction demonstrates that they’ve thought about this, too. According to an Information Week article, “Microsoft said Wednesday that it has sent warning letters to people who’ve illegally downloaded Windows source code.” If you only think about the hacker threat, this is an extraordinarily dumb move. The code is already out there. It’s public. There’s no taking it back. Any bad guys who want the code now have it, and won’t be deterred by any lawyer letter. The only thing Microsoft’s lawyers are doing is preventing any good guys from looking at the code, and maybe finding vulnerabilities that Microsoft can then fix.

But if you realize that Microsoft’s primary fear is probably other attorneys, then their move makes sense. They want to limit the number of good guys that can access the code, because they’re afraid of what might be found.

A company that truly understands data security would respond by admitting and trying to fix the security breach that caused the leak, and by proactively poring over the released code to quickly patch as many of the inevitable bugs as possible. They would realize that the hackers have the code and might use it, and not prevent the good guys from helping defend themselves.

I even think they would have gotten better PR by doing that than they did by calling in the lawyers.

<http://www.informationweek.com/story/…>
<http://www.winnetmag.com/windowspaulthurrott/…>
<http://www.cnn.com/2004/TECH/internet/02/13/…>
<http://news.com.com/2100-7349_3-5158496.html>

Report that Mainsoft is the source of the leak:
<http://www.eweek.com/article2/0,4149,1526831,00.asp>


A Social Engineering Virus

Years ago I talked about the rise of semantic attacks: computer attacks that target the user instead of the computer software. One obvious example is malicious e-mail that tries to entice the user to click on the attachment. They’ve been around for a while, and they continue to get better. This is one I recently received. (The attachment is the Bagle.J virus.) Although it still has some of the grammatical errors that seem to be the hallmark of this sort of thing—are any virus spreaders competent English writers?—it’s very convincing:

Dear user, the management of DOMAIN.COM mailing system wants to let you know that,

Some of our clients complained about the spam (negative e-mail content) outgoing from your e-mail account. Probably, you have been infected by a proxy-relay Trojan server. In order to keep your computer safe, follow the instructions.

Please, read the attach for further details.

Attached file protected with the password for security reasons. Password is 64003.

The Management,
The DOMAIN.COM team
http://www.DOMAIN.COM

[Attachment called “message.zip”]

My essay on semantic attacks:
<http://www.schneier.com/crypto-gram-0010.html#1>


News

Very good article about the mathematics behind Rijndael, the Advanced Encryption Standard.
<http://research.sun.com/people/slandau/maa1.pdf>

Here’s an obvious twist on the “bad guys smuggle a bomb on an airplane” story. The bad guys smuggle the bomb on in parts, one at a time, through security, and then assemble the device on board. I am reminded of the MIT group that managed to win millions at casinos by counting cards in blackjack. The casinos knew how to spot card counters, but the group divided the tasks up among several people, such that none of them individually was suspicious. This tactic of distributing an attack works in several different security domains, and can be very difficult to prevent.
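The evasion works because the detector models individuals, not coordinated groups. A toy model in Python, with invented numbers, makes the point:

```python
# Toy model of a threshold-based detector: it flags any single actor whose
# activity exceeds a per-person limit, but has no view of groups.
# The threshold and amounts are illustrative assumptions.

DETECTION_THRESHOLD = 100  # units of suspicious activity per actor

def flagged(activities):
    """Return the actors an individual-threshold detector would flag."""
    return [actor for actor, amount in activities.items()
            if amount > DETECTION_THRESHOLD]

# One attacker doing everything gets caught...
print(flagged({"alice": 300}))                   # ['alice']
# ...but the same total, split three ways, passes unnoticed.
print(flagged({"a": 100, "b": 100, "c": 100}))   # []
```

Defending against this requires correlating activity across actors, which is far harder than checking each one against a threshold.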
<http://observer.guardian.co.uk/international/story/…>

Honeypots in wireless networks.
<http://www.securityfocus.com/infocus/1761>

Exploit code for a recent ASN.1 vulnerability is available:
<http://searchsecurity.techtarget.com/…>

Ben Cohen of Ben & Jerry’s Ice Cream has launched “The Computer Ate My Vote” campaign, to lobby for increased security in electronic voting machines.
<http://www.wired.com/news/business/0,1367,62294,00.html>

The German police are using SMS to distribute information on missing persons and fugitives. Presumably the next step will be pictures.
<http://www.siliconvalley.com/mld/siliconvalley/news/…>

There are now automated tools for Bluetooth hacking. This means that it’ll increasingly be done by people with less skill, and fewer ethics.
<http://www.silicon.com/networks/mobile/…>

Opinion on the futility of anti-spam laws:
<http://www.silicon.com/research/specialreports/…>

Meanwhile, AOL, EarthLink, Microsoft, and Yahoo have filed separate suits against spammers in the US, under the CAN-SPAM law. They’re working together to build their case:
<http://www.internetretailer.com/dailyNews.asp?id=11500>
<http://seattlepi.nwsource.com/business/…>
<http://www.pcworld.com/news/article/…>

A movie industry group is suing a company that sells DVD-copying software:
<http://www.siliconvalley.com/mld/siliconvalley/news/…>
<http://news.zdnet.co.uk/business/legal/…>
<http://www.usatoday.com/tech/world/…>

Sad story of the aftermath of identity theft.
<http://msnbc.msn.com/id/4264051/>

Low-tech credit-card scam. Restaurant workers steal credit card numbers from patrons, and then pass them to others who manufacture fake credit cards.
<http://www.mercurynews.com/mld/mercurynews/news/…>

The courts have ruled that JetBlue did not violate any laws when it gave passenger information to U.S. defense contractors. This doesn’t surprise me. It was a violation of trust, to be sure, but not of law.
<http://www.usatoday.com/tech/news/techpolicy/…>

“If hundreds of thousands of people are still blindly clicking on attachments in their email, is there any hope of mitigating the threat of hundreds of thousands of compromised systems with open backdoors?”
<http://www.securityfocus.com/columnists/221>

More evidence that technology has made photographs unreliable as evidence of truth. Someone doctored a photo of John Kerry at an anti-war rally to add Jane Fonda.
<http://sfgate.com/cgi-bin/article.cgi?f=/c/a/2004/…>

The Department of Homeland Security’s (DHS) Protected Critical Infrastructure Information (PCII) program causes security problems. By allowing corporations to submit details about security vulnerabilities and keep that information secret from the public, it can be used to hide negligence or criminal behavior.
<http://www.securityfocus.com/news/8090>

Another idea for Web surfing privacy:
<http://zdnet.com.com/2100-1104_2-5164413.html>

There’s a new lobbying group in the U.S.: the Cyber Security Industry Alliance (CSIA) was formed by eleven security companies.
<http://www.washingtonpost.com/wp-dyn/articles/…>

Patching is still much too difficult, and too many network owners still don’t do it.
<http://news.zdnet.co.uk/internet/security/…>

More cyber-terrorism fear mongering:
<http://www.latimes.com/technology/…>

Risks of using hotel networks:
<http://edition.cnn.com/2004/TRAVEL/02/25/…>

Freeware password recovery utilities for Windows:
<http://freehost14.websamba.com/nirsoft/utils/index.html>

Interesting IDS research:
<http://www.gcn.com/vol1_no1/daily-updates/25155-1.html>

Another commentary on the open-source vs. closed-source security debate.
<http://www.theregister.co.uk/content/55/36029.html>

Some companies are trying to limit their liability in the event that your personal information gets stolen.
<http://www.washingtonpost.com/wp-dyn/articles/…>

How anonymous cell phones used by terrorists were tracked by police:
<http://www.iht.com/articles/508783.html>
<http://www.theregister.co.uk/content/28/36060.html>


Counterpane News

NEW: Crypto-Gram now has an RSS feed:
<http://www.schneier.com/crypto-gram-rss.xml>
Anyone who’s having trouble getting Crypto-Gram through a spam filter might want to consider this option.

Schneier’s essay on security and terrorism appeared in the March issue of Wired:
<http://www.schneier.com/essay-wired.html>

Schneier is speaking at PC Forum on March 22nd in Scottsdale.
<http://www.edventure.com/pcforum/index.cfm>

Schneier is speaking, and will be signing books, at Stacy’s Bookstore in San Francisco.

Another “Beyond Fear” review:
<http://www.nwfusion.com/newsletters/sec/2004/…>


Port Knocking

Port knocking is a clever new computer security trick. It’s a way to configure a system so that only users who know the “secret knock” can access a certain port. For example, you could build a port-knocking defense that would not accept any SSH connections (port 22) unless it detected connection attempts to closed ports 1026, 1027, 1029, 1034, 1026, 1044, and 1035, in that sequence, within five seconds, and then listened on port 22 for a connection within the following ten seconds. Otherwise, the system would completely ignore port 22.
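The knock logic is just a small state machine over connection attempts. Here is a minimal Python sketch using the example sequence above; a real implementation (such as the knockd daemon) would watch firewall logs or raw packets rather than receive (port, timestamp) events directly:

```python
# Minimal port-knocking monitor: a state machine over connection attempts.
# Sequence and timing windows follow the example in the text.

KNOCK_SEQUENCE = [1026, 1027, 1029, 1034, 1026, 1044, 1035]
KNOCK_WINDOW = 5.0    # the whole sequence must arrive within five seconds
OPEN_WINDOW = 10.0    # then port 22 listens for ten seconds

class PortKnockMonitor:
    def __init__(self):
        self.progress = 0        # how much of the sequence has been seen
        self.first_knock = 0.0
        self.open_until = 0.0

    def attempt(self, port, now):
        """Process one connection attempt; return True if port 22 accepts."""
        if port == 22:
            return now <= self.open_until   # open only just after a knock
        if port == KNOCK_SEQUENCE[self.progress]:
            if self.progress == 0:
                self.first_knock = now
            self.progress += 1
            if now - self.first_knock > KNOCK_WINDOW:
                self.progress = 0           # too slow: start over
            elif self.progress == len(KNOCK_SEQUENCE):
                self.open_until = now + OPEN_WINDOW
                self.progress = 0           # correct knock: open port 22
        else:
            self.progress = 0               # wrong port: start over
        return False
```

To anyone who doesn’t know the sequence, every port, including 22, simply looks closed.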

It’s a clever idea, and one that could easily be built into VPN systems and the like. Network administrators could create unique knocks for their networks—family keys, really—and only give them to authorized users. It’s no substitute for good access control, but it’s a nice addition. And it’s an addition that’s invisible to those who don’t know about it.

<http://www.linuxjournal.com/article.php?sid=6811>
<http://www.portknocking.org/>


Crypto-Gram Reprints

Crypto-Gram is currently in its seventh year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. Here is a selection of articles that appeared in this calendar month in other years.

Practical Cryptography:
<http://www.schneier.com/crypto-gram-0303.html#1>

SSL flaw:
<http://www.schneier.com/crypto-gram-0303.html#3>

SSL patent infringement:
<http://www.schneier.com/crypto-gram-0303.html#8>

SNMP vulnerabilities:
<http://www.schneier.com/crypto-gram-0203.html#1>

Bernstein’s factoring breakthrough?
<http://www.schneier.com/crypto-gram-0203.html#6>

Richard Clarke on 9/11’s Lessons
<http://www.schneier.com/crypto-gram-0203.html#7>

Security patch treadmill:
<http://www.schneier.com/crypto-gram-0103.html#1>

Insurance and the future of network security:
<http://www.schneier.com/crypto-gram-0103.html#3>

The “death” of IDSs:
<http://www.schneier.com/crypto-gram-0103.html#9>

802.11 security:
<http://www.schneier.com/crypto-gram-0103.html#10>

Software complexity and security:
<http://www.schneier.com/…>

Why the worst cryptography is in systems that pass initial cryptanalysis:
<http://www.schneier.com/crypto-gram-9903.html#initial>


Security Notes from All Over: USPTO

The ricin patent is no longer available from the U.S. Patent Office website.

In October 1962, the U.S. Patent Office granted patent 3,060,165 regarding the use of ricin as a biological weapon. Published patents are, of course, publicly available. That’s the whole point of the patent process.

All U.S. patents are available from the USPTO website. As the site says: “full-text since 1976, full-page images since 1790.”

However, this particular patent is no longer in the database. Search for it, and you’ll get a “Patent not found” image.

<http://patft.uspto.gov/netacgi/nph-Parser?…>

The obvious reason for its removal is fear that it would fall into the wrong hands. But the patent is still available in foreign databases, so this seems like a rather pointless exercise. You can still get the patent from the European Patent Office. The German Patent Office also has a version.

More and more, we’re seeing the U.S. government take public information and try to hide it. Sometimes there are pretty obvious reasons why, like this one. Sometimes there are no obvious reasons why, and terrorism looks like an excuse. There’s resilient security in openness, and brittle security in secrecy.

European Patent Office copy:
<http://v3.espacenet.com/textdoc?…>

German Patent Office:
<http://depatisnet.dpma.de/>
Search for it manually: pick a language, choose beginner’s search and enter the patent number without commas.


Password Safe Version 2.0

The new version of Password Safe is ready for download.

For those of you who don’t know, Password Safe is my free Windows password-storage utility. The problem Password Safe addresses is that anyone who uses the Web regularly needs too many passwords, and it’s impossible to remember them all. The solution is a small program that secures all of your passwords using one passphrase. Password Safe is easy to use, and isn’t bogged down by lots of unnecessary features. Simplicity equals security. The website has details on what’s new in Version 2.0.
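The “one passphrase secures everything” idea rests on deriving a key from that passphrase. Here is a stdlib-only Python sketch of the derivation and unlock check; this is a sketch of the general technique, not Password Safe’s actual file format or algorithms:

```python
import hashlib
import hmac
import os

def derive_key(passphrase, salt):
    # PBKDF2 with many iterations makes guessing the passphrase slow.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def make_verifier(passphrase):
    """Store a salt plus an HMAC check value, never the passphrase itself."""
    salt = os.urandom(16)
    key = derive_key(passphrase, salt)
    check = hmac.new(key, b"verify", "sha256").digest()
    return salt, check

def unlock(passphrase, salt, check):
    """True only if the passphrase re-derives the same key."""
    key = derive_key(passphrase, salt)
    return hmac.compare_digest(hmac.new(key, b"verify", "sha256").digest(), check)

salt, check = make_verifier("correct horse battery staple")
print(unlock("correct horse battery staple", salt, check))  # True
print(unlock("wrong guess", salt, check))                   # False
```

The derived key would then encrypt the password database itself; the security of every stored password reduces to the strength of that one passphrase.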

Password Safe is an open-source project at SourceForge, and is run by Rony Shapiro. Thank you to him and to all the other programmers who worked on the project. And we’re still looking for people willing to port the program to Mac or Palm OS.

Project’s homepage:
<http://passwordsafe.sourceforge.net/>

URL for this release:
<https://sourceforge.net/project/showfiles.php?…>
Note that most users will want to download the bin.zip file, not the bin.src file.

Release notes:
<https://sourceforge.net/project/shownotes.php?…>

Linux version of Password Safe:
<http://www.semanticgap.com/myps/>


The Doghouse: Symbiot Security

Symbiot Security claims to have a product that identifies attackers, and then attacks back. Kind of scary, especially since many attacking computers are victims themselves and there are all sorts of ways to disguise the origin of an attack.

I know that vigilante justice is emotionally satisfying, but it’s unbecoming of a civilized society. This kind of thing could certainly get the user sued, and may be illegal. It certainly is immoral.

<http://www.symbiot.com>

My essay on “strike back” technologies:
<http://www.schneier.com/crypto-gram-0212.html#1>


“I am Not a Terrorist” Cards

Journalist and entrepreneur Steve Brill is developing a voluntary, fingerprint-based ID card with a background check attached. The company is called Verified Identity Card, Inc., and the card is called a V-ID.

The idea is for people who are not on a set of government watch lists to be able to subscribe to the service (or for organizations to buy it for their employees, customers, etc.), and then get faster treatment at security checkpoints around the country. Guards would be able to divide people into two categories: more-trusted people with a card, and less-trusted people without a card. They could then concentrate their screening resources on the less-trusted category.

This topic has so many facets that it’s hard to keep them straight. There are actually two parallel systems here. The two systems use the same card and the same infrastructure, but the security analysis is different. The first is an outsourced corporate ID. I think this is a fine idea, and a pretty good business model. Better security at a cheaper cost—how could you not like that?

The second system is essentially a voluntary national ID card. This is the system I want to talk about in this essay. It’s a bad idea, both as a security countermeasure and as social policy. It’s bad for complicated reasons. There’s the question of whether the system will actually work—whether the identity card would reduce the risk of terrorism. There’s the question of whether the two categories of people—cardholders and everyone else—are a useful categorization, and why the government would trust a private company with the terrorist watch list. (Brill has presented this system as a solution to the problem of innocent people with names similar or identical to those of people on the terrorist watch list.) There’s the question of whether we as a nation want a system that divides people based on whether they can afford a card. And there’s the larger question of what in the world identity has to do with security.

First, let’s look at how the system works.

Any American can apply for a card. When you do, your name is compared against certain lists: “presence on any government watch list, citizenship or legal immigrant status, and the absence of any significant, relevant criminal record.” As long as your name isn’t on one of those lists, you are eligible for a card.

(It is not at all clear whether Brill will have access to the terrorist watch lists. He has said that the law mandates that he have access. He has said that the law mandates that he be able to pass names to the government, who will give a “yes or no” response. He has said all sorts of things, but the proof will be in the deployment. I’m not at all convinced that, at the end of the day, he will have any ability to check names against the list.)

To ensure that you are who you say you are, the system uses information from the Choicepoint database (Choicepoint is a for-profit company that does background checks) to construct a series of questions that only you can answer. “Which of these five banks did you get a loan from?” “Which of these addresses did you live at once?” You have to apply in person, and answer these questions on a computer in front of a proctor. If you can answer these questions, the system assumes that you’re not an imposter.
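The question-generation step can be sketched simply: the correct answer comes from the applicant’s record, and the decoys come from other records. The data below is invented for illustration; the real system draws on the Choicepoint database:

```python
import random

# Toy knowledge-based authentication: build a multiple-choice question
# whose correct answer is in the applicant's record and whose decoys
# come from other people's records.

def make_question(record, decoys, field, rng):
    choices = [record[field]] + [d[field] for d in decoys]
    rng.shuffle(choices)  # so position doesn't reveal the answer
    return {"prompt": "Which of these is your %s?" % field,
            "choices": choices,
            "answer": record[field]}

def passes(question, response):
    return response == question["answer"]

applicant = {"bank": "First National", "street": "12 Elm St"}
others = [{"bank": "Coastal Savings", "street": "9 Oak Ave"},
          {"bank": "Union Trust", "street": "3 Pine Rd"}]

q = make_question(applicant, others, "bank", random.Random(0))
print(passes(q, "First National"))  # True
```

The weakness the essay points to is visible even here: anyone who can read the underlying records can answer the questions just as well as the applicant can.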

Assuming everything checks out, the proctor records some number of the applicant’s fingerprints, and he is issued a V-ID card. This is a card that has information about the person: maybe a picture, maybe the fingerprint information, and definitely an identification number.

(Presumably the outsourced ID system is similar, with some added requirement about the corporation deciding who gets the card, and some corporate logo on the card itself. But corporate cards can be used in the general system, something Brill hopes will bootstrap that system. I’m not sure, though, about what happens when a person on a company’s payroll is determined to be on a terrorist watch list. Presumably the card will work as a corporate ID, but not as a national ID.)

Security checkpoints that accept the card have some kind of reader device. This device may or may not have the complete database of fingerprints. The list of valid cards is definitely updated daily, as people get on to (or off, I suppose) the various government lists. It has a fingerprint reader and a slot for the card, and some kind of visual indicator to let the guards know that the cardholder is okay.

This card is meant to be multi-use. Brill envisions that airports, government buildings, stadiums, national monuments, and office buildings will screen entrants. People with a card can go into a special lane at any of these locations, verify their fingerprint, and go through security with less hassle. People without the card would have to go through the “unverified” lane, and presumably be more extensively screened.

That’s how the system is supposed to work. Let’s look at how likely it is to actually work.

The system hasn’t been fully designed yet, but it looks as if the fingerprint will be used to authenticate the cardholder, not identify him. That’s a good thing. I also assume that any data on the card will be well-protected. There are, of course, many ways to defeat fingerprint readers, but having a guard watch the person put his finger on the reader is the best way to ensure its proper use. My worries are not about how the system is used, but in the registration and the administration of the back-end database.

It certainly would be possible to get a card in a fake name, just as it is possible to get any other kind of ID card—including a passport—in a fake name. While the V-ID system won’t deliberately issue cards to people who should not have them, it will be designed to make it easy for people to get cards. The Choicepoint questions are a clever idea, but the database was developed to be secure against a very different sort of attacker. Someone needs to do a lot of thinking about the Choicepoint database as it relates to this new kind of attacker.

Trusted people within Choicepoint and V-ID are, of course, a potential problem. Several of the 9/11 terrorists had real Virginia driver’s licenses in fake names, issued by dishonest state employees. This system will not be immune to that sort of problem, although I’m sure the creators will take pains to minimize the risk.

I worry about the back-end system. Somewhere there will be a computer that generates the questions, matches identity information with government databases, and generally administers the system. The fingerprint database will be stored somewhere, possibly on every reader. These databases would be vulnerable to attack, from insiders and outsiders.

One counter-argument to this analysis is that most people won’t be able to subvert the system, either by defeating the card or the fingerprint reader or the back-end database, or by manipulating the system into giving them a card in a false name. Most people will either get a card (or not) honestly, and use the system correctly. And while a few might be able to successfully attack the system, that’s no reason to throw it out entirely. But the whole point of the system is to work in the face of a dedicated and well-funded adversary. Even the argument that most terrorists are stupid misses the point. It doesn’t matter whether or not average people can subvert the system; we want security systems that protect us against smart people, especially smart terrorists.

The system is designed to be decentralized, so that someone cannot be tracked through the use of the card. It is an open question as to whether law enforcement could force the company to change that design and use the system to track people. The infrastructure is all there to do that: software on the reader and a communications system between the readers and some central point. Brill has said that it would be impossible, but from his description of the system, that’s clearly not true. He has also said that this couldn’t happen because it would be a violation of the contract the V-ID company has with its customers, which makes no sense to me.

My primary security concerns surrounding this system stem from what it’s trying to do. In his writings and speaking, Brill is very careful to explain that these are not “trusted traveler cards.” He calls them “verified identity cards.” But the only purpose of his card is to divide people into two lines—a fast line and a slow line, a “search less” line and a “search more” line, or whatever. (Each security checkpoint that uses the card would develop its own procedures.) This division only makes sense if it’s based on a degree of trust. If you didn’t believe that people with the card were more trusted, you wouldn’t let them go in the fast lane. Here’s an example: if I designed a card that verified a person’s dental hygiene, you wouldn’t divide people into two security lines based on that card, because you know that people with good dental hygiene aren’t more trusted than people without. On the other hand, it would be valid to use that card to divide people into dental service lines based on the assumption that the people with good dental hygiene would be able to be treated faster. Brill’s plan is that people who have the card get a more lenient security treatment than people without. Call it what you will, but it means that people with the card are more trusted than people without.

The reality is that the existence of the card creates a third, and very dangerous, category: bad guys with the card. Timothy McVeigh would have been able to get one of these cards. The DC sniper and the Unabomber would have been able to get this card. Any terrorist mole who hasn’t done anything yet and is being saved for something big would be able to get this card. Some of the 9/11 terrorists would have been able to get this card. These are people who are deemed trustworthy by the system even though they are not.

And even worse, the system lets terrorists test the system beforehand. Imagine you’re in a terrorist cell. Twelve of you apply for the card, but only four of you get it. Those four not only have a card that lets them go through the easy line at security checkpoints; they also know that they’re not on any terrorist watch lists. Which four do you think will be going on the mission? By “pre-approving” trust, you’re building a system that is easier to exploit.

Moreover, any break in the system is much more serious because it has so many applications. The company’s literature considers it a problem that “Americans now need several identification/security cards.” But that is actually a security feature. If a terrorist can subvert the V-ID system, he can use it to gain access to any facility that uses the system. It’s a large single point of failure. Contrast that with a company ID, which only grants access to a company’s facilities. Subverting that system would only allow the attacker access to those facilities, and nothing else.

This brings up another fundamental question: Why should any security checkpoint accept a V-ID card? This system costs money to install in airports, sports stadiums, etc.—they have to buy the card readers—so there needs to be some benefit. The claimed benefit is customer service; people with the card can get better treatment. Airlines have long recognized the problem of forcing their best customers to wait in long security lines, and have implemented special lines for first-class passengers and high-tier frequent fliers. But for the owner of a sports stadium, a person with a V-ID card isn’t a higher-tier customer, he’s just a customer who paid for a V-ID card. What benefit does it give to the stadium to separate people on that basis? The only one I can think of is liability: by using the V-ID system, they receive some kind of shielding from liability issues if someone with the card does something nasty.

This is a big deal, and one that is very important to Verified Identity Card’s business. The company wants to be a voluntary national ID card, but it doesn’t want to accept any liability for being one. This is why they try very hard not to call people with the card “trusted” in any way. But why would a business accept the card if, when someone with a card caused a problem, the business was liable? The business would be trusting the card, and the company behind it, to tell it who is trusted and who is not. The reason businesses accept government-issued identification is that the courts consider it a reasonable check. A liquor store owner can stand up in court and say: “He had a driver’s license.” What does “he had a V-ID card” mean, unless the V-ID company is willing to accept liability?

From their point of view, I think the V-ID company is smart not to want to accept liability. The company is 100% correct when they say that a person with a card isn’t more trusted than a person without a card, even though by saying so they are exposing the huge hole in their business model. If a building’s management decides that it is going to run people through a metal detector, it makes no sense for them to only screen people without a V-ID card. If an airport wants to implement “extra” screening procedures on a few passengers, it makes no sense for them to make that decision based on whether or not a passenger has a V-ID card. Terrorists come in all shapes and sizes, and the last thing we want is a terrorist with a V-ID card to be able to operate with impunity.

Security is always a trade-off. The question to ask is not “Does this system make us safer?” Otherwise we’d all be wearing bulletproof vests and locking ourselves in our bedrooms. The question to ask is: “Is this system worth the trade-offs?” The V-ID system has some pretty serious trade-offs. The system collects a fingerprint database on everyone who applies for the card—a database that can be used and abused by anyone with access, legitimate or illegitimate.

The system creates a social division of haves and have-nots based on an ability to pay for the card. The system puts an infrastructure in place for surveillance; even though the proposed system takes pains to ensure that no information is collected about how people use the card, it’s not unreasonable to assume that this kind of data collection might be added in the future. And it’s expensive. The cost of the system won’t just be borne by those willing to pay for preferential treatment; infrastructure costs will be passed on to all consumers somehow.

And what do we get for those costs? We get a security system with built-in flaws. We get a system that divides people into two categories that don’t correlate very well with how dangerous they are. We get a system that’s a single point of failure, and one that terrorists can use to their advantage. We get a system that collects data on all users, innocent or not, with all the potential security problems that can cause.

And we get a system that concentrates security resources on terrorism, when the more serious problems are criminal. On 24 July 1998, Russell Weston Jr. walked into the U.S. Capitol and started shooting, killing two. Despite being known to the Secret Service and having been investigated previously, he was not a “terrorist” threat. He would likely have been able to get a V-ID card.

Identification has minimal security value, but it does have some. On the other hand, freedom, privacy, and liberty are all values we cherish, and they are the values that give our country its greatest security. Citizens have rightly refused a national ID card because they realize that the security gained is simply not worth the costs. A system that issues ID cards only to those wealthy enough to afford them is even worse.

Brill said that he doesn’t believe in democratizing security, that it doesn’t make sense to apply the same security scrutiny to everyone. I think that, at core, is the problem here. He thinks that he should be able to get into a building without waiting in line because he is more trustworthy. But by building a system that allows him to do so, we run the risk of infringing on the rights of convicted felons who have already paid their debt to society. We move from an “innocent until proven guilty” society to a “treat some people as guilty, just in case” one. It’s a dangerous road to travel.

Press release:
<http://www.prnewswire.com/cgi-bin/stories.pl?…>

News articles:
<http://209.157.64.200/focus/f-news/1006499/posts>
<http://www.wired.com/news/business/0,1367,60965,00.html>


Security Risks of Centralization

In discussions with Brill, he regularly said things like: “It’s obviously better to do something than nothing.” Actually, it’s not obvious. Replacing several decentralized security systems with a single centralized security system can actually reduce the overall security, even though the new system is more secure than the systems being replaced.

An example will make this clear. I’ll postulate piles of money secured by individual systems. The systems are characterized by the cost to break them. A $100 pile of money secured by a $200 system is secure, since it’s not worth the cost to break. A $100 pile of money secured by a $50 system is insecure, since an attacker can make $50 profit by breaking the security and stealing the money.

Here’s my example. There are 10 $100 piles, each secured by individual $200 security systems. They’re all secure. There are another 10 $100 piles, each secured by individual $50 systems. They’re all insecure.

Clearly something must be done.

One suggestion is to replace all the individual security systems by a single centralized system. The new system is much better than the ones being replaced; it’s a $500 system.

Unfortunately, the new system won’t provide more security. Under the old systems, 10 piles of money could be stolen at a cost of $50 per pile; an attacker would realize a total profit of $500. Under the new system, we have 20 $100 piles all secured by a single $500 system. An attacker now has an incentive to break that more-secure system, since he can steal $2000 by spending $500—a profit of $1500.

The problem is centralization. When individual security systems are combined into one centralized system, the incentive to break that new system is generally higher. Even though the centralized system may be harder to break than any of the individual systems, if it is easier to break than ALL of the individual systems combined, it may result in less security overall.
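As a sketch, the arithmetic above can be checked directly (the dollar figures are the hypothetical ones from the example, not real data):

```python
def attacker_profit(loot, cost):
    # A rational attacker breaks a system only if the loot exceeds
    # the cost of breaking it; otherwise the attack isn't worth it.
    return max(loot - cost, 0)

# Decentralized: 10 piles of $100 behind $200 systems (secure),
# plus 10 piles of $100 behind $50 systems (insecure).
decentralized = (sum(attacker_profit(100, 200) for _ in range(10))
                 + sum(attacker_profit(100, 50) for _ in range(10)))

# Centralized: all 20 piles ($2000 total) behind one $500 system.
centralized = attacker_profit(20 * 100, 500)

print(decentralized)  # 500: only the ten weakly protected piles are stolen
print(centralized)    # 1500: the single stronger system is now worth breaking
```

The centralized system triples the attacker’s profit even though it is harder to break than any individual system it replaced.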

There is a security benefit to decentralized security.


Comments from Readers

From: “Ryan Malayter” <rmalayter bai.org>
Subject: Identification and Security

I agree IDs are easy to forge, and don’t offer any real assurance as to the identity or intent of those being screened, for the reasons you mention.

However, I always assumed that the authorities knew this, and that identification checks were designed to do something else: allow security officers to study the behavior of people while they are being screened.

I knew a guy who used to check IDs at a bar in college, and he was almost unbeatable. Some of the fakes were very good. Some were even real state-issued IDs obtained with false documents. He could still spot most of the would-be underage patrons, though, because of their behavior while they handed him the ID. Sweaty palms, looking at the ground, an overconfident smile, or an inane “cover conversation” gave many of the kids away.

Even a well-trained terrorist would have a hard time not showing *any* signs of anxiety while his ID was being checked by a uniformed security official. Unfortunately, I suspect many TSA employees have little or no training in identifying this type of behavior. In the bar example, such training was obtained only through years of observation and experience.

Of course, this “behavioral observation” is certainly an error-prone process, but it could be very useful for identifying a pool of people who might need further screening. Is it too much to hope that providing a forum for such “behavior study” is the real reason for the proliferation of ID checkpoints in our post-9/11 society, and not some mass delusion on the part of security officials?

From: DV Henkel-Wallace <gumby henkel-wallace.org>
Subject: Identification and Security

ID checks are more useless and pernicious than you state. In most cases you don’t need a false ID—a legitimate ID will do. These “ID checks” at hospitals, government buildings, trade shows (!) and the like usually don’t even involve any check to see if you’re on any list of any sort.

They merely check to see that you are carrying a document that looks like legitimate identification. I’ve successfully used my now-obsolete Price Club card, my National Shooting Club photo ID (a handwritten document, although it _is_ laminated) and the like to get into office buildings. And why not? The “check” doesn’t verify anything about me anyway.

What this DOES accomplish is 1) keep homeless people out of courthouses, 2) keep those who wish to be anonymous from leaving a message for their senator and 3) build a culture that accepts a routine request for “Your papers, please.”

Personally, I don’t consider any of those useful accomplishments. But perhaps I’m in the minority.

From: “Bruce Ediger” <eballen1 qwest.net>
Subject: The Economics of Spam

Hi. I read your Feb 15th “Crypto-Gram” newsletter with some interest, in particular your “Economics of Spam” article.

I like that you treated spamming as an economic fact, but I think you missed two points:

1. Of course Gates would decide that someone should pay for e-mails. That’s the only way that Microsoft can turn e-mail into a profit center. They already have plans in progress to put copy protection (DRM) on all Windows boxes, so Gates probably figures that the DRM infrastructure could have a second use in e-mail. Imposing a fee structure and copy protection on e-mails also allows them to overthrow the current open standard SMTP transport of e-mail. Gates has a keen awareness that commodity protocols get copied very rapidly.

2. The profitability of spam as advertising depends on the very weak market forces acting on that form of advertising. Spam has the unique property that each and every recipient helps pay for the advertising (on-line time, CPU cycles, disk space, etc.) *before* the spam victim gets a chance to decide whether or not to buy the advertised product. This differs completely from any other form of advertising except telemarketing and junk-faxing. Billboards, radio and TV spots, magazine and newspaper ads, and direct mailings all require the advertiser to bear 100% of the ad’s costs. Of course, the small percentage who decide to buy the advertised product end up paying for the advertising, but the key aspect of buyer’s choice remains. A conventional ad has to avoid offending almost all potential buyers; otherwise, the Invisible Hand spanks the people who make the advertised product. The Invisible Hand of the Marketplace only weakly affects spammers, since some or most of the ad’s cost has already been borne by the advertised-to.

From: Ralf Holzer <rholzer cmu.edu>
Subject: US-VISIT Exemptions and Error Rates

You repeatedly mentioned that all but 27 countries are subject to the fingerprinting and photographing measures (US-VISIT) now in effect at most American ports of entry. I just wanted to point out that these exemptions are mostly for tourists. I am a graduate student from Germany with an F-1 visa and I have to go through the same fingerprinting and photographing procedures. Tourists from Germany and other European countries are only exempt because all European passports will be required to have biometric identification in order to be able to enter the U.S. beginning this fall.

A fellow student from a country requiring special registration has told me that he now has several different profiles registered with US-VISIT, because the system keeps falsely identifying his fingerprint. The immigration officer seemed to be clueless about how to correct this. Such a high error rate really makes me wonder about the effectiveness of US-VISIT.

From: rfleming cultdeadcow.com
Subject: Supermarket Club Card Databases

About a week ago, some junk mail arrived at my home from Albertson’s supermarket, announcing the creation of their new club card. The ad copy declares: “The labor dispute has been tough on everyone. But one thing we know for sure—the day it’s over, you’re going to save like never before. Great low prices and extra special values will be yours… with the new Albertsons Sav-on Preferred Savings Card. Sign up today!”

It got me thinking. Safeway and Ralph’s (the other two supermarkets affected by the strike) already have club cards. And one thing THEY now know for sure is which of their customers are willing to cross picket lines to buy groceries, and which aren’t.

In other words, the purchase patterns contained in the Safeway and Ralph’s club card databases could be EASILY mined for individual customers’ sympathies to organized labor.

Think about that. The next time somebody applies for a job at his neighborhood Safeway or Ralph’s, should he expect them to check his 2003-2004 shopping habits for hints that he might be pro- or antiunion? And what’s keeping the supermarkets from offering this data to other employers, or even the custodians of the Total Information Awareness program?


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. Back issues are available on <http://www.schneier.com/crypto-gram.html>.

To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide. See <http://www.counterpane.com>.

Sidebar photo of Bruce Schneier by Joe MacInnis.