Crypto-Gram

April 15, 2004

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com.


In this issue:
      National ID Cards
      TSA-Approved Locks
      Crypto-Gram Reprints
      Stealing an Election
      Counterpane News
      Security Notes from All Over: Man-in-the-Middle Attack
      BeepCard
      Bluetooth Privacy Hack
      News
      Virus Wars
      Comments from Readers

National ID Cards

As a security technologist, I regularly encounter people who say the United States should adopt a national ID card. How could such a program not make us more secure, they ask?

The suggestion, when it’s made by a thoughtful civic-minded person like Nicholas Kristof in the New York Times, often takes on a tone that is regretful and ambivalent: Yes, indeed, the card would be a minor invasion of our privacy, and undoubtedly it would add to the growing list of interruptions and delays we encounter every day; but we live in dangerous times, we live in a new world….

It all sounds so reasonable, but there’s a lot to disagree with in such an attitude.

The potential privacy encroachments of an ID card system are far from minor. And the interruptions and delays caused by incessant ID checks could easily proliferate into a persistent traffic jam in office lobbies and airports and hospital waiting rooms and shopping malls.

But my primary objection isn’t the totalitarian potential of national IDs, nor the likelihood that they’ll create a whole immense new class of social and economic dislocations. Nor is it the opportunities they will create for colossal boondoggles by government contractors. My objection to the national ID card, at least for the purposes of this essay, is much simpler.

It won’t work. It won’t make us more secure.

In fact, everything I’ve learned about security over the last 20 years tells me that once it is put in place, a national ID card program will actually make us less secure.

My argument may not be obvious, but it’s not hard to follow, either. It centers around the notion that security must be evaluated not based on how it works, but on how it fails.

It doesn’t really matter how well an ID card works when used by the hundreds of millions of honest people that would carry it. What matters is how the system might fail when used by someone intent on subverting that system: how it fails naturally, how it can be made to fail, and how failures might be exploited.

The first problem is the card itself. No matter how unforgeable we make it, it will be forged. And even worse, people will get legitimate cards in fraudulent names.

Two of the 9/11 terrorists had valid Virginia driver’s licenses in fake names. And even if we could guarantee that everyone who issued national ID cards couldn’t be bribed, initial cardholder identity would be determined by other identity documents… all of which would be easier to forge.

Not that there would ever be such a thing as a single ID card. Currently about 20 percent of all identity documents are lost per year. An entirely separate security system would have to be developed for people who lose their card, a system that itself is capable of abuse.

Additionally, any ID system involves people… people who regularly make mistakes. We all have stories of bartenders falling for obviously fake IDs, or sloppy ID checks at airports and government buildings. It’s not simply a matter of training; checking IDs is a mind-numbingly boring task, one that is guaranteed to have failures. Biometrics such as thumbprints show some promise here, but bring with them their own set of exploitable failure modes.

But the main problem with any ID system is that it requires the existence of a database. In this case it would have to be an immense database of private and sensitive information on every American—one widely and instantaneously accessible from airline check-in stations, police cars, schools, and so on.

The security risks are enormous. Such a database would be a kludge of existing databases; databases that are incompatible, full of erroneous data, and unreliable. As computer scientists, we do not know how to keep a database of this magnitude secure, whether from outside hackers or the thousands of insiders authorized to access it.

And when the inevitable worms, viruses, or random failures happen and the database goes down, what then? Is America supposed to shut down until it’s restored?

Proponents of national ID cards want us to assume all these problems, and the tens of billions of dollars such a system would cost—for what? For the promise of being able to identify someone?

What good would it have been to know the names of Timothy McVeigh, the Unabomber, or the DC snipers before they were arrested? Palestinian suicide bombers generally have no history of terrorism. The goal here is to know someone’s intentions, and their identity has very little to do with that.

And there are security benefits in having a variety of different ID documents. A single national ID is an exceedingly valuable document, and accordingly there’s greater incentive to forge it. There is more security in alert guards paying attention to subtle social cues than bored minimum-wage guards blindly checking IDs.

That’s why, when someone asks me to rate the security of a national ID card on a scale of one to 10, I can’t give an answer. It doesn’t even belong on a scale.

This essay originally appeared in the Minneapolis Star Tribune:
<http://www.startribune.com/stories/1519/4698350.html>

Kristof’s essay in the New York Times:
<http://www.nytimes.com/2004/03/17/opinion/…>

My earlier essay on National ID cards:
<http://www.schneier.com/crypto-gram-0112.html#1>

My essay on identification and security:
<http://www.schneier.com/crypto-gram-0402.html#6>


TSA-Approved Locks

Way back in 1993, the Clinton Administration proposed the Clipper Chip. The government was concerned that the bad guys would start using encryption, so they had a solution. The Clipper Chip would provide strong encryption that could not be broken, but there was a secret key—known only by the government—that could decrypt the traffic. That way legitimate users could be secure, but the bad guys could have their messages read by the government.

As you can imagine, this didn’t go over very well.

People didn’t like the idea of the government having a back door into their cryptography. Experts like me didn’t believe that the “back door” would remain secret, and didn’t think that deliberately crippled cryptography was a good idea. The general concept, known as key escrow, key recovery, or trusted third-party encryption, hung around for a few years and was eventually forgotten.
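To make the concept concrete, here is a minimal sketch of the key escrow idea, written with modern primitives from Python’s “cryptography” library rather than Clipper’s actual Skipjack/LEAF design; the function and variable names are mine, for illustration only. The essential move is that every session key gets wrapped twice: once for the recipient, and once for the escrow authority.

  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  def escrowed_encrypt(message, recipient_key, escrow_key):
      # Encrypt the traffic under a fresh session key.
      session_key = AESGCM.generate_key(bit_length=128)
      nonce = os.urandom(12)
      ciphertext = AESGCM(session_key).encrypt(nonce, message, None)
      # Wrap the session key twice: once for the recipient, and once
      # under the escrow authority's key -- the "back door."  Whoever
      # holds the escrow key can recover every session key, which is
      # why experts doubted the back door would stay secret.
      n1, n2 = os.urandom(12), os.urandom(12)
      for_recipient = (n1, AESGCM(recipient_key).encrypt(n1, session_key, None))
      for_escrow = (n2, AESGCM(escrow_key).encrypt(n2, session_key, None))
      return nonce, ciphertext, for_recipient, for_escrow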

Who would have thought it would come back in the form of a luggage lock?

Since 9/11, airport security has been opening checked luggage far more often, and if screeners find a locked suitcase, they break the lock. But many travelers still want to lock their suitcases, both to keep bags from accidentally opening in transit and to deter baggage handlers looking for something to filch. To satisfy both requirements, there’s now a key escrow lock. You lock and unlock your suitcase normally, but a special TSA key allows airport security to unlock it, too.

There are a couple of reasons why this is different. First, there’s no other option. Either you use one of these special locks, or you leave your suitcase unlocked. In this case, the solution might be better than nothing.

Second, it’s only a suitcase. We’re not trying to defend against a criminal cutting into it with a knife, or walking away with the entire bag. We’re trying to defend against an opportunist sticking his hand into the bag and grabbing something.

Sure, the bad guys will get copies of the special TSA keys and be able to unlock your suitcase, but they were able to open that sorry-ass luggage lock already.

I have never locked my suitcase, even when traveling in the Third World. The risk seems pretty small to me. But if someone is worried, I recommend this lock. Security theater plus a little bit of real security seems like a good combination in this case.

Lock website:
<http://www.travelsentry.org/travelers.htm>

News articles:
<http://www.pittsburghlive.com/x/tribune-review/news/…>
<http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/…>

My old writings on key recovery:
<http://www.schneier.com/essay-infosec-scrambled-ft.html>
<http://www.schneier.com/paper-key-escrow.html>


Crypto-Gram Reprints

Crypto-Gram is currently in its seventh year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.

Automated Denial-of-Service Attacks Using the U.S. Post Office:
<http://www.schneier.com/crypto-gram-0304.html#1>

National Crime Information Center (NCIC) Database Accuracy:
<http://www.schneier.com/crypto-gram-0304.html#7>

How to Think About Security:
<http://www.schneier.com/crypto-gram-0204.html#1>

Is 1024 Bits Enough?
<http://www.schneier.com/crypto-gram-0204.html#3>

Liability and Security:
<http://www.schneier.com/crypto-gram-0204.html#6>

Natural Advantages of Defense: What Military History Can Teach Network Security, Part 1:
<http://www.schneier.com/crypto-gram-0104.html#1>

UCITA:
<http://www.schneier.com/crypto-gram-0004.html#ucita>

Cryptography: The Importance of Not Being Different:
<http://www.schneier.com/crypto-gram-9904.html#different>

Threats Against Smart Cards:
<http://www.schneier.com/…>

Attacking Certificates with Computer Viruses:
<http://www.schneier.com/…>


Stealing an Election

Computer security professionals are making major efforts to convince government officials that paper audit trails are essential in any computerized voting machine. They have examined software, run letter-writing campaigns, testified before government bodies, and, collectively, kept the issue visible and in the public awareness.

The track record of the computerized voting machines used to date has been abysmal; stories of errors are legion. Here’s another way to look at the issue: what are the economics of trying to steal an election?

Let’s look at the 2002 election results for the 435 seats in the House of Representatives. In order to gain control of the House, the Democrats would have needed to win 23 more seats. According to actual voting data (pulled off the ABC News website), the Democrats could have won these 23 seats by swinging 163,953 votes from Republican to Democrat, out of the total 65,812,545 cast for both parties. (The total number of votes cast is actually a bit higher; this analysis only uses data for the winning and second-place candidates.)

This means that the Democrats could have gained the majority in the House by switching less than 1/4 of one percent of the total votes—less than one in 250 votes.

Of course, this analysis is done in hindsight. In practice, more cheating would be required to be reasonably certain of winning. Even so, the Democrats could have won the House by shifting well below 0.5% of the total votes cast across the election.

Let’s try another analysis: What is it worth to compromise a voting machine? In contested House races in 2002, candidates typically spent $3M to $4M, although the highest was over $8M. The outcomes of the 20 closest races would have changed by swinging an average of 2,593 votes each. Assuming (conservatively) a candidate would pay $1M to switch 5,000 votes, votes are worth $200 each. The actual value is probably closer to $500, but I figured conservatively here to reflect the additional risk of breaking the law.

If a voting machine collects 250 votes (about 125 for each candidate), rigging the machine to swing all of its votes would be worth $25,000. That’s going to be detected, so is unlikely to happen. Swinging 10% of the votes on any given machine would be worth $2500.
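The arithmetic so far, restated as a short script; all of the figures are the essay’s own, and nothing new is computed here:

  # 2002 House figures from the essay, per ABC News voting data.
  swing_needed = 163953             # votes needed to flip the 23 seats
  two_party_total = 65812545
  print(swing_needed / two_party_total)         # ~0.0025, under 1/4 of 1%

  price_per_vote = 1000000 / 5000               # $1M buys 5,000 switched votes
  opponent_votes = 250 / 2                      # ~125 votes per candidate
  print(price_per_vote * opponent_votes)        # $25,000 to swing a whole machine
  print(price_per_vote * opponent_votes * 0.1)  # $2,500 to swing 10% of it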

At these prices, attacks against individual voting machines have to be treated as a serious risk.

Computerized voting machines run software, which means we need to figure out what it’s worth to compromise the software design or code itself, and not just individual machines. Any voting machine model deployed in 25% of precincts would register enough votes that malicious software could swing the balance of power without creating terribly obvious statistical abnormalities.

In 2002, all the Congressional candidates together raised over $500M. As a result, one can conservatively conclude that affecting the balance of power in the House of Representatives is worth at least $100M to the party who would otherwise be losing. So when designing the security behind the software, one must assume an attacker with a $100M budget.

Conclusion: The risks to electronic voting machine software are even greater than first appears.

This essay was written with Paul Kocher.


Counterpane News

Schneier is speaking at the CSO Perspectives Conference in San Diego on April 19th.
<http://www.csoonline.com/perspectives/april2004/>

Counterpane is exhibiting at NetWorld + Interop, taking place at the Las Vegas Convention Center, May 11 – 13. Visit us in booth #2021.

Counterpane has sponsored a webcast with Gartner, in which both companies discuss network security and Managed Security Services.
<http://www.accelacomm.com/jlp/cryptogram/0/10001788/>

More Beyond Fear reviews:
<http://netsecurity.about.com/cs/bookreviews/gr/…>
<http://victoria.tc.ca/int-grps/books/techrev/…>
<http://www.securitymanagement.com/library/001598.html>


Security Notes from All Over: Man-in-the-Middle Attack

The phrase “man-in-the-middle attack” describes a computer attack in which the adversary sits in the middle of a communications channel between two people, fooling them both. It is an important attack, and it drives all sorts of design considerations in communications protocols.
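As a concrete protocol example (mine, not from the original item), here is a toy sketch of the classic man-in-the-middle against unauthenticated Diffie-Hellman key exchange. The prime is deliberately tiny; real protocols use huge primes and, more importantly, prevent the attack by authenticating the exchanged values.

  import secrets

  P = 4294967291        # toy prime (2^32 - 5); far too small for real use
  G = 5

  def keypair():
      priv = secrets.randbelow(P - 2) + 1
      return priv, pow(G, priv, P)

  a_priv, a_pub = keypair()   # Alice
  b_priv, b_pub = keypair()   # Bob
  m_priv, m_pub = keypair()   # Mallory, sitting in the middle

  # Mallory intercepts each side's public value and substitutes her own,
  # so each victim unknowingly computes a key shared with Mallory.
  key_alice = pow(m_pub, a_priv, P)
  key_bob = pow(m_pub, b_priv, P)
  assert key_alice == pow(a_pub, m_priv, P)   # Mallory shares Alice's key
  assert key_bob == pow(b_pub, m_priv, P)     # ...and Bob's; she can read,
                                              # modify, and re-encrypt it all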

But it’s a real-life attack, too. Here’s a story of a woman who posts an ad requesting a nanny. When a potential nanny responds, she asks for references for a background check. Then she places another ad, using the reference material as a fake identity. She gets a job with the good references—they’re real, although for another person—and then robs the family who hires her. And then she repeats the process.

Look what’s going on here. She inserts herself in the middle of a communication between the real nanny and the real employer, pretending to be one to the other. The nanny sends her references to someone she assumes to be a potential employer, not realizing that it is a criminal. The employer receives the references and checks them, not realizing that they don’t actually belong to the person who is sending them.

It’s a nasty piece of crime.

<http://www.sfgate.com/cgi-bin/article.cgi?file=/…>


BeepCard

BeepCard is a technology company. They sell a sound authenticator for credit cards. The demo looks like a credit card—an actual credit card that passes all the credit card specs for bendability and reliability and everything—and contains a speaker and a sound chip. When you press a certain part of the card—the “button”—it spits out an audible 128-bit random string.

It’s a non-repeating string that’s reproduced in software at the other end, similar to a SecurID card, so an attacker can’t record one audible string and deduce the rest of them.
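BeepCard hasn’t published its design, but a counter-based one-time code, the general approach SecurID-style tokens use, gives the flavor. A minimal sketch, with all names and parameters assumed for illustration:

  import hmac, hashlib, struct

  def one_time_code(secret, counter):
      # A 128-bit code derived from a shared secret and a counter that
      # advances on every button press; recorded codes reveal nothing
      # about future ones.
      msg = struct.pack(">Q", counter)
      return hmac.new(secret, msg, hashlib.sha256).digest()[:16]

  def verify(secret, candidate, server_counter, window=10):
      # The server accepts any code within a small look-ahead window,
      # to tolerate button presses it never heard.
      for c in range(server_counter, server_counter + window):
          if hmac.compare_digest(one_time_code(secret, c), candidate):
              return c + 1          # resynchronized counter
      return None                   # reject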

This is perhaps the coolest security idea I’ve seen in a long time. They have a demo application where you go to a website and purchase something with a credit card. To authenticate the transaction, you have to put the card up to your computer’s microphone and press the button. The sound is captured using a Java or ActiveX control—no plug-in required—and acts as an authenticator. It proves that the person making the transaction has the card in his hands, and doesn’t just know the number. In credit-card language, it changes the transaction from “card not present” to “card present.”

Even cooler, they are making an enhancement to the system that also includes a microphone on the card. This system will require the user to speak a password into the card before pressing the button.

Why do I like this? It’s a physical authentication system that doesn’t require any special reader hardware. You can use it on a random computer at an Internet cafe. You can use it on a telephone. I can think of all sorts of really easy, really cool applications. If the price is cheap enough, BeepCard has a winner here.

<http://www.beepcard.com>


Bluetooth Privacy Hack

Seems that Bluetooth cell phones are vulnerable to snooping: not the conversations, the contents of the phones.

Adam Laurie, a security researcher in the U.K., discovered the flaw, and the London Times broke the story. The attack is called “Bluesnarfing,” and it allows a hacker to remotely download the contacts list, diary, and stored pictures from Bluetooth-enabled telephones.

I don’t have any of the technical details, but the implications are pretty scary. You could take your Bluetooth-enabled laptop into a trade show and download the contact lists from your rivals’ salespeople’s cell phones. You could walk into a congressional press conference and download information from the cell phones of congressmen. This story broke in the UK; I know that reporters would love to know who is on Tony Blair’s, or Prince Charles’s, speed dial.

It’s unclear how many phones are affected—whether this is a Bluetooth problem or an implementation problem in some Bluetooth phones—or whether the problem is fixable. But it’s a big problem. People treat cell phones like their wallets; they keep all kinds of sensitive information in them. For someone else to be able to download the contents of the phone remotely is disturbing.

News stories (paid access required):
<http://www.timesonline.co.uk/article/…>
<http://www.timesonline.co.uk/printFriendly/…>

My 2000 essay on Bluetooth security:
<http://www.schneier.com/crypto-gram-0008.html#8>


News

It’s possible—easy, even—to use Google to search for vulnerable computers to hack. Here are some of the possibilities:
<http://johnny.ihackstuff.com/index.php?…>

Once again, the U.S. Department of Interior has been told by the court to take its computers off the Internet because it cannot secure personal data:
<http://www.techweb.com/wire/story/TWB20040317S0005>

Worried about automatic cameras snapping a picture of your license plate as you speed through red lights? Here’s a company that sells a license plate cover that makes it hard to read at angles of more than five degrees. Not sure if it’s legal to use one, though.
<http://www.phantomplate.com>

A security breach at Equifax Canada leaked personal info about 1400 people. The criminals were “posing as legitimate credit grantors.” Sounds like a non-technical attack to me.
<http://www.cbc.ca/stories/2004/03/16/canada/…>
<http://www.ctv.ca/servlet/ArticleNews/story/CTVNews/…>
Equifax Canada’s response:
<http://www.equifax.com/EFX_Canada/…>

CSO Magazine grades government information security:
<http://www.csoonline.com/read/020104/grade.html>

Here’s a story of a man in the UK who flew to Italy and back, having his passport checked several times. However, he’d mistakenly picked up his wife’s passport. No one noticed.
<http://news.bbc.co.uk/1/hi/england/oxfordshire/…>

The four big accounting companies, plus an insurance company, are working together on a cybersecurity risk measurement framework. Could be interesting.
<http://www.computerworld.com/securitytopics/…>

Political activists are on the U.S. government “no fly” list:
<http://www.commondreams.org/headlines02/0927-01.htm>
<http://www.truthout.org/docs_04/020904E.shtml>

ATMs outfitted with equipment to steal debit card numbers and PINs:
<http://www.utexas.edu/admin/utpd/atm.html>

Interesting article about anti-counterfeiting technologies in banknotes. The author expresses surprise that the general public doesn’t check banknotes for authenticity very much. To me this is perfectly reasonable. It’s not in anyone’s best interest to find counterfeit money in his wallet. Any anti-counterfeiting technology that relies on citizens checking money for authenticity is going to fail, because people just won’t do it. Now if banks gave a 1.5x reward for counterfeit bills, citizens would get very good at spotting fake notes. But that would cause another set of problems.
<http://www.rbnz.govt.nz/currency/money/0116878.html>
Banks and governments need to detect counterfeits:
<http://www.bis.org/press/p040309.htm>
Different countries’ regulations on the use of images of money:
<http://www.rulesforuse.org>

A self-described psychic calls in a bomb tip and grounds an airplane. Guess you don’t have to be a terrorist to disrupt airline travel.
<http://www.cnn.com/2004/US/West/03/27/…>
<http://www.mercurynews.com/mld/mercurynews/8293055.htm>

Citing anonymous sources in the British intelligence community, the Sunday Times reported that an e-mail message intercepted by NSA spies precipitated a massive terrorism investigation.
<http://www.globetechnology.com/servlet/story/…>

Interesting interview with Gene Spafford:
<http://grep.law.harvard.edu/article.pl?sid=04/04/05/…>

Interesting, but long, essay on spam:
<http://www.colinfahey.com/spam_topics/…>

A couple of years ago I wrote about the security problems created by arming airplane pilots. Here’s an interesting related story: a sky marshal accidentally left her handgun in an airport bathroom…inside security. This isn’t a big deal. The chance that a terrorist would have found and used the gun is essentially zero. The chance that a random nutcase would have found and used the gun is larger, but still close enough to zero not to matter. But the story is a good lesson that even well-thought-out security systems fail sometimes.
<http://www.reuters.com/newsArticle.jhtml?…>


Virus Wars

We’re in the middle of a huge virus/worm epidemic. Dozens and dozens of different ones have been found in the past few weeks. Most are not new, but variants of others.

There seems to be an ongoing war between the people who write the Bagle worm and the people who write the Netsky worm. Many variants of each are running around the Internet, and more seem to be found all the time. Embedded in the different versions are comments and taunts aimed at the other side.

For example, this is what was found in Netsky.R: “Yes, true, you have understand it. Bagle is a shitty guy, he opens a backdoor, and he makes a lot of money. Netsky not, Netsky is Skynet, a good software, Good guys behind it. Believe me, or not. We will release thousands of our Skynet versions, as long as Bagle is there and the people… Thanks To Bruce Schneider. And to all people in cz and russia. Best regards. We are the only Skynet.”

(Yes, I used this as an example because I am mentioned.)

Unfortunately, this war shows no sign of ending. Presumably the various participants will eventually grow up and get tired, but until then we have to endure more of this nonsense.

News articles:
<http://www.itnews.com.au/storycontent.asp?…>
<http://slashdot.org/article.pl?sid=04/04/06/…>

Netsky.R information:
<http://www.sophos.com/virusinfo/analyses/…>

Bunches of variants described:
<http://www.f-secure.com/weblog/archives/…>


Comments from Readers

From: Todd Ellner <tellner cs.pdx.edu>
Subject: Thoughts on striking back

It’s always interesting when two things that one knows are in conflict with one another. Not always comfortable but always interesting.

Your essays on hack-back and similar schemes are right. Right morally and ethically, right technically. But there are parts missing, and I think it’s those absences which make me uneasy. Please bear with me here.

The first part is what you call “vigilantism.” You define it as any attempt by one party to harm another in response to a perceived wrong. You then combine it with the overlapping but most assuredly not identical idea of taking the law into one’s own hands.

Vigilantism, as practiced by the historical Committees of Vigilance, was a response to lawlessness in times and places where law enforcement was non-existent or incapable of responding to crimes. The vigilantes were groups of citizens. The idea of a “lone vigilante” is semantically null. Vigilantism was an expression of community will and mores, for better or worse.

Bernard Goetz was not a “vigilante”. He was a guy with a gun.

When my wife was growing up in East Africa there wasn’t a lot of money for modern policing. Very often if there were robbers operating in the area, the men of the village would get together in a group, track them down, and either deliver them to the nearest police or military officials or mete out punishment, usually in the form of beatings. That is vigilantism.

You have, I fear, conflated vigilantism—unofficial and informal justice handled directly by a community rather than by the legal system—with something else entirely: self-defense. Self-defense is not about punishment. It’s not about revenge. It’s not about taking on the legal system’s job. It’s about doing what is necessary at the moment to prevent harm to the innocent.

From what little I can tell from their site and e-mails to their sales people, Symbiot is run by a bunch of idiots. But what they seem to be trying to accomplish is self defense. Stop the person who is trying to hurt you by removing his capacity to do harm at the time he is trying to inflict damage.

The difference is important and has been a fixture of the law since time immemorial.

In my other life, the one where I don’t write software and administer Linux machines, I do a lot of things related to self-defense—teach women’s self defense courses, get abused women out of dangerous situations, do firearms instruction, get certified as an expert witness for self-protection and use of force cases, that sort of thing.

In the physical world the issues are pretty clear and easy to understand. You can hit him to stop him from hitting you. You can hold him until the police arrive. You cannot hit him if he’s not an immediate threat. You can’t keep hitting him once he’s no longer going to hit you. You certainly can’t put the boot into him an extra time just because he deserves it.

There’s another principle in the physical world and the psychological one—deterrence. To a great degree, physical security and protection are a communication process. As the defender, you have to communicate to the attacker in language he understands and believes that whatever he wants is not going to be worth what he will have to pay for it. If you’ve ever had cats or dogs you will know exactly what I am talking about.

So if we de-conflate the two issues I think there’s something legitimate going on. Deterrence rests on perceived costs. The standard security measures are pretty much entirely passive. They increase the time it will take before the criminal gets what he wants.

Criminal penalties help. Highly publicized trials with stiff fines or jail sentences will certainly make many black hats think twice. But the law, as any cop will tell you, doesn’t act. It reacts after the fact. And it doesn’t do anything about attacks that originate in foreign countries.

What I see people like Symbiot reacting to is not just the desire for revenge. It’s a recognition of what we all know deep down about self-protection and deterrence. They are addressing a real concern, however clumsily, and are aware (even if only implicitly) of what may be an irremediable weakness in electronic security.

From: Drew Johnson <a2johnson yahoo.co.uk>
Subject: Security Benefits of Centralisation

I thought it was a great idea to try to model the economics of centralising security (Crypto-Gram, 15 March 2004), but I would like to open some debate on your conclusion. In reality, security spending is constrained. With this in mind, I think the opposite is true:

Centralising is good!

Let’s reconsider the two scenarios, assuming I have $1000 to protect and a security budget of $10 to protect it with.

Scenario 1:

I go to a manufacturer and buy 10 $200-rated virtual vaults costing $1 each. I store $100 in each. Any attacker will need the cash flow to find $200 up front if they wish to launch an attack. After successfully attacking the first vault, the attacker is $100 in the red. However, as all the virtual vault computers have identical vulnerabilities, the same attack can be replayed at minimal marginal cost (e.g. $1). After attacking the 3rd vault, therefore, the attacker is $98 in the black.

Having attacked all 10 vaults the attacker would make a $791 profit. I lose $1010.

Scenario 2:

I spend my entire budget of $10 on one $500-rated virtual vault and store $1000 in it. (Note that this initially appears to be less value for money; each $1 only buys $50 of protection vs. $200 in scenario 1.) The barrier to entry for the attack has now been raised to $500. Any attacker will also have to consider the operational risks inherent in their attack. The $500-rated safe may have improved mechanisms to assist tracking and prosecution.

If successful the attacker would make a $500 profit whilst I again lose $1010.
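A quick check of the letter’s arithmetic, as a sketch in Python; all figures come from the two scenarios above:

  stored, budget = 1000, 10

  # Scenario 1: ten $200-rated vaults at $1 each, $100 in each vault.
  attack_cost, marginal_cost = 200, 1
  attacker_1 = 10 * 100 - attack_cost - 9 * marginal_cost
  print(attacker_1)             # 791: attacker nets $791

  # Scenario 2: one $500-rated vault at $10 holding all $1000.
  attacker_2 = stored - 500
  print(attacker_2)             # 500: attacker nets $500

  # Either way the defender is out the stored money plus the budget.
  print(stored + budget)        # 1010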

So how has centralising security (on a fixed budget) affected risk?

The impact is identical in both scenarios, but the likelihood has been decreased in scenario 2. The entry cost for the attack has increased (reducing the population of attackers), and the attacker’s motivation has been reduced: the profit is smaller and the risk of detection possibly higher.

I know how I’d spend my budget.

From: “A. L. Papadopoulos” <dp949 tutor.open.ac.uk>
Subject: V-ID

This system is open to abuse and incompetence. With regard to the former, I’m not convinced that everyone with access to potentially threatening information will refrain from using that information abusively, via blackmail, bigotry, red-lining and so forth. Abusive employees could create false negatives as well as false positives. This is borne out by the actual history of background-checking agencies in the UK, where there is currently a scandal about the hugely incompetent company that handles background checks. I haven’t found the page that details the case of teachers who were misidentified as criminals and refused jobs, so I won’t make a claim as to its veracity, but there are a variety of articles about the UK Criminal Records Bureau, and about an IT provider called Capita, all in a similar vein.

I think that these two factors—abuse and incompetence—are reason enough to scrap any system that relies on the strategies of the V-ID as reported.

From: “Bruce Kaalund” <bakaalund comcast.net>
Subject: I am not a terrorist cards

This is another case of the haves vs. the have-nots. The fact that someone has a card “verifying” that he is not a terrorist not only would be a vehicle for potential bad guys to slip into the system, it would also create a large number of suspects. Because you have to pay for the card, you run the risk of making the economically less-privileged into even bigger targets of the law enforcement community. You also include the people who are occasional travelers, or who only go to one football game a year, because of the cost of the ticket. As much as we may hate to talk about it, such a system would further relegate those who are ethnically, racially, socially and/or financially on the outside as potential terrorists. These people now become the 21st century members of those “without papers.” Those whose jobs require travel would get the card, because their company would reimburse them for the cost. These people tend to be in the more economically privileged status, which does cover ethnicity and racial categories, but not to the extent that it would be truly “color-blind.” I’m also sure that civil libertarians would raise grave concerns.

From: Nicholas Weaver <nweaver ICSI.Berkeley.EDU>
Subject: Microsoft Source Code and Security…

We must assume that a truly competent attacker already has access to the Windows source code. The Russian and Chinese governments have legitimate access, and therefore their intelligence services have access. A related thought exercise is how much it would cost a criminal organization to exfiltrate the latest copy of the source code from Microsoft.

Physical access seems an obvious one, and probably would take only a few-hundred-dollar bribe and a USB key handed to a janitor in order to gain a network toehold. Network attacks also seem a possibility, specifically IE attacks. Corrupt some major banner server and, rather than being indiscriminate, respond with a Trojan only to Microsoft-owned IP addresses. In either case, the risk of capture is reasonably low, the cost in time is measured in man-months or less, and the dollar cost negligible.

Thus, in all cases, the motto is clear: We MUST assume that truly bad guys have the latest Windows source code, if the bad guys think they would benefit from it. Not a happy thought, especially when combined with the observation that Windows is Critical Infrastructure.

From: “Ian D. Eccles” <ide101 psu.edu>
Subject: The Economics of Spam

In the February issue of Crypto-Gram, you printed an article titled “The Economics of Spam.” You make a good point about spammers responding to account shutdown by using stolen accounts. Because of this point, I strongly feel that Gates’ third point is perhaps his weakest. While forcing someone to pay for the e-mail they send may impose an incidental quality check on the message, it can only work if the person sending the e-mail is the one paying for it. Presumably, any mail-sending system that allows for billing would have some form of authentication (likely a username/password associated with a user’s account). While this may make stealing accounts more difficult, it certainly does not make it impossible, or even infeasible; passwords are stolen on a fairly regular basis.

So, if Alice’s e-mail account is hijacked by Bob, who then bulk e-mails the world about his new “natural male enhancement” product, Alice may be left footing the bill. The mass-messaging was still free for Bob, but not for Alice. The economics of spam seem worse off in this case, because Alice has paid for a service she did not use. Granted, because there is a bill involved, Alice is more likely to raise complaints, and maybe get some sort of investigation going to find the real culprit. However, if a spammer steals multiple accounts and spams only a little from each, costing each victim very little, the damage might go unnoticed. In this case, spamming is still free to the spammer.


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. Back issues are available on <http://www.schneier.com/crypto-gram.html>.

To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide. See <http://www.counterpane.com>.

Sidebar photo of Bruce Schneier by Joe MacInnis.