Crypto-Gram

February 15, 2005

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

Or you can read this issue on the web at <http://www.schneier.com/crypto-gram-0502.html>.

Schneier also publishes these same essays in his blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      TSA’s Secure Flight
      T-Mobile Hack
      Crypto-Gram Reprints
      Microsoft RC4 Flaw
      News
      Flying on Someone Else’s Airline Ticket
      Bank Sued for Unauthorized Transaction
      Counterpane News
      The Curse of the Secret Question
      Authentication and Expiration
      Comments from Readers


TSA’s Secure Flight

As I wrote last month, I am participating in a working group to study the security and privacy of Secure Flight, the U.S. government’s program to match airline passengers with a terrorist watch list. In the end, I signed the NDA allowing me access to SSI (Sensitive Security Information) documents, but managed to avoid filling out the paperwork for a SECRET security clearance.

Last month the group had its second meeting.

At this point, I have four general conclusions. One, assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement—in almost every way—over what is currently in place. (And by this I mean the matching program, not any potential uses of commercial or other third-party data.)

Two, the security system surrounding Secure Flight is riddled with security holes. There are security problems with false IDs, ID verification, the ability to fly on someone else’s ticket, airline procedures, etc. There are so many ways for a terrorist to get around the system that it doesn’t provide much security.

Three, the urge to use this system for other things will be irresistible. It’s just too easy to say: “As long as you’ve got this system that watches out for terrorists, how about also looking for this list of drug dealers…and by the way, we’ve got the Super Bowl to worry about too.” Once Secure Flight gets built, all it’ll take is a new law and we’ll have a nationwide security checkpoint system.

And four, a program of matching airline passengers with names on terrorism watch lists is not making us appreciably safer, and is a lousy way to spend our security dollars.

Unfortunately, Congress has mandated that Secure Flight be implemented, so it is unlikely that the program will be killed. And analyzing the effectiveness of the program in general, potential mission creep, and whether the general idea is a worthwhile one, is beyond the scope of the working group. In other words, my first conclusion is basically all that they’re interested in hearing.

But that means I can write about everything else.

To speak to my fourth conclusion: Imagine for a minute that Secure Flight is perfect. That is, we can ensure that no one can fly under a false identity, that the watch lists have perfect identity information, and that Secure Flight can perfectly determine if a passenger is on the watch list: no false positives and no false negatives. Even if we could do all that, Secure Flight wouldn’t be worth it.

Secure Flight is a passive system. It waits for the bad guys to buy an airplane ticket and try to board. If the bad guys don’t fly, it’s a waste of money. If the bad guys try to blow up shopping malls instead of airplanes, it’s a waste of money.

If I had some millions of dollars to spend on terrorism security, and I had a watch list of potential terrorists, I would spend that money investigating those people. I would try to determine whether or not they were a terrorism threat before they got to the airport, or even if they had no intention of visiting an airport. I would try to prevent their plot regardless of whether it involved airplanes. I would clear the innocent people, and I would go after the guilty. I wouldn’t build a complex computerized infrastructure and wait until one of them happened to wander into an airport. It just doesn’t make security sense.

That’s my usual metric when I think about a terrorism security measure: Would it be more effective than taking that money and funding intelligence, investigation, or emergency response—things that protect us regardless of what the terrorists are planning next? Money spent on security measures that only work against a particular terrorist tactic is largely wasted, because terrorists are adaptable.

My previous essay on the topic:
<https://www.schneier.com/blog/archives/2005/01/…>


T-Mobile Hack

For at least seven months last year, a hacker had access to T-Mobile’s customer network. He’s known to have accessed information belonging to 400 customers—names, Social Security numbers, voicemail messages, SMS messages, photos—and probably had the ability to access data belonging to any of T-Mobile’s 16.3 million U.S. customers. But in its fervor to report on the security of cell phones, and T-Mobile in particular, the media missed the most important point of the story: The security of much of our data is not under our control.

This is new. A dozen years ago, if someone wanted to look through your mail, they would have to break into your house. Now they can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your house; now it’s on a computer owned by a telephone company. Your financial data is on websites protected only by passwords. The list of books you browse, and the books you buy, is stored in the computers of some online bookseller. Your affinity card allows your supermarket to know what food you like. Data that used to be under your direct control is now controlled by others.

We have no choice but to trust these companies with our privacy, even though the companies have little incentive to protect that privacy. T-Mobile suffered some bad press for its lousy security, nothing more. It’ll spend some money improving its security, but it’ll be security designed to protect its reputation from bad PR, not security designed to protect the privacy of its customers.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as that data is held by others. The police need a warrant to read the e-mail on your computer, but they don’t need one to read it off the backup tapes at your ISP. According to the Supreme Court, that’s not a search as defined by the Fourth Amendment.

This isn’t a technology problem, it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant—even though it occurred at the phone company switching office—the Supreme Court must recognize that reading e-mail at an ISP is no different.

This essay will appear in eWeek.


Crypto-Gram Reprints

Crypto-Gram is currently in its eighth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.

Toward Universal Surveillance:
<http://www.schneier.com/crypto-gram-0402.html#1>

The Politicization of Security:
<http://www.schneier.com/crypto-gram-0402.html#2>

Identification and Security:
<http://www.schneier.com/crypto-gram-0402.html#6>

The Economics of Spam:
<http://www.schneier.com/crypto-gram-0402.html#9>

Militaries and Cyber-War:
<http://www.schneier.com/crypto-gram-0301.html#1>

The RMAC Authentication Mode:
<http://www.schneier.com/crypto-gram-0301.html#7>

Microsoft and “Trustworthy Computing”:
<http://www.schneier.com/crypto-gram-0202.html#1>

Judging Microsoft:
<http://www.schneier.com/crypto-gram-0202.html#2>

Hard-drive-embedded copy protection:
<http://www.schneier.com/crypto-gram-0102.html#1>

A semantic attack on URLs:
<http://www.schneier.com/crypto-gram-0102.html#7>

E-mail filter idiocy:
<http://www.schneier.com/crypto-gram-0102.html#8>

Air gaps:
<http://www.schneier.com/crypto-gram-0102.html#9>

Internet voting vs. large-value e-commerce:
<http://www.schneier.com/crypto-gram-0102.html#10>

Distributed denial-of-service attacks:
<http://www.schneier.com/crypto-gram-0002.html#ddos>

Recognizing crypto snake-oil:
<http://www.schneier.com/crypto-gram-9902.html#snakeoil>


Microsoft RC4 Flaw

One of the most important rules of stream ciphers is to never use the same keystream to encrypt two different documents. If someone does, you can break the encryption by XORing the two ciphertext streams together. The keystream drops out, and you end up with plaintext XORed with plaintext—and you can easily recover the two plaintexts using letter frequency analysis and other basic techniques.

It’s an amateur crypto mistake. The easy way to prevent this attack is to use a unique initialization vector (IV) in addition to the key whenever you encrypt a document.
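
To make the failure concrete, here is a minimal sketch in Python, using a textbook RC4 implementation. The document texts and names are invented, and hashing the key together with a unique IV is one standard way to derive a fresh per-document key, not a description of Office’s actual file format:

  import hashlib

  def rc4(key: bytes, data: bytes) -> bytes:
      """Textbook RC4: the KSA sets up the state; the PRGA XORs a keystream with data."""
      S = list(range(256))
      j = 0
      for i in range(256):  # key-scheduling algorithm (KSA)
          j = (j + S[i] + key[i % len(key)]) % 256
          S[i], S[j] = S[j], S[i]
      out = bytearray()
      i = j = 0
      for byte in data:  # pseudo-random generation algorithm (PRGA)
          i = (i + 1) % 256
          j = (j + S[i]) % 256
          S[i], S[j] = S[j], S[i]
          out.append(byte ^ S[(S[i] + S[j]) % 256])
      return bytes(out)

  key = b"the same document key"
  v1 = b"Attack at dawn on Tuesday."
  v2 = b"Retreat at dusk on Monday."  # an edited, re-saved version

  c1, c2 = rc4(key, v1), rc4(key, v2)  # the flaw: identical keystream both times

  # XOR the two ciphertexts: the keystream cancels, leaving plaintext XOR plaintext.
  assert bytes(a ^ b for a, b in zip(c1, c2)) == bytes(a ^ b for a, b in zip(v1, v2))

  # The fix: mix a unique IV into the key, so every save gets a fresh keystream.
  def encrypt_with_iv(key: bytes, iv: bytes, data: bytes) -> bytes:
      return rc4(hashlib.sha256(key + iv).digest(), data)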

Microsoft uses the RC4 stream cipher in both Word and Excel. And they make this mistake. According to a paper by Hongjun Wu: “In this report, we point out a serious security flaw in Microsoft Word and Excel. The stream cipher RC4 [9] with key length up to 128 bits is used in Microsoft Word and Excel to protect the documents. But when an encrypted document gets modified and saved, the initialization vector remains the same and thus the same keystream generated from RC4 is applied to encrypt the different versions of that document. The consequence is disastrous since a lot of information of the document could be recovered easily.”

This isn’t new. Microsoft made the same mistake in 1999 with RC4 in WinNT Syskey. Five years later, Microsoft has the same flaw in other products.

The report (PDF):
<http://eprint.iacr.org/2005/007.pdf>

Microsoft’s 1999 mistake:
<http://www.bindview.com/Support/RAZOR/Advisories/…>


News

This article from SIGNAL has some interesting quotes by me.
<http://www.afcea.org/signal/articles/anmviewer.asp?…>

The Department of Homeland Security is considering a biometric identification card for transportation workers. I’ve written extensively about the uses and abuses of biometrics (Beyond Fear, pages 197-200). The short summary is that biometrics are great as a local authentication tool and terrible as an identification tool. For a whole bunch of reasons, this DHS project is a good use of biometrics.
<http://www.dhs.gov/dhspublic/display?content=4119>

A scary story about American Airlines data collection:
<http://www.boingboing.net/2005/01/19/…>

The CIA has published a new report that attempts to identify emerging global issues that may require action by U.S. policy makers. It’s worth noting that there are many references to “privacy.”
<http://cia.gov/nic/NIC_2020_project.html>

Air marshals paralyzed by an inch of snow:
<http://www.washingtontimes.com/functions/print.php?…>

FBI retires Carnivore:
<http://www.securityfocus.com/news/10307>
Of course, they’re not giving up on Internet surveillance. They’ve just realized that commercial tools are better, cheaper, or both.

When we telephone a customer support line, we all hear the recording saying that the call may be monitored. What we don’t realize is that we may be monitored even when we’re on hold. Monitoring is intended to track the performance of call center operators, but the professional snoops are inadvertently listening in on callers, too. There’s an easy defense for those in offices and with full-featured phones: the “mute” button. But people believe their calls are being monitored “for quality or training purposes,” and assume that applies only to the part of the call where they’re actually talking to someone. Even easy defenses don’t work if people don’t know they have to use them.
<http://www.nytimes.com/2005/01/11/business/…>

A Rand study concluded that outfitting aircraft with missile defense is not a good security trade-off:
<http://abcnews.go.com/Technology/wireStory?id=441460>

RFID as automobile DNA:
<http://www.rfid2vin.com/>

Nice essay on election recounts:
<http://www.eff.org/deeplinks/archives/002222.php>

PS2 cheat codes hacked:
<http://www.aquick.org//2005/01/18/…>

“Election security chiefs in Iraq will set up decoy polling centres in an attempt to outwit insurgents who have vowed to target voters with suicide bombs and mortar rounds on Sunday.”
<http://news.scotsman.com/latest.cfm?id=4051297>
This is so ridiculous I have trouble believing it’s true. Voters have to know where to vote, right? This means one of two things. One, everyone knows the sites are decoys, so the insurgents know to avoid them. Or two, no one knows they’re decoys, voters flock to them, and it wouldn’t matter to the insurgents that they were decoys.

A great photo that illustrates the “weakest link” principle:
<http://www.syslog.com/~jwilson/pics-i-like/…>

Unconfirmed rumors of a virus that infects Lexus cars via Bluetooth.
<http://www.engadget.com/entry/1234000760029037/>

GovCon conference explores technologies for intelligence and terrorism prevention. There’s a track on “the technologies required to attain persistent surveillance and tailored persistence.”
<http://www.ncsi.com/govcon05/index.shtml>

In an attempt to protect us from terrorism, there are new restrictions on fertilizer sales in Kansas (and elsewhere):
<http://www.kansasagconnection.com/story-state.cfm?…>

A company is selling liquid with a unique identifier. The idea is for me to paint this stuff on my valuables as proof of ownership. I think a better idea would be for me to paint it on *your* valuables, and then call the police.
<http://www.smartwater.com/products/…>


Flying on Someone Else’s Airline Ticket

Recently, Slate published a method for anyone to fly on anyone else’s ticket.

I wrote about this exact vulnerability a year and a half ago. The vulnerability is obvious, but the general concepts are subtle. There are three things to authenticate: the identity of the traveler, the boarding pass, and the computer record. Think of them as three points of a triangle. Under the current system, the boarding pass is compared to the traveler’s identity document, and then the boarding pass is compared with the computer record. But because the identity document is never compared with the computer record—the third leg of the triangle—it’s possible to create two different boarding passes and have no one notice. That’s why the attack works.
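
A minimal sketch of the three checks, in Python with invented names, shows why the two comparisons that are performed never connect the identity document to the computer record:

  # The three things to authenticate; only two pairs of them are ever compared.
  id_document     = {"name": "Attacker"}
  fake_pass       = {"name": "Attacker"}  # printed at home to match the ID
  real_pass       = {"name": "Victim"}    # the boarding pass the airline issued
  computer_record = {"name": "Victim"}    # the reservation in the airline's system

  # Leg 1 (security checkpoint): boarding pass vs. identity document.
  assert fake_pass["name"] == id_document["name"]        # attacker shows the fake pass

  # Leg 2 (gate): boarding pass vs. computer record.
  assert real_pass["name"] == computer_record["name"]    # attacker shows the real pass

  # Leg 3 (never performed): identity document vs. computer record.
  assert id_document["name"] != computer_record["name"]  # the attack lives here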

Slate article:
<http://slate.msn.com/id/2113157/fr/rss/>

My previous essay:
<http://www.schneier.com/crypto-gram-0308.html#6>


Bank Sued for Unauthorized Transaction

An accountholder is suing his bank over $90,000 that was fraudulently transferred from his online account by someone who got hold of his password.

The typical press coverage of this story is along the lines of “Bank of America sued because customer’s PC was hacked.” But that’s not it. Bank of America is being sued because they allowed an unauthorized transaction to occur, and they’re not making good on that mistake. The transaction happened to occur because the customer’s PC was hacked.

I know nothing about the actual suit and its merits, but this is a problem that is not going away. And while I think that banks should not be held responsible for what’s on their customers’ machines, they should be held responsible for allowing unauthorized transactions to occur. The bank’s internal systems, however set up, for whatever reason, permitted the fraudulent transaction.

There is a simple economic incentive problem here. As long as the banks are not responsible for financial losses from fraudulent transactions over the Internet, banks have no incentive to improve security. But if banks are held responsible for these transactions, you can bet that they won’t allow such shoddy security.

<http://www.sun-sentinel.com/news/local/southflorida/…>


Counterpane News

Counterpane announced Enterprise Protection Suite 2.0, a comprehensive security service that includes our Managed Security Monitoring, carrier level protection, and e-mail scanning.
<http://www.counterpane.com/pr-20050215a.html>

Counterpane announced some really excellent 2004 results:
<http://www.counterpane.com/pr-20050201.html>

Gartner commented on the alliance between Counterpane and Getronics, including the opening of the new Counterpane/Getronics European SOC:
<http://www3.gartner.com/DisplayDocument?doc_cd=126055>

Schneier will be speaking at the RSA Conference in San Francisco. He’s on a panel on regulation (Wednesday morning at 8:00) and giving a solo talk on security thinking (Tuesday afternoon at 5:30).

Schneier will be speaking at the AAAS meeting in Washington DC on 2/21. The panel is called “Privacy and Security: Making Intelligent Tradeoffs.”

Counterpane is looking to hire an EMEA Systems Engineer:
<http://www.counterpane.com/jobs.html>


The Curse of the Secret Question

It’s happened to all of us: We sign up for some online account, choose a difficult-to-remember and hard-to-guess password, and are then presented with a “secret question” to answer. Twenty years ago, there was just one secret question: “What’s your mother’s maiden name?” Today, there are more: “What street did you grow up on?” “What’s the name of your first pet?” “What’s your favorite color?” And so on.

The point of all these questions is the same: a backup password. If you forget your password, the secret question can verify your identity so you can choose another password or have the site e-mail your current password to you. It’s a great idea from a customer service perspective—a user is less likely to forget his first pet’s name than some random password—but terrible for security. The answer to the secret question is much easier to guess than a good password, and the information is much more public. (I’ll bet the name of my family’s first pet is in some database somewhere.) And even worse, everybody seems to use the same series of secret questions.

The result is that the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.

What can one do? My usual technique is to type a completely random answer—I madly slap at my keyboard for a few seconds—and then forget about it. This ensures that an attacker can’t bypass my password by guessing the answer to my secret question, but it’s pretty unpleasant if I forget my password. The one time this happened to me, I had to call the company to get my password and question reset. (Honestly, I don’t remember how I authenticated myself to the customer service rep at the other end of the phone line.)

Which is maybe what should have happened in the first place. I like to think that if I forget my password, it should be really hard to gain access to my account. I want it to be so hard that an attacker can’t possibly do it. I know this is a customer service issue, but it’s a security issue too. And if the password is controlling access to something important—like my bank account—then the bypass mechanism should be harder, not easier.
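
For what it’s worth, the random-answer technique is easy to do more rigorously than keyboard-slapping. A minimal Python sketch, assuming you never want the secret question to work as a backup:

  import secrets

  def throwaway_answer(nbytes: int = 18) -> str:
      """A random answer that nobody, including you, can guess,
      effectively disabling the secret question as a backdoor."""
      return secrets.token_urlsafe(nbytes)

  print(throwaway_answer())  # paste it into the form, then forget it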

Passwords have reached the end of their useful life. Today, they only work for low-security applications. The secret question is just one manifestation of that fact.

This essay originally appeared on Computerworld:
<http://www.computerworld.com/securitytopics/…>


Authentication and Expiration

There’s a security problem with many Internet authentication systems that’s never talked about: there’s no way to terminate the authentication.

A couple of months ago, I bought something from an e-commerce site. At the checkout page, I wasn’t able to just type in my credit-card number and make my purchase. Instead, I had to choose a username and password. Usually I don’t like doing that, but in this case I wanted to be able to access my account at a later date. In fact, the password was useful because I needed to return an item I purchased.

Months have passed, and I no longer want an ongoing relationship with the e-commerce site. I don’t want a username and password. I don’t want them to have my credit-card number on file. I’ve received my purchase, I’m happy, and I’m done. But because that username and password have no expiration date associated with them, they never end. It’s not a subscription service, so there’s no mechanism to sever the relationship. I will have access to that e-commerce site for as long as it remembers that username and password.

In other words, I am liable for that account forever.

Traditionally, passwords have indicated an ongoing relationship between a user and some computer service. Sometimes it’s a company employee and the company’s servers. Sometimes it’s an account and an ISP. In both cases, both parties want to continue the relationship, so expiring a password and then forcing the user to choose another is a matter of security.

In cases with this ongoing relationship, the security consideration is damage minimization. Nobody wants some bad guy to learn the password, and everyone wants to minimize the amount of damage he can do if he does. Regularly changing your password is a solution to that problem.
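
As a sketch of that damage-minimization idea: a credential that carries its own age can be force-rotated. The field names and the 90-day policy below are assumptions for illustration, in Python:

  from datetime import datetime, timedelta

  MAX_PASSWORD_AGE = timedelta(days=90)  # assumed rotation policy

  account = {
      "username": "alice",
      "password_hash": "<hash>",
      "password_set_at": datetime(2004, 10, 1),
  }

  def password_expired(acct: dict, now: datetime) -> bool:
      """A stolen password stays useful only until the next forced rotation."""
      return now - acct["password_set_at"] > MAX_PASSWORD_AGE

  print(password_expired(account, datetime(2005, 2, 15)))  # True: time to rotate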

This approach works because both sides want it to; they both want to keep the authentication system working correctly, and minimize attacks.

In the case of the e-commerce site, the interests are much more one-sided. The e-commerce site wants me to live in their database forever. They want to market to me, and entice me to come back. They want to sell my information. (This is the kind of information that might be buried in the privacy policy or terms of service, but no one reads those because they’re unreadable. And all bets are off if the company changes hands.)

There’s nothing I can do about this, but a username and password that never expire is another matter entirely. The e-commerce site wants me to establish an account because it increases the chances that I’ll use them again. But I want a way to terminate the business relationship, a way to say: “I am no longer taking responsibility for items purchased using that username and password.”

Near as I can tell, the username and password I typed into that e-commerce site puts my credit card at risk until it expires. If the e-commerce site uses a system that debits amounts from my checking account whenever I place an order, I could be at risk forever. (The US has legal liability limits, but they’re not that useful. According to Regulation E, the electronic transfers regulation, a fraudulent transaction must be reported within two days to cap liability at US$50; within 60 days, it’s capped at $500. Beyond that, you’re out of luck.)
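
The liability schedule described above reduces to a simple step function. Here is my reading of it as a Python sketch; consult the regulation itself before relying on the numbers:

  from typing import Optional

  def reg_e_liability_cap(days_until_reported: int) -> Optional[float]:
      """Consumer liability cap in USD, per the summary above.
      None means effectively unlimited exposure."""
      if days_until_reported <= 2:
          return 50.0
      if days_until_reported <= 60:
          return 500.0
      return None  # beyond 60 days, you're out of luck

  assert reg_e_liability_cap(1) == 50.0
  assert reg_e_liability_cap(30) == 500.0
  assert reg_e_liability_cap(90) is None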

This is wrong. Every e-commerce site should have a way to purchase items without establishing a username and password. I like sites that allow me to make a purchase as a “guest,” without setting up an account.

But just as importantly, every e-commerce site should have a way for customers to terminate their accounts and should allow them to delete their usernames and passwords from the system. It’s okay to market to previous customers. It’s not okay to needlessly put them at financial risk.

This essay also appeared in the Jan/Feb 05 issue of IEEE Security & Privacy.


Comments from Readers

From: tom <somefellowjp yahoo.co.jp>
Subject: Fingerprinting Students

An item of feedback regarding the ID badge concept for children getting on the bus.

A very important consideration with regard to this system, which I didn’t see mentioned in your piece, is the sensitivity and specificity of the test; that is, the rate of false positives and false negatives. How likely is it that a badge registering as being on the bus means that a child has not been kidnapped? More importantly, in the event that the badge does not show up on the bus, how likely is it that the child has been lost or kidnapped?
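
This can be made quantitative with Bayes’ theorem; a Python sketch, with all three rates invented for illustration:

  # P(child actually taken | badge alarm), by Bayes' theorem.
  p_kidnap     = 1e-6  # prior: chance a given child is abducted on a given day
  sensitivity  = 0.99  # P(alarm | child actually taken)
  false_alarms = 0.05  # P(alarm | nothing wrong): lost, traded, forgotten badges

  p_alarm = sensitivity * p_kidnap + false_alarms * (1 - p_kidnap)
  ppv = sensitivity * p_kidnap / p_alarm

  print(f"{ppv:.5%}")  # about 0.002%: essentially every alarm is a false one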

It will be glaringly obvious to anyone with experience working with children that a large mass of children is completely incapable of keeping track of, and properly wearing, ID badges. The badges will be lost, found, traded amongst one another, discarded, etc. Friends will wear each other’s badges, kids will throw away their own badges and those of other kids they don’t like, they will attach the badge to a bag and forget/lose the badge, etc.

It is easy to predict that multiple alarms will go off every day over whether a child has been kidnapped or lost. Enforcement of the system would then predictably escalate, with the bus driver and school administrator giving all the children stern warnings every day to keep their ID badges on at all times when riding the bus. Children, being children (and—in this case—being entirely correct) will feel that the badges are an asinine and insulting measure, and will resent wearing them. This will exacerbate the problem of children not wearing badges.

Further, the registration of a badge as being on the bus would not mean that the child has actually gotten on the bus. A child playing hooky or getting intentionally lost will give their badge to a friend. Boyfriends and girlfriends will share badges. Friends will pool badges together on a common friend’s school bag.

The entire proposed system is quite preposterous. Among the various tests that a proposed security measure should pass to be considered worthwhile, this one doesn’t even pass the most basic criterion: “Would it be expected to actually assist with the problem in question?”

From: Jeremy Epstein <jeremy.epstein cox.net>
Subject: Fingerprinting Students

In addition to your points, if the security system is evaded, then the error rate may be high enough to induce extra cost in tracking down missing kids. That could dramatically increase the cost of the system. Kids probably wouldn’t maliciously evade the system, but what’s their incentive to get scanned in/out? The bus driver may not want to slow down his/her route so each kid can get scanned, so there may be that incentive *not* to scan. [The cited article says they use RFID, so kids don’t need to scan, but could evade it by holding their badge far enough from the reader… providing they’re carrying the badge.]

As for motivation (or “agenda” as you call it), there may well be a simpler one: the Department of Homeland Security is giving money for all sorts of weird “security enhancing” systems. While the article you cite gives no indication of where the money is coming from, this may be a freebie to the local school system: it lets the local officials look like they’re doing something, and doesn’t cost *them* anything. So from their perspective, why not do it?

As a parent of three kids in public schools, I’d sure rather have the money spent on teachers or libraries than kid tracking on buses!

From: Anonymous
Subject: Shutting Down the GPS/Cell Network

In your last CRYPTO-GRAM, under the “Shutting Down the GPS Network” article, you mention comments from people that there should be some way to shut down the mobile (cell) phone network in the event of a terrorist attack. Having worked in the telecoms industry, I know that this technology exists and has been operational in a number of countries for some years; however, the premise was never to combat terrorist activity but to assist the rescue services after a major incident.

If you have ever been caught in a train station or airport when there have been delays and you’ve tried to ring someone to tell them you are going to be late, you will have noticed that it can sometimes take several attempts to connect the call. This is because there are only a certain number of cells available at the base station to handle all the calls within an area. Under normal operation an area will cover several thousand people who make and receive calls at basically random intervals over certain time periods. The base stations are scoped to handle a number of calls from a certain percentage of the people at any one time, and for the most part this works fine. When an event affects a large number of people at once, like 300 people waiting for a delayed train/plane, the base station can very quickly reach full capacity, and this prevents people who are trying to call out from getting a cell. You know this, I know this, and journalists know this, and therein lies a problem.
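
Capacity planners model this blocking behavior with the Erlang B formula, which gives the probability that a new call finds no free channel. A Python sketch, with invented traffic figures:

  def erlang_b(offered_traffic: float, channels: int) -> float:
      """Blocking probability via the standard Erlang B recurrence."""
      b = 1.0
      for m in range(1, channels + 1):
          b = offered_traffic * b / (m + offered_traffic * b)
      return b

  CHANNELS = 30  # assumed capacity of one base station
  print(erlang_b(20.0, CHANNELS))   # normal load: roughly 1% of calls blocked
  print(erlang_b(300.0, CHANNELS))  # 300 stranded travellers dialing: ~90% blocked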

When a major event occurs, journalists flock to the site to get stories for their respective broadcast organisations. These organisations are very competitive and each tries to be the ‘first on the ground’ with the ‘breaking headline’. In practice this means they get a journalist on site as fast as possible and they call back to the office on their mobile to phone in their story. They know how the cell network system works and once they successfully get a connection back to their office they never hang up as they may not get a connection in 15 minutes’ time when they have more news to report. Very quickly the area, and the cell network, fill up with journalists and this prevents people such as rescue services from using the cell network.

To counteract this, there is a system within some cell phone networks to restrict calls to designated phones in an emergency. This list usually contains the IMEI and SIM numbers of phones that have been issued to the emergency and security services. If and when this system is invoked, it immediately limits the cell network to those phones; in most implementations this can be done at the base station level, so that those outside the area of the incident are not affected.
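
In outline, the admission-control logic might look like the following Python sketch; the identifiers and structure are illustrative guesses, not any carrier’s actual implementation:

  # (IMEI, SIM) pairs issued to the emergency and security services; values invented.
  EMERGENCY_ALLOWLIST = {
      ("490154203237518", "8944110000000000000"),
  }

  def admit_call(imei: str, sim: str,
                 emergency_invoked: bool, in_incident_area: bool) -> bool:
      """When the emergency system is invoked, only allowlisted phones
      get a channel, and only on base stations covering the incident."""
      if not emergency_invoked or not in_incident_area:
          return True  # normal service everywhere else
      return (imei, sim) in EMERGENCY_ALLOWLIST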

This does prevent other people from calling home to say they are OK, but this system is only for real emergencies, and in such incidents the priority is to protect and rescue the injured. Contacting home is a nice-to-have; being alive to call later is more important.

From: Mace Moneta <mmoneta optonline.net>
Subject: Linux vs Microsoft Security

I think you have oversimplified the Linux hacking issue. I think there is a philosophy and ethic involved as well. Hackers are the new-age anarchists, going after “the man.” Linux is created, maintained, and distributed by the “little guy,” for free. You only pay for support, not the software that comprises a typical Linux distribution. Even though companies are making billions of dollars off Linux-related services, Linux itself is ethically “clean.” The other aspect is that many hacking activities are performed for points/karma/recognition within the hacker community. Since Linux is open source, that same public recognition can be obtained via white hat activities—reporting and identifying security weaknesses.

Many people have said that once Linux becomes as big as Windows, it will be hacked more too. I don’t believe that. As long as Linux is free (beer + speech), and development is open, Linux will not be targeted. Look at Linus Torvalds’ recent comments on the handling of security notifications: <http://kerneltrap.org/node/4540>. By keeping them open, those that discover a flaw obtain immediate widespread public recognition. That will have greater benefit than many in the Linux community appreciate. Complete transparency keeps Linux off the hacker target list. There are no “points” for shooting fish in a barrel.

From: Jon Tullett <Jon.Tullett haynet.com>
Subject: UK’s Chip and PIN

Chip and PIN is a good thing in that it cuts down on skimming. (However, criminals will remain criminals and if one avenue becomes hard to exploit, they’ll do something else. So it isn’t going to stop crime, or even the specific criminals.)

But, skimming aside, how good is it? I’m ignoring CNP transactions here.

In computer security (as you know, of course), we’ve got the notion of two-factor security. Something you know/have/are, pick two. Bank cards already have that—the physical card and a (weak) biometric in the form of a signature. We still have that with chip and PIN—the physical card and a weak password.

And in computer security we know all about how weak passwords are. People write them down and reuse them. Someone with a multitude of bank cards is almost certainly going to keep a record of the PINs in their wallet (i.e., steal the wallet and you steal the PINs), and is going to use the same PIN for multiple cards (i.e., shoulder-surf one ATM transaction and you have broken every card). And, of course, you now have to type in your PIN _in_public_ every time you use the card.
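
To put numbers on it: even a uniformly random four-digit PIN, which real PINs are not, carries under 14 bits of entropy. A quick Python check, with an eight-character alphanumeric password as the comparison point:

  import math

  pin_space      = 10 ** 4  # four decimal digits
  password_space = 62 ** 8  # eight characters of [A-Za-z0-9]

  print(math.log2(pin_space))       # ~13.3 bits, at best
  print(math.log2(password_space))  # ~47.6 bits, and even that is considered weak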

So we’ve swapped a weak biometric for a tremendously weak password. How is that an improvement?

I also have to wonder, since we have the mechanisms and experience of handling signatures with cards, why signatures have been phased out. Why not use chip and PIN _and_ a signature? After all, it takes only a second, and every cardholder and every merchant is already familiar with the signature process.

One obvious reason I can see is to cover the collective asses of the banks and merchants. The signature is useful for repudiation, as the banks must produce on demand the signed slip for a transaction I dispute. If the signature is visibly fake, I get a refund from the merchant, facilitated by the bank. Merchants and banks don’t like this, for obvious reasons.

But how do I prove it was not me who entered the PIN? Haven’t I, in fact, just lost that facility to repudiate a transaction? If my bank card is stolen during one weekly shop and I only report it missing when I notice at the /next/ weekly shop, transactions in that (unreported) time will not be protected.

So skimming reductions aside, it seems that we’ve swapped a biometric for a password, and removed an important mechanism protecting users. A net gain for banks and merchants from the point of view of their exposure and risk, but a net loss to the consumer.

Am I missing something obvious? This is being punted as such a win for the cardholders, but it doesn’t really look that way. On the other hand, I know the reductions in skimming have been positive in many countries, so I’m not opposed per se. Just sceptical.

From: Jim Reid <jim rfc1035.com>
Subject: UK’s Chip and PIN

Your comments about the introduction of chip and PIN by UK credit card companies are a little unfair. Yes, there is an advertising campaign which suggests cardholders can use easily guessed PINs. However, this is really meant to explain to the public how chip and PIN works and to allay fears from those who are uncomfortable using PINs. It’s largely a hearts-and-minds exercise. In that context, a simplified explanation of PIN selection helps. [Try explaining good key management practices in a newspaper or TV advert.] When PINs are issued for the new cards, they come with a leaflet explaining how the PIN can be changed. This generally includes good advice about selecting a PIN that’s easily remembered but not readily guessed: e.g., the birthday or phone number of a family member, not your own; don’t use the same PIN as you use for your ATM card; etc.

BTW, the new cards still have a magnetic strip and signature on the back. When the PIN is used, there’s no sales voucher to sign and check. I’m not sure if this is a good thing or not.

From: Matthew Rubenstein <email mattruby.com>
Subject: Terrorism vs. Sabotage

Cyberterrorism: I was hoping you’d make your usual insightful distinctions among the essential factors of so-called “terrorism.” Blowing up dams, nuke plants, office towers, or other essential or big infrastructure, in itself, is *sabotage*. Terrorism, like any other “ism,” is a practiced belief, not just a practice. Sabotage without an ideology, as you point out in different terms, is “merely” crime. But the important distinction between sabotage and terrorism, important in stopping it, is the practice itself. Terrorism is dependent on the value of *terror*, which can be communicated only in the media—including word of mouth. Plane-bombing the World Trade Center is terrorism, because of the endless media exposure its message will get. Not just because it’s big, killed a lot of people, was important to NYC’s economy—but because it will get scary pictures on TV for years, and people won’t stop talking about it.

We need a lot more sophistication in our media to survive the Terror War intact. A lot more than we need even security upgrades. Bin Laden and his ilk play our media like a fiddle, costing billions, dividing and destroying our society. The initial blast is a necessary fuse they light. But the bomb is our naive, self-undermining media. I’ve given your books, and recommendation of them, to network news execs and City government officials here in NYC and elsewhere. I’d like to hear more from you that focuses on the media vulnerabilities to terrorism, where we need the most help in rethinking our system.

From: “Kimberly Allen” <kimall mindspring.com>
Subject: Cyberwar

In your piece on cyberwar, you point out some of the unique qualities of cyberattacks despite the fact that their names are simply other attacks (war, terrorism, crime, vandalism) with the prefix cyber-. While I agree that it’s useful to think in terms of cyberattacks as we work out how to understand the potential threats, countermeasures, and strategic utility of them, I believe that ultimately these items should not be separated into their own “cybercategory.”

Cyberwar is part of war, not part of some new, general category called cyberattacks. The people who study war know all about trench warfare, jungle-based guerilla combat, desert warfare, surgical air strikes, etc., and now they will have to add cyberwar to their bailiwick. Those who work on terrorism will have to do likewise with cyberterrorism, and the same for cybercrime and cybervandalism. It’s just one more means to add to the portfolio.

Granted, in the early stages of understanding the issue, it may be useful to talk about cyberattacks as a category. It is most efficient for people from many departments to come together to understand a new means of attack. But ultimately, they will take this knowledge back to their own departments, where it will morph and specialize into the kind of detailed information they really need and can use. I predict that over time, the understanding of cyberwar will actually diverge from the understanding of cyberterrorism because the people holding this knowledge will go back to being embedded in their own departments. And this is not entirely bad; the detailed refinement of cyberwar knowledge cannot happen if it is constantly being synchronized with cyberterrorism knowledge.


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.

Sidebar photo of Bruce Schneier by Joe MacInnis.