Crypto-Gram

May 15, 2006

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
http://www.schneier.com
http://www.counterpane.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0605.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      Movie Plot Threat Contest: Status Report
      Who Owns Your Computer?
      Crypto-Gram Reprints
      Identity-Theft Disclosure Laws
      When “Off” Doesn’t Mean Off
      News
      RFID Cards and Man-in-the-Middle Attacks
      Software Failure Causes Airport Evacuation
      Counterpane News
      Microsoft’s BitLocker
      The Security Risk of Special Cases
      Comments from Readers

Movie Plot Threat Contest: Status Report

On the first of last month, I announced my (possibly First) Movie-Plot Threat Contest.

“Entrants are invited to submit the most unlikely, yet still plausible, terrorist attack scenarios they can come up with.

“Your goal: cause terror. Make the American people notice. Inflict lasting damage on the U.S. economy. Change the political landscape, or the culture. The more grandiose the goal, the better.

“Assume an attacker profile on the order of 9/11: 20 to 30 unskilled people, and about $500,000 with which to buy skills, equipment, etc.”

As of the end of the month, the blog post has 782 comments. I expected a lot of submissions, but the response has blown me away.

Looking over the different terrorist plots, they seem to fall into several broad categories. The first category consists of attacks against our infrastructure: the food supply, the water supply, the power infrastructure, the telephone system, etc. The idea is to cripple the country by targeting one of the basic systems that make it work.

The second category consists of big-ticket plots. Either they have very public targets—blowing up the Super Bowl, the Oscars, etc.—or they have high-tech components: nuclear waste, anthrax, chlorine gas, a full oil tanker, etc. And they are often complex and hard to pull off. This is the 9/11 idea: a single huge event that affects the entire nation.

The third category consists of low-tech attacks that go on and on. Several people imagined a version of the DC sniper scenario, but with multiple teams. The teams would slowly move around the country, perhaps each team starting up after the previous one was captured or killed. Other people suggested a variant of this with small bombs in random public locations around the country.

(There’s a fourth category: actual movie plots. Some entries are comical, unrealistic, have science fiction premises, etc. I’m not even considering those.)

The better ideas tap directly into public fears. In my book, Beyond Fear, I discussed five different tendencies people have to exaggerate risks: to believe that something is more risky than it actually is.

1. People exaggerate spectacular but rare risks and downplay common risks.

2. People have trouble estimating risks for anything not exactly like their normal situation.

3. Personified risks are perceived to be greater than anonymous risks.

4. People underestimate risks they willingly take and overestimate risks in situations they can’t control.

5. People overestimate risks that are being talked about and remain an object of public scrutiny.

The best plot ideas leverage one or more of those tendencies. Big-ticket attacks leverage the first. Infrastructure and low-tech attacks leverage the fourth. And every attack tries to leverage the fifth, especially those attacks that go on and on. I’m willing to bet that when I find a winner, it will be the plot that leverages the greatest number of those tendencies to the best possible advantage.

I also got a bunch of e-mails from people with ideas they thought too terrifying to post publicly. Some of them wouldn’t even tell me what the ideas were. I also received e-mails from people accusing me of helping the terrorists by giving them ideas.

But if there’s one thing this contest demonstrates, it’s that good terrorist ideas are a dime a dozen. Anyone can figure out how to cause terror. The hard part is execution.

Some of the submitted plots require minimal skill and equipment. Twenty guys with cars and guns—that sort of thing. Reading through them, you have to wonder why there have been no terrorist attacks in the U.S. since 9/11. I don’t believe the “flypaper theory” that the terrorists are all in Iraq instead of in the U.S. And despite all the ineffectual security we’ve put in place since 9/11, I’m sure we have had some successes in intelligence and investigation—and have made it harder for terrorists to operate both in the U.S. and abroad.

But mostly, I think terrorist attacks are much harder than most of us think. It’s harder to find willing recruits than we think. It’s harder to coordinate plans. It’s harder to execute those plans. Terrorism is rare, and for all we’ve heard about 9/11 changing the world, it’s still rare.

The submission deadline was the end of April, but please keep posting plots if you think of them. And please read through some of the others and comment on them; I’m curious as to what other people think are the most interesting, compelling, realistic, or effective scenarios.

I’m reading through them, and will have a winner by the next Crypto-Gram.

Contest:
https://www.schneier.com/blog/archives/2006/04/…

Flypaper theory:
http://en.wikipedia.org/wiki/…

The contest made The New York Times:
http://www.nytimes.com/2006/04/23/movies/…


Who Owns Your Computer?

When technology serves its owners, it is liberating. When it is designed to serve others, over the owner’s objection, it is oppressive. There’s a battle raging on your computer right now—one that pits you against worms and viruses, Trojans, spyware, automatic update features and digital rights management technologies. It’s the battle to determine who owns your computer.

You own your computer, of course. You bought it. You paid for it. But how much control do you really have over what happens on your machine? Technically you might have bought the hardware and software, but you have less control over what it’s doing behind the scenes.

Using the hacker sense of the term, your computer is “owned” by other people.

It used to be that only malicious hackers were trying to own your computers. Whether through worms, viruses, Trojans or other means, they would try to install some kind of remote-control program onto your system. Then they’d use your computers to sniff passwords, make fraudulent bank transactions, send spam, initiate phishing attacks and so on. Estimates are that somewhere between hundreds of thousands and millions of computers are members of remotely controlled “bot” networks. Owned.

Now, things are not so simple. There are all sorts of interests vying for control of your computer. There are media companies that want to control what you can do with the music and videos they sell you. There are companies that use software as a conduit to collect marketing information, deliver advertising or do whatever it is their real owners require. And there are software companies that are trying to make money by pleasing not only their customers, but other companies they ally themselves with. All these companies want to own your computer.

Some examples:

1. Entertainment software: In October 2005, it emerged that Sony had distributed a rootkit with several music CDs—the same kind of software that crackers use to own people’s computers. This rootkit secretly installed itself when the music CD was played on a computer. Its purpose was to prevent people from doing things with the music that Sony didn’t approve of: It was a DRM system. If the exact same piece of software had been installed secretly by a hacker, this would have been an illegal act. But Sony believed that it had legitimate reasons for wanting to own its customers’ machines.

2. Antivirus: You might have expected your antivirus software to detect Sony’s rootkit. After all, that’s why you bought it. But initially, the security programs sold by Symantec and others did not detect it, because Sony had asked them not to. You might have thought that the software you bought was working for you, but you would have been wrong.

3. Internet services: Hotmail allows you to blacklist certain e-mail addresses, so that mail from them automatically goes into your spam trap. Have you ever tried blocking all that incessant marketing e-mail from Microsoft? You can’t.

4. Application software: Internet Explorer users might have expected the program to incorporate easy-to-use cookie handling and pop-up blockers. After all, other browsers do, and users have found them useful in defending against Internet annoyances. But Microsoft isn’t just selling software to you; it sells Internet advertising as well. It isn’t in the company’s best interest to offer users features that would adversely affect its business partners.

5. Spyware: Spyware is nothing but someone else trying to own your computer. These programs eavesdrop on your behavior and report back to their real owners—sometimes without your knowledge or consent—about your behavior.

6. Update: Automatic update features are another way software companies try to own your computer. While they can be useful for improving security, they also require you to trust your software vendor not to disable your computer for nonpayment, breach of contract or other presumed infractions.

Adware, software-as-a-service and Google Desktop search are all examples of some other company trying to own your computer. And Trusted Computing will only make the problem worse.

There is an inherent insecurity to technologies that try to own people’s computers: They allow individuals other than the computers’ legitimate owners to enforce policy on those machines. These systems invite attackers to assume the role of the third party and turn a user’s device against him.

Remember the Sony story: The most insecure feature in that DRM system was a cloaking mechanism that gave the rootkit control over whether you could see it executing or spot its files on your hard disk. By taking ownership away from you, it reduced your security.

If left to grow, these external control systems will fundamentally change your relationship with your computer. They will make your computer much less useful by letting corporations limit what you can do with it. They will make your computer much less reliable because you will no longer have control of what is running on your machine, what it does, and how the various software components interact. At the extreme, they will transform your computer into a glorified boob tube.

You can fight back against this trend by only using software that respects your boundaries. Boycott companies that don’t honestly serve their customers, that don’t disclose their alliances, that treat users like marketing assets. Use open-source software—software created and owned by users, with no hidden agendas, no secret alliances and no back-room marketing deals.

Just because computers were a liberating force in the past doesn’t mean they will be in the future. There is enormous political and economic power behind the idea that you shouldn’t truly own your computer or your software, despite having paid for it.

This essay originally appeared on Wired.com.
http://www.wired.com/news/columns/1,70802-0.html

Trusted computing:
http://www.schneier.com/crypto-gram-0208.html#1


Crypto-Gram Reprints

Crypto-Gram is currently in its ninth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram-back.html>. These are a selection of articles that appeared in this calendar month in other years.

REAL-ID
http://www.schneier.com/crypto-gram-0505.html#2

Should Terrorism be Reported in the News?
http://www.schneier.com/crypto-gram-0505.html#3

Combating Spam
http://www.schneier.com/crypto-gram-0505.html#15

Warrants as a Security Countermeasure
http://www.schneier.com/crypto-gram-0405.html#1

National Security Consumers
http://www.schneier.com/crypto-gram-0405.html#9

Encryption and Wiretapping
http://www.schneier.com/crypto-gram-0305.html#1

Unique E-Mail Addresses and Spam
http://www.schneier.com/crypto-gram-0305.html#6

Secrecy, Security, and Obscurity
http://www.schneier.com/crypto-gram-0205.html#1

Fun with Fingerprint Readers
http://www.schneier.com/crypto-gram-0205.html#5

What Military History Can Teach Network Security, Part 2
http://www.schneier.com/crypto-gram-0105.html#1

The Futility of Digital Copy Protection
http://www.schneier.com/crypto-gram-0105.html#3

Security Standards
http://www.schneier.com/crypto-gram-0105.html#7

Safe Personal Computing
http://www.schneier.com/crypto-gram-0105.html#8

Computer Security: Will we Ever Learn?
http://www.schneier.com/crypto-gram-0005.html#1

Trusted Client Software
http://www.schneier.com/crypto-gram-0005.html#6

The IL*VEYOU Virus (Title bowdlerized to foil automatic e-mail filters.)
http://www.schneier.com/crypto-gram-0005.html#ilyvirus

The Internationalization of Cryptography
http://www.schneier.com/…

The British discovery of public-key cryptography
http://www.schneier.com/crypto-gram-9805.html#nonsecret


Identity-Theft Disclosure Laws

California was the first state to pass a law requiring companies that keep personal data to disclose when that data is lost or stolen. Since then, many states have followed suit. Now Congress is debating federal legislation that would do the same thing nationwide.

Except that it won’t do the same thing: The federal bill has become so watered down that it won’t be very effective. I would still be in favor of it—a poor federal law is better than none—if it didn’t also pre-empt more-effective state laws, which makes it a net loss.

Identity theft is the fastest-growing area of crime. It’s badly named—your identity is the one thing that cannot be stolen—and is better thought of as fraud by impersonation. A criminal collects enough personal information about you to be able to impersonate you to banks, credit card companies, brokerage houses, etc. Posing as you, he steals your money, or takes a destructive joyride on your good credit.

Many companies keep large databases of personal data that is useful to these fraudsters. But because the companies don’t shoulder the cost of the fraud, they’re not economically motivated to secure those databases very well. In fact, if your personal data is stolen from their databases, they would much rather not even tell you: Why deal with the bad publicity?

Disclosure laws force companies to make these security breaches public. This is a good idea for three reasons. One, it is good security practice to notify potential identity theft victims that their personal information has been lost or stolen. Two, statistics on actual data thefts are valuable for research purposes. And three, the potential cost of the notification and the associated bad publicity naturally leads companies to spend more money on protecting personal information—or to refrain from collecting it in the first place.

Think of it as public shaming. Companies will spend money to avoid the PR costs of this shaming, and security will improve. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.

This public shaming needs the cooperation of the press and, unfortunately, there’s an attenuation effect going on. The first major breach after California passed its disclosure law—SB1386—was in February 2005, when ChoicePoint sold personal data on 145,000 people to criminals. The event was all over the news, and ChoicePoint was shamed into improving its security.

Then LexisNexis exposed personal data on 300,000 individuals. And Citigroup lost data on 3.9 million individuals. SB1386 worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. After a while, it was no longer news. And when the press stopped reporting, the “cost” of these breaches to the companies declined.

Today, the only real cost that remains is the cost of notifying customers and issuing replacement cards. It costs banks about $10 to issue a new card, and that’s money they would much rather not have to spend. This is the agenda they brought to the federal bill, cleverly titled the Data Accountability and Trust Act, or DATA.

Lobbyists attacked the legislation in two ways. First, they went after the definition of personal information. Only the exposure of very specific information requires disclosure. For example, the theft of a database that contained people’s first *initial*, middle name, last name, Social Security number, bank account number, address, phone number, date of birth, mother’s maiden name and password would not have to be disclosed, because “personal information” is defined as “an individual’s first and last name in combination with …” certain other personal data.
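
As a rough illustration of how narrow a definition like that is, here is a hypothetical check in Python. The field names and the trigger rule are simplifications made up for the example, not the bill’s actual language.

# Hypothetical sketch of the disclosure trigger: notification is required
# only when a FULL first name plus last name appears alongside another
# sensitive field.
def requires_disclosure(record):
    has_full_name = bool(record.get("first_name")) and bool(record.get("last_name"))
    sensitive_fields = ("ssn", "bank_account", "drivers_license")
    has_sensitive = any(record.get(f) for f in sensitive_fields)
    return has_full_name and has_sensitive

# A stolen record containing everything except the full first name --
# a first initial only -- falls outside the definition.
stolen_record = {
    "first_initial": "J",
    "middle_name": "Quincy",
    "last_name": "Public",
    "ssn": "000-12-3456",
    "bank_account": "12345678",
    "date_of_birth": "1970-01-01",
    "mothers_maiden_name": "Doe",
    "password": "hunter2",
}
print(requires_disclosure(stolen_record))  # False: no notification required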

Second, lobbyists went after the definition of “breach of security.” The latest version of the bill reads: “The term ‘breach of security’ means the unauthorized acquisition of data in electronic form containing personal information that establishes a reasonable basis to conclude that there is a significant risk of identity theft to the individuals to whom the personal information relates.”

Get that? If a company loses a backup tape containing millions of individuals’ personal information, it doesn’t have to disclose if it believes there is no “significant risk of identity theft.” If it leaves a database exposed, and has absolutely no audit logs of who accessed that database, it could claim it has no “reasonable basis” to conclude there is a significant risk. Actually, the company could point to an ID Analytics study that showed the probability of fraud for someone who has been the victim of this kind of data loss to be less than 1 in 1,000—which is not a “significant risk”—and then not disclose the data breach at all.

Even worse, this federal law pre-empts the 23 existing state laws—and others being considered—many of which contain stronger individual protections. So while DATA might look like a law protecting consumers nationwide, it is actually a law protecting companies with large databases *from* state laws protecting consumers.

So in its current form, this legislation would make things worse, not better.

Of course, things are in flux. They’re *always* in flux. The language of the bill has changed regularly over the past year, as various committees got their hands on it. There’s also another bill, HR3997, which is even worse. And even if something passes, it has to be reconciled with whatever the Senate passes, and then voted on again. So no one really knows what the final language will look like.

But the devil is in the details, and the only way to protect us from lobbyists tinkering with the details is to ensure that the federal bill does not pre-empt any state bills: that the federal law is a minimum, but that states can require more.

That said, disclosure is important, but it’s not going to solve identity theft. As I’ve written previously, the reason theft of personal information is so common is that the data is so valuable. The way to mitigate the risk of fraud due to impersonation is not to make personal information harder to steal, it’s to make it harder to use.

Disclosure laws only deal with the economic externality of data brokers protecting your personal information. What we really need are laws prohibiting credit card companies and other financial institutions from granting credit to someone using your name with only a minimum of authentication.

But until that happens, we can at least hope that Congress will refrain from passing bad bills that override good state laws—and helping criminals in the process.

California’s SB 1386:
http://info.sen.ca.gov/pub/01-02/bill/sen/…
Existing state disclosure laws:
http://www.pirg.org/consumer/credit/statelaws.htm
http://www.cwalsh.org/cgi-bin/blosxom.cgi/2006/04/…

HR 4127 – Data Accountability and Trust Act:
http://thomas.loc.gov/cgi-bin/query/C?c109:./temp/…

HR 3997:
http://thomas.loc.gov/cgi-bin/query/C?c109:./temp/…

ID Analytics study:
http://www.idanalytics.com/news_and_events/20051208.htm

My essay on identity theft:
https://www.schneier.com/blog/archives/2005/04/…

A version of this essay originally appeared on Wired.com:
http://www.wired.com/news/columns/0,70690-0.html


When “Off” Doesn’t Mean Off

According to the specs for Nintendo’s new Wii game machine, “Wii can communicate with the Internet even when the power is turned off.” Nintendo accentuates the positive: “This WiiConnect24 service delivers a new surprise or game update, even if users do not play with Wii,” while ignoring the possibility that Nintendo can deactivate a game if it chooses to do so, or that someone else can deliver a different—not so wanted—surprise.

We all know that, but what’s interesting here is that Nintendo is changing the meaning of the word “off.” We are all conditioned to believe that “off” means off, and therefore safe. But in Nintendo’s case, “off” really means something like “on standby.” If users expect the Nintendo Wii to be truly off, they need to pull the power plug—assuming there isn’t a battery foiling that tactic. There seems to be no way to disconnect the Internet, as the Nintendo Wii is wireless only.

Maybe there is no way to turn the Nintendo Wii off.

There’s a serious security problem here, made worse by a bad user interface. “Off” should mean off.

http://wii.nintendo.com/hardware.html


News

It’s a provocative headline: “Triple DES Upgrades May Introduce New ATM Vulnerabilities.” Basically, at the same time ATM owners are upgrading their encryption to triple-DES, they’re also moving the communications links from dedicated lines to the Internet. And while the protocol encrypts PINs, it doesn’t encrypt any of the other information, such as card numbers and expiration dates. So it’s the move from dedicated lines to the Internet that’s adding the insecurities, not the triple-DES upgrades.
http://www.paymentsnews.com/2006/04/…

Someone filed change-of-address forms with the post office to divert other people’s mail to himself. 170 times. “Postal Service spokeswoman Patricia Licata said a credit card is required for security reasons. ‘We have systems in place to prevent this type of occurrence,’ she said, but declined further comment on the specific case until officials have time to analyze what happened.” Sounds like those systems don’t work very well.
http://www.wvec.com/news/local/stories/…

A deniable file system:
https://www.schneier.com/blog/archives/2006/04/…

Great hoax video: graffiti on Air Force One:
http://www.stillfree.com/
http://abcnews.go.com/Technology/wireStory?id=1875386

The Department of Homeland Security has released a Request for Proposal—that’s the document asking industry if anyone can do what it wants—for the Secure Border Initiative.
http://www.washingtontechnology.com/news/1_1/…

Stuntz and Solove Debate Privacy and Transparency
http://www.tnr.com/user/nregi.mhtml?…
http://www.concurringopinions.com/archives/2006/04/…
http://www.tnr.com/user/nregi.mhtml?…
http://www.concurringopinions.com/archives/2006/04/…

Terrorist travel advisory: “My son and I woke up Sunday morning and drove a rented truck to New York City to move his worldly goods into an apartment there. As we made it to the Holland Tunnel, after traveling the Tony Soprano portion of the Jersey Turnpike with a blue moon in our eyes, the woman in the tollbooth informed us that, since 9/11, trucks were not allowed in the tunnel; we’d have to use the Lincoln Tunnel, she said. So if you are a terrorist trying to get into New York from Jersey, be advised that you’re going to have to use the Lincoln Tunnel.”
http://www.post-gazette.com/pg/06110/683563-294.stm

The Kryptos Sculpture is located in the center of the CIA Headquarters in Langley, VA. It was designed in 1990, and contains a four-part encrypted puzzle. The first three parts have been solved, but now we’ve learned that the second-part solution was wrong and has been re-solved:
http://www.elonka.com/kryptos/…
http://www.wired.com/news/technology/0,70701-0.html
More on the sculpture:
http://en.wikipedia.org/wiki/Kryptos
http://www.elonka.com/kryptos/
Blog entry URL:
https://www.schneier.com/blog/archives/2006/04/…

Mafia boss secures his data with a Caesar cipher.
http://dsc.discovery.com/news/briefs/20060417/…
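
For anyone who has never seen one: a Caesar cipher just shifts every letter a fixed number of places, so all 25 possible shifts can be tried by hand or in a few lines of code. A quick sketch with a made-up message (not the actual ciphertext from the case):

import string

ALPHABET = string.ascii_uppercase

def caesar(text, shift):
    # Shift each letter by 'shift' positions; leave everything else alone.
    out = []
    for ch in text.upper():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)
    return "".join(out)

ciphertext = caesar("MEET AT THE SAFE HOUSE", 3)  # made-up plaintext, shift of 3
for shift in range(1, 26):
    # Trying every possible key takes a fraction of a second.
    print(shift, caesar(ciphertext, -shift))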

Microsoft Vista’s endless security warnings:
http://www.winsupersite.com/reviews/…
The problem with lots of warning dialog boxes is that they don’t provide security. Users stop reading them. They think of them as annoyances, as an extra click required to get a feature to work. Clicking through gets embedded into muscle memory, and when it actually matters the user won’t even realize it.
http://www.codinghorror.com/blog/archives/000571.html
http://west-wind.com/weblog/posts/4678.aspx
These dialog boxes are not security for the user, they’re CYA security *from* the user. When some piece of malware trashes your system, Microsoft can say: “You gave the program permission to do that; it’s not our fault.” Warning dialog boxes are only effective if the user has the ability to make intelligent decisions about the warnings. If the user cannot do that, they’re just annoyances. And they’re annoyances that don’t improve security.
http://s.zdnet.com/Ou/?p=209

Digital cameras have unique fingerprints:
http://www.eurekalert.org/pub_releases/2006-04/…
Interesting research, but there’s one important aspect of this fingerprint that the article did not talk about: how easy is it to forge? Can someone analyze 100 images from a given camera, and then doctor a pre-existing picture so that it appears to come from that camera? My guess is that it can be done relatively easily.
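
As a toy illustration of the idea—the underlying technique is usually called PRNU fingerprinting—here is a numpy sketch that estimates a simulated camera’s fixed noise pattern by averaging the high-frequency residue of many photos; a forger could then add that estimate to an unrelated image. The simulation, the filter choices, and the numbers are all assumptions made for illustration; real extraction and detection are considerably more sophisticated.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Pretend sensor fingerprint: a fixed per-pixel noise pattern unique to one camera.
fingerprint = rng.normal(0.0, 2.0, size=(64, 64))

def shoot(scene):
    # Simulated photo: scene content + fixed pattern noise + random shot noise.
    return scene + fingerprint + rng.normal(0.0, 5.0, size=scene.shape)

def smooth_scene():
    # Smooth synthetic scene, standing in for ordinary photo content.
    return gaussian_filter(rng.uniform(0, 255, size=(64, 64)), sigma=8)

# Average the high-frequency residue (image minus a blurred copy of itself)
# over many photos; scene content and shot noise average away, the
# fingerprint does not.
residuals = []
for _ in range(100):
    img = shoot(smooth_scene())
    residuals.append(img - gaussian_filter(img, sigma=2))
estimate = np.mean(residuals, axis=0)

print("correlation with the true fingerprint:",
      np.corrcoef(estimate.ravel(), fingerprint.ravel())[0, 1])
# Adding 'estimate' to a foreign image would make it correlate with this
# camera's fingerprint -- the forgery the article doesn't consider.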

Kaspersky Labs reports on extortion scams using malware:
http://www.viruslist.com/en/analysis?…
Among other worms, the article discusses the GpCode.ac worm, which encrypts data using 56-bit RSA (no, that’s not a typo). The whole article is interesting reading.
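
To put “56-bit RSA” in perspective: a modulus that small can be factored in well under a second on an ordinary PC, which means the private key—and with it the encrypted files—can be recovered trivially. A sketch using sympy and a freshly generated toy modulus (not the worm’s actual key):

from sympy import randprime, factorint, mod_inverse

# Build a toy ~56-bit RSA modulus (two ~28-bit primes), then break it.
p = randprime(2**27, 2**28)
q = randprime(2**27, 2**28)
n, e = p * q, 65537

# Factoring a 56-bit number takes a fraction of a second.
factors = factorint(n)                    # {p: 1, q: 1}
fp, fq = sorted(factors)
d = mod_inverse(e, (fp - 1) * (fq - 1))   # recovered private exponent

message = 424242
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n) == message)   # True: key fully recovered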

Larry Beinhart makes an interesting case for the elimination of most government secrecy.
http://www.buzzflash.com/contributors/06/04/…
He has a good argument, although I think the issue is a bit more complicated.
http://www.schneier.com/crypto-gram-0205.html#1

“Security Myths and Passwords,” by Gene Spafford:
http://www.cerias.purdue.edu/weblogs/spaf/general/…

There was a code in the judge’s ruling on the Da Vinci Code plagiarism case. It was solved way too quickly after it was discovered, because the judge gave out some really obvious hints. But you can read about it here:
https://www.schneier.com/blog/archives/2006/04/…

As an aside, I am mentioned in The Da Vinci Code. No, really. Page 199 of the American hardcover edition. “Da Vinci had been a cryptography pioneer, Sophie knew, although he was seldom given credit. Sophie’s university instructors, while presenting computer encryption methods for securing data, praised modern cryptologists like Zimmermann and Schneier but failed to mention that it was Leonardo who had invented one of the first rudimentary forms of public key encryption centuries ago.” That’s right. I am a realistic background detail.
http://fishbowl.pastiche.org/2004/07/06/house_of_cards

Technology Review has an interesting article discussing some of the technologies used by the NSA in its warrantless wiretapping program, some of them from the killed Total Information Awareness (TIA) program.
http://www.technologyreview.com/read_article.aspx?…

John Dvorak argues that Internet Explorer was Microsoft’s greatest mistake ever. Certainly its decision to tightly integrate IE with the operating system—done as an anti-competitive maneuver against Netscape during the Browser Wars—has resulted in some enormous security problems that Microsoft has still not recovered from. Not even with the introduction of IE7.
http://www.pcmag.com/print_article2/…

Security in comics: attackers are adaptable:
http://www.comics.com/comics/hedge/archive/…

We’ve talked about counterfeit money, counterfeit concert tickets, counterfeit police credentials, and counterfeit police departments. Here’s a story about a counterfeit company.
http://www.iht.com/articles/2006/04/27/business/nec.php

Verizon has announced that it has activated the Access Overload Control (ACCOLC) system, allowing some cell phones to have priority access to the network, even when the network is overloaded. Sounds like you’re going to have to enter some sort of code into your handset. I wonder how long before someone hacks that system.
http://www.pcsintel.com/content/view/1293/0/

An arson squad blows up a news rack, mistaking a promotion for Tom Cruise’s new movie for a bomb. Really; you can’t make this kind of stuff up.
http://www.editorandpublisher.com/eandp/news/…

Assault weapon that passes through X-ray machines.
http://www.promoinnovations.com/xray.htm

A man sues Compaq for false advertising. He bought the computer because it was advertised as totally secure. But after he committed some crimes and the FBI got his computer, they were able to recover his data. This is what I said in the article: “Unfortunately, this probably isn’t a great case. Here’s a man who’s not going to get much sympathy. You want a defendant who bought the Compaq computer, and then, you know, his competitor, or a rogue employee, or someone who broke into his office, got the data. That’s a much more sympathetic defendant.”
http://hartfordadvocate.com/gbase/News/content?…

Infant identity theft victim:
http://www.abcnews.go.com/US/story?id=155878&page=1

An improv group in New York dressed up like Best Buy employees and went into a store, secretly videotaping the results. My favorite part: “Security guards and managers started talking to each other frantically on their walkie-talkies and headsets. ‘Thomas Crown Affair! Thomas Crown Affair!,’ one employee shouted. They were worried that we were using our fake uniforms to stage some type of elaborate heist. ‘I want every available employee out on the floor RIGHT NOW!'”
http://www.improveverywhere.com/mission_view.php?…

Stealing cars with laptops:
http://www.leftlanenews.com/2006/05/03/…
http://slashdot.org/articles/06/05/03/1928256.shtml

The rapper MC Plus+ has written a song about cryptography, “Alice and Bob.” It mentions DES, AES, Blowfish, RSA, SHA-1, and more. And me!
http://www.cs.purdue.edu/homes/anavabi/mp3/…
Here’s an article about “geeksta rap.”
http://www.wired.com/news/culture/0,1284,67970,00.html

The DHS secretly shares European air passenger data in violation of agreement:
http://www.aclu.org/privacy/spying/…

Shell has suspended its chip-and-pin payment system in the UK, after fraudsters stole over one million pounds. Lots of details on my blog:
https://www.schneier.com/blog/archives/2006/05/…

According to this article, the ultimate terrorist threat is flying robot drones. The article really pegs the movie-plot threat hype-meter.
http://www.physorg.com/news66197469.html

A reporter finds an old British Airways boarding pass, and proceeds to use it to find everything else about the person.
http://www.guardian.co.uk/g2/story/0,,1766138,00.html
Notice the economic pressures: “‘The problem here is that a commercial organisation is being given the task of collecting data on behalf of a foreign government, for which it gets no financial reward, and which offers no business benefit in return,’ says Laurie. ‘Naturally, in such a case, they will seek to minimise their costs, which they do by handing the problem off to the passengers themselves. This has the neat side-effect of also handing off liability for data errors.'”

Five stories of RFID hacking:
http://www.wired.com/wired/archive/14.05/rfid.html

And IBM thinks it has a solution: a removable tag that reduces the range of the RFID chip:
http://wired.com/news/technology/0,70793-0.html
Why not disable it entirely?

Serious computer problems inside the NSA:
http://www.baltimoresun.com/news/custom/attack/…
Meanwhile, the NSA is building a massive traffic-analysis database on Americans’ calling patterns:
http://www.usatoday.com/news/washington/…
http://www.prospect.org/weblog/2006/05/…
http://glenngreenwald.blogspot.com/2006/05/…
http://www.orinkerr.com/2006/05/11/…
http://www.orinkerr.com/2006/05/12/…

Major vulnerability found in Diebold election machines. This one is a big deal.
http://www.insidebayarea.com/ci_3805089
http://www.blackboxvoting.org/BBVtsxstudy.pdf

Comparing the security of election machines with the security of slot machines:
http://www.washingtonpost.com/wp-dyn/content/…

Thief disguises himself as a museum guard and tricks employees into giving him 200,000 euros:
http://today.reuters.com/news/articlenews.aspx?…

Fascinating first-person account of being on the TSA’s watch list:
http://arstechnica.com/news.ars/post/20060506-6767.html

Reconceptualizing national intelligence:
http://www.fas.org//secrecy/2006/05/…

Public-key cryptography for digital notarization in Pennsylvania.
http://www.nationalnotary.org/news/index.cfm?…
http://www.eweek.com/article2/0,1895,1955701,00.asp


RFID Cards and Man-in-the-Middle Attacks

Recent articles about a proposed US-Canada and US-Mexico travel document (kind of like a passport, but less useful), with an embedded RFID chip that can be read up to 25 feet away, have once again made RFID security newsworthy.

My views have not changed. The most secure solution is a smart card that only works in contact with a reader; RFID is much more risky. But if we’re stuck with RFID, the combination of shielding for the chip, basic access control security measures, and some positive action by the user to get the chip to operate is a good one. The devil is in the details, of course, but those are good starting points.

And when you start proposing chips with a 25-foot read range, you need to worry about man-in-the-middle attacks. An attacker could potentially impersonate the card of a nearby person to an official reader, just by relaying messages to and from that nearby person’s card.

Here’s how the attack would work. In this scenario, customs Agent Alice has the official card reader. Bob is the innocent traveler, in line at some border crossing. Mallory is the malicious attacker, ahead of Bob in line at the same border crossing, who is going to impersonate Bob to Alice. Mallory’s equipment includes an RFID reader and transmitter.

Assume that the card has to be activated in some way. Maybe the cover has to be opened, or the card taken out of a sleeve. Maybe the card has a button to push in order to activate it. Also assume the card has some challenge-reply security protocol and an encrypted key exchange protocol of some sort.

1. Alice’s reader sends a message to Mallory’s RFID chip.

2. Mallory’s reader/transmitter receives the message, and rebroadcasts it to Bob’s chip. (Bob is somewhere else, out of Alice’s range.)

3. Bob’s chip responds normally to a valid message from Alice’s reader. He has no way of knowing that Mallory relayed the message.

4. Mallory’s reader/transmitter receives Bob’s message and rebroadcasts it to Alice. Alice has no way of knowing that the message was relayed.

5. Mallory continues to relay messages back and forth between Alice and Bob.

Defending against this attack is hard. (I talk more about the attack in Applied Cryptography, Second Edition, page 109.) Time stamps don’t help. Encryption doesn’t help. It works because Mallory is simply acting as an amplifier. Mallory might not be able to read the messages. He might not even know who Bob is. But he doesn’t care. All he knows is that Alice thinks he’s Bob.

Precise timing can catch this attack, because of the extra delay that Mallory’s relay introduces. But I don’t think this is part of the spec.
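
Here is a minimal simulation of the relay, using a made-up challenge-response protocol and a round-trip time bound standing in for whatever the real spec might use. The point it illustrates is the one above: the relayed responses are cryptographically perfect, and only the added latency gives Mallory away.

import hmac, hashlib, os, time

SHARED_KEY = b"key-known-to-alice-and-bobs-card"   # made-up protocol

class Card:                       # Bob's RFID card
    def respond(self, challenge):
        # The card proves knowledge of the key; Mallory never learns it.
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

class Relay:                      # Mallory: reader plus transmitter
    def __init__(self, victim_card, delay=0.002):
        self.victim, self.delay = victim_card, delay
    def respond(self, challenge):
        time.sleep(self.delay)    # radio hop to Bob and back adds latency
        return self.victim.respond(challenge)

def reader_accepts(card, max_rtt=0.001):
    challenge = os.urandom(16)
    start = time.perf_counter()
    response = card.respond(challenge)
    rtt = time.perf_counter() - start
    valid = hmac.compare_digest(
        response, hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest())
    # The relayed answer is cryptographically correct; only timing differs.
    return valid and rtt <= max_rtt

bob = Card()
print(reader_accepts(bob))          # True: genuine card, fast answer
print(reader_accepts(Relay(bob)))   # False, but only because of the time bound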

The attack can be easily countered if Alice looks at Mallory’s card and compares the information printed on it with what she’s receiving over the RFID link. But near as I can tell, the point of the 25-foot read distance is so cards can be authenticated in bulk, from a distance.

According to the news.com article: “Homeland Security has said, in a government procurement notice posted in September, that ‘read ranges shall extend to a minimum of 25 feet’ in RFID-equipped identification cards used for border crossings. For people crossing on a bus, the proposal says, ‘the solution must sense up to 55 tokens.’”

If Mallory is on that bus, he can impersonate any nearby Bob who activates his RFID card early. And at a crowded border crossing, the odds of some Bob doing that are pretty good.

From the Federal Computer Week article: “If that were done, the PASS system would automatically screen the cardbearers against criminal watch lists and put the information on the border guard’s screen by the time the vehicle got to the station, Williams said.”

And would predispose the guard to think that everything’s okay, even if it isn’t.

I don’t think people are thinking this one through.

http://news.com.com/…
http://www.fcw.com/article94113-04-18-06-Web

My views on RFID identity cards:
https://www.schneier.com/blog/archives/2005/08/…


Software Failure Causes Airport Evacuation

Last month I wrote about airport passenger screening, and mentioned that the X-ray equipment inserts “test” bags into the stream in order to keep screeners more alert. That system failed pretty badly earlier this week at Atlanta’s Hartsfield-Jackson Airport, when a false alarm resulted in a two-hour evacuation of the entire airport.

The screening system injects test images onto the screen. Normally the software flashes the words “This is a test” on the screen after a brief delay, but this time the software failed to indicate that. The screener noticed the image (of a “suspicious device,” according to CNN) and, per procedure, screeners manually checked the bags on the conveyor belt for it. They couldn’t find it, of course, but they evacuated the airport and spent two hours vainly searching for it.

Hartsfield-Jackson is the country’s busiest passenger airport. It’s Delta’s hub city. The delays were felt across the country for the rest of the day.

Okay, so what went wrong here? Clearly the software failed. Just as clearly the screener procedures didn’t fail—everyone did what they were supposed to do.

What is less obvious is that the system failed. It failed, because it was not designed to fail well. A small failure—in this case, a software glitch in a single X-ray machine—cascaded in such a way as to shut down the entire airport. This kind of failure magnification is common in poorly designed security systems. Better would be for there to be individual X-ray machines at the gates—I’ve seen this design at several European airports—so that when there’s a problem the effects are restricted to that gate.

Of course, this distributed security solution would be more expensive. But I’m willing to bet it would be cheaper overall, taking into account the cost of occasionally clearing out an airport.

http://www.cnn.com/2006/US/04/20/atlanta.airport/…

What I wrote last month:
https://www.schneier.com/blog/archives/2006/03/…


Counterpane News

On May 23, Schneier will be opening a new speaking series by the ACLU with a talk on “The Future of Privacy.”
http://www.aclu.org/privacy/25551res20060512.html

Schneier will be speaking at the Gartner IT Security Summit in Washington DC, June 5-7:
http://www.gartner.com/2_events/conferences/sec12.jsp

Schneier will be speaking at the ACLU New Jersey Membership Conference:
https://www.aclu-nj.org/events/…

Schneier will be speaking at the ACLU Vermont Privacy Conference:
http://www.acluvt.org/news/display.php?…

Tipping Point is offering Managed Security Services through an alliance with Counterpane:
http://www.counterpane.com/pr-20060501.html


Microsoft’s BitLocker

BitLocker Drive Encryption is a new security feature in Windows Vista, designed to work with the Trusted Platform Module (TPM). Basically, it encrypts the C drive with a computer-generated key. In its basic mode, an attacker can still access the data on the drive by guessing the user’s password, but would not be able to get at the drive by booting the machine with another operating system, or by removing the drive and attaching it to another computer.

There are several modes for BitLocker. In the simplest mode, the TPM stores the key and the whole thing happens completely invisibly. The user does nothing differently, and notices nothing different.

The BitLocker key can also be stored on a USB drive. Here, the user has to insert the USB drive into the computer during boot. Then there’s a mode that uses a key stored in the TPM and a key stored on a USB drive. And finally, there’s a mode that uses a key stored in the TPM and a four-digit PIN that the user types into the computer. This happens early in the boot process, when there’s still ASCII text on the screen.

Note that if you configure BitLocker with a USB key or a PIN, password guessing doesn’t work. BitLocker doesn’t even let you get to a password screen to try.

For most people, basic mode is the best. People will keep their USB key in their computer bag with their laptop, so it won’t add much security. But if you can force users to attach it to their key chains—remember that you only need the key to boot the computer, not to operate the computer—and convince them to go through the trouble of sticking it in their computer every time they boot, then you’ll get a higher level of security.

There is a recovery key: optional but strongly encouraged. It is automatically generated by BitLocker, and it can be sent to some administrator or printed out and stored in some secure location. There are ways for an administrator to set group policy settings mandating this key.

There aren’t any back doors for the police, though.

You can get BitLocker to work in systems without a TPM, but it’s kludgy. You can only configure it for a USB key. And it only will work on some hardware: because BitLocker starts running before any device drivers are loaded, the BIOS must recognize USB drives in order for BitLocker to work.

Encryption particulars: The default data encryption algorithm is AES-128-CBC with an additional diffuser. The diffuser is designed to protect against ciphertext-manipulation attacks, and is independently keyed from AES-CBC so that it cannot damage the security you get from AES-CBC. Administrators can select the disk encryption algorithm through group policy. Choices are 128-bit AES-CBC plus the diffuser, 256-bit AES-CBC plus the diffuser, 128-bit AES-CBC, and 256-bit AES-CBC. (My advice: stick with the default.) The key management system uses 256-bit keys wherever possible. The only place where a 128-bit key limit is hard-coded is the recovery key, which is 48 digits (including checksums). It’s shorter because it has to be typed in manually; typing in 96 digits will piss off a lot of people—even if it is only for data recovery.
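
For readers who want to see the basic shape of sector-level AES-CBC, here is a stripped-down sketch using Python’s “cryptography” package. It derives a per-sector IV from the sector number and omits the diffuser, the TPM, and all of BitLocker’s key management, so treat it as an illustration of the mode rather than of BitLocker itself.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512
volume_key = os.urandom(16)   # 128-bit key, standing in for the volume key

def sector_iv(sector_number):
    # Toy per-sector IV: just the sector number, padded to 16 bytes.
    # (BitLocker derives its per-sector tweaks differently; this only
    # ensures that identical sectors encrypt differently.)
    return sector_number.to_bytes(16, "big")

def encrypt_sector(plaintext, sector_number):
    assert len(plaintext) == SECTOR_SIZE
    enc = Cipher(algorithms.AES(volume_key),
                 modes.CBC(sector_iv(sector_number))).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt_sector(ciphertext, sector_number):
    dec = Cipher(algorithms.AES(volume_key),
                 modes.CBC(sector_iv(sector_number))).decryptor()
    return dec.update(ciphertext) + dec.finalize()

sector = os.urandom(SECTOR_SIZE)            # pretend disk sector
blob = encrypt_sector(sector, 12345)
assert decrypt_sector(blob, 12345) == sector
print(len(blob))   # still 512 bytes: sector size is unchanged on disk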

So, does this destroy dual-boot systems? Not really. If you have Vista running, then set up a dual boot system, BitLocker will consider this sort of change to be an attack and refuse to run. But then you can use the recovery key to boot into Windows, then tell BitLocker to take the current configuration—with the dual boot code—as correct. After that, your dual boot system will work just fine, or so I’ve been told. You still won’t be able to share any files on your C drive between operating systems, but you will be able to share files on any other drive.

The problem is that it’s impossible to distinguish between a legitimate dual boot system and an attacker trying to use another OS—whether Linux or another instance of Vista—to get at the volume.

BitLocker is not a DRM system. However, it is straightforward to turn it into a DRM system. Simply give programs the ability to require that files be stored only on BitLocker-enabled drives, and then only be transferable to other BitLocker-enabled drives. How easy this would be to implement, and how hard it would be to subvert, depends on the details of the system.

BitLocker is also not a panacea. But it does mitigate a specific but significant risk: the risk of attackers getting at data on drives directly. It allows people to throw away or sell old drives without worry. It allows people to stop worrying about their drives getting lost or stolen. It stops a particular attack against data.

Right now BitLocker is only in the Ultimate and Enterprise editions of Vista. It’s a feature that is turned off by default. It is also Microsoft’s first TPM application. Presumably it will be enhanced in the future: allowing the encryption of other drives would be a good next step, for example.

http://www.microsoft.com/technet/windowsvista/…
http://www.microsoft.com/technet/windowsvista/…
Niels Ferguson on back doors:
http://s.msdn.com/si_team/archive/2006/03/02/…

BitLocker and dual boot systems:
http://www.theregister.co.uk/2006/04/27/…
http://arstechnica.com/journals/microsoft.ars/2006/…


The Security Risk of Special Cases

In Beyond Fear, I wrote about the inherent security risks of exceptions to a security policy. Here’s an example, from airport security in Ireland.

Police officers are permitted to bypass airport security at the Dublin Airport. They flash their ID, and walk around the checkpoints.

“A female member of the airport search unit is undergoing re-training after the incident in which a Department of Transport inspector passed unchecked through security screening.

“It is understood that the department official was waved through security checks having flashed an official badge. The inspector immediately notified airport authorities of a failure in vetting procedures. Only gardai are permitted to pass unchecked through security.”

There are two ways this failure could have happened. One, the security person could have thought that Department of Transport officials have the same privileges as police officers. And two, the security person could have thought she was being shown a police ID.

This could have just as easily been a bad guy showing a fake police ID. My guess is that the security people don’t check them all that carefully.

The meta-point is that exceptions to security are themselves security vulnerabilities. As soon as you create a system by which some people can bypass airport security checkpoints, you invite the bad guys to try and use that system. There are reasons why you might want to create those alternate paths through security, of course, but the trade-offs should be well thought out.

http://archives.tcm.ie/businesspost/2006/04/16/…


Comments from Readers

There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.

Copyright (c) 2006 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.