Crypto-Gram

October 15, 2005

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0510.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      Phishing
      Major Security at a Minor Ferry
      DUI Cases Thrown Out Due to Closed-Source Breathalyzer
      Crypto-Gram Reprints
      Automatic License Plate Scanners
      NSA Watch
      Terrorism Laws Used to Stifle Political Speech
      News
      Jamming Aircraft Navigation Near Nuclear Power Plants
      Secure Flight Working Group Report
      The Doghouse: CryptIt
      Counterpane News
      Hurricane Security and Airline Security Collide
      Tax Breaks for Good Security
      Forging Low-Value Paper Certificates
      Judge Roberts, Privacy, and the Future


Phishing

Earlier this month, California became the first state to enact a law specifically addressing phishing. Phishing, for those of you who have been away from the Internet for the past few years, is when an attacker sends you an e-mail that falsely claims to be from a legitimate business, in order to trick you into giving away your account info—passwords, mostly. When this is done by hacking DNS, it’s called pharming.

Financial companies have until now avoided taking on phishers in a serious way, because it’s cheaper and simpler to pay the costs of fraud. That’s unacceptable, however, because consumers who fall prey to these scams pay a price that goes beyond financial losses, in inconvenience, stress and, in some cases, blots on their credit reports that are hard to eradicate. As a result, lawmakers need to do more than create new punishments for wrongdoers—they need to create tough new incentives that will effectively force financial companies to change the status quo and improve the way they protect their customers’ assets. Unfortunately, the California law does nothing to address this.

The new legislation was enacted because phishing is a new crime. But the law won’t help, because phishing is just a tactic. Criminals phish in order to get your passwords, so they can make fraudulent transactions in your name. The real crime is an ancient one: financial fraud.

These attacks prey on the gullibility of people. This distinguishes them from worms and viruses, which exploit vulnerabilities in computer code. In the past, I’ve called these attacks examples of “semantic attacks” because they exploit human meaning rather than computer logic. The victims are people who get e-mails and visit websites, and generally believe that these e-mails and websites are legitimate.

These attacks take advantage of the inherent unverifiability of the Internet. Phishing and pharming are easy because authenticating businesses on the Internet is hard. While it might be possible for a criminal to build a fake bricks-and-mortar bank in order to scam people out of their signatures and bank details, it’s much easier for the same criminal to build a fake website or send a fake e-mail. And while it might be technically possible to build a security infrastructure to verify both websites and e-mail, both the cost and user unfriendliness mean that it’d be a solution only for the geekiest of Internet users.

These attacks also leverage the inherent scalability of computer systems. Scamming someone in person takes work. With e-mail, you can try to scam millions of people per hour. And a one-in-a-million success rate might be good enough for a viable criminal enterprise.
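
To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. Every number in it is an assumption of mine, chosen only to illustrate the scale effect, not taken from any real case:

    # Illustrative phishing economics; all inputs are assumed values.
    emails_per_day = 10_000_000   # assumed sending volume
    success_rate = 1e-6           # one victim per million e-mails
    value_per_victim = 500.0      # assumed average fraud proceeds, dollars
    cost_per_million = 100.0      # assumed cost to send a million e-mails

    revenue = emails_per_day * success_rate * value_per_victim      # $5,000/day
    sending_cost = (emails_per_day / 1_000_000) * cost_per_million  # $1,000/day
    print(revenue - sending_cost)  # $4,000/day profit at one-in-a-million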

In general, two Internet trends affect all forms of identity theft. The widespread availability of personal information has made it easier for a thief to get his hands on it. At the same time, the rise of electronic authentication and online transactions—you don’t have to walk into a bank, or even use a bank card, in order to withdraw money now—has made that personal information much more valuable.

The problem of phishing cannot be solved solely by focusing on the first trend: the availability of personal information. Criminals are clever people, and if you defend against a particular tactic such as phishing, they’ll find another. In the space of just a few years, we’ve seen phishing attacks get more sophisticated. The newest variant, called “spear phishing,” involves individually targeted and personalized e-mail messages that are even harder to detect. And there are other sorts of electronic fraud that aren’t technically phishing.

The actual problem to be solved is that of fraudulent transactions. Financial institutions make it too easy for a criminal to commit fraudulent transactions, and too difficult for the victims to clear their names. The institutions make a lot of money because it’s easy to make a transaction, open an account, get a credit card, and so on. For years I’ve written about how economic considerations affect security problems. The institutions could put security countermeasures in place to prevent fraud, detect it quickly, and allow victims to clear themselves. But all of that’s expensive. And it’s not worth it to them.

It’s not that financial institutions suffer no losses. Because of something called Regulation E, they already pay most of the direct costs of identity theft. But the costs in time, stress, and hassle are entirely borne by the victims. And in one in four cases, the victims have not been able to completely restore their good name.

In economics, this is known as an externality: It’s an effect of a business decision that is not borne by the person or organization making the decision. Financial institutions have no incentive to reduce those costs of identity theft because they don’t bear them.
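
A toy calculation shows why. The numbers below are purely illustrative assumptions of mine, not real fraud statistics:

    # Toy externality arithmetic; every figure is an assumption.
    bank_loss = 100.0        # per incident: direct fraud cost the bank pays
    victim_cost = 900.0      # per incident: time, stress, and hassle borne
                             # entirely by the victim
    prevention_cost = 300.0  # per incident: what better security would cost

    # The bank weighs prevention against only its own share of the loss:
    print(prevention_cost < bank_loss)                # False: don't bother
    # Society would weigh it against the total loss:
    print(prevention_cost < bank_loss + victim_cost)  # True: worth fixing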

Push the responsibility—all of it—for identity theft onto the financial institutions, and phishing will go away. This fraud will go away not because people will suddenly get smart and quit responding to phishing e-mails, nor because California has new criminal penalties for phishing, nor because ISPs will recognize and delete the e-mails. It will go away because the information a criminal can get from a phishing attack won’t be enough for him to commit fraud—because the companies won’t stand for all those losses.

If there’s one general precept of security policy that is universally true, it is that security works best when the entity that is in the best position to mitigate the risk is responsible for that risk. Making financial institutions responsible for losses due to phishing and identity theft is the only way to deal with the problem. And not just the direct financial losses—they need to make it less painful to resolve identity theft issues, enabling people to truly clear their names and credit histories. Money to reimburse losses is cheap compared with the expense of redesigning their systems, but anything less won’t work.

California law:
<http://www.msnbc.msn.com/id/9547692/>

Definitions:
<http://en.wikipedia.org/wiki/Phishing>
<http://en.wikipedia.org/wiki/Pharming>
<http://www-03.ibm.com/industries/financialservices/…>
<http://www-03.ibm.com/industries/financialservices/…>

Who pays for identity theft:
<http://www.informationweek.com/showArticle.jhtml?…>

Me on semantic attacks:
<http://www.schneier.com/crypto-gram-0010.html#1>

Me on economics and security:
<http://www.schneier.com/book-sandl-intro2.html>

Me on identity theft:
<https://www.schneier.com/blog/archives/2005/04/…>

Discussion of my essay:
<http://it.slashdot.org/article.pl?sid=05/10/06/…>

This essay originally appeared in Wired:
<http://www.wired.com/news/politics/0,1283,69076,00.html>


Major Security at a Minor Ferry

Is a ferry that transports 3,000 cars a day (during the busy season) a national security risk?

Maybe it is, but is it worth instituting extra security measures? How many ferries like this are there in the U.S.? How many other potential targets of the same magnitude are there in the U.S.? How much would it cost to secure them all?

This just isn’t the way to go about it.

<http://www.virginiadot.org/infoservice/news/…>


DUI Cases Thrown Out Due to Closed-Source Breathalyzer

According to the article: “Hundreds of cases involving breath-alcohol tests have been thrown out by Seminole County judges in the past five months because the test’s manufacturer will not disclose how the machines work.”

This is the right decision. Throughout history, the government has had to make a choice: prosecute, or keep its investigative methods secret. It couldn’t have both. If it wanted to keep its methods secret, it had to give up on prosecution.

People have the right to confront their accuser. People have a right to examine the evidence against them, and to contest the validity of that evidence. As more and more evidence is collected by software, examining that evidence means examining the software: open-source equipment.

We are all safer because of this decision. (And its implications are huge. Think of voting systems, for one.)

<http://tampatrib.com/floridametronews/MGBUBJ5QK9E.html>


Crypto-Gram Reprints

Crypto-Gram is currently in its eighth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.

Keeping Network Outages Secret:
<http://www.schneier.com/crypto-gram-0410.html#2>

RFID Passports:
<http://www.schneier.com/crypto-gram-0410.html#3>

The Legacy of DES:
<http://www.schneier.com/crypto-gram-0410.html#8>

Wholesale Surveillance:
<http://www.schneier.com/crypto-gram-0410.html#10>
<http://www.schneier.com/crypto-gram-0410.html#11>

Academic Freedom and Security:
<http://www.schneier.com/crypto-gram-0410.html#13>

The Future of Surveillance:
<http://www.schneier.com/crypto-gram-0310.html#1>

National Strategy to Secure Cyberspace:
<http://www.schneier.com/crypto-gram-0210.html#1>

Cyberterrorism:
<http://www.schneier.com/crypto-gram-0110.html#1>

Dangers of Port 80:
<http://www.schneier.com/crypto-gram-0110.html#9>

Semantic Attacks:
<http://www.schneier.com/crypto-gram-0010.html#1>

NSA on Security:
<http://www.schneier.com/crypto-gram-0010.html#7>

So, You Want to be a Cryptographer:
<http://www.schneier.com/…>

Key Length and Security:
<http://www.schneier.com/…>

Steganography: Truths and Fictions:
<http://www.schneier.com/…>

Memo to the Amateur Cipher Designer:
<http://www.schneier.com/…>


Automatic License Plate Scanners

The Boston Transportation Department, among other duties, hands out parking tickets. If a car has too many unpaid parking tickets, the BTD will lock a “Denver Boot” to one of the wheels, making the car unmovable. Once the tickets are paid up, the BTD removes the boot.

The white SUV in the photo (link below) is owned by the Boston Transportation Department. Its job is to locate cars that need to be booted. The two video cameras on top of the vehicle are hooked up to a laptop computer running license plate scanning software. The vehicle drives around the city scanning plates and comparing them with the database of unpaid parking tickets. When a match is found, the BTD officers jump out and boot the offending car. You can sort of see the boot on the front right wheel of the car behind the SUV in the photo.

This is the kind of thing I call “wholesale surveillance,” and I wrote about license plate scanners in that regard last year.

Richard M. Smith, who took the photo, made a public records request to the BTD last summer for the database of scanned license plate numbers that is being collected by this vehicle. The BTD told him at the time that the database is not a public record, because the database is owned by AutoVu, the Canadian company that makes the license plate scanner software used in the vehicle. This software is being “loaned” to the City of Boston as part of a “beta” test program.

Anyone doubt that AutoVu is going to sell this data to a company like ChoicePoint?

AutoVu:
<http://www.autovu.com>

The Boston Globe has written about this program:
<http://www.autovu.com/website/content/pressreleases/…>

The white SUV photo:
<http://www.computerbytesman.com/privacy/…>

Me on wholesale surveillance:
<http://www.schneier.com/essay-057.html>


NSA Watch

U.S. Patent #6,947,978: “Method for geolocating logical network addresses.”
<http://patft.uspto.gov/netacgi/nph-Parser?…>

Fact Sheet: NSA Suite B Cryptography
<http://www.nsa.gov/ia/industry/crypto_suite_b.cfm>

The Case for Elliptic Curve Cryptography
<http://www.nsa.gov/ia/industry/…>


Terrorism Laws Used to Stifle Political Speech

Walter Wolfgang, an 82-year-old political veteran, was forcibly removed from the UK Labour party conference for calling a speaker, Jack Straw, a liar. (Opinions on whether Jack Straw is or is not a liar are irrelevant here.) He was later denied access to the conference on the basis of anti-terror laws. Keep in mind that as recently as the 1980s, Labour Party conferences were heated affairs compared with today’s media shows.

From The London Times: “A police spokeswoman said that Mr Wolfgang had not been arrested but detained because his security accreditation had been cancelled by Labour officials when he was ejected. She said: ‘The delegate asked the police officer what powers he was using. The police officer responded that he was using his powers under Section 44 of the Terrorism Act to confirm the delegate’s details.’”

From The Scotsman: “Anti-Iraq war protesters, anti-Blairite OAPs and conference delegates were all detained by police under legislation that was designed to combat violent fanatics and bombers – even though none of them was suspected of terrorist links. None of those detained under Section 44 stop-and-search rules in the 2000 Terrorism Act was arrested and no-one was charged under the terrorism laws.”

<http://www.timesonline.co.uk/article/…>
<http://news.scotsman.com/uk.cfm?id=2028602005>


News

Snooping on text by listening to the keyboard:
<http://www.freedom-to-tinker.com/?p=893>
<http://www.cs.berkeley.edu/~tygar/papers/…>

Privacy-enhanced computer display. This is from 2001, but still interesting.
<http://www.merl.com/projects/privatedisplay/>

Research in behavioral risk analysis:
<https://www.fastlane.nsf.gov/servlet/showaward?…>

Interesting law-review article on crime-facilitating speech:
<http://www.law.ucla.edu/volokh/facilitating.pdf>

Alan Cox on “The Next 50 Years of Computer Security.” He says a lot of the same things I’ve been saying, but it’s more about the next five years of computer security. Honestly, I have no idea what’s going to happen in the next 50 years, either.
<http://www.oreillynet.com/pub/a/network/2005/09/12/…>

Info on computerized voting machines from a Diebold insider. Yes, it’s sensationalist. But there’s some good information there.
<http://www.bradblog.com/archives/00001838.htm>

Excellent editorial on the new poll tax in Georgia:
<http://www.nytimes.com/2005/09/12/opinion/…>
And here’s EPIC’s commentary on the issue:
<http://www.epic.org/privacy/voting/…>
The ID solves a minor problem, and exacerbates a major one.

With the rise in gas prices, there’s been an increase in the sale of locking gas caps. Has anyone heard of a significant rise in gas siphoning, or are people just reacting out of paranoia?
<http://www.phillyburbs.com/pb-dyn/news/…>

A really clever automobile identity-theft scam from Israel:
<https://www.schneier.com/blog/archives/2005/09/…>

Cameras catch a dry run by the 7/7 London terrorists:
<http://news.bbc.co.uk/2/hi/uk_news/4263176.stm>
<http://www.nytimes.com/2005/09/21/international/…>
Security cameras certainly aren’t useless. I just don’t think they’re worth it. There are far more effective countermeasures to spend the money on.

The Department of Homeland Security has been worrying about a movie-plot threat—how terrorists might exploit a hurricane:
<http://s.washingtonpost.com/earlywarning/2005/…>

Verizon is now monitoring customers for Disney. This seems like a really bad idea.
<https://www.schneier.com/blog/archives/2005/09/…>

We all know that Google can be used to find all sorts of sensitive data, but here’s a story about a Spanish astronomer who accessed the unpublished telescope logs of a rival astronomer over the Internet.
<http://www.newscientist.com/article.ns?id=dn8033>

In this disturbing story, a man is arrested on the London Underground as a terrorist because, well, because he was acting like a computer nerd.
<http://www.guardian.co.uk/attackonlondon/story/…>

Funny fake picture from the London Tube:
<http://www.cl.cam.ac.uk/~cpk25/outback/tube.jpg>
<http://www.snopes.com/photos/signs/tubesign.asp>

Fingerprint-lock failure in a prison:
<https://www.schneier.com/blog/archives/2005/09/…>

An article on “the Armani of bulletproof clothing.”
<http://www.salon.com/news/feature/2005/09/22/…>

Next month, US-CERT will start issuing uniform names for worms, viruses, and other malware. This is part of a program called the Common Malware Enumeration Initiative, and is great news.
<http://www.eweek.com/article2/0,1895,1862251,00.asp>

The Minister of the Interior of Bavaria (in Germany) has asked industry to produce Web content filters that block “instructions on how to build a bomb.” These pages, he claims, are “a very dangerous security problem.” He hopes filters like those used for parental control can solve this problem. I think he’s trying to solve the wrong problem.
<http://www.heise.de/newsticker/meldung/63832>

At Labour’s Brighton conference in the UK, security screeners are making people take their watches off and run them through the scanner. Why? No one seems to know.
<http://www.guardian.co.uk/g2/story/…>
My guess is that it began as this story about altimeter watches, and then got exaggerated in the retelling.
<https://www.schneier.com/blog/archives/2005/01/…>

Cell phone surveillance captures criminals, because criminals, like everyone else, carry their cell phones around.
<http://www.sptimes.com/2005/09/17/Worldandnation/…>
I am fine with the police using this tool, as long as the warrant process is there to ensure that they don’t abuse the tool.

A fan’s view of the extra “security” at football games.
<http://sports.espn.go.com/espn/page2/story?…>

This digital plague occurred in an online game, but it’s still fascinating.
<http://www.securityfocus.com/news/11330>

An engineer made public a flaw in a computer chip used in the Airbus A380 aircraft. The resultant cover-up is, sadly, predictable.
<http://www.latimes.com/business/…>

A story about Prince Andrew being screened at Melbourne Airport.
<http://www.guardian.co.uk/uk_news/story/…>
We are all more secure because everyone goes through airport screening, and there’s no automatic white list. (Diplomatic pouches are worth discussing. Interesting trade-off there.)

Windows OneCare is the next-generation pervasive security program that will be part of Microsoft Windows. I know nothing about it.
<http://www.bentuser.com/article.aspx?ID=312>

RFID car keys can be used to track people:
<https://www.schneier.com/blog/archives/2005/10/…>
Cryptography can be used to make these devices anonymous, but there’s no business reason for automobile manufacturers to field such a system. Once again, the economic barriers to security are far greater than the technical ones.

This is a clever piece of research. Turns out you can jam cell phones with SMS messages. Text messages are transmitted on the same channel that is used to set up voice calls, so if you flood the network with one, then the other can’t happen. The researchers believe that sending 165 text messages a second is enough to disrupt all the cell phones in Manhattan.
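
As a rough sketch of the capacity arithmetic involved, here is the shape of the calculation in Python. The three inputs are illustrative placeholders of mine, chosen to land on the researchers’ headline figure; see the links below for their actual model and measurements:

    # Control-channel saturation sketch; all inputs are assumptions.
    sectors = 55             # assumed cell sectors covering Manhattan
    control_channels = 12    # assumed control channels per sector
    seconds_per_sms = 4.0    # assumed time a channel is tied up per text

    # Messages per second needed to keep every control channel busy,
    # starving the voice-call setups that share those same channels:
    print(sectors * control_channels / seconds_per_sms)  # 165.0
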
<http://www.smsanalysis.org/>
<http://www.smsanalysis.org/smsanalysis.pdf>
<http://www.gsm-security.net/forum/post-406.html>
<http://it.slashdot.org/it/05/10/05/1839217.shtml?…>

EPIC has theme park information, mostly on Walt Disney World.
<http://www.epic.org/privacy/themepark/>
Disney World scans hand geometry, not fingerprints.
<http://www.biometricsinfo.org/handgeometry.htm>

A movie-plot threat of exploding baby carriages in the New York City subways:
<http://www.nydailynews.com/front/story/…>
The specificity of the threat seems a bit ridiculous. If we ban baby carriages from the subways, and the terrorists put their bombs in duffel bags instead, have we really won anything? In the end, the threat turned out to be a hoax.
<http://www.cnn.com/2005/US/10/11/nyc.scare/index.html>

Musicians tell fans how to beat copy protection:
<https://www.schneier.com/blog/archives/2005/10/…>

The beginnings of a U.S. Government national DNA database:
<http://www.washingtonpost.com/wp-dyn/content/…>

Clever $6M bank con in the UK. Moral: Security is a people problem, not a technology problem. Note that the con artist used “terrorism” as a pretext.
<http://www.timesonline.co.uk/article/…>

There are two bills in Congress that would grant the Pentagon greater rights to spy on Americans in the U.S.
<http://www.msnbc.msn.com/id/9602401/site/newsweek>

Blizzard Entertainment uses spyware to verify EULA compliance:
<http://www.rootkit.com/.php?newsid=358>
Blizzard responds:
<http://forums.worldofwarcraft.com/thread.aspx?…>
<http://forums.worldofwarcraft.com/thread.aspx?…>

Good editorial on RFID and privacy:
<http://www.boston.com/business/globe/articles/2005/…>

Why does Reuters think that a better ID card will protect against identity theft? The problem with identity theft isn’t that ID cards are forgeable, it’s that financial institutions don’t check them before authorizing transactions.
<http://today.reuters.com/news/…>

Advances in technology will bring better chemical trace screening.
<http://www.wired.com/news/privacy/0,1848,69137,00.html>
As this kind of technology gets better, the problem of false alarms becomes greater. We already know that a large percentage of U.S. currency bears traces of cocaine, but can a low-budget terrorist close down an airport by spraying trace chemicals randomly at passengers’ luggage when they’re not looking?
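
The base-rate arithmetic behind that worry is simple. With assumed numbers of mine:

    # Base-rate sketch; both inputs are assumptions.
    bags_per_day = 2_000_000      # assumed daily screening volume
    false_positive_rate = 0.001   # assumed: 0.1% of innocent bags alarm

    # On a day with no real threats, which is nearly every day:
    print(bags_per_day * false_positive_rate)  # 2000 alarms, all false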

Playmobil toy security checkpoint:
<http://store.playmobilusa.com/is-bin/…>
<http://www.concurringopinions.com/archives/2005/10/…>


Jamming Aircraft Navigation Near Nuclear Power Plants

The German government wants to jam aircraft navigation equipment near nuclear power plants.

This certainly could help if terrorists want to fly an airplane into a nuclear power plant, but it feels like a movie-plot threat to me. On the other hand, this could make things significantly worse if an airplane flies near the nuclear power plant by accident. My guess is that the latter happens far more often than the former.

<http://www.expatica.com/source/site_article.asp?…>


Secure Flight Working Group Report

Since January, I have been a member of the Secure Flight Working Group, evaluating the security and privacy of the program. Last month we released our report.

Honestly, I didn’t do any of the writing. I had given up on the process, sick of not being able to get any answers out of TSA, and believed that the report would end up in somebody’s desk drawer, never to be seen again. I was stunned when I learned that the ASAC (the Aviation Security Advisory Committee) made the report public.

There’s a lot of stuff in the report, but I’d like to quote the section that outlines the basic questions that the TSA was unable to answer:

“The SFWG found that TSA has failed to answer certain key questions about Secure Flight: First and foremost, TSA has not articulated what the specific goals of Secure Flight are. Based on the limited test results presented to us, we cannot assess whether even the general goal of evaluating passengers for the risk they represent to aviation security is a realistic or feasible one or how TSA proposes to achieve it. We do not know how much or what kind of personal information the system will collect or how data from various sources will flow through the system.

“Until TSA answers these questions, it is impossible to evaluate the potential privacy or security impact of the program, including:

“* Minimizing false positives and dealing with them when they occur.
* Misuse of information in the system.
* Inappropriate or illegal access by persons with and without permissions.
* Preventing use of the system and information processed through it for purposes other than airline passenger screening.

“The following broadly defined questions represent the critical issues we believe TSA must address before we or any other advisory body can effectively evaluate the privacy and security impact of Secure Flight on the public.

“What is the goal or goals of Secure Flight? The TSA is under a Congressional mandate to match domestic airline passenger lists against the consolidated terrorist watch list. TSA has failed to specify with consistency whether watch list matching is the only goal of Secure Flight at this stage. The Secure Flight Capabilities and Testing Overview, dated February 9, 2005 (a non-public document given to the SFWG), states in the Appendix that the program is not looking for unknown terrorists and has no intention of doing so. On June 29, 2005, Justin Oberman (Assistant Administrator, Secure Flight/Registered Traveler) testified to a Congressional committee that another goal proposed for Secure Flight is its use to establish ‘mechanisms for…violent criminal data vetting.’ Finally, TSA has never been forthcoming about whether it has an additional, implicit goal: the tracking of terrorism suspects (whose presence on the terrorist watch list does not necessarily signify intention to commit violence on a flight).

“While the problem of failing to establish clear goals for Secure Flight at a given point in time may arise from not recognizing the difference between program definition and program evolution, it is clearly an issue the TSA must address if Secure Flight is to proceed.

“What is the architecture of the Secure Flight system? The Working Group received limited information about the technical architecture of Secure Flight and none about how software and hardware choices were made. We know very little about how data will be collected, transferred, analyzed, stored or deleted. Although we are charged with evaluating the privacy and security of the system, we saw no statements of privacy policies and procedures other than Privacy Act notices published in the Federal Register for Secure Flight testing. No data management plan either for the test phase or the program as implemented was provided or discussed.

“Will Secure Flight be linked to other TSA applications? Linkage with other screening programs (such as Registered Traveler, Transportation Worker Identification and Credentialing (TWIC), and Customs and Border Patrol systems like U.S.-VISIT) that may operate on the same platform as Secure Flight is another aspect of the architecture and security question. Unanswered questions remain about how Secure Flight will interact with other vetting programs operating on the same platform; how it will ensure that its policies on data collection, use and retention will be implemented and enforced on a platform that also operates programs with significantly different policies in these areas; and how it will interact with the vetting of passengers on international flights.

“How will commercial data sources be used? One of the most controversial elements of Secure Flight has been the possible uses of commercial data. TSA has never clearly defined two threshold issues: what it means by “commercial data;” and how it might use commercial data sources in the implementation of Secure Flight. TSA has never clearly distinguished among various possible uses of commercial data, which all have different implications.

“Possible uses of commercial data sometimes described by TSA include: (1) identity verification or authentication; (2) reducing false positives by augmenting passenger records indicating a possible match with data that could help distinguish an innocent passenger from someone on a watch list; (3) reducing false negatives by augmenting all passenger records with data that could suggest a match that would otherwise have been missed; (4) identifying sleepers, which itself includes: (a) identifying false identities; and (b) identifying behaviors indicative of terrorist activity. A fifth possibility has not been discussed by TSA: using commercial data to augment watch list entries to improve their fidelity. Assuming that identity verification is part of Secure Flight, what are the consequences if an identity cannot be verified with a certain level of assurance?

“It is important to note that TSA never presented the SFWG with the results of its commercial data tests. Until these test results are available and have been independently analyzed, commercial data should not be utilized in the Secure Flight program.

“Which matching algorithms work best? TSA never presented the SFWG with test results showing the effectiveness of algorithms used to match passenger names to a watch list. One goal of bringing watch list matching inside the government was to ensure that the best available matching technology was used uniformly. The SFWG saw no evidence that TSA compared different products and competing solutions. As a threshold matter, TSA did not describe to the SFWG its criteria for determining how the optimal matching solution would be determined. There are obvious and probably not-so-obvious tradeoffs between false positives and false negatives, but TSA did not explain how it reconciled these concerns.

“What is the oversight structure and policy for Secure Flight? TSA has not produced a comprehensive policy document for Secure Flight that defines oversight or governance responsibilities.”
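
To see the tradeoff between false positives and false negatives concretely, here is a toy name matcher. It is entirely my own illustration, not TSA’s algorithm, and the watch-list entry is hypothetical:

    # Toy fuzzy name matching; my illustration, not TSA's method.
    from difflib import SequenceMatcher

    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    listed = "Yusuf Islam"  # hypothetical watch-list entry

    print(similarity("Joseph Islam", listed))  # ~0.61: any threshold below
                                               # this flags an innocent
    print(similarity("Yusif Islam", listed))   # ~0.91: any threshold above
                                               # this misses a variant spelling

    # Wherever the threshold is set, lowering one error rate raises the other.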

The members of the working group, and the signatories to the report, are Martin Abrams, Linda Ackerman, James Dempsey, Edward Felten, Daniel Gallington, Lauren Gelman, Steven Lilenthal, Anna Slomovic, and myself.

There’s one more bizarre twist to this story. Near the end of the process, the TSA hired someone named Larry Ponemon to assist us in writing our report. He had two jobs: one was to edit what we had to say, and the other was to herd the members of the working group into actually writing something coherent. But it turned out that the TSA gave him another, secret, task: to write a document verifying our work. So on the one hand, he was our scribe and project leader, but he was also a TSA spy.

I think this is unethical, although it’s pretty clear that Ponemon was duped by the TSA. (Ponemon defended himself to us by saying that he did not believe his report would be made public. He refused to say anything in public about this, because—I assume—he wants future work from the TSA.)

His report basically says that TSA is doing everything fine, but that the documentation simply wasn’t available to us when we wrote our report. This is wrong, and my guess is that Justin Oberman simply lied to him convincingly. But the matter is now being taken up by the DHS’s Data Privacy and Integrity Advisory Committee.

Our report:
<http://www.tsa.gov/interweb/assetlibrary/…>
<http://www.epic.org/privacy/airtravel/…>

Ponemon’s report:
<http://www.tsa.gov/interweb/assetlibrary/…>

The U.S. Department of Justice Inspector General released a report last month on Secure Flight, basically concluding that the costs were out of control, and that the TSA didn’t know how much the program would cost in the future.
<http://www.usdoj.gov/oig/reports/FBI/a0534/final.pdf>

In case you think things have gotten better, there’s a new story about how the no-fly list cost a pilot his job:
<http://www.boston.com/news/local/massachusetts/…>

EPIC has received a bunch of documents about continued problems with false positives on the no-fly list:
<http://www.epic.org/foia_notes/note8.html>

Here’s an article about some of the horrible problems people who have mistakenly found themselves on the no-fly list have had to endure.
<http://www.wired.com/news/privacy/0,1848,68973,00.html>
And another on what you can do if you find yourself on a list.
<http://www.wired.com/news/privacy/0,1848,68974,00.html>

And lastly, the TSA is currently not going to use commercial databases in its initial roll-out of Secure Flight. I don’t believe for a minute that they’re shelving plans to use commercial data permanently, but at least they’re delaying the process.
<http://beta.news.com.com/2061-10796_3-5878893.html>

My previous posts about Secure Flight, and my involvement in the working group:
<https://www.schneier.com/blog/archives/2005/01/…>
<https://www.schneier.com/blog/archives/2005/01/…>
<https://www.schneier.com/blog/archives/2005/03/…>
<https://www.schneier.com/blog/archives/2005/03/…>
<https://www.schneier.com/blog/archives/2005/07/…>
<https://www.schneier.com/blog/archives/2005/08/…>


The Doghouse: CryptIt

It’s been far too long since I’ve had one of these. CryptIt (and XorIt) look like your typical one-time-pad snake-oil products:

“Most file encryptors use methods that rely on the theory of computational security, that is difficulty of key factorisation prevents decryption of the file. But this method may not work forever. It used to be considered that a 56 bit key was unbreakable to brute force attacks, but the government of the USA now requires all Top Secret data to use keys of at least 192 bits. This bar will keep raising as computing power increases. (It is argued by some though that this will never happen due to the laws of physics!) CryptIt is designed to use conventional XOR encryption on keys that are the same size as the file to be encrypted. Furthermore, if you use an unpredictable file that is the same size (or larger) than the original file and you use this file only once, this is known as a one-time pad and it is completely unbreakable, even to computers 1000 years from now.”
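
The quote is even right as far as it goes: a pad that is truly random, as long as the message, kept secret, and never reused is unbreakable. The snake oil is in pretending those conditions are practical. In particular, reuse the pad even once and the scheme collapses, as this minimal Python sketch shows:

    # Two-time pad failure: reusing an XOR pad leaks plaintext.
    import os

    def xor_bytes(data, pad):
        return bytes(d ^ p for d, p in zip(data, pad))

    p1 = b"attack at dawn"
    p2 = b"retreat at six"        # same length as p1
    pad = os.urandom(len(p1))     # random, message-length pad

    c1 = xor_bytes(p1, pad)
    c2 = xor_bytes(p2, pad)       # MISTAKE: the pad is used twice

    # The pad cancels out: c1 XOR c2 equals p1 XOR p2, so knowing or
    # guessing one plaintext immediately reveals the other.
    leaked = xor_bytes(c1, c2)
    print(xor_bytes(leaked, p1))  # b'retreat at six', recovered with no pad

And distributing a pad as large as the file to your correspondent securely is exactly as hard as distributing the file securely in the first place, which is why real systems use conventional ciphers instead.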

<http://www.sinnercomputing.com/CryptIt.htm>
<http://www.sinnercomputing.com/XorIt.htm>

Amazingly enough, some people still believe in this sort of nonsense.
<http://www.wilderssecurity.com/showthread.php?t=98048>

My essay on cryptographic snake-oil:
<http://www.schneier.com/crypto-gram-9902.html#snakeoil>


Counterpane News

Counterpane mentioned in a CIO Decisions article:
<http://www.counterpane.com/news-cio.html>

Schneier is speaking at RSA Europe in Vienna on October 18-19:
<http://2005.rsaconference.com/europe/>

Schneier is speaking at Data Security 2005 in Helsinki on October 27:
<http://www.tieturi.fi/koulutus/seminaarit/ds2005/…>

Schneier is speaking at the CSO Executive Forum in Denver on November 4:
<http://ciso.issa.org/events/forum.html>

Schneier is speaking at the UCLA Law School in Los Angeles on November 7:
<http://www.lawtechjournal.com>


Hurricane Security and Airline Security Collide

In the days before Hurricane Rita, when Houston was evacuating, about 100 airline security screeners didn’t show up for work. (Presumably, they evacuated themselves.) The result was huge lines and missed flights, as the TSA scrambled to send a replacement team of screeners in from Cleveland.

This is crazy. The TSA is allowed to use “alternate” screening procedures in certain circumstances. It’s not an easy call, but sometimes the smartest thing to do in an emergency is to suspend security rules. Of course there are risks, but the trade-off makes sense.

Why that didn’t happen is an example of agenda. While it makes sense to let these people on airplanes, any person authorized to make that decision had to be worried about his job. If something had happened, however unlikely it might have been, he would have been fired. On the other hand, if he didn’t change the rules, then hundreds of people might be delayed but he would be unaffected personally.

<http://www.kink.fm/index.php/weblog/more/115/>


Tax Breaks for Good Security

Congress is talking—it’s just talking, but at least it’s talking—about giving tax breaks to companies with good cybersecurity.

The devil is in the details, and this could be a meaningless handout, but the idea is sound. Rational companies are going to protect their assets only up to their value *to that company*. The problem is that many of the security risks to digital assets are not risks to the company that owns them. This is an externality. So if we all need a company to protect its digital assets to some higher level, then we need to pay for that extra protection. (At least, we do in a capitalist society.) We can pay through regulation or liabilities, which translates to higher prices for whatever the company does. We can pay through directly funding that extra security, either by writing a check or reducing taxes. But we can’t expect a company to spend the extra money out of the goodness of its heart.

<http://news.com.com/…>


Forging Low-Value Paper Certificates

Both Subway and Cold Stone Creamery have discontinued their frequent-purchaser programs because the paper documentation is too easy to forge. (The article says that forged Subway stamps are for sale on eBay.)

It used to be that the difficulty of counterfeiting paper was enough security for these sorts of low-value applications. Now that desktop publishing and printing are common, it’s not. Subway is implementing a system based on magnetic stripe cards instead. Anyone care to guess how long before that’s hacked?

<http://www.wired.com/news/business/0,1367,68909,00.html>


Judge Roberts, Privacy, and the Future

At John Roberts’ confirmation hearings, there weren’t enough discussions about science fiction. Technologies that are science fiction today will become constitutional questions before Roberts retires from the bench. The same goes for technologies that cannot even be conceived of now. And many of these questions involve privacy.

According to Roberts, there is a “right to privacy” in the Constitution. At least, that’s what he said during his Senate hearings last week. It’s a politically charged question, because the two decisions that established the right to contraceptives and abortion—Griswold v. Connecticut (1965) and Roe v. Wade (1973)—are based in part on a right to privacy. “Where do you stand on privacy?” can be code for “Where do you stand on abortion?”

But constitutional questions on privacy have far more extensive reach. Recent advances in technology have already had profound privacy implications, and there’s every reason to believe that this trend will continue into the foreseeable future. Roberts is 50 years old. If confirmed, he could be chief justice for the next 30 years. That’s a lot of future.

Privacy questions will arise from government actions in the “War on Terror”; they will arise from the actions of corporations and individuals. They will include questions of surveillance, profiling and search and seizure. And the decisions of the Supreme Court on these questions will have a profound effect on society.

Here are some examples. Advances in genetic mapping continue, and someday it will be easy, cheap, and detailed—and possible without the subject’s knowledge. What privacy protections do people have for their genetic map, given that they shed copies of their genome in every dead skin cell left behind? What protections do people have against government actions based on this data? Against private actions?

Should a customer’s genetics be considered when granting a mortgage, or determining its interest rate?

Surveillance is another area where technological advances will raise new constitutional questions. I’ve written about wholesale surveillance, the ability of the government to collect data on everyone and then search that data looking for certain people. We’re already seeing this kind of surveillance by automatic license plate readers and aerial photographs.

In the future, this will become more personal. New technologies will be able to peer through walls, under clothing, beneath skin, perhaps even into the activity of the brain. Sen. Joseph Biden (D-Delaware) rhetorically asked Roberts: “Can microscopic tags be implanted in a person’s body to track his every movement…. Can brain scans be used to determine whether a person is inclined toward criminal or violent behavior?” What should be the limits on what the police can do without a warrant?

Quoted in a New York Times article, privacy advocate Marc Rotenberg laid out this scenario: Sometime in the near future, a young man is walking around the Washington Monument for 30 minutes. Cameras capture his face, which yields an identity. That identity is queried in a series of commercial databases, producing his travel records, his magazine subscriptions and other personal details. This is all fed into a computerized scoring system, which singles him out as a potential terrorist threat. He is stopped by the police, who open his backpack and find a bag of marijuana. Is the opening of that backpack a legal search as defined by the Constitution?

That story illustrates a number of technologies that might become commonplace over the next several decades. Automatic face recognition will allow police, businesses, and individuals to identify people without their knowledge or consent. Data-mining programs will sift through mountains of data, both real-time and historical, and select people for further investigation. And people might even be accused of conspiracy based on nothing more than a nebulous pattern of events.

Similarly, can corporations engage in the same sort of data mining, and use the results to deny someone a job, or health insurance, or a mortgage?

The Supreme Court will face questions like these in the years to come. Complicating matters, the right to privacy is not explicitly enumerated in the Constitution. Instead, Supreme Court decisions have held that the First, Third, Fourth, Ninth, and Fourteenth Amendments implicitly grant a right to privacy against government intrusion. But some legal scholars believe that the basis for privacy law is obsolete, and needs to be completely rethought.

Unfortunately, there’s not a whole lot out there by which to judge Roberts’ views. The Electronic Privacy Information Center, or EPIC, published a survey of Roberts’ scant writings on privacy, and found many causes for concern. In a 1981 memo, he referred to the “so-called ‘right to privacy.'” And others have analyzed his Senate hearing comments and concluded that his views haven’t changed much since then.

Between “natural” erosion through the advance of technology and government erosion in its fervor to pursue terrorists, we as a country are likely to face enormous challenges to personal privacy in the decades ahead. And the Supreme Court will increasingly have to rule on questions so far only discussed in science fiction books.

Rosen on future questions for the Supreme Court:
<http://www.law.umich.edu/library/news/topics/…>

Me on wholesale surveillance:
<https://www.schneier.com/blog/archives/2004/10/…>

Brain scanning:
<http://www.brainwavescience.com/counterterrorism.php>

Daniel Solove on a legal framework for privacy:
<http://papers.ssrn.com/sol3/papers.cfm?…>

EPIC on Roberts and privacy (I am a signatory to this letter):
<http://www.epic.org/privacy/justices/roberts/…>

Roberts’ 1981 memo:
<http://www.washingtonpost.com/wp-srv/nation/…>

An analysis of Roberts’ confirmation-hearing answers about privacy:
<http://www.acsblog.org/…>

This essay originally appeared in Wired:
<http://www.wired.com/news/politics/0,1283,68911,00.html>


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.

Copyright (c) 2005 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.