Crypto-Gram

August 15, 2005

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0508.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      Profiling
      Cisco and ISS Harass Security Researcher
      E-Mail Interception Decision Reversed
      Stealing Imaginary Things
      Crypto-Gram Reprints
      Turning Cell Phones off in Tunnels
      Searching Bags in Subways
      Plagiarism and Academia: Personal Experience
      RFID Passport Security Revisited
      Risks of Losing Portable Devices
      How to Not Fix the ID Problem
      Secure Flight
      News


Profiling

Since the London bombings, there has been a lot of discussion about profiling. To help, here is what I wrote on the subject in “Beyond Fear” (pp. 133-7):

“Good security has people in charge. People are resilient. People can improvise. People can be creative. People can develop on-the-spot solutions. People can detect attackers who cheat, and can attempt to maintain security despite the cheating. People can detect passive failures and attempt to recover. People are the strongest point in a security process. When a security system succeeds in the face of a new or coordinated or devastating attack, it’s usually due to the efforts of people.

“On 14 December 1999, Ahmed Ressam tried to enter the U.S. by ferryboat from Victoria, Vancouver Island, British Columbia. In the trunk of his car, he had a suitcase bomb. His plan was to drive to Los Angeles International Airport, put his suitcase on a luggage cart in the terminal, set the timer, and then leave. The plan would have worked had someone not been vigilant.

“Ressam had to clear customs before boarding the ferry. He had fake ID, in the name of Benni Antoine Noris, and the computer cleared him based on this ID. He was allowed to go through after a routine check of his car’s trunk, even though he was wanted by the Canadian police. On the other side of the Strait of Juan de Fuca, at Port Angeles, Washington, Ressam was approached by U.S. customs agent Diana Dean, who asked some routine questions and then decided that he looked suspicious. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean’s own words, he was acting ‘hinky.’ More questioning—there was no one else crossing the border, so two other agents got involved—and more hinky behavior. Ressam’s car was eventually searched, and he was finally discovered and captured. It wasn’t any one thing that tipped Dean off; it was everything encompassed in the slang term “hinky.” But the system worked. The reason there wasn’t a bombing at LAX around Christmas in 1999 was because a knowledgeable person was in charge of security and paying attention.

“There’s a dirty word for what Dean did that chilly afternoon in December, and it’s profiling. Everyone does it all the time. When you see someone lurking in a dark alley and change your direction to avoid him, you’re profiling. When a storeowner sees someone furtively looking around as she fiddles inside her jacket, that storeowner is profiling. People profile based on someone’s dress, mannerisms, tone of voice … and yes, also on their race and ethnicity. When you see someone running toward you on the street with a bloody ax, you don’t know for sure that he’s a crazed ax murderer. Perhaps he’s a butcher who’s actually running after the person next to you to give her the change she forgot. But you’re going to make a guess one way or another. That guess is an example of profiling.

“To profile is to generalize. It’s taking characteristics of a population and applying them to an individual. People naturally have an intuition about other people based on different characteristics. Sometimes that intuition is right and sometimes it’s wrong, but it’s still a person’s first reaction. How good this intuition is as a countermeasure depends on two things: how accurate the intuition is and how effective it is when it becomes institutionalized or when the profile characteristics become commonplace.

“One of the ways profiling becomes institutionalized is through computerization. Instead of Diana Dean looking someone over, a computer looks the profile over and gives it some sort of rating. Generally profiles with high ratings are further evaluated by people, although sometimes countermeasures kick in based on the computerized profile alone. This is, of course, more brittle. The computer can profile based only on simple, easy-to-assign characteristics: age, race, credit history, job history, et cetera. Computers don’t get hinky feelings. Computers also can’t adapt the way people can.

“Profiling works better if the characteristics profiled are accurate. If erratic driving is a good indication that the driver is intoxicated, then that’s a good characteristic for a police officer to use to determine who he’s going to pull over. If furtively looking around a store or wearing a coat on a hot day is a good indication that the person is a shoplifter, then those are good characteristics for a store owner to pay attention to. But if wearing baggy trousers isn’t a good indication that the person is a shoplifter, then the store owner is going to spend a lot of time paying undue attention to honest people with lousy fashion sense.

“In common parlance, the term ‘profiling’ doesn’t refer to these characteristics. It refers to profiling based on characteristics like race and ethnicity, and institutionalized profiling based on those characteristics alone. During World War II, the U.S. rounded up over 100,000 people of Japanese origin who lived on the West Coast and locked them in camps (prisons, really). That was an example of profiling. Israeli border guards spend a lot more time scrutinizing Arab men than Israeli women; that’s another example of profiling. In many U.S. communities, police have been known to stop and question people of color driving around in wealthy white neighborhoods (commonly referred to as ‘DWB’—Driving While Black). In all of these cases you might possibly be able to argue some security benefit, but the trade-offs are enormous: honest people who fit the profile can get annoyed, or harassed, or arrested, when they’re assumed to be attackers.

“For democratic governments, this is a major problem. It’s just wrong to segregate people into ‘more likely to be attackers’ and ‘less likely to be attackers’ based on race or ethnicity. It’s wrong for the police to pull a car over just because its black occupants are driving in a rich white neighborhood. It’s discrimination.

“But people make bad security trade-offs when they’re scared, which is why we saw Japanese internment camps during World War II, and why there is so much discrimination against Arabs in the U.S. going on today. That doesn’t make it right, and it doesn’t make it effective security. Writing about the Japanese internment, for example, a 1983 commission reported that the causes of the incarceration were rooted in “race prejudice, war hysteria, and a failure of political leadership.” But just because something is wrong doesn’t mean that people won’t continue to do it.

“Ethics aside, institutionalized profiling fails because real attackers are so rare: Active failures will be much more common than passive failures. The great majority of people who fit the profile will be innocent. At the same time, some real attackers are going to deliberately try to sneak past the profile. During World War II, a Japanese American saboteur could try to evade imprisonment by pretending to be Chinese. Similarly, an Arab terrorist could dye his hair blond, practice an American accent, and so on.

“Profiling can also blind you to threats outside the profile. If U.S. border guards stop and search everyone who’s young, Arab, and male, they’re not going to have the time to stop and search all sorts of other people, no matter how hinky they might be acting. On the other hand, if the attackers are of a single race or ethnicity, profiling is more likely to work (although the ethics are still questionable). It makes real security sense for El Al to spend more time investigating young Arab males than it does for them to investigate Israeli families. In Vietnam, American soldiers never knew which local civilians were really combatants; sometimes killing all of them was the security solution they chose.

“If a lot of this discussion is abhorrent, as it probably should be, it’s the trade-offs in your head talking. It’s perfectly reasonable to decide not to implement a countermeasure not because it doesn’t work, but because the trade-offs are too great. Locking up every Arab-looking person will reduce the potential for Muslim terrorism, but no reasonable person would suggest it. (It’s an example of ‘winning the battle but losing the war.’) In the U.S., there are laws that prohibit police profiling by characteristics like ethnicity, because we believe that such security measures are wrong (and not simply because we believe them to be ineffective).

“Still, no matter how much a government makes it illegal, profiling does occur. It occurs at an individual level, at the level of Diana Dean deciding which cars to wave through and which ones to investigate further. She profiled Ressam based on his mannerisms and his answers to her questions. He was Algerian, and she certainly noticed that. However, this was before 9/11, and the reports of the incident clearly indicate that she thought he was a drug smuggler; ethnicity probably wasn’t a key profiling factor in this case. In fact, this is one of the most interesting aspects of the story. That intuitive sense that something was amiss worked beautifully, even though everybody made a wrong assumption about what was wrong. Human intuition detected a completely unexpected kind of attack. Humans will beat computers at hinkiness-detection for many decades to come.

“And done correctly, this intuition-based sort of profiling can be an excellent security countermeasure. Dean needed to have the training and the experience to profile accurately and properly, without stepping over the line and profiling illegally. The trick here is to make sure perceptions of risk match the actual risks. If those responsible for security profile based on superstition and wrong-headed intuition, or by blindly following a computerized profiling system, profiling won’t work at all. And even worse, it actually can reduce security by blinding people to the real threats. Institutionalized profiling can ossify a mind, and a person’s mind is the most important security countermeasure we have.”

A couple of other points (not from the book):

1. Whenever you design a security system with two ways through—an easy way and a hard way—you invite the attacker to take the easy way. Profile for young Arab males, and you’ll get terrorists that are old non-Arab females.

2. If we are going to increase security against terrorism, the young Arab males living in our country are precisely the people we want on our side. Discriminating against them in the name of security is not going to make them more likely to help.

3. Despite what many people think, terrorism is not confined to young Arab males. Shoe-bomber Richard Reid was British. Germaine Lindsay, one of the 7/7 London bombers, was Afro-Caribbean. Here are some more examples from a speech by the U.S. Secretary of Transportation Norman Mineta:

“In 1986, a 32-year-old Irish woman, pregnant at the time, was about to board an El Al flight from London to Tel Aviv when El Al security agents discovered an explosive device hidden in the false bottom of her bag. The woman’s boyfriend—the father of her unborn child—had hidden the bomb.

“In 1987, a 70-year-old man and a 25-year-old woman—neither of whom were Middle Eastern—posed as father and daughter and brought a bomb aboard a Korean Air flight from Baghdad to Thailand. En route to Bangkok, the bomb exploded, killing all on board.

“In 1999, men dressed as businessmen (and one dressed as a Catholic priest) turned out to be terrorist hijackers, who forced an Avianca flight to divert to an airstrip in Colombia, where some passengers were held as hostages for more than a year and a half.”

The 2002 Bali terrorists were Indonesian. The Chechen terrorists who downed the Russian planes were women. Timothy McVeigh and the Unabomber were Americans. The Basque terrorists are Basque, and Irish terrorists are Irish. The Tamil Tigers are Sri Lankan.

And many Muslims are not Arabs. Even worse, almost everyone who is Arab is not a terrorist—many people who look Arab are not even Muslims. So not only are there a large number of false negatives—terrorists who don’t meet the profile—but there are an enormous number of false positives: innocents who do meet the profile.
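To make that false-positive arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is hypothetical, chosen only to illustrate the base-rate effect; none is drawn from any real screening data.

  # Hypothetical illustration: when attackers are vanishingly rare, even a
  # profile that catches most of them and rarely flags innocents still
  # produces almost nothing but false positives.
  population = 10_000_000   # people screened (made-up number)
  attackers = 10            # actual attackers among them (made-up number)
  hit_rate = 0.9            # chance the profile flags a real attacker (assumed)
  false_alarm_rate = 0.01   # chance the profile flags an innocent (assumed)

  true_positives = attackers * hit_rate
  false_positives = (population - attackers) * false_alarm_rate
  missed = attackers * (1 - hit_rate)

  print(f"innocents flagged: {false_positives:,.0f}")   # ~100,000
  print(f"attackers flagged: {true_positives:,.0f}")    # ~9
  print(f"attackers missed:  {missed:,.0f}")            # ~1
  print(f"odds a flagged person is an attacker: "
        f"{true_positives / (true_positives + false_positives):.4%}")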

Beyond Fear:
<http://www.schneier.com/bf.html>

U.S. Secretary of Transportation Mineta’s speech:
<http://www.dot.gov/affairs/042002sp.htm>

Research into the security effectiveness of profiling versus random searching:
<http://www.firstmonday.org/issues/issue7_10/chakrabarti>


Cisco and ISS Harass Security Researcher

I’ve written about full disclosure, and how disclosing security vulnerabilities is our best mechanism for improving security—especially in a free-market system. (That essay is also worth reading for a general discussion of the security trade-offs.) I’ve also written about how security companies treat vulnerabilities as public-relations problems first and technical problems second. Last month at BlackHat, security researcher Michael Lynn and Cisco demonstrated both points.

Lynn was going to present security flaws in Cisco’s IOS, and Cisco went to inordinate lengths to make sure that information never got into the hands of its customers, the press, or the public. According to the Wall Street Journal:

“Cisco threatened legal action to stop the conference’s organizers from allowing a 24-year-old researcher for a rival tech firm to discuss how he says hackers could seize control of Cisco’s Internet routers, which dominate the market. Cisco also instructed workers to tear 20 pages outlining the presentation from the conference program and ordered 2,000 CDs containing the presentation destroyed.

“In the end, the researcher, Michael Lynn, went ahead with a presentation, describing flaws in Cisco’s software that he said could allow hackers to take over corporate and government networks and the Internet, intercepting and misdirecting data communications. Mr. Lynn, wearing a white hat emblazoned with the word “Good,” spoke after quitting his job at Internet Security Systems Inc. Wednesday. Mr. Lynn said he resigned because ISS executives had insisted he strike key portions of his presentation.”

The complete story is even weirder than this. Initially, Cisco and ISS were happy with Lynn presenting his research result. They changed their minds at the last minute. Lynn gave an interview to Wired that talks about some of the details; I am impressed with his integrity in this matter.

Not being able to censor the information, Cisco decided to act as if it were no big deal. This is from a SearchSecurity article:

“In a release shortly after the presentation, Cisco stated, “It is important to note that the information Lynn presented was not a disclosure of a new vulnerability or a flaw with Cisco IOS software. Lynn’s research explores possible ways to expand exploitations of known security vulnerabilities impacting routers.” And went on to state “Cisco believes that the information Lynn presented at the BlackHat conference today contained proprietary information and was illegally obtained.” The statement also refers to the fact that Lynn stated in his presentation that he used a popular file decompresser to ‘unzip’ the Cisco image before reverse engineering it and finding the flaw, which is against Cisco’s use agreement.”

The Cisco propaganda machine certainly was working overtime that week.

Cisco and ISS also sued Lynn and BlackHat. The suit was settled the next day, and it’s worth reading Jennifer Granick’s blog posts on the negotiations. The agreement prohibited Lynn or BlackHat from talking about this matter or distributing any presentation materials or recordings of the presentation. Not that it mattered; copies of the presentation slides—the version with ISS’s name on it, before they changed their mind and objected to the talk—are all over the Internet.

The security implications of this are enormous. If companies have the power to censor information about their products they don’t like, then we as consumers have less information with which to make intelligent buying decisions. If companies have the power to squelch vulnerability information about their products, then there’s no incentive for them to improve security. (I’ve written about this in connection with physical keys and locks.) If free speech is subordinate to corporate demands, then we are all much less safe.

Full disclosure is good for society. But because it helps the bad guys as well as the good guys (see my essay on secrecy and security for more discussion of the balance), many of us have championed “responsible disclosure” guidelines that give vendors a head start in fixing vulnerabilities before they’re announced.

The problem is that not all researchers follow these guidelines. And laws limiting free speech do more harm to society than good. (In any case, laws won’t completely fix the problem; we can’t get laws passed in every country where security researchers live.) So the only reasonable course of action for a company is to work with researchers who alert it to vulnerabilities, but also to assume that vulnerability information will sometimes be released without prior warning.

I can’t imagine the discussions inside Cisco that led them to act like thugs. I can’t figure out why they decided to attack Michael Lynn, BlackHat, and ISS rather than turn the situation into a public-relations success. I can’t believe they thought their actions could censor the information, or that trying was even a good idea.

Cisco’s customers want information. They don’t expect perfection, but they want to know the extent of problems and what Cisco is doing about them. They don’t want to know that Cisco tries to stifle the truth. This is from a Computerworld article:

“Joseph Klein, senior security analyst at the aerospace electronic systems division for Honeywell Technology Solutions, said he helped arrange a meeting between government IT professionals and Lynn after the talk. Klein said he was furious that Cisco had been unwilling to disclose the buffer-overflow vulnerability in unpatched routers. ‘I can see a class-action lawsuit against Cisco coming out of this,’ Klein said.”

ISS didn’t come out of this looking very good, either. From a Wired article:

“‘A few years ago it was rumored that ISS would hold back on certain things because (they’re in the business of) providing solutions,’ [Ali-Reza] Anghaie, [a senior security engineer with an aerospace firm, who was in the audience,] said. ‘But now you’ve got full public confirmation that they’ll submit to the will of a Cisco or Microsoft, and that’s not fair to their customers…. If they’re willing to back down and leave an employee … out to hang, well what are they going to do for customers?'”

Despite their thuggish behavior, this has been a public-relations disaster for Cisco and ISS. Now it doesn’t matter what they say—we won’t believe them. We know that the public-relations department handles their security vulnerabilities, and not the engineering department. We know that they think squelching information and muzzling researchers is more important than informing the public. They could have shown that they put their customers first, but instead they demonstrated that short-sighted corporate interests are more important than being a responsible corporate citizen.

And these are the people building the hardware that runs much of our infrastructure? Somehow, I don’t feel very secure right now.

In the weeks after this event, it seemed to me that ISS was pursuing this out of malice. With Cisco I think it was simple stupidity, but I think it’s malice with ISS.

Of course, hackers are working overtime to reconstruct Lynn’s attack and write an exploit. This means that we’re in much more danger of a worm that makes use of this vulnerability.

The sad thing is that we could have avoided this. If Cisco and ISS had simply let Lynn present his work, it would have been just another obscure presentation amongst the sea of obscure presentations that is BlackHat. By attempting to muzzle Lynn, the two companies ensured that 1) the vulnerability was the biggest story of the conference, and 2) some group of hackers would turn the vulnerability into exploit code just to get back at them.

News articles:
<http://online.wsj.com/public/article/…>
<http://searchsecurity.techtarget.com/…>
<http://www.computerworld.com/securitytopics/…>
<http://www.wired.com/news/privacy/0,1848,68328,00.html>
<http://news.zdnet.co.uk/internet/security/…>
<http://www.securityfocus.com/news/11259>
<http://hosted.ap.org/dynamic/stories/C/…>
<http://news.zdnet.co.uk/0,39020330,39211231,00.htm>
<http://www.wired.com/news/politics/0,1283,68356,00.html>
<http://www.theregister.co.uk/2005/08/02/cisco_exploits/>
<http://news.zdnet.co.uk/internet/security/…>

Lynn’s Wired interview:
<http://www.wired.com/news/privacy/0,1848,68365,00.html>

Commentary:
<http://s.businessweek.com/the_thread/techbeat/…>
<http://www.eweek.com/article2/0,1895,1842310,00.asp>
<http://searchsecurity.techtarget.com/columnItem/…>
<http://www.computerworld.com/newsletter/…>
<http://searchsecurity.techtarget.com/columnItem/…>

Jennifer Granick’s blog posts:
<http://www.granick.com/archive/…>
<http://www.granick.com/archive/…>
<http://www.granick.com/archive/…>
<http://www.granick.com/archive/…>

A video of Cisco/ISS ripping pages out of the BlackHat conference proceedings:
<http://www.makezine.com//archive/2005/08/…>

My essays on full disclosure:
<http://www.schneier.com/crypto-gram-0111.html#1>
<http://www.schneier.com/crypto-gram-0203.html#2>

My essay on secrecy and security:
<http://www.schneier.com/crypto-gram-0205.html#1>

My essay on keys and locks:
<http://www.schneier.com/crypto-gram-0302.html#1>

Copies of Lynn’s presentation, or maybe a cease-and-desist letter:
<http://www.infowarrior.org/users/rforno/lynn-cisco.pdf>
<http://www.jwdt.com/~paysan/lynn-cisco.pdf>
<http://www.purpleandgrey.com/free/lynn-cisco.pdf>
<http://cryptome.org/lynn-cisco.zip>
<http://www.securitylab.ru/_Exploits/2005/07/…>
<http://files.bitchx.ru/index.php?dir=ebooks/…>
<http://s48.yousendit.com/d.aspx?…>
<http://www.megaupload.com/?d=31GTUIFR>
<http://www.dfconsultants.com/lynn-cisco.pdf>
<http://www.security.nnov.ru/files/lynn-cisco.pdf>
<http://www.mininova.org/get/81889>
<http://www.stephencollins.org/library/linn-cisco.pdf>
<http://teknews.net/~radio/lynn-cisco.pdf>
<http://snafu.priv.at/download/lynn-cisco.pdf>

Photographs of Lynn’s actual presentation slides were here:
<http://www.tomsnetworking.com/Sections-article131.php>
Now they’re here:
<http://42.pl/lynn/>

Someone is setting up a legal defense fund for Lynn. Send donations via PayPal to Abaddon@IO.com. (Does anyone know the URL?) According to BoingBoing, donations not used to defend Lynn will be donated to the EFF.
<http://www.boingboing.net/2005/07/30/…>


E-Mail Interception Decision Reversed

A U.S. federal appeals court has ruled that the interception of e-mail in temporary storage violates the federal wiretap act, reversing an earlier court opinion.

Basically, different privacy laws protect electronic communications in transit and data in storage; the former is protected much more than the latter. E-mail stored by the sender or the recipient is obviously data in storage. But what about e-mail on its way from the sender to the receiver? On the one hand, it’s obviously communications in transit. But the government argued that it’s actually stored on various computers as it wends its way through the Internet; hence it’s data in storage.

The initial court decision in this case sided with the government. Judge Lipez wrote an inspired dissent in the original opinion. In the rehearing en banc (before more judges), he wrote the opinion for the majority, which overturned the earlier decision.

The opinion itself is long, but well worth reading. It’s well reasoned, and reflects extraordinary understanding and attention to detail. And a great last line: “If the issue presented be ‘garden-variety’… this is a garden in need of a weed killer.”

I participated in an Amicus Curiae (“friend of the court”) brief in the case.

There’s a larger issue here, and it’s the same one that the entertainment industry used to greatly expand copyright law in cyberspace. They argued that every time a copyrighted work is moved from computer to computer, or CD-ROM to RAM, or server to client, or disk drive to video card, a “copy” is being made. This ridiculous definition of “copy” has allowed them to exert far greater legal control over how people use copyrighted works.

The ruling:
<http://www.ca1.uscourts.gov/pdf.opinions/03-1383EB-01A.pdf>

Summary of the case and privacy implications:
<http://www.epic.org/privacy/councilman/>

My brief:
<http://www.epic.org/privacy/councilman/tech_amicus.pdf>

A brief by six different civil liberties organizations:
<http://www.epic.org/privacy/councilman/kerr_amicus.pdf>


Stealing Imaginary Things

There’s a new Trojan that tries to steal World of Warcraft passwords.

That reminded me of people paying programmers to find exploits to make virtual money in multiplayer online games, and then selling the proceeds for real money.

And here’s a page about ways people steal fake money in the online game Neopets, including cookie grabbers, fake login pages, fake contests, social engineering, and pyramid schemes.

I regularly say that every form of theft and fraud in the real world will eventually be duplicated in cyberspace. Perhaps every method of stealing real money will eventually be used to steal imaginary money, too.

<http://securityresponse.symantec.com/avcenter/venc/…>
<http://www.1up.com/do/feature?cId=3141815>
<http://star-girl.org/pages/reads/neopets/avoidscams.php>


Crypto-Gram Reprints

Crypto-Gram is currently in its seventh year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.

Bob on Board:
<http://www.schneier.com/crypto-gram-0408.html#1>

Alibis and the Kindness of Strangers:
<http://www.schneier.com/crypto-gram-0408.html#3>

Houston Airport Rangers:
<http://www.schneier.com/crypto-gram-0408.html#7>

Websites, Passwords, and Consumers:
<http://www.schneier.com/crypto-gram-0408.html#8>

Flying on Someone Else’s Airplane Ticket:
<http://www.schneier.com/crypto-gram-0308.html#6>

Hidden Text in Computer Documents:
<http://www.schneier.com/crypto-gram-0308.html#8>

Palladium and the TCPA:
<http://www.schneier.com/crypto-gram-0208.html#1>

Arming Airplane Pilots:
<http://www.schneier.com/crypto-gram-0208.html#8>

Code Red:
<http://www.schneier.com/crypto-gram-0108.html#1>

Protecting Copyright in the Digital World:
<http://www.schneier.com/crypto-gram-0108.html#7>

Vulnerabilities, Publicity, and Virus-Based Fixes:
<http://www.schneier.com/crypto-gram-0008.html#2>

Bluetooth:
<http://www.schneier.com/crypto-gram-0008.html#8>

A Hardware DES Cracker:
<http://www.schneier.com/…>

Biometrics: Truths and Fictions:
<http://www.schneier.com/…>

Back Orifice 2000:
<http://www.schneier.com/…>

Web-Based Encrypted E-Mail:
<http://www.schneier.com/…>


Turning Cell Phones off in Tunnels

In response to the London bombings, officials turned off cell phone service in tunnels around New York City, in an attempt to thwart bombers who might use cell phones as remote triggering devices. (Service has been restored in two of the four tunnels. As far as I know, it is still not available in the other two.)

This is as idiotic as it gets. It’s a perfect example of what I call “movie plot security”: imagining a particular scenario rather than focusing on the broad threats. It’s completely useless if a terrorist uses something other than a cell phone: a kitchen timer, for example. Even worse, it harms security in the general case. Have people forgotten how cell phones saved lives on 9/11? Communication benefits the defenders far more than it benefits the attackers.

<http://www.nytimes.com/reuters/technology/…>
<http://www.ny1.com/ny1/content/index.jsp?…>
<http://www.computerworld.com/mobiletopics/mobile/…>


Searching Bags in Subways

The New York City police will begin randomly searching people’s bags on subways, buses, commuter trains, and ferries. Other cities are following suit.

If the choice is between random searching and profiling, then random searching is a more effective security countermeasure. But there are some enormous trade-offs in liberty. And I don’t think we’re getting very much security in return, especially considering that passengers are free to turn around and leave the subway station if they don’t want to be searched.

“Okay guys; here are your explosives. If one of you gets singled out for a search, just turn around and leave. And then go back in via another entrance, or take a taxi to the next subway stop.”

(To be fair, while that was reported in the news, I have not heard from anyone who has tried to refuse a search and leave.)

And I don’t think they’ll be truly random, either. I think the police doing the searching will profile, because that’s what happens.

It’s another “movie plot threat.” It’s another “public relations security system.” It’s a waste of money, it substantially reduces our liberties, and it won’t make us any safer.

Final note: I often get comments along the lines of “Stop criticizing stuff; tell us what we should do.” My answer is always the same. Counterterrorism is most effective when it doesn’t make arbitrary assumptions about the terrorists’ plans. Stop searching bags on the subways, and spend the money on 1) intelligence and investigation—stopping the terrorists regardless of what their plans are, and 2) emergency response—lessening the impact of a terrorist attack, regardless of what the plans are. Countermeasures that defend against particular targets, or assume particular tactics, or cause the terrorists to make insignificant modifications in their plans, or that surveil the entire population looking for the few terrorists, are largely not worth it.

<http://www.nytimes.com/2005/07/21/nyregion/…>
<http://www.washingtonpost.com/wp-dyn/content/…>

A Citizen’s Guide to Refusing New York Subway Searches:
<http://www.flexyourrights.org/subway/>


Plagiarism and Academia: Personal Experience

A paper published in the December 2004 issue of the SIGCSE Bulletin, “Cryptanalysis of some encryption/cipher schemes using related key attack,” by Khawaja Amer Hayat, Umar Waqar Anis, and S. Tauseef-ur-Rehman, is the same as a paper that John Kelsey, David Wagner, and I published in 1997.

It’s clearly plagiarism. Sentences have been reworded or summarized a bit and many typos have been introduced, but otherwise it’s the same paper. It’s copied, with the same section, paragraph, and sentence structure—right down to the same mathematical variable names. It has the same quirks in the way references are cited. And so on.

We wrote two papers on the topic; this is the second. They don’t list either of our papers in their bibliography. They do have a lurking reference to “[KSW96]” in the body of their introduction and design principles, presumably copied from our text; but a full citation for “[KSW96]” isn’t in their bibliography. Perhaps they were worried that one of the referees would read the papers listed in their bibliography, and notice the plagiarism.

The three authors are from the International Islamic University in Islamabad, Pakistan. The third author, S. Tauseef-Ur-Rehman, is a department head (and faculty member) in the Telecommunications Engineering Department at this Pakistani institution. If you believe his story—which is probably correct—he had nothing to do with the research, but just appended his name to a paper by two of his students. (This is not unusual; it happens all the time in universities all over the world.) But that doesn’t get him off the hook. He’s still responsible for anything he puts his name on.

And we’re not the only ones. The same three authors plagiarized a paper by French cryptographer Serge Vaudenay and others. And one of my blog readers found a third plagiarized paper, and potentially a fourth.

I wrote to the editor of the SIGCSE Bulletin, who removed the paper from their website and demanded official letters of admission and apology from the authors. The editors said that they would ban the authors from submitting again, but have since backpedaled. Mark Mandelbaum, Director of the Office of Publications at ACM, now says that ACM has no policy on plagiarism and that nothing additional will be done. I’ve also written to Springer-Verlag, the publisher of my original paper.

I don’t blame the journals for letting these papers through. I’ve refereed papers, and it’s pretty much impossible to verify that a piece of research is original. We’re largely self-policing.

Mostly, the system works. These three have been found out, and should be fired and/or expelled. Certainly ACM should ban them from submitting anything, and I am very surprised at their claim that they have no policy with regards to plagiarism. Academic plagiarism is serious enough to warrant that level of response. I don’t know if the system works in Pakistan, though. I hope it does. These people knew the risks when they did it. And then they did it again.

If I sound angry, I’m not. I’m more amused. I’ve heard of researchers from developing countries resorting to plagiarism to pad their CVs, but I’m surprised to see it happen to me. I mean, really; if they were going to do this, wouldn’t it have been smarter to pick a more obscure author?

And it’s nice to know that our work is still considered relevant eight years later.

My paper:
<http://www.schneier.com/paper-relatedkey.html>
The plagiarized version:
<http://portal.acm.org/citation.cfm?doid=1041624.1041665>

Another paper:
<http://lasecwww.epfl.ch/php_code/publications/…>
The plagiarized version:
<http://www.ansinet.org/fulltext/itj/itj33327-331.pdf>

A third paper:
<http://www.iki.fi/vph/files/rtp_security.pdf>
The plagiarized version:
<http://www.ansinet.org/fulltext/itj/itj33311-314.pdf>

The apologies are at the bottom of this page:
<http://www.schneier.com/paper-relatedkey-p.html>

There is a lot of discussion, much of it from students at the International Islamic University, in the comments section of my blog post:
<https://www.schneier.com/blog/archives/2005/08/…>

And there’s some news about the incident. (Note that my name is completely wrong.)
<http://www.onlinenews.com.pk/details.php?id=85519>


RFID Passport Security Revisited

I’ve written previously about RFID chips in passports. Two recent articles summarize the latest State Department proposal, and it looks pretty good. They’re addressing privacy concerns, and they’re doing it right.

The most important feature they’ve designed is an access-control system for the RFID chip. The data on the chip is encrypted, and the key is printed on the passport. The officer swipes the passport through an optical reader to get the key, and then the RFID reader uses the key to communicate with the RFID chip. This means that the passport-holder can control who has access to the information on the chip; someone cannot skim information from the passport without first opening it up and reading the information inside. Good security.
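Here is a minimal sketch, in Python, of the access-control idea described above: the chip refuses to release its data unless the reader proves it knows the key printed inside the passport. It is only an illustration of the concept—the actual ICAO design differs in its details—and the passport string and personal data below are invented.

  # Toy model of the described access control: the chip answers only a
  # reader that demonstrates knowledge of the key printed in the passport.
  # Not the real protocol; an illustration of the idea only.
  import hashlib, hmac, os

  class PassportChip:
      def __init__(self, printed_key: str, holder_data: bytes):
          # Key derived from the string printed on the passport's data page.
          self._key = hashlib.sha256(printed_key.encode()).digest()
          self._data = holder_data

      def challenge(self) -> bytes:
          self._nonce = os.urandom(16)
          return self._nonce

      def read(self, proof: bytes) -> bytes:
          # Release data only if the reader MACed the challenge with the key.
          expected = hmac.new(self._key, self._nonce, hashlib.sha256).digest()
          if not hmac.compare_digest(proof, expected):
              raise PermissionError("reader does not know the printed key")
          return self._data

  printed = "P<USADOE<<JANE<<1234567"          # invented data-page string
  chip = PassportChip(printed, b"name=Jane Doe; nationality=USA")

  # Border officer: opens the passport, optically reads the printed key,
  # and uses it to answer the chip's challenge.
  key = hashlib.sha256(printed.encode()).digest()
  proof = hmac.new(key, chip.challenge(), hashlib.sha256).digest()
  print(chip.read(proof))                      # data released

  # Skimmer: queries the chip over the air without seeing the printed key.
  try:
      guess = hmac.new(b"wrong key", chip.challenge(), hashlib.sha256).digest()
      chip.read(guess)
  except PermissionError as e:
      print("skimmer rejected:", e)

In the real design the data is also protected in transit; this sketch models only the property that matters here: you can’t read the chip without first opening the passport.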

The new design also includes a thin radio shield in the cover, protecting the chip when the passport is closed. More good security.

If the State Department implements these features (an assumption at this point), and the features work as advertised (a big “if,” I grant you), then I am no longer opposed to the idea. And, more importantly, we have an example of an RFID identification system with good privacy safeguards. We should demand that any other RFID identification cards have similar privacy safeguards.

<http://www.usatoday.com/travel/news/…>
<http://www.wired.com/news/privacy/…>

My previous writings:
<http://www.schneier.com/essay-060.html>
<https://www.schneier.com/blog/archives/2004/10/…>
<https://www.schneier.com/blog/archives/2005/04/…>


Risks of Losing Portable Devices

As PDAs become more powerful, and memory becomes cheaper, more people are carrying around a lot of personal information in an easy-to-lose format.

I’ve noticed this in my own life. If I didn’t make a special effort to limit the amount of information on my Treo, it would include detailed scheduling information from the past six years. My small laptop would include every e-mail I’ve sent and received in the past dozen years. And so on. A lot of us are carrying around an enormous amount of very personal data.

And some of us are carrying around personal data about other people, too.

There are several ways to deal with this—password protection and encryption, of course. More recently, some communications devices can be remotely erased if lost.
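As a minimal sketch of the first of those countermeasures—encrypting data at rest under a key derived from a passphrase—here is a standard-library-only Python toy. It is deliberately simplified: a real implementation should use a vetted authenticated-encryption library rather than this hash-based keystream, and the passphrase and note below are invented.

  # Toy sketch: encrypt a note under a passphrase-derived key before it is
  # written to the device. Illustrative only; use a vetted library (e.g.,
  # AES-GCM) for anything real.
  import hashlib, os

  def derive_key(passphrase: str, salt: bytes) -> bytes:
      # Slow key derivation, so brute-forcing a lost device is expensive.
      return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

  def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
      out, counter = b"", 0
      while len(out) < n:
          out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
          counter += 1
      return out[:n]

  def encrypt(passphrase: str, plaintext: bytes) -> bytes:
      salt, nonce = os.urandom(16), os.urandom(16)
      key = derive_key(passphrase, salt)
      ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
      return salt + nonce + ct       # store this blob on the device

  def decrypt(passphrase: str, blob: bytes) -> bytes:
      salt, nonce, ct = blob[:16], blob[16:32], blob[32:]
      key = derive_key(passphrase, salt)
      return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

  blob = encrypt("correct horse battery staple", b"meet source at 10:00 Tuesday")
  print(decrypt("correct horse battery staple", blob))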

<http://www.washingtonpost.com/wp-dyn/content/…>


How to Not Fix the ID Problem

Several of the 9/11 terrorists had Virginia driver’s licenses in fake names. These were not forgeries; these were valid Virginia IDs that were illegally sold by Department of Motor Vehicle workers.

So what did Virginia do to correct the problem? They required more paperwork in order to get an ID.

But the problem wasn’t that it was too easy to get an ID. The problem was that insiders were selling them illegally. Which is why the Virginia “solution” didn’t help, and the problem remains:

“The manager of the Virginia Department of Motor Vehicles office at Springfield Mall was charged yesterday with selling driver’s licenses to illegal immigrants and others for up to $3,500 apiece.

“The arrest of Francisco J. Martinez marked the second time in two years that a Northern Virginia DMV employee was accused of fraudulently selling licenses for cash. A similar scheme two years ago at the DMV office in Tysons Corner led to the guilty pleas of two employees.”

And after we spend billions on the REAL ID Act, and require even more paperwork to get a state ID, the problem will still remain.

<http://www.washingtonpost.com/wp-dyn/content/…>

Virginia license requirements:
<http://www.dmvnow.com/webdoc/pdf/dmv141.pdf>


Secure Flight

Last month the GAO issued a new report on Secure Flight. It’s couched in friendly language, but it’s not good. Here’s an excerpt:

“During the course of our ongoing review of the Secure Flight program, we found that TSA did not fully disclose to the public its use of personal information in its fall 2004 privacy notices as required by the Privacy Act. In particular, the public was not made fully aware of, nor had the opportunity to comment on, TSA’s use of personal information drawn from commercial sources to test aspects of the Secure Flight program. In September 2004 and November 2004, TSA issued privacy notices in the Federal Register that included descriptions of how such information would be used. However, these notices did not fully inform the public before testing began about the procedures that TSA and its contractors would follow for collecting, using, and storing commercial data. In addition, the scope of the data used during commercial data testing was not fully disclosed in the notices. Specifically, a TSA contractor, acting on behalf of the agency, collected more than 100 million commercial data records containing personal information such as name, date of birth, and telephone number without informing the public. As a result of TSA’s actions, the public did not receive the full protections of the Privacy Act.”

Got that? The TSA violated federal law when it secretly expanded Secure Flight’s use of commercial data about passengers. It also lied to Congress and the public about it.

Much of this isn’t new. Last month we learned that the TSA bought and is storing commercial data about passengers, even though officials said they wouldn’t do it and Congress told them not to.

Secure Flight is a disaster in every way. The TSA has been operating with complete disregard for the law or Congress. It has lied to pretty much everyone. And it is turning Secure Flight from a simple program to match airline passengers against terrorist watch lists into a complex program that compiles dossiers on passengers in order to give them some kind of score indicating the likelihood that they are a terrorist.

Which is exactly what it was not supposed to do in the first place.

This is what I wrote about Secure Flight in January:

“For those who have not been following along, Secure Flight is the follow-on to CAPPS-I. (CAPPS stands for Computer Assisted Passenger Pre-Screening.) CAPPS-I has been in place since 1997, and is a simple system to match airplane passengers to a terrorist watch list. A follow-on system, CAPPS-II, was proposed last year. That complicated system would have given every traveler a risk score based on information in government and commercial databases. There was a huge public outcry over the invasiveness of the system, and it was cancelled over the summer. Secure Flight is the new follow-on system to CAPPS-I.”

Back then, Secure Flight was intended to just be a more efficient system of matching airline passengers with terrorist watch lists.

I am on a TSA working group that is looking at the security and privacy implications of Secure Flight. Before joining the group I signed an NDA agreeing not to disclose any information learned within the group, and to not talk about deliberations within the group. But there’s no reason to believe that the TSA is lying to us any less than they’re lying to Congress, and there’s nothing I learned within the working group that I wish I could talk about. Everything I say here comes from public documents.

In January I gave some general conclusions about Secure Flight. These have not changed:

“One, assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement—in almost every way—over what is currently in place. (And by this I mean the matching program, not any potential uses of commercial or other third-party data.)

“Two, the security system surrounding Secure Flight is riddled with security holes. There are security problems with false IDs, ID verification, the ability to fly on someone else’s ticket, airline procedures, etc.

“Three, the urge to use this system for other things will be irresistible. It’s just too easy to say: “As long as you’ve got this system that watches out for terrorists, how about also looking for this list of drug dealers…and by the way, we’ve got the Super Bowl to worry about too.” Once Secure Flight gets built, all it’ll take is a new law and we’ll have a nationwide security checkpoint system.

“And four, a program of matching airline passengers with names on terrorism watch lists is not making us appreciably safer, and is a lousy way to spend our security dollars.”

What has changed is the scope of Secure Flight. First, it started using data from commercial sources, like Acxiom. Technically, they’re testing the use of commercial data, but it’s still a violation. Even the DHS started investigating whether the TSA has violated federal privacy laws.

The TSA’s response to being caught violating their own Privacy Act statements? Revise them. A news report quotes a TSA official as saying that it’s routine to change Privacy Act statements during testing.

Actually, it’s not. And it’s better to change the Privacy Act statement before violating the old one. Changing it after the fact just looks bad.

The point of Secure Flight is to match airline passengers against lists of suspected terrorists. But the vast majority of people flagged by this list simply have the same name, or a similar name, as the suspected terrorist: Ted Kennedy and Cat Stevens are two famous examples. The question is whether combining commercial data with the PNR (Passenger Name Record) supplied by the airline could reduce this false-positive problem. Maybe knowing the passenger’s address, or phone number, or date of birth, could reduce false positives. Or maybe not; it depends what data is on the terrorist lists. In any case, it’s certainly a smart thing to test.
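To illustrate the false-positive problem—and why one extra field might help—here is a small hypothetical sketch in Python. The names, dates, and similarity threshold are all invented; this says nothing about how Secure Flight actually matches records.

  # Hypothetical watch-list matching: name-only matching flags innocent
  # passengers who merely share (or nearly share) a listed name; requiring
  # a matching date of birth as well removes those false positives.
  from difflib import SequenceMatcher

  watch_list = [{"name": "john doe", "dob": "1965-03-02"}]   # invented entry

  passengers = [
      {"name": "John Doe", "dob": "1948-11-30"},   # innocent, same name
      {"name": "Jon Doe",  "dob": "1979-06-14"},   # innocent, similar name
      {"name": "John Doe", "dob": "1965-03-02"},   # the listed person
      {"name": "Mary Roe", "dob": "1982-01-05"},
  ]

  def similar(a: str, b: str) -> float:
      return SequenceMatcher(None, a.lower(), b.lower()).ratio()

  def flagged(passenger, check_dob: bool) -> bool:
      for entry in watch_list:
          if similar(passenger["name"], entry["name"]) > 0.85:
              if not check_dob or passenger["dob"] == entry["dob"]:
                  return True
      return False

  for check_dob in (False, True):
      hits = [p["name"] for p in passengers if flagged(p, check_dob)]
      print(f"date-of-birth check {'on' if check_dob else 'off'}: flagged {hits}")

Of course, the extra field only helps if the watch list itself contains a date of birth for the entry—which is exactly the “it depends what data is on the terrorist lists” caveat above.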

But using commercial data has serious privacy implications, which is why Congress mandated all sorts of rules surrounding the TSA testing of commercial data—and more rules before it could deploy a final system—rules that the TSA has decided it can ignore completely.

Commercial data had another use under CAPPS-II. In that now-dead program, every passenger would be subjected to a computerized background check to determine their “risk” to airline safety. The system would assign a risk score based on commercial data: their credit rating, how recently they moved, what kind of job they had, etc. This capability was removed from Secure Flight, but now it’s back. An AP story quotes Justin Oberman, the TSA official in charge of Secure Flight, as saying: “We are trying to use commercial data to verify the identities of people who fly because we are not going to rely on the watch list…. If we just rise and fall on the watch list, it’s not adequate.”

Oberman also testified in a Congressional hearing:

“THOMPSON: There are a couple of questions I’d like to get answered in my mind about Secure Flight. Would Secure Flight pick up a person with strong community roots but who is in a terrorist sleeper cell or would a person have to be a known terrorist in order for Secure Flight to pick him up?

“OBERMAN: Let me answer that this way: It will identify people who are known or suspected terrorists contained in the terrorist screening database, and it ought to be able to identify people who may not be on the watch list. It ought to be able to do that. We’re not in a position today to say that it does, but we think it’s absolutely critical that it be able to do that.

“And so we are conducting this test of commercially available data to get at that exact issue. Very difficult to do, generally. It’s particularly difficult to do when you have a system that transports 1.8 million people a day on 30,000 flights at 450 airports. That is a very high bar to get over.

“It’s also very difficult to do with a threat described just like you described it, which is somebody who has sort of burrowed themselves into society and is not readily apparent to us when they’re walking through the airport. And so I cannot stress enough how important we think it is that it be able to have that functionality. And that’s precisely the reason we have been conducting this commercial data test, why we’ve extended the testing period and why we’re very hopeful that the results will prove fruitful to us so that we can then come up here, brief them to you and explain to you why we need to include that in the system.”

My fear is that TSA has already decided that they’re going to use commercial data, regardless of any test results. And once you have commercial data, why not build a dossier on every passenger and give him or her a risk score? So we’re back to CAPPS-II, the very system Congress killed last summer. Actually, we’re very close to TIA (Total/Terrorism Information Awareness), that vast spy-on-everyone data-mining program that Congress killed in 2003 because it was just too invasive.

Secure Flight is a mess in lots of other ways, too. A March GAO report said that Secure Flight had not met nine out of the ten conditions mandated by Congress before TSA could spend money on implementing the program. (If you haven’t read this report, it’s pretty scathing.) The redress problem—helping people who cannot fly because they share a name with a terrorist—is not getting any better. And Secure Flight is behind schedule and over budget.

It’s also a rogue program that is operating in flagrant disregard for the law. It can’t be killed completely; the Intelligence Reform and Terrorism Prevention Act of 2004 mandates that TSA implement a program of passenger prescreening. And until we have Secure Flight, airlines will still be matching passenger names with terrorist watch lists under the CAPPS-I program. But it needs some serious public scrutiny.

July GAO Report:
<http://www.gao.gov/new.items/d05864r.pdf>

My essays on Secure Flight:
<http://www.schneier.com/crypto-gram-0502.html#1>
<http://www.schneier.com/crypto-gram-0501.html#9>
<http://www.schneier.com/crypto-gram-0504.html#11>

News articles:
<http://www.commondreams.org/headlines05/0621-05.htm>
<http://www.secondaryscreening.net/static/archives/…>
<http://www.airportbusiness.com/article/article.jsp?…>
<http://www.sfgate.com/cgi-bin/article.cgi?f=/n/a/…>
<http://www.alternet.org/story/23362/>

Congressional hearing:
<http://www6.lexisnexis.com/publisher/EndUser?…>

March GAO Report:
<http://www.gao.gov/new.items/d05356.pdf>

Secure Flight background:
<http://www.epic.org/privacy/airtravel/secureflight.html>

CAPPS-II background:
<http://www.aclu.org/SafeandFree/SafeandFree.cfm?…>

TIA background:
<http://www.epic.org/privacy/profiling/tia/>

Anita Ramasastry’s commentary is worth reading:
<http://writ.news.findlaw.com/ramasastry/20050726.html>

LAST MINUTE NEWS: Wired News reports that the Department of Homeland Security is pushing to let Secure Flight use commercial databases, and to reduce independent Congressional oversight of the program.
<http://www.wired.com/news/privacy/…>


News

An absolutely fascinating interview with Robert Pape, a University of Chicago professor who has studied every suicide terrorist attack since 1980. “The central fact is that overwhelmingly suicide-terrorist attacks are not driven by religion as much as they are by a clear strategic objective: to compel modern democracies to withdraw military forces from the territory that the terrorists view as their homeland.”
<http://www.amconmag.com/2005_07_18/article.html>
His book:
<http://www.amazon.com/exec/obidos/tg/detail/-/…>
Reviews:
<http://www.salon.com/books/review/2005/07/26/pape/…>
<http://www.antiwar.com/scheuer/?articleid=6286>

There’s a major reorganization going on at the Department of Homeland Security. One of the effects is the creation of a new post: assistant secretary for cyber and telecommunications security. Honestly, it doesn’t matter where the nation’s cybersecurity chief sits in the organizational chart. If he has the authority to spend money and write regulations, he can do good. If he only has the power to suggest, plead, and cheerlead, he’ll be as frustrated as all the previous ones were.
<http://www.computerworld.com/newsletter/…>

In yet another “movie-plot threat” defense, the U.S. government is starting to test anti-missile lasers on commercial aircraft.
<http://news.yahoo.com/news?tmpl=story&u=/…>

Nice MSNBC piece on domestic terrorism in the U.S.
<http://www.msnbc.msn.com/id/8649078/site/newsweek/>
David Neiwert has some good commentary on the topic:
<http://dneiwert.blogspot.com/2005/07/…>
See also this U.S. News and World Report article:
<http://www.usnews.com/usnews/news/articles/050712/…>

The Sorting Door project studies the massive databases that will be created by RFID chips:
<http://www.theregister.co.uk/2005/07/12/…>

Yet another Microsoft-built-in security bypass:
<http://news.com.com/…>
I am very suspicious of tools that allow you to bypass network security systems. Yes, they make life easier. But if security is important, then all security decisions should be made by a central process; tools that bypass that centrality are very risky.

For $13 a month, you can buy “Wells Fargo Select Identity Theft Protection.” The service includes daily monitoring of one’s credit files and assistance in dealing with cases of fraud. It’s a good idea, and it’s reprehensible that Wells Fargo doesn’t offer this service for free. Actually, that’s not true. It’s smart business for Wells Fargo to charge for this service. It’s reprehensible that the regulatory landscape is such that Wells Fargo does not feel it’s in its best interest to offer this service for free. Wells Fargo is a for-profit enterprise, and they react to the realities of the market. We need those realities to better serve the people.
<http://www.sfgate.com/cgi-bin/article.cgi?file=/c/a/…>

Phil Zimmermann’s encrypted VOIP phone:
<http://www.wired.com/news/technology/…>

Supposedly British police have asked the government for a bunch of new powers to fight terrorism, including the right to detain a suspect for up to three months without charge (current limit is 14 days), and make it a criminal offence not to give police encryption keys. When Sir Ian Blair was asked why the police wanted the extra time, he said that they sometimes needed to access encrypted computer files and 14 days was not enough time for them to break the encryption. That answer makes no sense. While it’s certainly possible that password-guessing programs are more successful with three months to guess, the Regulation of Investigatory Powers (RIP) Act—which went into effect in 2000—already allows the police to jail people who don’t surrender encryption keys.
<http://www.guardian.co.uk/print/…>
<http://edge.channel4.com/news/2005/07/week_4/…>
<http://www.guardian.co.uk/theissues/article/…>
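A quick, purely illustrative arithmetic check on the three-months argument in the item above: extending a brute-force search from 14 days to 90 days buys surprisingly little against actual encryption.

  # Back-of-the-envelope: going from 14 days to 90 days of key search
  # multiplies the searched key space by 90/14 -- under 3 extra bits,
  # which is meaningless against a modern 128-bit key, though three
  # months of guessing a weak human-chosen password is another matter.
  import math

  gain = 90 / 14
  print(f"speedup: {gain:.1f}x, or about {math.log2(gain):.1f} extra bits searched")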

Intel and Microsoft are using DRM technology to cut Linux out of the content market.
<http://theinquirer.net/?article=24638>
My essay on Microsoft’s “Trusted Computing” platform:
<http://www.schneier.com/crypto-gram-0208.html#1>
My essay on the Microsoft monopoly, which predicted this kind of behavior:
<http://www.schneier.com/crypto-gram-0310.html#12>
<http://www.ccianet.org/papers/cyberinsecurity.pdf>

Fascinating research on automatic surveillance via cell phone:
<http://www.wired.com/news/wireless/0,1382,68263,00.html>
<http://reality.media.mit.edu/>

Microsoft wants to make pirated software less useful by preventing it from receiving patches and updates. At the same time, it is in everyone’s best interest for all software to be more secure: legitimate and pirated. This issue has been percolating for a while, and I’ve written about it twice before. After much going back and forth, Microsoft is going to do the right thing.
<http://news.com.com/…>
My previous writings:
<https://www.schneier.com/blog/archives/2005/02/…>
<http://www.schneier.com/crypto-gram-0406.html#4>

Hacking hotel infrared systems:
<http://www.wired.com/news/privacy/0,1848,68370,00.html>

The Department of Homeland Security is testing a program to issue RFID identity cards to visitors entering the U.S.
<http://www.thewhig.com/webapp/sitepages/content.asp?…>
<http://www.dhs.gov/dhspublic/display?content=4308>
I know nothing about the details of this program or about the security of the cards. Even so, the long-term implications of this kind of thing are very chilling.

Eavesdropping on Bluetooth-enabled automobiles.
<http://trifinite.org/blog/archives/2005/07/…>
<http://www.computerworld.com/securitytopics/…>

Salon has an interesting article about parents turning to technology to monitor their children, instead of to other people in their community. This is security based on fear, not reason. And I think people who act this way make their families less safe.
<http://www.salon.com/mwt/feature/2005/07/25/…>
<http://search.barnesandnoble.com/booksearch/…>

Here’s a post-Cold-War risk I had not thought of: caches of explosives hidden in Moscow:
<http://www.mosnews.com/feature/2005/07/15/bomba.shtml>
Turns out this is not just a Soviet phenomenon. In the 1980s and 1990s, several weapons caches were discovered in Western Europe, left by the CIA and NATO.

Rules on exporting cryptography outside the United States have been renewed.
<http://news.com.com/2061-10789_3-5817718.html>

There’s a new Windows 2000 vulnerability. When you read the link, don’t fail to notice the sensationalist explanation from eEye. This is what I call a “publicity attack”: it’s an attempt by eEye Digital Security to get publicity for their company. Yes, I’m sure it’s a bad vulnerability. Yes, I’m sure Microsoft should have done more to secure their systems. But eEye isn’t blameless in this; they’re searching for vulnerabilities that make good press releases.
<http://news.com.com/Worm+hole+found+in+Windows+2000/…>
My essay on publicity attacks:
<http://www.schneier.com/…>
An earlier essay on the topic (note that the particular example in that essay is wrong):
<http://www.schneier.com/…>

Here’s the basic story: A woman and her dog are riding the Seoul subways. The dog poops on the floor. The woman refuses to clean it up, despite being told to by other passengers. Someone takes a picture of her, posts it on the Internet, and she is publicly shamed—and the story will live on the Internet forever. The blogosphere then debates the notion of the Internet as a social enforcement tool.
<https://www.schneier.com/blog/archives/2005/07/…>

Interesting details about the bombs used in the 7/7 London bombings:
<http://www.cnn.com/2005/US/08/03/nypd.london.bomb.ap/>
For those of you upset that the police divulged the recipe—citric acid, hair bleach, and food heater tablets—the details are already out there.
<http://business.fortunecity.com/executive/674/hmtd.html>
<http://www.fortliberty.org/military-library/…>
<http://www.roguesci.org/theforum/index.php>
And here are some images of home-made explosives seized in the various raids after the bombings.
<http://abcnews.go.com/WNT/popup?id=979901l>
Normally this kind of information would be classified. It seems that the New York Police released this information by mistake.
<http://news.bbc.co.uk/2/hi/uk_news/4746381.stm>

Playing classical music outside your storefront helps prevent loitering:
<http://www.freenewmexican.com/artsfeatures/10701.html>
The idea is at least a decade old:
<http://www.citypages.com/databank/18/842/…>
Note that this does not reduce loitering; it only moves it around. But if you’re the owner of a 7-Eleven, you don’t care if kids are loitering at the store down the block. You just don’t want them loitering at your store.

Profiling humor:
<http://images.ucomics.com/comics/gm/2005/gm050804.gif/>

Orlando Airport is piloting a new pre-screening program called CLEAR. The idea is that you pay $80 a year and subject yourself to a background check, and then you can use a faster security line at airports.
<http://www.airportbusiness.com/article/article.jsp?…>
<http://www.rednova.com/news/technology/153572/…>
<http://www.securityinfowatch.com/online/Biometrics/…>
<http://www.flyclear.com/clear.html>
I’ve already written about this idea, back when Steven Brill first started talking about it:
<http://www.schneier.com/crypto-gram-0403.html#10>
Nothing in this program is different from what I wrote about last year. According to their website: “Your Membership will be continuously reviewed by TSA’s ongoing Security Threat Assessment Process. If your security status changes, your Membership will be immediately deactivated and you will receive a notification email of your status change as well as a refund of the unused portion of your annual enrollment fee.” Think about it. For $80 a year, any potential terrorist can be automatically notified if the Department of Homeland Security is on to him. Such a deal.

At DefCon earlier this month, a group was able to set up an unamplified 802.11 network at a distance of 124.9 miles.
<http://www.enterpriseitplanet.com/networking/news/…>
<http://pasadena.net/shootout05/>
Even more important, the world record for communicating with a passive RFID device was set at 69 feet. Remember that the next time someone tells you that it’s impossible to read RFID identity cards at a distance.
<http://s.washingtonpost.com/securityfix/2005/08/…>
<http://www.makezine.com//archive/2005/07/…>
Whenever you hear a manufacturer talk about a distance limitation for any wireless technology—wireless LANs, RFID, Bluetooth, anything—assume he’s wrong. If he’s not wrong today, he will be in a couple of years. Assume that someone who spends some money and effort building more sensitive technology can do much better, and that it will take less money and effort over the years. Technology always gets better; it never gets worse. If something is difficult and expensive now, it will get easier and cheaper in the future.
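To see why, here is a back-of-the-envelope free-space link budget for a shot of that length. The transmit power and antenna gains are assumptions chosen for illustration, not the DefCon team’s actual equipment:

import math

distance_km = 124.9 * 1.609        # about 201 km
freq_mhz = 2412                    # 802.11b channel 1

# Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44
fspl_db = 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

tx_power_dbm = 15                  # assumed: stock 802.11b card, no amplifier
tx_antenna_dbi = 24                # assumed: large parabolic dish
rx_antenna_dbi = 24                # assumed: matching dish at the far end

rx_power_dbm = tx_power_dbm + tx_antenna_dbi + rx_antenna_dbi - fspl_db

print("Path loss: %.1f dB" % fspl_db)              # about 146 dB
print("Received signal: %.1f dBm" % rx_power_dbm)  # about -83 dBm

A received signal around -83 dBm is above the roughly -90 dBm sensitivity of a typical 802.11b card at 1 Mbps, so an unamplified record at that distance is exactly what the physics predicts once you add big enough antennas.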

This New York Times op-ed argues that panic is largely a myth. People feel stressed but they behave rationally, and it only gets called “panic” because of the stress.
<http://www.nytimes.com/2005/08/07/opinion/…>

Interesting article: “The Hidden Boot Code of the Xbox, or How to fit three bugs in 512 bytes of security code.”
<http://www.xbox-linux.org/wiki/…>
Microsoft wanted to lock out both pirated games and unofficial games, so they built a chain of trust on the Xbox from the hardware to the execution of the game code. Only code authorized by Microsoft could run on the Xbox. The link between hardware and software in this chain of trust is the hidden “MCPX” boot ROM. The article discusses that ROM. Lots of kindergarten security mistakes.
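For readers who don’t want to wade through the article, here is a minimal sketch of how a boot-time chain of trust is supposed to work. It is a generic illustration, not the Xbox’s actual ROM code or algorithm:

import hashlib

def provision(next_stage):
    # At the factory: record a digest of the authorized next-stage loader.
    return hashlib.sha256(next_stage).digest()

def boot(next_stage, expected_digest):
    # In the boot ROM: run the next stage only if its digest matches.
    if hashlib.sha256(next_stage).digest() != expected_digest:
        raise RuntimeError("unauthorized code; halting")
    run(next_stage)

def run(stage):
    print("executing", len(stage), "bytes of verified code")

authorized = b"\x90" * 512            # stand-in for the real second-stage loader
digest = provision(authorized)

boot(authorized, digest)              # verified, so it runs
try:
    boot(b"homebrew or pirated loader", digest)
except RuntimeError as e:
    print("refused:", e)              # unverified code never executes

The whole chain is only as strong as its first link; the article’s point is that the Xbox’s first link was both tiny and buggy.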

An attorney in Australia has successfully used the MD5 Defense—the fact that the hash function is broken—to fight a highway camera that photographs speeders.
<http://theage.com.au/articles/2005/08/10/…>
<http://www.news.com.au/story/…>
This is interesting. It’s true that MD5 is broken. On the other hand, it’s almost certainly true that the speed cameras were correct. If there’s any lesson here, it’s that even theoretical security weaknesses can matter in legal proceedings. I think that’s a good thing.
<http://www.schneier.com/crypto-gram-0409.html#3>
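For context, the hash usually enters this kind of case as an integrity check on the camera’s image file. Here is a minimal sketch of that workflow; the file name and procedure are assumptions, since the actual camera system’s design isn’t public:

import hashlib

def fingerprint(path, algorithm="md5"):
    # Hash a file in chunks and return its hex digest.
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for the camera's photo, so the sketch runs end to end.
with open("speeding_photo.jpg", "wb") as f:
    f.write(b"\xff\xd8\xff\xe0 stand-in JPEG bytes")

recorded = fingerprint("speeding_photo.jpg")            # stored at capture time
assert fingerprint("speeding_photo.jpg") == recorded    # checked at trial

# The MD5 Defense: because colliding MD5 inputs can now be constructed, a
# matching digest no longer proves, in theory, that the file was never
# substituted -- even though in practice the photo is almost certainly genuine.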

A comment on the U.K. government using a border-security failure to push for national ID cards:
<http://www.theregister.co.uk/2005/08/04/…>

Fingerprinting paper:
<https://www.schneier.com/blog/archives/2005/08/…>
This could make an enormous difference in security against forgeries. The idea isn’t new. I remember currency anti-counterfeiting research in which fiber-optic bits were added to the paper pulp, and a “fingerprint” was taken using a laser. It didn’t work then, but it was clever.

Do-it-Yourself Security Checkpoint:
<http://eurobsd.org/2005-WhatTheHack/reports/…>

The TSA wants you to get spam:
<https://www.schneier.com/blog/archives/2005/08/…>

Cryptographically-secured murder confession:
<http://seattlepi.nwsource.com/local/…>

Remember all those stories about the terrorists hiding messages in television broadcasts? They were all false alarms.
<http://www.guardian.co.uk/life/feature/story/…>

The Devil’s Infosec Dictionary:
<http://www.csoonline.com/read/080105/debrief.html>
I want it to be funnier. And I want the entry that mentions me—”Cryptography: The science of applying a complex set of mathematical algorithms to sensitive data with the aim of making Bruce Schneier exceedingly rich”—to be more true. Over at my blog, I’m collecting better and funnier definitions. Join in if you want:
<https://www.schneier.com/blog/archives/2005/08/…>


Shoot-to-Kill

London’s Metropolitan Police has a shoot-to-kill policy when dealing with suspected suicide terrorists. And the International Association of Chiefs of Police has issued new guidelines that also recommend a shoot-to-kill policy. The theory is that only a direct headshot will kill the terrorist immediately, and thus destroy the ability to execute a bombing attack.

What might cause a police officer to think you’re a suicide bomber, and then shoot you in the head?

“The police organization’s behavioral profile says such a person might exhibit ‘multiple anomalies,’ including wearing a heavy coat or jacket in warm weather or carrying a briefcase, duffel bag or backpack with protrusions or visible wires. The person might display nervousness, an unwillingness to make eye contact or excessive sweating. There might be chemical burns on the clothing or stains on the hands. The person might mumble prayers or be ‘pacing back and forth in front of a venue.'”

Is that all that’s required?

“The police group’s guidelines also say the threat to officers does not have to be ‘imminent,’ as police training traditionally teaches. Officers do not have to wait until a suspected bomber makes a move, another traditional requirement for police to use deadly force. An officer just needs to have a ‘reasonable basis’ to believe that the suspect can detonate a bomb, the guidelines say.”

This policy is based on the extremely short-sighted assumption that a terrorist needs to push buttons to make a bomb explode. In fact, ever since World War I, the most common type of bomb carried by a person has been the hand grenade. It is entirely conceivable, especially when a shoot-to-kill policy is known to be in effect, that suicide bombers will use the same kind of dead-man’s trigger on their bombs: a detonator that is activated when a button is released, rather than when it is pushed. This is a difficult one. Whatever policy you choose, the terrorists will adapt to make that policy the wrong one.

It’s also a policy that puts people at risk rather than making them safer. The security question to ask is not: “How else can we stop a suicide bomber?” The real question is: “When the police suspect someone of being able to detonate a bomb, what should they do?” Backpack bombers are very rare, so much so that anyone whom the police suspect will most likely be innocent.

The London police are now sorry they accidentally killed an innocent they suspected of being a suicide bomber, but I can certainly understand the mistake. In the end, the best solution is to train police officers and then leave the decision to them. But honestly, policies that are more likely to result in living incarcerated suspects who can be interrogated are better than policies that are more likely to result in corpses, especially when most suspects will be found innocent.

London policy:
<http://news.bbc.co.uk/2/hi/uk_news/4707781.stm>

International Association of Chiefs of Police policy:
<http://www.washingtonpost.com/wp-dyn/content/…>


Counterpane News

WilTel Communications is now offering Counterpane managed services to its customers:
<http://www.counterpane.com/alliances-news.html>

Schneier was interviewed in Government Technology:
<http://www.govtech.net/magazine/story.php?id=95671>


Visa and Amex Drop CardSystems

Remember CardSystems Solutions, the company that exposed over 40 million identities to potential fraud? (The actual number of identities that will be the victims of fraud is almost certainly much, much lower.)

Both Visa and American Express are dropping them as a payment processor: “Within hours of the disclosure that Visa was seeking a replacement for CardSystems Solutions, American Express said Tuesday it would no longer do business with the company beginning in October.”

The biggest problem with CardSystems’ actions wasn’t that it had bad computer security practices, but that it had bad business practices. It was holding exception files with personal information, even though it was not supposed to. It was not for marketing, as I originally surmised, but to find out why transactions were not being authorized. It was disregarding the rules it agreed to follow.

Technical problems can be remediated. A dishonest corporate culture is much harder to fix. That’s what I sense reading between the lines:

“Visa had been weighing the decision for a few weeks but as recently as mid-June said that it was working with CardSystems to correct the problem. CardSystems hired an outside security assessor this month to review its policies and practices, and it promised to make any necessary upgrades by the end of August. CardSystems, in its statement yesterday, said the company’s executives had been “in almost daily contact” with Visa since the problems were discovered in May.

“Visa, however, said that despite ‘some remediation efforts’ since the incident was reported, the actions by CardSystems were not enough.”

And this:

“CardSystems Solutions Inc. ‘has not corrected, and cannot at this point correct, the failure to provide proper data security for Visa accounts,’ said Rosetta Jones, a spokeswoman for Foster City, Calif.-based Visa….

“Visa said that while CardSystems has taken some remediating actions since the breach was disclosed, those could not overcome the fact that it was inappropriately holding on to account information—purportedly for ‘research purposes’—when the breach occurred, in violation of Visa’s security rules.”

At this point, it is unclear what MasterCard and Discover will do.

“MasterCard International Inc. is taking a different tack with CardSystems. The credit card company expects CardSystems to develop a plan for improving its security by Aug. 31, ‘and as of today, we are not aware of any deficiencies in its systems that are incapable of being remediated,’ spokeswoman Sharon Gamsin said.

“‘However, if CardSystems cannot demonstrate that they are in compliance by that date, their ability to provide services to MasterCard members will be at risk,’ she said.

“Jennifer Born, a spokeswoman for Discover Financial Services Inc., which also has a relationship with CardSystems, said the Riverwoods, Ill.-based company was ‘doing our due diligence and will make our decision once that process is completed.'”

I think this is a positive development. I have long said that companies like CardSystems won’t clean up their acts unless there are consequences for not doing so. Credit card companies dropping CardSystems sends a strong message to the other payment processors: improve your security if you want to stay in business.

News articles:
<http://www.ajc.com/news/content/business/0705/…>
<http://www.nytimes.com/2005/07/19/business/…>
<http://news.yahoo.com/news?…>

My original essay on CardSystems:
<http://www.schneier.com/crypto-gram-0507.html#3>

Some interesting legal opinions on the larger issue of disclosure:
<http://writ.news.findlaw.com/ramasastry/20050713.html>


Comments from Readers

From: Ed Gerck <egerck nma.com>
Subject: Comment on CardSystems article

As you report, credit card companies can and do force companies that process credit card data to increase their security. However, how about the “acceptable risk” concept that underlies the very security procedures of these same credit card companies?

The dirty little secret of the credit card industry is that they are very happy with 10% of credit card fraud, over the Internet or not.

In fact, if they reduced fraud to _zero_ today, their revenue would decrease, and so would their profits. So there is really no incentive to reduce fraud; on the contrary, keeping the status quo is just fine.

This is so because of insurance: up to a certain level (well within normal operational boundaries, of course), a fraudulent transaction does not go unpaid through Visa, American Express, or MasterCard servers. The transaction is fully paid, with its insurance cost borne by the merchant and, ultimately, by the customer.

“Acceptable risk” has long been a euphemism for a business model that shifts the burden of fraud to the customer.

Thus, the credit card industry has successfully turned fraud into a sale. This is the same attitude reported to me by a car manufacturer representative when I was talking to him about simple techniques to reduce car theft—to which he said: “A car stolen is a car sold.”

In fact, a car stolen will need replacement that will be provided by insurance or by the customer working again to buy another car, while the stolen car continues to generate revenue for the manufacturer in service and parts.

Whenever we see continued fraud, we should be certain: the defrauded party is profiting from it, because no company will accept a continued loss without doing anything to reduce it. Arguments such as “we don’t want to reduce the fraud level because it would cost more to reduce the fraud than the fraud costs” are just a marketing way of saying that fraud has become a sale.

Fraud is a hemorrhage that adds up, while efforts to fix it (if done correctly) are mostly an up-front cost that is incurred only once. So to accept ongoing fraud debits is to accept that there is also a credit that continuously compensates for them, and that credit ultimately flows from the customer—just as in car theft.

What is to blame? Not only the twisted ethics behind this attitude, but also the traditional security school of thought that focuses on risk, surveillance, and insurance as the solution to security problems.

There is no consideration of what trust would really mean in terms of bits and machines, and no consideration that the insurance model of security cannot scale to Internet volumes and cannot even be ethically justified.

“A fraud is a sale” is the only possible outcome of that school of thought. It is also sometimes referred to as “acceptable risk”: acceptable indeed, because it is paid for.

From: Tom Welsh <tom draco.demon.co.uk>
Subject: Re: IEDs in Iraq

“After a while, U.S. troops got good at spotting and killing the triggermen when bombs went off.”

Well yes… kinda. Think for a moment, and you can imagine how it would go. “If a bomb goes off while we’re driving along the road, take out any hajis who look as if they might be holding a remote detonator”. Rat-a-tat-tat! Goodbye to lots of locals, most of whom were checking their mobile phones, reading books, getting money out of their wallets, etc.

Of course this is exactly what the insurgents are trying to accomplish. Killing infidels is OK, but it’s not the main goal. Getting the infidels to kill civilians is the main goal, and boy do they oblige when you goose them right.

I don’t know whether it was Vietnam or Mogadishu that was the turning point, but at some stage the Pentagon decided that as few American boys as possible were going to be hurt when they were in other people’s countries setting the world to rights. Give a bunch of green troops the heaviest firepower that soldiers have ever had at their disposal, and tell them to be sure and get their retaliation in first—as if they needed any encouragement—and guess what happens? Freakily low U.S. casualties, tens of thousands of dead and maimed civilians, and a popularity rating that is steadily catching up with the Waffen-SS. Give them time, they’ll be challenging the Allgemeine-SS.

My point is that, from a security expert’s point of view, you can win the battles and lose the war – and taking out any and all “suspicious-looking people” is a great way to do so.

From: Les Jones <llj sses.net>
Subject: RE: CRYPTO-GRAM, July 15, 2005

“This advice would have helped Brennan Hawkins, the 11-year-old boy who was lost in the Utah wilderness for four days last month. He avoided people searching for him because he had been taught not to talk to strangers.”

Avoiding rescuers is a common reaction in people who have been lost in the woods. See Dwight McCarter’s book, “Lost,” an account of search and rescue operations in the Great Smoky Mountains National Park. In one chapter McCarter tells the story of two backpackers in the park who got separated while traveling off-trail in the vicinity of Thunderhead. The less-experienced hiker quickly got lost.

After a day or two of wandering around, he went through his pack and found a backpacking how-to book that explained what to do if you got lost in the woods. Following the advice, he went to a clearing and built a signal fire. A rescue helicopter saw the smoke and hovered above the treetops as he waved his arms to attract attention. The helicopter dropped a sleeping bag and food, with a note saying they couldn’t land in the clearing, but that they would send in a rescue party on foot.

The lost hiker sat down, tended his fire, and waited for rescue. When the rescuers appeared at the edge of the clearing, he panicked, jumped up, and ran in the other direction. They had to chase him down to rescue him. This despite the fact that he wanted to be rescued, had taken active steps to attract rescuers, and knew that rescuers were coming to him. Odd but true.

From: Tamas K Papp
Subject: Re: Talking to Strangers

You claim that “‘don’t talk to strangers’ is just about the worst possible advice you can give a child.”

The “security policy” of not talking to strangers actually covers two distinct situations:

(A) Don’t initiate conversation with strangers.

(B) Do not respond if strangers try to talk to you.

In (A), we are dealing with the prior probability (e.g., their proportion in the population of the area, etc) of strangers being harmless or dangerous (p(H) and p(D), respectively). I agree with your conclusion that in any normal society, p(D) is very small, hence the advice of paranoid parents doesn’t make much sense in this case.

However, careful analysis of (B) shows that here we are dealing with the posterior probability of strangers being dangerous _given_ that they initiated the conversation (we will denote that by T). You can use Bayes’ Rule to calculate this; i.e.:

p(D|T) = p(T|D)p(D)/p(T)

where p(T) = p(T|D)p(D) + p(T|H)p(H) is the probability that strangers of any kind talk to you. In a society where “normal” people don’t talk to strangers, p(T|H) is close to zero, while it is possible that dangerous people (child molesters, criminals) will talk to children with significant probability, thus p(T|D) will be larger than zero.

Thus even if p(D) is low, p(D|T) might be high enough for part (B) to make sense: you use the information in the signal to revise your estimate of strangers being dangerous.
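A quick numerical version of that calculation, with made-up probabilities purely to show the effect:

p_D = 0.001            # assumed prior: 1 in 1,000 strangers is dangerous
p_H = 1 - p_D
p_T_given_D = 0.5      # assumed: dangerous strangers often start talking
p_T_given_H = 0.01     # assumed: harmless strangers rarely approach a child

p_T = p_T_given_D * p_D + p_T_given_H * p_H
p_D_given_T = p_T_given_D * p_D / p_T

print("p(D)   =", p_D)                   # 0.001
print("p(D|T) = %.4f" % p_D_given_T)     # about 0.048, nearly 50x the prior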

Parents might think that the distinction between (A) and (B) is too subtle for a little child, and resort to the suboptimal but simple rule of not talking to strangers.

I agree with you that “[i]n a world where good guys are common and bad guys are rare, assuming a random person is a good guy is a smart security strategy”. However, ignoring signals that help revise your probability estimates is a bad security strategy.


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.

Copyright (c) 2005 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.