Blog: July 2005 Archives

Dog Poop Girl

Here’s the basic story: A woman and her dog are riding the Seoul subways. The dog poops on the floor. The woman refuses to clean it up, despite being told to by other passengers. Someone takes a picture of her, posts it on the Internet, and she is publicly shamed—and the story will live on the Internet forever. Then, the blogosphere debates the notion of the Internet as a social enforcement tool.

The Internet is changing our notions of personal privacy, and how the public enforces social norms.

Daniel Solove writes:

The dog-shit-girl case involves a norm that most people would seemingly agree to—clean up after your dog. Who could argue with that one? But what about when norm enforcement becomes too extreme? Most norm enforcement involves angry scowls or just telling a person off. But having a permanent record of one’s norm violations is upping the sanction to a whole new level. The blogosphere can be a very powerful norm-enforcing tool, allowing bloggers to act as a cyber-posse, tracking down norm violators and branding them with digital scarlet letters.

And that is why the law might be necessary—to modulate the harmful effects when the norm enforcement system gets out of whack. In the United States, privacy law is often the legal tool called in to address the situation. Suppose the dog poop incident occurred in the United States. Should the woman have legal redress under the privacy torts?

If this incident is any guide, then anyone acting outside the accepted norms of whatever segment of humanity surrounds him had better tread lightly. The question we need to answer is: is this the sort of society we want to live in? And if not, what technological or legal controls do we need to put in place to ensure that we don’t?

Solove again:

I believe that, as complicated as it might be, the law must play a role here. The stakes are too important. While entering law into the picture could indeed stifle freedom of discussion on the Internet, allowing excessive norm enforcement can be stifling to freedom as well.

All the more reason why we need to rethink old notions of privacy. Under existing notions, privacy is often thought of in a binary way—something either is private or public. According to the general rule, if something occurs in a public place, it is not private. But a more nuanced view of privacy would suggest that this case involved taking an event that occurred in one context and significantly altering its nature—by making it permanent and widespread. The dog-shit-girl would have been just a vague image in a few people’s memory if it hadn’t been for the photo entering cyberspace and spreading around faster than an epidemic. Despite the fact that the event occurred in public, there was no need for her image and identity to be spread across the Internet.

Could the law provide redress? This is a complicated question; certainly under existing doctrine, making a case would have many hurdles. And some will point to practical problems. Bloggers often don’t have deep pockets. But perhaps the possibility of lawsuits might help shape the norms of the Internet. In the end, I strongly doubt that the law alone can address this problem; but its greatest contribution might be to help along the development of blogging norms that will hopefully prevent more cases such as this one from having crappy endings.

Posted on July 29, 2005 at 4:21 PM • 56 Comments

Microsoft Permits Pirated Software to Receive Security Patches

Microsoft wants to make pirated software less useful by preventing it from receiving patches and updates. At the same time, it is in everyone’s best interest for all software to be more secure, legitimate and pirated alike. This issue has been percolating for a while, and I’ve written about it twice before. After much back and forth, Microsoft is going to do the right thing:

From now on, customers looking to get the latest add-ons to Windows will have to verify that their copy of the operating system is legit….

The only exception is for security-related patches. Regardless of whether a system passes the test, security updates will be available to all Windows users via either manual download or automatic update.

Microsoft deserves praise for this.

On the other hand, the system was cracked within 24 hours.

Posted on July 29, 2005 at 11:26 AM • 29 Comments

Cisco Harasses Security Researcher

I’ve written about full disclosure, and how disclosing security vulnerabilities is our best mechanism for improving security—especially in a free-market system. (That essay is also worth reading for a general discussion of the security trade-offs.) I’ve also written about how security companies treat vulnerabilities as public-relations problems first and technical problems second. This week at BlackHat, security researcher Michael Lynn and Cisco demonstrated both points.

Lynn was going to present security flaws in Cisco’s IOS, and Cisco went to inordinate lengths to make sure that information never got into the hands of their customers, the press, or the public.

Cisco threatened legal action to stop the conference’s organizers from allowing a 24-year-old researcher for a rival tech firm to discuss how he says hackers could seize control of Cisco’s Internet routers, which dominate the market. Cisco also instructed workers to tear 20 pages outlining the presentation from the conference program and ordered 2,000 CDs containing the presentation destroyed.

In the end, the researcher, Michael Lynn, went ahead with a presentation, describing flaws in Cisco’s software that he said could allow hackers to take over corporate and government networks and the Internet, intercepting and misdirecting data communications. Mr. Lynn, wearing a white hat emblazoned with the word “Good,” spoke after quitting his job at Internet Security Systems Inc. Wednesday. Mr. Lynn said he resigned because ISS executives had insisted he strike key portions of his presentation.

Unable to censor the information, Cisco decided to act as if it were no big deal:

In a release shortly after the presentation, Cisco stated, “It is important to note that the information Lynn presented was not a disclosure of a new vulnerability or a flaw with Cisco IOS software. Lynn’s research explores possible ways to expand exploitations of known security vulnerabilities impacting routers.” And went on to state “Cisco believes that the information Lynn presented at the Blackhat conference today contained proprietary information and was illegally obtained.” The statement also refers to the fact that Lynn stated in his presentation that he used a popular file decompressor to ‘unzip’ the Cisco image before reverse engineering it and finding the flaw, which is against Cisco’s use agreement.

The Cisco propaganda machine is certainly working overtime this week.

The security implications of this are enormous. If companies have the power to censor information about their products they don’t like, then we as consumers have less information with which to make intelligent buying decisions. If companies have the power to squelch vulnerability information about their products, then there’s no incentive for them to improve security. (I’ve written about this in connection to physical keys and locks.) If free speech is subordinate to corporate demands, then we are all much less safe.

Full disclosure is good for society. But because it helps the bad guys as well as the good guys (see my essay on secrecy and security for more discussion of the balance), many of us have championed “responsible disclosure” guidelines that give vendors a head start in fixing vulnerabilities before they’re announced.

The problem is that not all researchers follow these guidelines. And laws limiting free speech do more harm to society than good. (In any case, laws won’t completely fix the problem; we can’t get laws passed in every country where security researchers live.) So the only reasonable course of action for a company is to work with researchers who alert them to vulnerabilities, but also to assume that vulnerability information will sometimes be released without prior warning.

I can’t imagine the discussions inside Cisco that led them to act like thugs. I can’t figure out why they decided to attack Michael Lynn, BlackHat, and ISS rather than turn the situation into a public-relations success. I can’t believe they thought their actions could censor the information, or that trying was a good idea.

Cisco’s customers want information. They don’t expect perfection, but they want to know the extent of problems and what Cisco is doing about them. They don’t want to know that Cisco tries to stifle the truth:

Joseph Klein, senior security analyst at the aerospace electronic systems division for Honeywell Technology Solutions, said he helped arrange a meeting between government IT professionals and Lynn after the talk. Klein said he was furious that Cisco had been unwilling to disclose the buffer-overflow vulnerability in unpatched routers. “I can see a class-action lawsuit against Cisco coming out of this,” Klein said.

ISS didn’t come out of this looking very good, either:

“A few years ago it was rumored that ISS would hold back on certain things because (they’re in the business of) providing solutions,” [Ali-Reza] Anghaie, [a senior security engineer with an aerospace firm, who was in the audience,] said. “But now you’ve got full public confirmation that they’ll submit to the will of a Cisco or Microsoft, and that’s not fair to their customers…. If they’re willing to back down and leave an employee … out to hang, well what are they going to do for customers?”

Their thuggish behavior has made this a public-relations disaster for Cisco. Now it doesn’t matter what they say—we won’t believe them. We know that the public-relations department handles their security vulnerabilities, and not the engineering department. We know that they think squelching information and muzzling researchers is more important than informing the public. They could have shown that they put their customers first, but instead they demonstrated that short-sighted corporate interests are more important than being a responsible corporate citizen.

And these are the people building the hardware that runs much of our infrastructure? Somehow, I don’t feel very secure right now.

EDITED TO ADD: I am impressed with Lynn’s personal integrity in this matter:

When Mr. Lynn took the stage yesterday, he was introduced as speaking on a different topic, eliciting boos. But those turned to cheers when he asked, “Who wants to hear about Cisco?” As he got started, Mr. Lynn said, “What I just did means I’m about to get sued by Cisco and ISS. Not to put too fine a point on it, but bring it on.”

And this:

Lynn closed his talk by directing the audience to his resume and asking if anyone could give him a job.

“In large part I had to quit to give this presentation because ISS and Cisco would rather the world be at risk, I guess,” Lynn said. “They had to do what’s right for their shareholders; I understand that. But I figured I needed to do what’s right for the country and for the national critical infrastructure.”

There’s a lawsuit against him. I’ll let you know if there’s a legal defense fund.

EDITED TO ADD: The lawsuit has been settled. Some details:

Michael Lynn, a former ISS researcher, and the Black Hat organisers agreed to a permanent injunction barring them from further discussing the presentation Lynn gave on Wednesday. The presentation showed how attackers could take over Cisco routers, a problem that Lynn said could bring the Internet to its knees.

The injunction also requires Lynn to return any materials and disassembled code related to Cisco, according to a copy of the injunction, which was filed in US District Court for the Northern District of California. The injunction was agreed on by attorneys for Lynn, Black Hat, ISS and Cisco.

Lynn is also forbidden to make any further presentations at the Black Hat event, which ended on Thursday, or the following Defcon event. Additionally, Lynn and Black Hat have agreed never to disseminate a video made of Lynn’s presentation and to deliver to Cisco any video recording made of Lynn.

My hope is that Cisco realized that continuing with this would be a public-relations disaster.

EDITED TO ADD: Lynn’s BlackHat presentation is online.

EDITED TO ADD: The FBI is getting involved.

EDITED TO ADD: The link to the presentation, above, has been replaced with a cease-and-desist letter. A copy of the presentation is now here.

Posted on July 29, 2005 at 4:35 AM • 115 Comments

Automatic Surveillance Via Cell Phone

Your cell phone company knows where you are all the time. (Well, it knows where your phone is whenever it’s on.) Turns out there’s a lot of information to be mined in that data.

Eagle’s Reality Mining project logged 350,000 hours of data over nine months about the location, proximity, activity and communication of volunteers, and was quickly able to guess whether two people were friends or just co-workers….

He and his team were able to create detailed views of life at the Media Lab, by observing how late people stayed at the lab, when they called one another and how much sleep students got.

Given enough data, Eagle’s algorithms were able to predict what people—especially professors and Media Lab employees—would do next and be right up to 85 percent of the time.
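
The 85 percent figure is less mysterious than it sounds: most people’s movements are intensely routine, so even a first-order Markov model, which just counts which location tends to follow which, predicts well. Here is a toy sketch (the trace is invented, and Eagle’s actual models were richer, conditioning on time of day and more):

```python
from collections import Counter, defaultdict

# Hypothetical location trace for one volunteer; the real Reality Mining
# data came from cell-tower and Bluetooth logs.
trace = ["home", "cafe", "lab", "cafe", "lab", "home",
         "cafe", "lab", "cafe", "lab", "home", "cafe", "lab"]

# Count transitions: how often does place B follow place A?
transitions = defaultdict(Counter)
for here, there in zip(trace, trace[1:]):
    transitions[here][there] += 1

def predict_next(location):
    """Return the most frequent successor of a location, if any."""
    followers = transitions[location]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("cafe"))  # "lab": routine makes people predictable
```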

This is worrisome from a number of angles: government surveillance, corporate surveillance for marketing purposes, criminal surveillance. I am not mollified by this comment:

People should not be too concerned about the data trails left by their phone, according to Chris Hoofnagle, associate director of the Electronic Privacy Information Center.

“The location data and billing records is protected by statute, and carriers are under a duty of confidentiality to protect it,” Hoofnagle said.

We’re building an infrastructure of surveillance as a side effect of the convenience of carrying our cell phones everywhere.

Posted on July 28, 2005 at 4:09 PM

Risks of Losing Portable Devices

As PDAs become more powerful, and memory becomes cheaper, more people are carrying around a lot of personal information in an easy-to-lose format. The Washington Post has a story about this:

Personal devices “are carrying incredibly sensitive information,” said Joel Yarmon, who, as technology director for the staff of Sen. Ted Stevens (R-Alaska), had to scramble over a weekend last month after a colleague lost one of the office’s wireless messaging devices. In this case, the data included “personal phone numbers of leaders of Congress. . . . If that were to leak, that would be very embarrassing,” Yarmon said.

I’ve noticed this in my own life. If I didn’t make a special effort to limit the amount of information on my Treo, it would include detailed scheduling information from the past six years. My small laptop would include every e-mail I’ve sent and received in the past dozen years. And so on. A lot of us are carrying around an enormous amount of very personal data.

And some of us are carrying around personal data about other people, too:

Companies are seeking to avoid becoming the latest example of compromised security. Earlier this year, a laptop computer containing the names and Social Security numbers of 16,500 current and former MCI Inc. employees was stolen from the car of an MCI financial analyst in Colorado. In another case, a former Morgan Stanley employee sold a used BlackBerry on the online auction site eBay with confidential information still stored on the device. And in yet another incident, personal information for 665 families in Japan was recently stolen along with a handheld device belonging to a Japanese power-company employee.

There are several ways to deal with this—password protection and encryption, of course. More recently, some communications devices can be remotely erased if lost.
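
As a concrete illustration of the encryption option, here is a minimal sketch using Python’s third-party cryptography package: derive a key from a passphrase, then encrypt the data at rest. The passphrase, iteration count, and plaintext are placeholders:

```python
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    """Stretch a passphrase into a Fernet key so that guessing stays slow."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

salt = os.urandom(16)  # stored alongside the ciphertext; need not be secret
f = Fernet(key_from_passphrase(b"correct horse battery staple", salt))

ciphertext = f.encrypt(b"six years of detailed scheduling information")
print(f.decrypt(ciphertext))  # recoverable only with the passphrase
```

Pair something like this with a remote-wipe command, and a lost device becomes a hardware loss rather than a data breach.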

Posted on July 28, 2005 at 11:40 AM • 31 Comments

Monopolies and DRM

Two years ago I (and others) wrote about the security dangers of Microsoft’s monopoly. In the paper, we wrote:

Security has become a strategic concern at Microsoft but security must not be permitted to become a tool of further monopolization.

A year before that, I wrote about Microsoft’s trusted computing system (called Palladium—Pd for short—at the time):

Pay attention to the antitrust angle. I guarantee you that Microsoft believes Pd is a way to extend its market share, not to increase competition.

Intel and Microsoft are using DRM technology to cut Linux out of the content market.

This whole East Fork scheme is a failure from the start. It brings nothing positive to the table, and costs you money and rights. If you want to use Linux to view your legitimately purchased media, you will be a criminal. In fact, if you want to take your legitimately bought media with you on a road trip and don’t feel the need to pay again for it—fair use, remember—you are also a criminal. Wonderful.

Intel has handed the keys to the digital media kingdom to several convicted monopolists who have no care at all for their customers. The excuse Intel gives you if you ask is that they are producing tools, and only tools, their use is not up to Intel. The problem here is that Intel has given the said tools to some of the most rapacious people on earth. If you give the record companies a DRM scheme that goes from 1 (open) to 10 (unusably locked down), they will start at 14 and lobby Congress to mandate that it can be turned up higher by default.

Posted on July 28, 2005 at 7:25 AM • 28 Comments

UK Police and Encryption

From The Guardian:

Police last night told Tony Blair that they need sweeping new powers to counter the terrorist threat, including the right to detain a suspect for up to three months without charge instead of the current 14 days….

They also want to make it a criminal offence for suspects to refuse to cooperate in giving the police full access to computer files by refusing to disclose their encryption keys.

On Channel 4 News today, Sir Ian Blair was asked why the police wanted to extend the time they could hold someone without charges from 14 days to 3 months. Part of his answer was that they sometimes needed to access encrypted computer files and 14 days was not enough time for them to break the encryption.

There’s something fishy going on here.

It’s certainly possible that password-guessing programs are more successful with three months to guess. But the Regulation of Investigatory Powers (RIP) Act, which went into effect in 2000, already allows the police to jail people who don’t surrender encryption keys:

If intercepted communications are encrypted (encoded and made secret), the act will force the individual to surrender the keys (pin numbers which allow users to decipher encoded data), on pain of jail sentences of up to two years.
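
There is simple arithmetic behind my suspicion. Going from 14 days to three months multiplies the number of guesses a password-cracking program can make by about 6.4, which is less than three extra bits of password space. A back-of-the-envelope sketch (the guess rate is an assumption):

```python
import math

guesses_per_second = 1e9  # assumed throughput of a password-guessing rig

for days in (14, 90):
    guesses = guesses_per_second * days * 86_400
    print(f"{days:>2} days: about 2^{math.log2(guesses):.1f} guesses")

# 90/14 is roughly 6.4x more guesses, i.e. under 3 extra bits of search
# space. Useless against a properly chosen key; probably unnecessary
# against a weak password, which 14 days would already crack.
```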

Posted on July 27, 2005 at 3:00 PM • 56 Comments

How Banks Profit from ID Theft

Wells Fargo is profiting because its customers are afraid of identity theft:

The San Francisco bank, in conjunction with marketing behemoth Trilegiant, is offering a new service called Wells Fargo Select Identity Theft Protection. For $12.99 a month, this includes daily monitoring of one’s credit files and assistance in dealing with cases of fraud.

It’s reprehensible that Wells Fargo doesn’t offer this service for free.

Actually, that’s not true. It’s smart business for Wells Fargo to charge for this service. It’s reprehensible that the regulatory landscape is such that Wells Fargo does not feel it’s in its best interest to offer this service for free. Wells Fargo is a for-profit enterprise, and they react to the realities of the market. We need those realities to better serve the people.

Posted on July 27, 2005 at 7:42 AM • 51 Comments

Microsoft Builds In Security Bypasses

I am very suspicious of tools that allow you to bypass network security systems. Yes, they make life easier. But if security is important, then all security decisions should be made by a central process; tools that bypass that centrality are very risky.

I didn’t like SOAP for that reason, and I don’t like the sound of this new Microsoft thingy:

We’re always looking for new things that can allow you to do things uniquely different today. For example, this new feature tool we have would allow me to tunnel directly using HTTP into my corporate Exchange server without having to go through the whole VPN (virtual private network) process, bypassing the need to use a smart card. It’s such a huge time-saver, for me at least, compared to how long it takes me now. We will be extending that functionality to the next version of Windows.

That’s Martin Taylor, Microsoft’s general manager of platform strategy, talking.
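
Taylor is apparently describing Exchange access tunneled over HTTP, but the worry generalizes: once the perimeter lets HTTP out, anything can ride inside it, unseen by the central policy point. A sketch using Python’s standard library (the host names are placeholders, and this assumes a proxy that permits CONNECT):

```python
import http.client

# An HTTP proxy that permits CONNECT will carry any TCP protocol at all.
conn = http.client.HTTPConnection("corporate-proxy.example.com", 8080)
conn.set_tunnel("mail.example.com", 443)  # arbitrary inner destination
conn.connect()  # the proxy sees only "CONNECT mail.example.com:443"

# conn.sock is now a raw socket to mail.example.com. Whatever flows over
# it bypasses the VPN, the smart card, and every central policy decision.
conn.sock.sendall(b"... any protocol at all ...")
```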

Posted on July 26, 2005 at 1:20 PM • 33 Comments

The Sorting Door Project

From The Register:

A former CIA intelligence analyst and researchers from SAP plan to study how RFID tags might be used to profile and track individuals and consumer goods.

“I believe that tags will be readily used for surveillance, given the interests of various parties able to deploy readers,” said Ross Stapleton-Gray, former CIA analyst and manager of the study, called the Sorting Door Project.

Sorting Door will be a test-bed for studying the massive databases that will be created by RFID tags and readers, once they become ubiquitous. The project will help legislators, regulators and businesses make policies that balance the interests of industry, national security and civil liberties, said Stapleton-Gray.

In Sorting Door, RFID readers (whether in doorways, walls or floors, or the hands of workers) will collect data from RFID tags and feed them into databases.

Sorting Door participants will then investigate how the RFID tag’s unique serial numbers, called EPCs, can be merged with other data to identify dangerous people and gather intelligence in a particular location.

Posted on July 26, 2005 at 9:31 AM • 31 Comments

Domestic Terrorism (U.S.)

Nice MSNBC piece on domestic terrorism in the U.S.:

The sentencing of Eric Rudolph, who bombed abortion clinics, a gay bar and the Atlanta Olympics, ought to be a milestone in the Global War on Terror. In Birmingham, Ala., on Monday he got life without parole. Next month he’ll stack up a couple more life terms in Georgia, which is the least he deserves. (He escaped the death penalty only because he made a deal to help law-enforcement agents find the explosives he had hidden while on the run in North Carolina.) Rudolph killed two people, but not for want of trying to kill many more. In his 1997 attack on an Atlanta abortion clinic, he set off a second bomb meant to take out bystanders and rescue workers. Unrepentant, of course, Rudolph defended his actions as a moral imperative: “Abortion is murder, and because it is murder I believe deadly force is needed to stop it.” The Birmingham prosecutor declared that Rudolph had “appointed himself judge, jury and executioner.”

Indeed. That’s what all terrorists have in common: the four lunatics in London earlier this month; the 19 men who attacked America on September 11, 2001; Timothy McVeigh in Oklahoma City, and many others. They were all convinced they had noble motives for wreaking their violence. Terrorists are very righteous folks. Which is why the real global war we’re fighting, let’s be absolutely clear, should be one of our shared humanity against the madness of people like these; the rule of man-made laws on the books against the divine law they imagine for themselves. It’s the cause of reason against unreason, of self-criticism against the firm convictions of fanaticism.

David Neiwert has some good commentary on the topic. He also points to this U.S. News and World Report article.

Posted on July 25, 2005 at 9:04 PM • 33 Comments

Shoot-to-Kill

We’ve recently learned that London’s Metropolitan Police has a shoot-to-kill policy when dealing with suspected suicide terrorists. The theory is that only a direct headshot will kill the terrorist immediately, and thus destroy the ability to execute a bombing attack.

Roy Ramm, former Met Police specialist operations commander, said the rules for confronting potential suicide bombers had recently changed to “shoot to kill”….

Mr Ramm said the danger of shooting a suspected suicide bomber in the body was that it could detonate a bomb they were carrying on them.

“The fact is that when you’re dealing with suicide bombers the only way you can stop them effectively—and protect yourself—is to try for a head-shot,” he said.

This policy is based on the extremely short-sighted assumption that a terrorist needs to push buttons to make a bomb explode. In fact, ever since World War I, the most common type of bomb carried by a person has been the hand grenade. It is entirely conceivable, especially when a shoot-to-kill policy is known to be in effect, that suicide bombers will use the same kind of dead-man’s trigger on their bombs: a detonator that is activated when a button is released, rather than when it is pushed.

This is a difficult one. Whatever policy you choose, the terrorists will adapt to make that policy the wrong one.

The police are now sorry they accidentally killed an innocent man they suspected of being a suicide bomber, but I can certainly understand the mistake. In the end, the best solution is to train police officers and then leave the decision to them. But honestly, policies that are more likely to produce living suspects who can be interrogated—and that recover well from false alarms—are better than policies that are more likely to produce corpses.

EDITED TO ADD these comments by Nicholas Weaver:

“One other thing: The suspect was on the ground, and immobilized. Thus the decision was made to shoot the suspect, repeatedly (7 times) in the head, based on the perception that he could have been a suicide attacker (who despite being a suicide attacker, wasn’t holding a dead-man’s switch. Or heck, wire up the bomb to a $50 heart-rate monitor).

“If this is policy, it is STUPID: There is an easy way for the attackers to counter it, and when you have a subway execution of an innocent man, the damage (in the hearts and minds of british muslims) is immense.

“One thing to remember:

“These were NON uniformed officers, and the suspect was brasilian (and probably didn’t speak very good english).

“Why did he run? What would YOU do if three individuals accosted you, speaking a language which you were unfamiliar with, drawing weapons? You would RUN LIKE HELL!

“I find the blaming the victim (‘but he was running!’) reprehensible.”

ANOTHER EDIT: The consensus seems to be that he spoke English well enough. I don’t think we can blame the officers without a whole lot more details about what happened, and possibly not even then. Clearly they were under a lot of stress, and made a split-second decision.

But I think we can reasonably criticize the shoot-to-kill policy that the officers were following. That policy is a threat to our security, and our society.

Posted on July 25, 2005 at 1:59 PM • 131 Comments

Secure Flight

Last Friday the GAO issued a new report on Secure Flight. It’s couched in friendly language, but it’s not good:

During the course of our ongoing review of the Secure Flight program, we found that TSA did not fully disclose to the public its use of personal information in its fall 2004 privacy notices as required by the Privacy Act. In particular, the public was not made fully aware of, nor had the opportunity to comment on, TSA’s use of personal information drawn from commercial sources to test aspects of the Secure Flight program. In September 2004 and November 2004, TSA issued privacy notices in the Federal Register that included descriptions of how such information would be used. However, these notices did not fully inform the public before testing began about the procedures that TSA and its contractors would follow for collecting, using, and storing commercial data. In addition, the scope of the data used during commercial data testing was not fully disclosed in the notices. Specifically, a TSA contractor, acting on behalf of the agency, collected more than 100 million commercial data records containing personal information such as name, date of birth, and telephone number without informing the public. As a result of TSA’s actions, the public did not receive the full protections of the Privacy Act.

Get that? The TSA violated federal law when it secretly expanded Secure Flight’s use of commercial data about passengers. It also lied to Congress and the public about it.

Much of this isn’t new. Last month we learned that:

The federal agency in charge of aviation security revealed that it bought and is storing commercial data about some passengers—even though officials said they wouldn’t do it and Congress told them not to.

Secure Flight is a disaster in every way. The TSA has been operating with complete disregard for the law or Congress. It has lied to pretty much everyone. And it is turning Secure Flight from a simple program to match airline passengers against terrorist watch lists into a complex program that compiles dossiers on passengers in order to give them some kind of score indicating the likelihood that they are a terrorist.

Which is exactly what it was not supposed to do in the first place.

Let’s review:

For those who have not been following along, Secure Flight is the follow-on to CAPPS-I. (CAPPS stands for Computer Assisted Passenger Pre-Screening.) CAPPS-I has been in place since 1997, and is a simple system to match airplane passengers to a terrorist watch list. A follow-on system, CAPPS-II, was proposed last year. That complicated system would have given every traveler a risk score based on information in government and commercial databases. There was a huge public outcry over the invasiveness of the system, and it was cancelled over the summer. Secure Flight is the new follow-on system to CAPPS-I.

EPIC has more background information.

Back in January, Secure Flight was intended to just be a more efficient system of matching airline passengers with terrorist watch lists.

I am on a working group that is looking at the security and privacy implications of Secure Flight. Before joining the group I signed an NDA agreeing not to disclose any information learned within the group, and to not talk about deliberations within the group. But there’s no reason to believe that the TSA is lying to us any less than they’re lying to Congress, and there’s nothing I learned within the working group that I wish I could talk about. Everything I say here comes from public documents.

In January I gave some general conclusions about Secure Flight. These have not changed.

One, assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement—in almost every way—over what is currently in place. (And by this I mean the matching program, not any potential uses of commercial or other third-party data.)

Two, the security system surrounding Secure Flight is riddled with security holes. There are security problems with false IDs, ID verification, the ability to fly on someone else’s ticket, airline procedures, etc.

Three, the urge to use this system for other things will be irresistible. It’s just too easy to say: “As long as you’ve got this system that watches out for terrorists, how about also looking for this list of drug dealers…and by the way, we’ve got the Super Bowl to worry about too.” Once Secure Flight gets built, all it’ll take is a new law and we’ll have a nationwide security checkpoint system.

And four, a program of matching airline passengers with names on terrorism watch lists is not making us appreciably safer, and is a lousy way to spend our security dollars.

What has changed is the scope of Secure Flight. First, it started using data from commercial sources, like Acxiom. (The details are even worse.) Technically, they’re testing the use of commercial data, but it’s still a violation. Even the DHS started investigating:

The Department of Homeland Security’s top privacy official said Wednesday that she is investigating whether the agency’s airline passenger screening program has violated federal privacy laws by failing to properly disclose its mission.

The privacy officer, Nuala O’Connor Kelly, said the review will focus on whether the program’s use of commercial databases and other details were properly disclosed to the public.

The TSA’s response to being caught violating their own Privacy Act statements? Revise them:

According to previous official notices, TSA had said it would not store commercial data about airline passengers.

The Privacy Act of 1974 prohibits the government from keeping a secret database. It also requires agencies to make official statements on the impact of their record keeping on privacy.

The TSA revealed its use of commercial data in a revised Privacy Act statement to be published in the Federal Register on Wednesday.

TSA spokesman Mark Hatfield said the program was being developed with a commitment to privacy, and that it was routine to change Privacy Act statements during testing.

Actually, it’s not. And it’s better to change the Privacy Act statement before violating the old one. Changing it after the fact just looks bad.

The point of Secure Flight is to match airline passengers against lists of suspected terrorists. But the vast majority of people flagged by this list simply have the same name, or a similar name, as a suspected terrorist: Ted Kennedy and Cat Stevens are two famous examples. The question is whether combining commercial data with the PNR (Passenger Name Record) supplied by the airline could reduce this false-positive problem. Maybe knowing the passenger’s address, or phone number, or date of birth, could reduce false positives. Or maybe not; it depends on what data is on the terrorist lists. In any case, it’s certainly a smart thing to test.
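
The false-positive problem is inherent in name matching. Watch lists have to catch spelling and transliteration variants, so matchers compare names approximately, and approximate matching sweeps in unrelated travelers. A toy illustration using Python’s standard library (all names here are invented):

```python
from difflib import SequenceMatcher

watch_list = ["Edward Kenedy", "Yusuf Islam"]  # invented entries
passengers = ["Edward Kennedy", "Edvard Kenneday",
              "Yusef Islam", "Joseph Islam"]

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.8  # loose enough to catch spelling variants

for p in passengers:
    hits = [w for w in watch_list if similarity(p, w) >= THRESHOLD]
    if hits:
        print(f"FLAGGED: {p!r} resembles {hits}")

# Tighten the threshold and the real suspect's alternate spellings slip
# through; loosen it and every Kennedy on the plane gets flagged.
```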

But using commercial data has serious privacy implications, which is why Congress mandated all sorts of rules surrounding the TSA testing of commercial data—and more rules before it could deploy a final system—rules that the TSA has decided it can ignore completely.

Commercial data had another use under CAPPS-II. In that now-dead program, every passenger would be subjected to a computerized background check to determine their “risk” to airline safety. The system would assign a risk score based on commercial data: their credit rating, how recently they moved, what kind of job they had, etc. This capability was removed from Secure Flight, but now it’s back:

The government will try to determine whether commercial data can be used to detect terrorist “sleeper cells” when it checks airline passengers against watch lists, the official running the project says….

Justin Oberman, in charge of Secure Flight at TSA, said the agency intends to do more testing of commercial data to see if it will help identify known or suspected terrorists not on the watch lists.

“We are trying to use commercial data to verify the identities of people who fly because we are not going to rely on the watch list,” he said. “If we just rise and fall on the watch list, it’s not adequate.”

Also this Congressional hearing (emphasis mine):

THOMPSON: There are a couple of questions I’d like to get answered in my mind about Secure Flight. Would Secure Flight pick up a person with strong community roots but who is in a terrorist sleeper cell or would a person have to be a known terrorist in order for Secure Flight to pick him up?

OBERMAN: Let me answer that this way: It will identify people who are known or suspected terrorists contained in the terrorist screening database, and it ought to be able to identify people who may not be on the watch list. It ought to be able to do that. We’re not in a position today to say that it does, but we think it’s absolutely critical that it be able to do that.

And so we are conducting this test of commercially available data to get at that exact issue. Very difficult to do, generally. It’s particularly difficult to do when you have a system that transports 1.8 million people a day on 30,000 flights at 450 airports. That is a very high bar to get over.

It’s also very difficult to do with a threat described just like you described it, which is somebody who has sort of burrowed themselves into society and is not readily apparent to us when they’re walking through the airport. And so I cannot stress enough how important we think it is that it be able to have that functionality. And that’s precisely the reason we have been conducting this commercial data test, why we’ve extended the testing period and why we’re very hopeful that the results will prove fruitful to us so that we can then come up here, brief them to you and explain to you why we need to include that in the system.

My fear is that TSA has already decided that they’re going to use commercial data, regardless of any test results. And once you have commercial data, why not build a dossier on every passenger and give them a risk score? So we’re back to CAPPS-II, the very system Congress killed last summer. Actually, we’re very close to TIA (Total/Terrorism Information Awareness), that vast spy-on-everyone data-mining program that Congress killed in 2003 because it was just too invasive.

Secure Flight is a mess in lots of other ways, too. A March GAO report said that Secure Flight had not met nine out of the ten conditions mandated by Congress before TSA could spend money on implementing the program. (If you haven’t read this report, it’s pretty scathing.) The redress problem—helping people who cannot fly because they share a name with a terrorist—is not getting any better. And Secure Flight is behind schedule and over budget.

It’s also a rogue program that is operating in flagrant disregard for the law. It can’t be killed completely; the Intelligence Reform and Terrorism Prevention Act of 2004 mandates that TSA implement a program of passenger prescreening. And until we have Secure Flight, airlines will still be matching passenger names with terrorist watch lists under the CAPPS-I program. But it needs some serious public scrutiny.

EDITED TO ADD: Anita Ramasastry’s commentary is worth reading.

Posted on July 24, 2005 at 9:10 PM • 31 Comments

Profiling

There is a great discussion about profiling going on in the comments to the previous post. To help, here is what I wrote on the subject in Beyond Fear (pp. 133-7):

Good security has people in charge. People are resilient. People can improvise. People can be creative. People can develop on-the-spot solutions. People can detect attackers who cheat, and can attempt to maintain security despite the cheating. People can detect passive failures and attempt to recover. People are the strongest point in a security process. When a security system succeeds in the face of a new or coordinated or devastating attack, it’s usually due to the efforts of people.

On 14 December 1999, Ahmed Ressam tried to enter the U.S. by ferryboat from Victoria Island, British Columbia. In the trunk of his car, he had a suitcase bomb. His plan was to drive to Los Angeles International Airport, put his suitcase on a luggage cart in the terminal, set the timer, and then leave. The plan would have worked had someone not been vigilant.

Ressam had to clear customs before boarding the ferry. He had fake ID, in the name of Benni Antoine Noris, and the computer cleared him based on this ID. He was allowed to go through after a routine check of his car’s trunk, even though he was wanted by the Canadian police. On the other side of the Strait of Juan de Fuca, at Port Angeles, Washington, Ressam was approached by U.S. customs agent Diana Dean, who asked some routine questions and then decided that he looked suspicious. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean’s own words, he was acting “hinky.” More questioning—there was no one else crossing the border, so two other agents got involved—and more hinky behavior. Ressam’s car was eventually searched, and he was finally discovered and captured. It wasn’t any one thing that tipped Dean off; it was everything encompassed in the slang term “hinky.” But the system worked. The reason there wasn’t a bombing at LAX around Christmas in 1999 was because a knowledgeable person was in charge of security and paying attention.

There’s a dirty word for what Dean did that chilly afternoon in December, and it’s profiling. Everyone does it all the time. When you see someone lurking in a dark alley and change your direction to avoid him, you’re profiling. When a storeowner sees someone furtively looking around as she fiddles inside her jacket, that storeowner is profiling. People profile based on someone’s dress, mannerisms, tone of voice … and yes, also on their race and ethnicity. When you see someone running toward you on the street with a bloody ax, you don’t know for sure that he’s a crazed ax murderer. Perhaps he’s a butcher who’s actually running after the person next to you to give her the change she forgot. But you’re going to make a guess one way or another. That guess is an example of profiling.

To profile is to generalize. It’s taking characteristics of a population and applying them to an individual. People naturally have an intuition about other people based on different characteristics. Sometimes that intuition is right and sometimes it’s wrong, but it’s still a person’s first reaction. How good this intuition is as a countermeasure depends on two things: how accurate the intuition is and how effective it is when it becomes institutionalized or when the profile characteristics become commonplace.

One of the ways profiling becomes institutionalized is through computerization. Instead of Diana Dean looking someone over, a computer looks the profile over and gives it some sort of rating. Generally profiles with high ratings are further evaluated by people, although sometimes countermeasures kick in based on the computerized profile alone. This is, of course, more brittle. The computer can profile based only on simple, easy-to-assign characteristics: age, race, credit history, job history, et cetera. Computers don’t get hinky feelings. Computers also can’t adapt the way people can.

Profiling works better if the characteristics profiled are accurate. If erratic driving is a good indication that the driver is intoxicated, then that’s a good characteristic for a police officer to use to determine who he’s going to pull over. If furtively looking around a store or wearing a coat on a hot day is a good indication that the person is a shoplifter, then those are good characteristics for a store owner to pay attention to. But if wearing baggy trousers isn’t a good indication that the person is a shoplifter, then the store owner is going to spend a lot of time paying undue attention to honest people with lousy fashion sense.

In common parlance, the term “profiling” doesn’t refer to these characteristics. It refers to profiling based on characteristics like race and ethnicity, and institutionalized profiling based on those characteristics alone. During World War II, the U.S. rounded up over 100,000 people of Japanese origin who lived on the West Coast and locked them in camps (prisons, really). That was an example of profiling. Israeli border guards spend a lot more time scrutinizing Arab men than Israeli women; that’s another example of profiling. In many U.S. communities, police have been known to stop and question people of color driving around in wealthy white neighborhoods (commonly referred to as “DWB”—Driving While Black). In all of these cases you might possibly be able to argue some security benefit, but the trade-offs are enormous: Honest people who fit the profile can get annoyed, or harassed, or arrested, when they’re assumed to be attackers.

For democratic governments, this is a major problem. It’s just wrong to segregate people into “more likely to be attackers” and “less likely to be attackers” based on race or ethnicity. It’s wrong for the police to pull a car over just because its black occupants are driving in a rich white neighborhood. It’s discrimination.

But people make bad security trade-offs when they’re scared, which is why we saw Japanese internment camps during World War II, and why there is so much discrimination against Arabs in the U.S. going on today. That doesn’t make it right, and it doesn’t make it effective security. Writing about the Japanese internment, for example, a 1983 commission reported that the causes of the incarceration were rooted in “race prejudice, war hysteria, and a failure of political leadership.” But just because something is wrong doesn’t mean that people won’t continue to do it.

Ethics aside, institutionalized profiling fails because real attackers are so rare: Active failures will be much more common than passive failures. The great majority of people who fit the profile will be innocent. At the same time, some real attackers are going to deliberately try to sneak past the profile. During World War II, a Japanese American saboteur could try to evade imprisonment by pretending to be Chinese. Similarly, an Arab terrorist could dye his hair blond, practice an American accent, and so on.

Profiling can also blind you to threats outside the profile. If U.S. border guards stop and search everyone who’s young, Arab, and male, they’re not going to have the time to stop and search all sorts of other people, no matter how hinky they might be acting. On the other hand, if the attackers are of a single race or ethnicity, profiling is more likely to work (although the ethics are still questionable). It makes real security sense for El Al to spend more time investigating young Arab males than it does for them to investigate Israeli families. In Vietnam, American soldiers never knew which local civilians were really combatants; sometimes killing all of them was the security solution they chose.

If a lot of this discussion is abhorrent, as it probably should be, it’s the trade-offs in your head talking. It’s perfectly reasonable to decide not to implement a countermeasure not because it doesn’t work, but because the trade-offs are too great. Locking up every Arab-looking person will reduce the potential for Muslim terrorism, but no reasonable person would suggest it. (It’s an example of “winning the battle but losing the war.”) In the U.S., there are laws that prohibit police profiling by characteristics like ethnicity, because we believe that such security measures are wrong (and not simply because we believe them to be ineffective).

Still, no matter how much a government makes it illegal, profiling does occur. It occurs at an individual level, at the level of Diana Dean deciding which cars to wave through and which ones to investigate further. She profiled Ressam based on his mannerisms and his answers to her questions. He was Algerian, and she certainly noticed that. However, this was before 9/11, and the reports of the incident clearly indicate that she thought he was a drug smuggler; ethnicity probably wasn’t a key profiling factor in this case. In fact, this is one of the most interesting aspects of the story. That intuitive sense that something was amiss worked beautifully, even though everybody made a wrong assumption about what was wrong. Human intuition detected a completely unexpected kind of attack. Humans will beat computers at hinkiness-detection for many decades to come.

And done correctly, this intuition-based sort of profiling can be an excellent security countermeasure. Dean needed to have the training and the experience to profile accurately and properly, without stepping over the line and profiling illegally. The trick here is to make sure perceptions of risk match the actual risks. If those responsible for security profile based on superstition and wrong-headed intuition, or by blindly following a computerized profiling system, profiling won’t work at all. And even worse, it actually can reduce security by blinding people to the real threats. Institutionalized profiling can ossify a mind, and a person’s mind is the most important security countermeasure we have.
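
To make the excerpt’s point about computerized profiles concrete: an institutionalized profile is just a fixed scoring rule over a few machine-readable fields. A caricature in Python (every feature and weight here is invented):

```python
def risk_score(passenger):
    """A caricature of an institutionalized profile: fixed fields, fixed weights."""
    score = 0
    if passenger.get("paid_cash"):           score += 2
    if passenger.get("one_way_ticket"):      score += 2
    if passenger.get("age", 99) < 35:        score += 1
    if not passenger.get("credit_history"):  score += 1
    return score

# There is no field for "hinky," and once the rule's workings are known,
# an attacker simply shapes himself to score zero.
adapted_attacker = {"paid_cash": False, "one_way_ticket": False,
                    "age": 40, "credit_history": True}
print(risk_score(adapted_attacker))  # 0: sails through
```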

A couple of other points (not from the book):

  • Whenever you design a security system with two ways through—an easy way and a hard way—you invite the attacker to take the easy way. Profile for young Arab males, and you’ll get terrorists that are old non-Arab females. This paper looks at the security effectiveness of profiling versus random searching.
  • If we are going to increase security against terrorism, the young Arab males living in our country are precisely the people we want on our side. Discriminating against them in the name of security is not going to make them more likely to help.
  • Despite what many people think, terrorism is not confined to young Arab males. Shoe-bomber Richard Reid was British. Germaine Lindsay, one of the 7/7 London bombers, was Afro-Caribbean. Here are some more examples:

    In 1986, a 32-year-old Irish woman, pregnant at the time, was about to board an El Al flight from London to Tel Aviv when El Al security agents discovered an explosive device hidden in the false bottom of her bag. The woman’s boyfriend—the father of her unborn child—had hidden the bomb.

    In 1987, a 70-year-old man and a 25-year-old woman—neither of whom were Middle Eastern—posed as father and daughter and brought a bomb aboard a Korean Air flight from Baghdad to Thailand. En route to Bangkok, the bomb exploded, killing all on board.

    In 1999, men dressed as businessmen (and one dressed as a Catholic priest) turned out to be terrorist hijackers, who forced an Avianca flight to divert to an airstrip in Colombia, where some passengers were held as hostages for more than a year and a half.

    The 2002 Bali terrorists were Indonesian. The Chechen terrorists who downed the Russian planes were women. Timothy McVeigh and the Unabomber were Americans. The Basque terrorists are Basque, and Irish terrorists are Irish. The Tamil Tigers are Sri Lankan.

    And many Muslims are not Arabs. Even worse, almost everyone who is Arab is not a terrorist—many people who look Arab are not even Muslims. So not only are there a large number of false negatives—terrorists who don’t meet the profile—but there are an enormous number of false positives: innocents who do meet the profile.
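
That last imbalance is worth quantifying, because it is just the base-rate fallacy at work: even an implausibly accurate profile drowns in false positives when actual terrorists are rare. A worked example with invented numbers:

```python
# Invented rates: a profile that flags 99% of terrorists and only 1% of
# innocents, applied to a population of 300 million with 1,000 terrorists.
population, terrorists = 300_000_000, 1_000
hit_rate, false_alarm_rate = 0.99, 0.01

flagged_guilty = hit_rate * terrorists
flagged_innocent = false_alarm_rate * (population - terrorists)

precision = flagged_guilty / (flagged_guilty + flagged_innocent)
print(f"innocents flagged: {flagged_innocent:,.0f}")  # ~3,000,000
print(f"P(terrorist | flagged) = {precision:.5f}")    # ~0.00033
```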

Posted on July 22, 2005 at 3:12 PM • 88 Comments

Searching Bags in Subways

The New York City police will begin randomly searching people’s bags on subways, buses, commuter trains, and ferries.

“The police can and should be aggressively investigating anyone they suspect is trying to bring explosives into the subway,” said Christopher Dunn, associate legal director at the New York Civil Liberties Union. “However, random police searches of people without any suspicion of wrongdoing are contrary to our most basic constitutional values. This is a very troubling announcement.”

If the choice is between random searching and profiling, then random searching is a more effective security countermeasure. But Dunn is correct above when he says that there are some enormous trade-offs in liberty. And I don’t think we’re getting very much security in return.

Especially considering this:

[Police Commissioner Raymond] Kelly stressed that officers posted at subway entrances would not engage in racial profiling, and that passengers are free to “turn around and leave.”

“Okay guys; here are your explosives. If one of you gets singled out for a search, just turn around and leave. And then go back in via another entrance, or take a taxi to the next subway stop.”

And I don’t think they’ll be truly random, either. I think the police doing the searching will profile, because that’s what happens.

It’s another “movie plot threat.” It’s another “public relations security system.” It’s a waste of money, it substantially reduces our liberties, and it won’t make us any safer.

Final note: I often get comments along the lines of “Stop criticizing stuff; tell us what we should do.” My answer is always the same. Counterterrorism is most effective when it doesn’t make arbitrary assumptions about the terrorists’ plans. Stop searching bags on the subways, and spend the money on 1) intelligence and investigation—stopping the terrorists regardless of what their plans are, and 2) emergency response—lessening the impact of a terrorist attack, regardless of what the plans are. Countermeasures that defend against particular targets, or assume particular tactics, or cause the terrorists to make insignificant modifications in their plans, or that surveil the entire population looking for the few terrorists, are largely not worth it.

EDITED TO ADD: A Citizen’s Guide to Refusing New York Subway Searches.

Posted on July 22, 2005 at 6:27 AM • 130 Comments

Visa and Amex Drop CardSystems

Remember CardSystems Solutions, the company that exposed over 40 million identities to potential fraud? (The actual number of identities that will be the victims of fraud is almost certainly much, much lower.)

Both Visa and American Express are dropping them as a payment processor:

Within hours of the disclosure that Visa was seeking a replacement for CardSystems Solutions, American Express said Tuesday it would no longer do business with the company beginning in October.

The biggest problem with CardSystems’ actions wasn’t that it had bad computer security practices, but that it had bad business practices. It was holding exception files with personal information even though it was not supposed to. It was not for marketing, as I originally surmised, but to find out why transactions were not being authorized. It was disregarding the rules it agreed to follow.

Technical problems can be remediated. A dishonest corporate culture is much harder to fix. This is what I sense reading between the lines:

Visa had been weighing the decision for a few weeks but as recently as mid-June said that it was working with CardSystems to correct the problem. CardSystems hired an outside security assessor this month to review its policies and practices, and it promised to make any necessary upgrades by the end of August. CardSystems, in its statement yesterday, said the company’s executives had been “in almost daily contact” with Visa since the problems were discovered in May.

Visa, however, said that despite “some remediation efforts” since the incident was reported, the actions by CardSystems were not enough.

And this:

CardSystems Solutions Inc. “has not corrected, and cannot at this point correct, the failure to provide proper data security for Visa accounts,” said Rosetta Jones, a spokeswoman for Foster City, Calif.-based Visa….

Visa said that while CardSystems has taken some remediating actions since the breach was disclosed, those could not overcome the fact that it was inappropriately holding on to account information—purportedly for “research purposes”—when the breach occurred, in violation of Visa’s security rules.

At this point, it is unclear what MasterCard and Discover will do.

MasterCard International Inc. is taking a different tack with CardSystems. The credit card company expects CardSystems to develop a plan for improving its security by Aug. 31, “and as of today, we are not aware of any deficiencies in its systems that are incapable of being remediated,” spokeswoman Sharon Gamsin said.

“However, if CardSystems cannot demonstrate that they are in compliance by that date, their ability to provide services to MasterCard members will be at risk,” she said.

Jennifer Born, a spokeswoman for Discover Financial Services Inc., which also has a relationship with CardSystems, said the Riverwoods, Ill.-based company was “doing our due diligence and will make our decision once that process is completed.”

I think this is a positive development. I have long said that companies like CardSystems won’t clean up their acts unless there are consequences for not doing so. Credit card companies dropping CardSystems sends a strong message to the other payment processors: improve your security if you want to stay in business.

(Some interesting legal opinions on the larger issue of disclosure are here.)

Posted on July 21, 2005 at 11:49 AM • 25 Comments

Anti-Missile Defenses for Commercial Aircraft

In yet another “movie-plot threat” defense, the U.S. government is starting to test anti-missile lasers on commercial aircraft.

It could take years before passenger planes carry protection against missiles, a weapon terrorists might use to shoot down jets and cause economic havoc in the airline industry. The tests will help the nation’s leaders decide if they should install laser systems on all 6,800 aircraft in the U.S. airline fleet at a cost of at least $6 billion.

“Yes, it will cost money, but it’s the same cost as an aircraft entertainment system,” Kubricky says.

I think the airline industry is missing something here. If they linked the anti-missile lasers with the in-seat entertainment systems, cross-country flights would be much more exciting.

Posted on July 21, 2005 at 8:58 AM • 68 Comments

New Cybersecurity Position at DHS

There’s a major reorganization going on at the Department of Homeland Security. One of the effects is the creation of a new post: assistant secretary for cyber and telecommunications security.

Honestly, it doesn’t matter where the nation’s cybersecurity chief sits in the organizational chart. If he has the authority to spend money and write regulations, he can do good. If he only has the power to suggest, plead, and cheerlead, he’ll be as frustrated as all the previous ones were.

Posted on July 20, 2005 at 7:44 AM • 25 Comments

How to Not Fix the ID Problem

Several of the 9/11 terrorists had Virginia driver’s licenses in fake names. These were not forgeries; they were valid Virginia IDs that were illegally sold by Department of Motor Vehicles workers.

So what did Virginia do to correct the problem? They required more paperwork in order to get an ID.

But the problem wasn’t that it was too easy to get an ID. The problem was that insiders were selling them illegally. Which is why the Virginia “solution” didn’t help, and the problem remains:

The manager of the Virginia Department of Motor Vehicles office at Springfield Mall was charged yesterday with selling driver’s licenses to illegal immigrants and others for up to $3,500 apiece.

The arrest of Francisco J. Martinez marked the second time in two years that a Northern Virginia DMV employee was accused of fraudulently selling licenses for cash. A similar scheme two years ago at the DMV office in Tysons Corner led to the guilty pleas of two employees.

And after we spend billions on the REAL ID Act, and require even more paperwork to get a state ID, the problem will still remain.

Posted on July 19, 2005 at 1:15 PM • 48 Comments

Turning Cell Phones off in Tunnels

In response to the London bombings, officials turned off cell phone service in tunnels around New York City, in an attempt to thwart bombers who might use cell phones as remote triggering devices. (Service has been restored in two of the four tunnels; as far as I know, it is still unavailable in the other two.)

This is as idiotic as it gets. It’s a perfect example of what I call “movie-plot security”: imagining a particular scenario rather than focusing on the broad threats. It’s completely useless if a terrorist uses something other than a cell phone: a kitchen timer, for example. Even worse, it harms security in the general case. Have people forgotten how cell phones saved lives on 9/11? Communication benefits the defenders far more than it benefits the attackers.

Posted on July 19, 2005 at 7:52 AM • 42 Comments

Thinking About Suicide Bombers

Remember the 1996 movie Independence Day? One of the characters was a grizzled old fighter pilot who had been kidnapped and degraded by the alien invaders years before. He flew his plane into the alien spaceship when his air-to-air missile jammed, causing the spaceship to explode. Everybody in the movie, as well as the audience, considered this suicide bomber a hero.

What’s the difference?

Partly it’s which side you’re rooting for, but mostly it’s that the pilot defended his planet by attacking the invaders. Terrorism targets innocents, and no one is a hero for killing innocents. Killing people who are invading and occupying your planet—or country—can be heroic, as can sacrificing yourself in the process.

This is an interesting observation in light of the previous post, in which a professor argues that suicide terrorism is motivated by the desire to repel a perceived occupation force.

What are the lessons here for Iraq? I think there are three. One, the insurgents (or whatever we’re calling them these days) would do best by attacking military targets and not civilian ones. Two, the coalition forces (or whatever we’re calling them these days) need to do everything they can not to be perceived as invaders or occupiers. And three, the terrorists should try to advance a worldview where there are no innocents, only invaders and occupiers. To the extent that the bombing victims are perceived to be invaders and occupiers, those who kill them defending their country will be viewed as heroic by the people.

There are no lessons for London. There was no invasion. Every victim was an innocent. No one should consider the terrorists heroes.

Posted on July 18, 2005 at 2:47 PM • 101 Comments

Causes of Suicide Terrorism

Here’s an absolutely fascinating interview with Robert Pape, a University of Chicago professor who has studied every suicide terrorist attack since 1980.

RP: This wealth of information creates a new picture about what is motivating suicide terrorism. Islamic fundamentalism is not as closely associated with suicide terrorism as many people think. The world leader in suicide terrorism is a group that you may not be familiar with: the Tamil Tigers in Sri Lanka.

….TAC: So if Islamic fundamentalism is not necessarily a key variable behind these groups, what is?

RP: The central fact is that overwhelmingly suicide-terrorist attacks are not driven by religion as much as they are by a clear strategic objective: to compel modern democracies to withdraw military forces from the territory that the terrorists view as their homeland. From Lebanon to Sri Lanka to Chechnya to Kashmir to the West Bank, every major suicide-terrorist campaign—over 95 percent of all the incidents—has had as its central objective to compel a democratic state to withdraw.

….TAC: If you were to break down causal factors, how much weight would you put on a cultural rejection of the West and how much weight on the presence of American troops on Muslim territory?

RP: The evidence shows that the presence of American troops is clearly the pivotal factor driving suicide terrorism.

If Islamic fundamentalism were the pivotal factor, then we should see some of the largest Islamic fundamentalist countries in the world, like Iran, which has 70 million people—three times the population of Iraq and three times the population of Saudi Arabia—with some of the most active groups in suicide terrorism against the United States. However, there has never been an al-Qaeda suicide terrorist from Iran, and we have no evidence that there are any suicide terrorists in Iraq from Iran.

….TAC: Osama bin Laden and other al-Qaeda leaders also talked about the “Crusaders-Zionist alliance,” and I wonder if that, even if we weren’t in Iraq, would not foster suicide terrorism. Even if the policy had helped bring about a Palestinian state, I don’t think that would appease the more hardcore opponents of Israel.

RP: I not only study the patterns of where suicide terrorism has occurred but also where it hasn’t occurred. Not every foreign occupation has produced suicide terrorism. Why do some and not others? Here is where religion matters, but not quite in the way most people think. In virtually every instance where an occupation has produced a suicide-terrorist campaign, there has been a religious difference between the occupier and the occupied community.

….TAC: Has the next generation of anti-American suicide terrorists already been created? Is it too late to wind this down, even assuming your analysis is correct and we could de-occupy Iraq?

RP: Many people worry that once a large number of suicide terrorists have acted, it is impossible to wind it down. The history of the last 20 years, however, shows the opposite. Once the occupying forces withdraw from the homeland territory of the terrorists, they often stop—and often on a dime.

Pape recently published a book, Dying to Win: The Strategic Logic of Suicide Terrorism. Here’s a review.

UPDATED TO ADD: Salon reviewed the book.

Posted on July 18, 2005 at 8:10 AM • 67 Comments

NIST Publication on Discrete Log Crypto

NIST (the U.S. National Institute of Standards and Technology) has released a draft of “Special Publication 800-56, Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography.” They’re looking for comments before the document is finalized. Send comments to ebarker@nist.gov by Friday, August 19, with “Comments on SP800-56” in the subject line.
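For readers who don’t follow the jargon: “discrete logarithm cryptography” covers key-agreement schemes in the Diffie-Hellman family (finite-field and elliptic-curve variants, plus MQV), where each party combines its own private exponent with the other’s public value and both arrive at the same shared secret. Here is a minimal sketch of the core idea in Python; the group parameters are toy values for illustration, not the approved domain parameters the NIST draft specifies.

import hashlib
import secrets

# Toy group parameters: fine for illustration, catastrophically small
# for real use. The NIST draft specifies approved parameters and sizes.
p = 23  # small prime modulus
g = 5   # generator of the multiplicative group mod p

def keypair():
    # Private exponent in [1, p-2]; public value is g^x mod p.
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

# Alice and Bob each generate a key pair and exchange the public halves.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Each side raises the other's public value to its own private exponent;
# both arrive at g^(a*b) mod p without ever transmitting it.
a_secret = pow(b_pub, a_priv, p)
b_secret = pow(a_pub, b_priv, p)
assert a_secret == b_secret

# As in the draft, the raw shared secret feeds a key derivation function
# rather than being used directly as a key.
key = hashlib.sha256(str(a_secret).encode()).hexdigest()
print("derived key:", key)

The draft’s schemes add everything this sketch omits: vetted domain parameters, authentication of the public values, key confirmation, and approved key derivation functions.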

Posted on July 15, 2005 at 6:46 AM • 0 Comments

Redefining Spyware

The problem with spyware is that it can be in the eye of the beholder. There are companies that decry the general problem, but have their own software report back to a central server.

This kind of thing can result in a conflict of interest: “Spyware is spyware only if I don’t have a corporate interest in it.” Here’s the most recent example:

Microsoft’s Windows AntiSpyware application is no longer flagging adware products from Claria Corp. as a threat to PC users.

Less than a week after published reports of acquisition talks between Microsoft Corp. and the Redwood City, Calif.-based distributor of the controversial Gator ad-serving software, security researchers have discovered that Microsoft has quietly downgraded its Claria detections.

If you’re a user of AntiSpyware, you can fix this. Claria’s spyware is now flagged as “Ignore” by default, but you can still change the action to “Quarantine” or “Remove.” I recommend “Remove.”

Edited to add: Actually, I recommend using a different anti-spyware program.

Posted on July 14, 2005 at 5:05 PM • 40 Comments

Security Risks of Airplane WiFi

I’ve already written about the stupidity of worrying about cell phones on airplanes. Now the Department of Homeland Security is worried about broadband Internet.

Federal law enforcement officials, fearful that terrorists will exploit emerging in-flight broadband services to remotely activate bombs or coordinate hijackings, are asking regulators for the power to begin eavesdropping on any passenger’s internet use within 10 minutes of obtaining court authorization.

In joint comments filed with the FCC last Tuesday, the Justice Department, the FBI and the Department of Homeland Security warned that a terrorist could use on-board internet access to communicate with confederates on other planes, on the ground or in different sections of the same plane—all from the comfort of an aisle seat.

“There is a short window of opportunity in which action can be taken to thwart a suicidal terrorist hijacking or remedy other crisis situations on board an aircraft, and law enforcement needs to maximize its ability to respond to these potentially lethal situations,” the filing reads.

Terrorists never use SSH, after all. (I suppose that’s the next thing the DHS is going to try to ban.)

Posted on July 14, 2005 at 12:02 PM • 48 Comments

Forged Documents in National Archives Change History

A recently published book claims that Himmler was murdered by the British Special Operations Executive, rather than committing suicide after the Allies captured him. The book was based on documents found—apparently in good faith—in the UK’s National Archives; those documents now appear to have been forged and planted.

Documents from the National Archives used to substantiate claims that British intelligence agents murdered Heinrich Himmler in 1945 are forgeries, The Daily Telegraph can reveal today.

It seems certain that the bogus documents were somehow planted among genuine papers to pervert the course of historical study.

The results of investigations by forensic document experts on behalf of this newspaper have shocked historians and caused tremors at the Archives, the home of millions of historical documents, which has previously been thought immune to distortion or contamination.

It seems that the security effort at the National Archives is directed towards preventing people from removing documents. But the effects of adding forged documents could be much worse.
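In the digital world, at least, detecting additions is no harder than detecting removals; it just has to be designed in. Here is a minimal sketch of one approach, a keyed manifest that flags planted files as readily as missing ones. This is my illustration only: nothing here describes how the National Archives actually operates, and a paper collection would need a physical analogue, such as sealed and numbered catalogs.

import hashlib
import hmac
import os

def manifest(root):
    # Hash every file under root; this catalog is the integrity baseline.
    out = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                out[path] = hashlib.sha256(f.read()).hexdigest()
    return out

def seal(baseline, key):
    # MAC the sorted manifest so the baseline itself can't be quietly edited.
    blob = "\n".join(f"{p} {h}" for p, h in sorted(baseline.items())).encode()
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

def audit(root, baseline, tag, key):
    # Verify the baseline first, then diff the collection against it.
    if not hmac.compare_digest(seal(baseline, key), tag):
        raise ValueError("baseline manifest has been tampered with")
    current = manifest(root)
    added = sorted(set(current) - set(baseline))    # planted documents
    removed = sorted(set(baseline) - set(current))  # missing documents
    altered = sorted(p for p in current
                     if p in baseline and current[p] != baseline[p])
    return added, removed, altered

The MAC over the manifest matters: an attacker who can plant documents can presumably edit an unprotected catalog too, so the baseline has to be harder to forge than the collection it protects.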

Posted on July 14, 2005 at 8:40 AM • 22 Comments

Security Risks of Street Photography

Interesting article on the particular art form of street photography. One ominous paragraph:

More onerous are post-9/11 restrictions that have placed limits on photographing in public settings. Tucker has received e-mails from professionals detained by authorities for photographing bridges and elevated trains. “There are places where photographing people on the street may become illegal,” observes Westerbeck.

Sad.

Posted on July 13, 2005 at 8:38 AM • 39 Comments

New York Times on Identity Theft

I got some really good quotes in this New York Times article on identity theft:

Which is why I wish William Proxmire were still on the case. What we need right now is someone in power who can put the burden for this problem right where it belongs: on the financial and other institutions who collect this data. Let’s face it: by the time even the most vigilant consumer discovers his information has been used fraudulently, it’s already too late. “When people ask me what can the average person do to stop identity theft, I say, ‘nothing,'” said Bruce Schneier, the chief technology officer of Counterpane Internet Security. “This data is held by third parties and they have no impetus to fix it.”

Mr. Schneier, though, has a solution that is positively Proxmirian in its elegance and simplicity. Most of the bills that have been filed in Congress to deal with identity fraud are filled with specific requirements for banks and other institutions: encrypt this; safeguard that; strengthen this firewall.

Mr. Schneier says forget about all that. Instead, do what Congress did in the 1970’s—just put the burden on the financial industry. “If we’re ever going to manage the risks and effects of electronic impersonation,” he wrote recently on CNET (and also in his blog), “we must concentrate on preventing and detecting fraudulent transactions.” And the only way to do that, he added, is by making the financial institutions liable for fraudulent transactions.

“I think business ingenuity is top notch,” Mr. Schneier said in an interview. “And I think if you make it their problem, they will solve it.”

Yes, he acknowledged, letting consumers off the hook might cause them to be less vigilant. But that is exactly what Senator Proxmire did and to great effect. Forcing the financial institutions to bear the entire burden will cause them to tighten up their procedures until the fraud is under control. Maybe they will invest in complex software. But maybe they’ll take simpler measures as well, like making it a little less easy than it is today to obtain a credit card. Best of all, once people see these measures take effect—and realize that someone else is responsible for fixing the problems—their fear will abate.

As Senator Proxmire understood a long time ago, fear is the great enemy of commerce. Maybe this time, the banks will finally understand that as well.

Posted on July 12, 2005 at 5:14 PM • 25 Comments

Terrorism Defense: A Failure of Imagination

The 9/11 Commission report talked about a “failure of imagination” before the 9/11 attacks:

The most important failure was one of imagination. We do not believe leaders understood the gravity of the threat. The terrorist danger from Bin Ladin and al Qaeda was not a major topic for policy debate among the public, the media, or in the Congress. Indeed, it barely came up during the 2000 presidential campaign.

More generally, this term has been used to describe the U.S. government’s response to the terrorist threat. We spend a lot of money defending against what they did last time, or against particular threats we imagine, but ignore the general threat or the root causes of terrorism.

With the London bombings, we’re doing it again. I was going to write a long post about this, but Richard Forno already wrote a nice essay.

The London bombs went off over 12 hours ago.

So why is CNN-TV still splashing “breaking news” on the screen?

There have been zero new developments in the past several hours. Perhaps the “breaking news” is that CNN’s now playing spooky “terror attack” music over commercial bumpers filled with dramatic camera-phone images from London commuters that appeared on the Web earlier this morning.

Aside from that, the only new development since about noon seems to be the incessant press conferences held by public officials in cities around the country showcasing what they’ve done since 9/11 and what they’re doing here at home to respond to the blasts in London…which pretty much comes down to lots of guys with guns running around America’s mass transit system in an effort to present the appearance of “increased security” to reassure the public. While such activities are a political necessity to show that our leaders are ‘doing something’ during a time of crisis, we must remember that talk or activity is no substitute for progress or effectiveness.

Forget the fact that regular uniformed police officers and rail employees can sweep or monitor a train station just as well as a fully-decked-out SWAT team—not to mention, they know it better, too. Forget that even with an added law enforcement presence, it’s quite possible to launch a suicide attack on mass transit. Forget that a smart terrorist now knows that the DHS response to attacks is to “increase” the security of related infrastructures (e.g., train stations) and just might attack another, lesser-protected part of American society potentially with far greater success. In these and other ways today following the London bombings, the majority of security attention has been directed at mass transit. However, while we can’t protect everything against every form of attack, our American responses remain conventional and predictable—just as we did after the Madrid train bombings in 2004 and today’s events in London, we continue to respond in ways designed to “prevent the last attack.”

In other words, we are demonstrating a lack of protective imagination.

Contrary to America’s infatuation with instant gratification, protective imagination is not quickly built, funded, or enacted. It takes years to inculcate such a mindset brought about by outside the box, unconventional, and daring thinking from folks with expertise and years of firsthand knowledge in areas far beyond security or law enforcement and who are encouraged to think freely and have their analyses seriously considered in the halls of Washington. Such a radical way of thinking and planning is necessary to deal with an equally radical adversary, yet we remain entrenched in conventional wisdom and responses.

Here at home, for all the money spent in the name of homeland security, we’re not acting against the terrorists, we’re reacting against them, and doing so in a very conventional, very ineffective manner. Yet nobody seems to be asking why.

While this morning’s events in London are a tragedy and Londoners deserve our full support in the coming days, it’s sad to see that, regarding the need for effective domestic preparedness here in the United States nearly 4 years after 9/11, despite the catchy sound bites and flurry of activity in the name of protecting the homeland, the more things seem to change, the more they stay the same.

Posted on July 12, 2005 at 12:08 PM • 48 Comments

Surveillance Cameras and Terrorism

I was going to write something about the foolishness of adding cameras to public spaces as a response to terrorism threats, but Scott Henson said it already:

Homeland Security Ubermeister Michael Chertoff just told NBC’s Tim Russert on Meet the Press this morning that the United States should invest in “cameras and dogs” to protect subway, rail and bus transit systems from terrorist attacks.

B.S.

Surveillance cameras didn’t deter the terrorist attacks in London. They didn’t stop the courthouse killing spree in Atlanta. But they’re prone to abuse. And at the end of the day, they don’t reduce crime.

Posted on July 12, 2005 at 8:13 AM • 35 Comments

Hymn Project

The Hymn Project exists to break the copy protection on iTunes Music Store files (protected AAC), so you can play the music you bought on any machine you want.

The purpose of the Hymn Project is to allow you to exercise your fair-use rights under copyright law. The various software provided on this web site allows you to free your iTunes Music Store purchases (protected AAC / .m4p) from their DRM restrictions with no loss of sound quality. These songs can then be played outside of the iTunes environment, even on operating systems not supported by iTunes and on hardware not supported by Apple.

Initially, the software recovered your iTunes keys from your hard drive. In response, Apple obfuscated the storage format, and no one has yet figured out how to recover the keys cleanly. To get around this, the Hymn developers created a program called FairKeys that impersonates iTunes and requests the keys from Apple’s servers. Since the iTunes client can still retrieve your keys, this works.

FairKeys … pretends to be a copy of iTunes running on an imaginary computer, one of the five computers that you’re currently allowed to authorize for playing your iTMS purchases. FairKeys logs into Apple’s web servers to get your keys the same way iTunes does when it needs to get new keys. At least for now, at this stage of the cat-and-mouse game, FairKeys knows how to request your keys and how to decode the response which contains your keys, and once it has those keys it can store them for immediate or future use by JHymn.

More security by inconvenience, and yet another illustration of the never-ending arms race between attacker and defender.

Posted on July 11, 2005 at 8:09 AM • 41 Comments

The Doghouse: Privacy.li

This company has a heartwarming description on its website:

PRIVACY.LI – Privacy from the Principality of Liechtenstein, in the heart of the Alps, nestled between Switzerland and Austria. In times of turmoil and insecurity, witch hunt and suspicions, expropriations and diminishing credibility of our world leaders it’s always good to have a place you can turn to. This is the humble effort to provide a place to the privacy and freedom concerned world citizens to meet, discuss, help each other and foster ones desire for liberty and freedom.

But they have no intention of letting their customers know anything about themselves.

Company Profile

Actually, this is not to be published here:-) A privacy service like ours is best if not too many details are known, we hope you fully understand and support this. The makers of this page are veterans at the chosen subject, and will under no circumstances jeopardize your privacy.

Oh yeah, and their “DriveCrypt” product includes “real Time, 1344 bit – Military Strength encryption.”

Somehow, my heart is no longer warm.

Posted on July 8, 2005 at 8:36 AM • 104 Comments

London Transport Bombings

I am on vacation today and this weekend, and won’t be able to read about the London Transport bombings in depth until Monday. For now I would just like to express my sympathy and condolences to those directly affected, and the good people of London, England, Europe, and the world. Targeting innocents might be an effective tactic, but that doesn’t make it any less craven and despicable.

I would also like to urge everyone not to get wrapped up in the particulars of the terrorist tactics. We need to resist the urge to react against the particulars of this plot, and to keep focused on the terrorists’ goals. Spending billions to defend our trains and buses at the expense of other counterterrorist measures makes no sense. Terrorists are out to cause terror, and they don’t care if they bomb trains, buses, shopping malls, theaters, stadiums, schools, markets, restaurants, discos, or any other collection of 100 people in a small space. There are simply too many targets to defend, and we need to think smarter than protecting the particular targets the terrorists attacked last week.

Smart counterterrorism focuses on the terrorists and their funding—stopping plots regardless of their targets—and emergency response that limits their damage.

I’ll have more to say later. But again, my sympathy goes out to those killed and injured, their family and friends, and everyone else in the world indirectly affected by these acts as they are endlessly repeated in the media.

Posted on July 7, 2005 at 1:27 PM • 69 Comments

Russia's Black-Market Data Trade

Interesting story on the market for data in Moscow:

This Gorbushka vendor offers a hard drive with cash transfer records from Russia’s central bank for $1,500 (Canadian).

And:

At the Gorbushka kiosk, sales are so brisk that the vendor excuses himself to help other customers while the foreigner considers his options: $43 for a mobile phone company’s list of subscribers? Or $100 for a database of vehicles registered in the Moscow region?

The vehicle database proves irresistible. It appears to contain names, birthdays, passport numbers, addresses, telephone numbers, descriptions of vehicles, and vehicle identification (VIN) numbers for every driver in Moscow.

I don’t know whether you can buy data about people in other countries, but it is certainly plausible.

Posted on July 6, 2005 at 6:10 AM • 25 Comments

Noticing Data Misuse

Everyone seems to be examining their databases for personal information leaks.

Tax liens, mortgage papers, deeds, and other real estate-related documents are publicly available in on-line databases run by registries of deeds across the state. The Globe found documents in free databases of all but three Massachusetts counties containing the names and Social Security numbers of Massachusetts residents….

Although registers of deeds said that they are unaware of cases in which criminals used information from their databases maliciously, the information contained in the documents would be more than enough to steal an identity and open new lines of credit….

Isn’t that part of the problem, though? It’s easy to say “we haven’t seen any cases of fraud using our information,” because there’s rarely a way to tell where information comes from. The recent epidemic of public leaks comes from people noticing the leak process, not the effects of the leaks. So everyone thinks their data practices are good, because there have never been any documented abuses stemming from leaks of their data, and everyone is fooling themselves.

Posted on July 5, 2005 at 8:47 AM • 13 Comments

Evaluating the Effectiveness of Security Countermeasures

Amidst all the emotional rhetoric about security, it’s nice to see something well-reasoned. This New York Times op-ed by Nicholas Kristof looks at security as a trade-off, and makes a distinction between security countermeasures that reduce the threat and those that simply shift it.

The op-ed starts with countermeasures against car theft.

Sold for $695, the LoJack is a radio transmitter that is hidden on a vehicle and then activated if the car is stolen. The transmitter then silently summons the police – and it is ruining the economics of auto theft….

The thief’s challenge is that it’s impossible to determine which vehicle has a LoJack (there’s no decal). So stealing any car becomes significantly more risky, and one academic study found that the introduction of LoJack in Boston reduced car theft there by 50 percent.

Two Yale professors, Barry Nalebuff and Ian Ayres, note that this means that the LoJack benefits everyone, not only those who install the system. Professor Ayres and another scholar, Steven Levitt, found that every $1 invested in LoJack saves other car owners $10.

Professors Nalebuff and Ayres note that other antitheft devices, such as the Club, a polelike device that locks the steering wheel, help protect that car, but only at the expense of the next vehicle.

“The Club doesn’t reduce crime,” Mr. Nalebuff says. “It just shifts it to the next person.”

This model could be applied to home burglar alarms:

Conventional home alarms are accompanied by warning signs and don’t reduce crime but simply shift the risk to the next house. What if we encouraged hidden silent alarms to change the economics of burglary?

Granted, most people don’t want hidden alarms that entice a burglar to stay until the police show up. But suppose communities adjusted the fees they charge for alarm systems – say, $2,000 a year for an audible alarm, but no charge for a hidden LoJack-style silent alarm.

Then many people would choose the silent alarms, more burglars would get caught, and many of the criminally inclined would choose a new line of work….
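The shift-versus-reduce distinction is easy to make concrete. Here is a toy simulation, my own illustration rather than anything from the op-ed or the studies it cites: visible alarms deflect each burglar to an unprotected house, while hidden alarms let the burglary proceed but remove some burglars from circulation.

import random

def simulate(visible, n_houses=1000, alarm_rate=0.3, n_burglars=100,
             rounds=50, catch_prob=0.7, seed=1):
    # Each round, every still-active burglar picks a house at random.
    # A visible alarm deflects him to an unalarmed house (the burglary
    # still happens, just next door); a hidden alarm lets the burglary
    # proceed but catches the burglar with probability catch_prob.
    rng = random.Random(seed)
    alarmed = [rng.random() < alarm_rate for _ in range(n_houses)]
    burglars, burglaries = n_burglars, 0
    for _ in range(rounds):
        still_free = 0
        for _ in range(burglars):
            target = rng.randrange(n_houses)
            if alarmed[target] and visible:
                while alarmed[target]:            # warning sign: move along
                    target = rng.randrange(n_houses)
                burglaries += 1
                still_free += 1
            elif alarmed[target]:                 # silent alarm: maybe caught
                burglaries += 1
                if rng.random() > catch_prob:
                    still_free += 1
            else:                                 # unprotected house
                burglaries += 1
                still_free += 1
        burglars = still_free
    return burglaries

print("visible alarms:", simulate(visible=True))
print("hidden alarms: ", simulate(visible=False))

With these made-up parameters, the visible-alarm town suffers every burglary its criminal population can commit, just redistributed among houses; in the hidden-alarm town, total burglaries collapse as arrests thin that population. The numbers are arbitrary, but the asymmetry is exactly the op-ed’s point.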

I wrote about this in Beyond Fear:

A burglar who sees evidence of an alarm system is more likely to go rob the house next door. As far as the local police station is concerned, this doesn’t mitigate the risk at all. But for the homeowner, it mitigates the risk just fine.

The difference is the perspective of the defender.

Problems with perspectives show up in counterterrorism defenses all the time. Also from Beyond Fear:

It’s important not to lose sight of the forest for the trees. Countermeasures often focus on preventing particular terrorist acts against specific targets, but the scope of the assets that need to be protected encompasses all potential targets, and they all must be considered together. A terrorist’s real target is morale, and he really doesn’t care about one physical target versus another. We want to prevent terrorist acts everywhere, so countermeasures that simply move the threat around are of limited value. If, for example, we spend a lot of money defending our shopping malls, and bombings subsequently occur in crowded sports stadiums or movie theaters, we haven’t really received any value from our countermeasures.

I like seeing thinking like this in the media, and wish there were more of it.

Posted on July 1, 2005 at 12:19 PM • 50 Comments

Security Skins

Much has been written about the insecurity of passwords. Aside from being guessable, passwords are regularly phished: people are tricked into providing them to rogue servers because they can’t distinguish spoofed windows and webpages from legitimate ones.

Here’s a clever scheme by Rachna Dhamija and Doug Tygar at the University of California, Berkeley that tries to deal with the problem. It’s called “Dynamic Security Skins,” and it’s a pair of protocols that augment passwords.

First, the authors propose creating a trusted window in the browser dedicated to username and password entry. The user chooses a photographic image (or is assigned a random image), which is overlaid across the window and text entry boxes. If the window displays the user’s personal image, it is safe for the user to enter his password.

Second, to prove its identity, the server generates a unique abstract image for each user and each transaction. This image is used to create a “skin” that automatically customizes the browser window or the user interface elements in the content of a webpage. The user’s browser can independently reach the same image that it expects to receive from the server. To verify the server, the user only has to visually verify that the images match.
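The property doing the work is that the image is computed, not stored: both endpoints derive the same abstract pattern from a secret they share for that session (in the paper, that secret comes out of a verifier-based password protocol, so the server never learns the password itself). Here is a minimal sketch of just the derivation step, with the shared secret, the transaction identifier, and the ASCII rendering all standing in for the paper’s actual choices.

import hashlib
import hmac

def skin_pattern(shared_secret, transaction_id, size=8):
    # Derive a deterministic "abstract image" from a per-session secret.
    # Browser and server run this independently; if their secrets match,
    # the rendered skins match. The ASCII grid stands in for the paper's
    # visual-hash image.
    digest = hmac.new(shared_secret, transaction_id, hashlib.sha256).digest()
    stream = b""
    counter = 0
    while len(stream) < size * size:   # expand the MAC to fill the grid
        stream += hashlib.sha256(digest + bytes([counter])).digest()
        counter += 1
    glyphs = " .:odO#"
    return "\n".join(
        "".join(glyphs[stream[r * size + c] % len(glyphs)]
                for c in range(size))
        for r in range(size))

# Both endpoints derive the same skin for the same session...
print(skin_pattern(b"session-key", b"txn-42"))
# ...while a phisher who lacks the secret renders a visibly different one.
print(skin_pattern(b"wrong-key", b"txn-42"))

A phishing site that doesn’t know the session secret can’t render the matching skin, which is what gives the user something visual to check.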

Not a perfect solution by any means—much Internet fraud bypasses authentication altogether—but two clever ideas that use visual cues to ensure security. You can also verify server authenticity by inspecting the SSL certificate, but no one does that. With this scheme, the user has to recognize only one image and remember one password, no matter how many servers he interacts with. In contrast, the recently announced SiteKey (Bank of America’s implementation of the Passmark scheme) requires users to save a different image with each server.

Posted on July 1, 2005 at 7:31 AM • 16 Comments
