Entries Tagged "watch lists"


Can Everybody Read the US Terrorist Watch List?

After years of claiming that the Terrorist Screening Database is kept secret within the government, we have now learned that the DHS shares it “with more than 1,400 private entities, including hospitals and universities….”

Critics say that the watchlist is wildly overbroad and mismanaged, and that large numbers of people wrongly included on the list suffer routine difficulties and indignities because of their inclusion.

The government’s admission comes in a class-action lawsuit filed in federal court in Alexandria by Muslims who say they regularly experience difficulties in travel, financial transactions and interactions with law enforcement because they have been wrongly added to the list.

Of course that is the effect.

We need more transparency into this process. People need a way to challenge their inclusion on the list, and a redress process if they are being falsely accused.

Posted on February 28, 2019 at 6:22 AM

Government Secrets and the Need for Whistle-blowers

Yesterday, we learned that the NSA received all calling records from Verizon customers for a three-month period starting in April. That’s everything except the voice content: who called whom, where they were, how long the call lasted—for millions of people, both Americans and foreigners. This “metadata” allows the government to track the movements of everyone during that period, and build a detailed picture of who talks to whom. It’s exactly the same data the Justice Department collected about AP journalists.
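To make concrete what building “a detailed picture of who talks to whom” looks like in practice, here is a minimal sketch of metadata analysis. Everything in it is hypothetical (the record format, the field names, the data), but the logic is exactly what a calling-records database enables:

```python
from collections import defaultdict

# Hypothetical call-detail records: (caller, callee, cell_tower, seconds).
# Note there is no voice content here: only the "metadata."
records = [
    ("alice", "bob",   "tower_17", 340),
    ("alice", "carol", "tower_17",  62),
    ("bob",   "dave",  "tower_03", 910),
    ("carol", "bob",   "tower_17", 125),
]

# Who talks to whom: build an undirected contact graph.
contacts = defaultdict(set)
for caller, callee, _tower, _secs in records:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

# Where people were: each cell tower places a caller at a location.
locations = defaultdict(set)
for caller, _callee, tower, _secs in records:
    locations[caller].add(tower)

print(dict(contacts))   # bob is linked to alice, carol, and dave
print(dict(locations))  # alice and carol both used tower_17
```

Run the same few lines of logic over millions of subscribers and three months of calls, and you get social networks and movement histories at national scale, with no eavesdropping required.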

The Guardian delivered this revelation after receiving a copy of a secret memo about this—presumably from a whistle-blower. We don’t know if the other phone companies handed data to the NSA too. We don’t know if this was a one-off demand or a continuously renewed demand; the order started a few days after the Boston bombers were captured by police.

We don’t know a lot about how the government spies on us, but we know some things. We know the FBI has issued tens of thousands of ultra-secret National Security Letters to collect all sorts of data on people—we believe on millions of people—and has been abusing them to spy on cloud-computer users. We know it can collect a wide array of personal data from the Internet without a warrant. We also know that the FBI has been intercepting cell-phone data, all but voice content, for the past 20 years without a warrant, and can use the microphone on some powered-off cell phones as a room bug—presumably only with a warrant.

We know that the NSA has many domestic-surveillance and data-mining programs with codenames like Trailblazer, Stellar Wind, and Ragtime—deliberately using different codenames for similar programs to stymie oversight and conceal what’s really going on. We know that the NSA is building an enormous computer facility in Utah to store all this data, as well as faster computer networks to process it all. We know the U.S. Cyber Command employs 4,000 people.

We know that the DHS is also collecting a massive amount of data on people, and that local police departments are running “fusion centers” to collect and analyze this data, and covering up its failures. This is all part of the militarization of the police.

Remember in 2003, when Congress defunded the decidedly creepy Total Information Awareness program? It didn’t die; it just changed names and split into many smaller programs. We know that corporations are doing an enormous amount of spying on behalf of the government: all parts of it.

We know all of this not because the government is honest and forthcoming, but mostly through three backchannels—inadvertent hints or outright admissions by government officials in hearings and court cases, information gleaned from government documents received under FOIA, and government whistle-blowers.

There’s much more we don’t know, and often what we know is obsolete. We know quite a bit about the NSA’s ECHELON program from a 2000 European investigation, and about DARPA’s plans for Total Information Awareness from 2002, but much less about how these programs have evolved. We can make inferences about the NSA’s Utah facility based on the theoretical amount of data from various sources, the cost of computation, and the power requirements of the facility, but those are rough guesses at best. For a lot of this, we’re completely in the dark.

And that’s wrong.

The U.S. government is on a secrecy binge. It overclassifies more information than ever. And we learn, again and again, that our government regularly classifies things not because they need to be secret, but because their release would be embarrassing.

Knowing how the government spies on us is important. Not only because so much of it is illegal—or, to be as charitable as possible, based on novel interpretations of the law—but because we have a right to know. Democracy requires an informed citizenry in order to function properly, and transparency and accountability are essential parts of that. That means knowing what our government is doing to us, in our name. That means knowing that the government is operating within the constraints of the law. Otherwise, we’re living in a police state.

We need whistle-blowers.

Leaking information without getting caught is difficult. It’s almost impossible to maintain privacy in the Internet Age. The WikiLeaks platform seems to have been secure—Bradley Manning was caught not because of a technological flaw, but because someone he trusted betrayed him—but the U.S. government seems to have successfully destroyed it as a platform. None of the spin-offs have risen to become viable yet. The New Yorker recently unveiled its Strongbox platform for leaking material, which is still new but looks good. This link contains the best advice on how to leak information to the press via phone, email, or the post office. The National Whistleblowers Center has a page on national-security whistle-blowers and their rights.

Leaking information is also very dangerous. The Obama Administration has embarked on a war on whistle-blowers, pursuing them—both legally and through intimidation—further than any previous administration has done. Mark Klein, Thomas Drake, and William Binney have all been persecuted for exposing technical details of our surveillance state. Bradley Manning has been treated cruelly and inhumanely—and possibly tortured—for his more-indiscriminate leaking of State Department secrets.

The Obama Administration’s actions against the Associated Press, its persecution of Julian Assange, and its unprecedented prosecution of Manning on charges of “aiding the enemy” demonstrate how far it’s willing to go to intimidate whistle-blowers—as well as the journalists who talk to them.

But whistle-blowing is vital, even more broadly than in government spying. It’s necessary for good government, and to protect us from abuse of power.

We need details on the full extent of the FBI’s spying capabilities. We don’t know what information it routinely collects on American citizens, what extra information it collects on those on various watch lists, and what legal justifications it invokes for its actions. We don’t know its plans for future data collection. We don’t know what scandals and illegal actions—either past or present—are currently being covered up.

We also need information about what data the NSA gathers, either domestically or internationally. We don’t know how much it collects surreptitiously, and how much it relies on arrangements with various companies. We don’t know how much it uses password cracking to get at encrypted data, and how much it exploits existing system vulnerabilities. We don’t know whether it deliberately inserts backdoors into systems it wants to monitor, either with or without the permission of the communications-system vendors.

And we need details about the sorts of analysis the organizations perform. We don’t know what they quickly cull at the point of collection, and what they store for later analysis—and how long they store it. We don’t know what sort of database profiling they do, how extensive their CCTV and surveillance-drone analysis is, how much they perform behavioral analysis, or how extensively they trace friends of people on their watch lists.

We don’t know how big the U.S. surveillance apparatus is today, either in terms of money and people or in terms of how many people are monitored or how much data is collected. Modern technology makes it possible to monitor vastly more people—yesterday’s NSA revelations demonstrate that they could easily surveil everyone—than could ever be done manually.

Whistle-blowing is the moral response to immoral activity by those in power. What’s important here are government programs and methods, not data about individuals. I understand I am asking for people to engage in illegal and dangerous behavior. Do it carefully and do it safely, but—and I am talking directly to you, person working on one of these secret and probably illegal programs—do it.

If you see something, say something. There are many people in the U.S. who will appreciate and admire you.

For the rest of us, we can help by protesting this war on whistle-blowers. We need to force our politicians not to punish them—to investigate the abuses and not the messengers—and to ensure that those unjustly persecuted can obtain redress.

Our government is putting its own self-interest ahead of the interests of the country. That needs to change.

This essay originally appeared in The Atlantic.

EDITED TO ADD (6/10): It’s not just phone records. Another secret program, PRISM, gave the NSA access to e-mails and private messages at Google, Facebook, Yahoo!, Skype, AOL, and others. And in a separate leak, we now know about the Boundless Informant NSA data mining system.

The leaker for at least some of this is Edward Snowden. I consider him an American hero.

EFF has a great timeline of NSA spying. And this and this contain some excellent speculation about what PRISM could be.

Someone needs to write an essay parsing all of the precisely worded denials. Apple has never heard the word “PRISM,” but could have known of the program under a different name. Google maintained that there is no government “back door,” but left open the possibility that the data could have been just handed over. Obama said that the government isn’t “listening to your telephone calls,” ignoring 1) the meta-data, 2) the fact that computers could be doing all of the listening, and 3) that speech-to-text results in phone calls being read and not listened to. And so on and on and on.

Here are people defending the programs. And here’s someone criticizing my essay.

Four more good essays.

I’m sure there are lots more things out there that should be read. Please include the links in comments. Not only essays I would agree with; intelligent opinions from the other sides are just as important.

EDITED TO ADD (6/10): Two essays discussing the policy issues.

My original essay is being discussed on Reddit.

EDITED TO ADD (6/11): Three more good articles: “The Irrationality of Giving Up This Much Liberty to Fight Terror,” “If the NSA Trusted Edward Snowden with Our Data, Why Should We Trust the NSA?” and “Using Metadata to Find Paul Revere.”

EDITED TO ADD (6/11): NSA surveillance reimagined as children’s books.

EDITED TO ADD (7/1): This essay has been translated into Russian and French.

EDITED TO ADD (10/2): This essay has also been translated into Finnish.

Posted on June 10, 2013 at 6:12 AM

The Politics of Security in a Democracy

Terrorism causes fear, and we overreact to that fear. Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are, and we fear them more than probability indicates we should.

Our leaders are just as prone to this overreaction as we are. But aside from basic psychology, there are other reasons that it’s smart politics to exaggerate terrorist threats, and security threats in general.

The first is that we respond to a strong leader. Bill Clinton famously said: “When people feel uncertain, they’d rather have somebody that’s strong and wrong than somebody who’s weak and right.” He’s right.

The second is that doing something—anything—is good politics. A politician wants to be seen as taking charge, demanding answers, fixing things. It just doesn’t look as good to sit back and claim that there’s nothing to do. The logic is along the lines of: “Something must be done. This is something. Therefore, we must do it.”

The third is that the “fear preacher” wins, regardless of the outcome. Imagine two politicians today. One of them preaches fear and draconian security measures. The other is someone like me, who tells people that terrorism is a negligible risk, that risk is part of life, and that while some security is necessary, we should mostly just refuse to be terrorized and get on with our lives.

Fast-forward 10 years. If I’m right and there have been no more terrorist attacks, the fear preacher takes credit for keeping us safe. But if a terrorist attack has occurred, my government career is over. Even if the incidence of terrorism is as ridiculously low as it is today, there’s no benefit for a politician to take my side of that gamble.

The fourth and final reason is money. Every new security technology, from surveillance cameras to high-tech fusion centers to airport full-body scanners, has a for-profit corporation lobbying for its purchase and use. Given the three other reasons above, it’s easy—and probably profitable—for a politician to make them happy and say yes.

For any given politician, the implications of these four reasons are straightforward. Overestimating the threat is better than underestimating it. Doing something about the threat is better than doing nothing. Doing something that is explicitly reactive is better than being proactive. (If you’re proactive and you’re wrong, you’ve wasted money. If you’re proactive and you’re right but no longer in power, whoever is in power is going to get the credit for what you did.) Visible is better than invisible. Creating something new is better than fixing something old.

Those last two maxims are why it’s better for a politician to fund a terrorist fusion center than to pay for more Arabic translators for the National Security Agency. No one’s going to see the additional appropriation in the NSA’s secret budget. On the other hand, a high-tech computerized fusion center is going to make front page news, even if it doesn’t actually do anything useful.

This leads to another phenomenon about security and government. Once a security system is in place, it can be very hard to dislodge it. Imagine a politician who objects to some aspect of airport security: the liquid ban, the shoe removal, something. If he pushes to relax security, he gets the blame if something bad happens as a result. No one wants to roll back a police power and have the lack of that power cause a well-publicized death, even if it’s a one-in-a-billion fluke.

We’re seeing this force at work in the bloated terrorist no-fly and watch lists; agents have lots of incentive to put someone on the list, but absolutely no incentive to take anyone off. We’re also seeing this in the Transportation Security Administration’s attempt to reverse the ban on small blades on airplanes. Twice it tried to make the change, and twice fearful politicians forced it to back down.

Lots of unneeded and ineffective security measures are perpetuated by a government bureaucracy that is primarily concerned about the security of its members’ careers. They know the voters are more likely to punish them if they fail to secure against a repetition of the last attack than if they fail to anticipate the next one.

What can we do? Well, the first step toward solving a problem is recognizing that you have one. These are not iron-clad rules; they’re tendencies. If we can keep these tendencies and their causes in mind, we’re more likely to end up with sensible security measures that are commensurate with the threat, instead of a lot of security theater and draconian police powers that are not.

Our leaders’ job is to resist these tendencies. Our job is to support politicians who do resist.

This essay originally appeared on CNN.com.

EDITED TO ADD (6/4): This essay has been translated into Swedish.

EDITED TO ADD (6/14): A similar essay, on the politics of terrorism defense.

Posted on May 28, 2013 at 5:09 AM

Intelligence Analysis and the Connect-the-Dots Metaphor

The FBI and the CIA are being criticized for not keeping better track of Tamerlan Tsarnaev in the months before the Boston Marathon bombings. How could they have ignored such a dangerous person? How do we reform the intelligence community to ensure this kind of failure doesn’t happen again?

It’s an old song by now, one we heard after the 9/11 attacks in 2001 and after the Underwear Bomber’s failed attack in 2009. The problem is that connecting the dots is a bad metaphor, and focusing on it makes us more likely to implement useless reforms.

Connecting the dots in a coloring book is easy and fun. They’re right there on the page, and they’re all numbered. All you have to do is move your pencil from one dot to the next, and when you’re done, you’ve drawn a sailboat. Or a tiger. It’s so simple that 5-year-olds can do it.

But in real life, the dots can only be numbered after the fact. With the benefit of hindsight, it’s easy to draw lines from a Russian request for information to a foreign visit to some other piece of information that might have been collected.

In hindsight, we know who the bad guys are. Before the fact, there are an enormous number of potential bad guys.

How many? We don’t know. But we know that the no-fly list had 21,000 people on it last year. The Terrorist Identities Datamart Environment, also known as the watch list, has 700,000 names on it.

We have no idea how many potential “dots” the FBI, CIA, NSA and other agencies collect, but it’s easily in the millions. It’s easy to work backwards through the data and see all the obvious warning signs. But before a terrorist attack, when there are millions of dots—some important but the vast majority unimportant—uncovering plots is a lot harder.

Rather than thinking of intelligence as a simple connect-the-dots picture, think of it as a million unnumbered pictures superimposed on top of each other. Or a random-dot stereogram. Is it a sailboat, a puppy, two guys with pressure-cooker bombs, or just an unintelligible mess of dots? You try to figure it out.

It’s not a matter of not enough data, either.

Piling more data onto the mix makes it harder, not easier. The best way to think of it is a needle-in-a-haystack problem; the last thing you want to do is increase the amount of hay you have to search through. The television show Person of Interest is fiction, not fact.
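The needle-in-a-haystack point is really a point about base rates, and a back-of-the-envelope calculation shows why more hay hurts. All the numbers below are assumptions, deliberately generous round figures rather than real agency statistics:

```python
population = 300_000_000    # assumption: people generating "dots"
plotters = 1_000            # assumption: actual bad guys among them

detection_rate = 0.99       # assumption: an implausibly good classifier
false_alarm_rate = 0.01     # assumption: flags 1% of innocent people

flagged_plotters = plotters * detection_rate
flagged_innocents = (population - plotters) * false_alarm_rate

print(f"real plotters flagged:   {flagged_plotters:,.0f}")   # ~990
print(f"innocent people flagged: {flagged_innocents:,.0f}")  # ~3,000,000
```

Even granting an unrealistically accurate system, roughly 3,000 innocent people get flagged for every real plotter. Collecting more dots inflates the false-alarm pile; it doesn’t add needles.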

There’s a name for this sort of logical fallacy: hindsight bias. First explained by psychologists Daniel Kahneman and Amos Tversky, it’s surprisingly common. Since what actually happened is so obvious once it happens, we overestimate how obvious it was before it happened.

We actually misremember what we once thought, believing that we knew all along that what happened would happen. It’s a surprisingly strong tendency, one that has been observed in countless laboratory experiments and real-world examples of behavior. And it’s what all the post-Boston-Marathon bombing dot-connectors are doing.

Before we start blaming agencies for failing to stop the Boston bombers, and before we push “intelligence reforms” that will shred civil liberties without making us any safer, we need to stop seeing the past as a bunch of obvious dots that need connecting.

Kahneman, a Nobel Prize winner, wisely noted: “Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight.” Kahneman calls it “the illusion of understanding,” explaining that the past seems understandable only because we cast it as simple, inevitable stories and leave out the rest.

Nassim Taleb, an expert on risk engineering, calls this tendency the “narrative fallacy.” We humans are natural storytellers, and the world of stories is much more tidy, predictable and coherent than the real world.

Millions of people behave strangely enough to warrant the FBI’s notice, and almost all of them are harmless. It is simply not possible to find every plot beforehand, especially when the perpetrators act alone and on impulse.

We have to accept that there always will be a risk of terrorism, and that when the occasional plot succeeds, it’s not necessarily because our law enforcement systems have failed.

This essay previously appeared on CNN.

EDITED TO ADD (5/7): The hindsight bias was actually first discovered by Baruch Fischhoff: “Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty,” Journal of Experimental Psychology: Human Perception and Performance, 1(3), 1975, pp. 288-299.

Posted on May 7, 2013 at 6:10 AM

TSA Administrator John Pistole on the Future of Airport Security

There’s a lot here that’s worth watching. He talks about expanding behavioral detection. He talks about less screening for “trusted travelers.”

So, what do the next 10 years hold for transportation security? I believe it begins with TSA’s continued movement toward developing and implementing a more risk-based security system, a phrase you may have heard the last few months. When I talk about risk-based, intelligence-driven security it’s important to note that this is not about a specific program per se, or a limited initiative being evaluated at a handful of airports.

On the contrary, risk-based security is much more comprehensive. It means moving further away from what may have seemed like a one-size-fits-all approach to security. It means focusing our agency’s resources on those we know the least about, and using intelligence in better ways to inform the screening process.

[…]

Another aspect of our risk-based, intelligence-driven security system is the trusted traveler proof-of-concept that will begin this fall. As part of this proof-of-concept, we are looking at how to expedite the screening process for travelers we know and trust the most, and travelers who are willing to voluntarily share more information with us before they travel. Doing so will then allow our officers to more effectively prioritize screening and focus our resources on those passengers we know the least about and those of course on watch lists.

[…]

We’re also working with airlines already testing a known-crewmember concept, and we are evaluating changes to the security screening process for children 12-and-under. Both of these concepts reflect the principles of risk-based security, considering that airline pilots are among our country’s most trusted travelers and the preponderance of intelligence indicates that children 12-and-under pose little risk to aviation security.

Finally, we are also evaluating the value of expanding TSA’s behavior detection program, to help our officers identify people exhibiting signs that may indicate a potential threat. This reflects an expansion of the agency’s existing SPOT program, which was developed by adapting global best practices. This effort also includes additional, specialized training for our organization’s Behavior Detection Officers and is currently being tested at Boston’s Logan International airport, where the SPOT program was first introduced.

Posted on September 14, 2011 at 6:55 AM

Man Flies with Someone Else's Ticket and No Legal ID

Last week, I got a bunch of press calls about Olajide Oluwaseun Noibi, who flew from New York to Los Angeles using an expired ticket in someone else’s name and a university ID. They all wanted to know what this says about airport security.

It says that airport security isn’t perfect, and that people make mistakes. But it’s not something that anyone should worry about. It’s not like Noibi figured out a new hole in the airport security system, one that he was able to exploit repeatedly. He got lucky. He got real lucky. It’s not something a terrorist can build a plot around.

I’m even less concerned because I’ve never thought the photo ID check had any value. Noibi was screened, just like any other passenger. Even the TSA blog makes this point:

In this case, TSA did not properly authenticate the passenger’s documentation. That said, it’s important to note that this individual received the same thorough physical screening as other passengers, including being screened by advanced imaging technology (body scanner).

Seems like the TSA is regularly downplaying the value of the photo ID check. This is from a Q&A about Secure Flight, their new system to match passengers with watch lists:

Q: This particular “layer” isn’t terribly effective. If this “layer” of security can be circumvented by anyone with a printer and a word processor, this doesn’t seem to be a terribly useful “layer” … especially looking at the amount of money being expended on this particular “layer”. It might be that this money could be more effectively spent on other “layers”.

A: TSA uses layers of security to ensure the security of the traveling public and the Nation’s transportation system. Secure Flight’s watchlist name matching constitutes only one security layer of the many in place to protect aviation. Others include intelligence gathering and analysis, airport checkpoints, random canine team searches at airports, federal air marshals, federal flight deck officers and more security measures both visible and invisible to the public.

Each one of these layers alone is capable of stopping a terrorist attack. In combination their security value is multiplied, creating a much stronger, formidable system. A terrorist who has to overcome multiple security layers in order to carry out an attack is more likely to be pre-empted, deterred, or to fail during the attempt.

Yes, the answer says that they need to spend millions to ensure that terrorists with a viable plot also need a computer, but you can tell that their heart wasn’t in the answer. “Checkpoints! Dogs! Air marshals! Ignore the stupid photo ID requirement.”
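As an aside, the “security value is multiplied” claim in that answer can be made precise, and doing so shows where it breaks down. If layer $i$ independently stops an attacker with probability $p_i$, an attack succeeds only when every one of the $n$ layers fails:

$$\Pr[\text{success}] = \prod_{i=1}^{n} (1 - p_i)$$

Five independent layers that each work half the time leave about a 3% chance of success; that is the multiplication the TSA is invoking. But the formula is only as good as its inputs: the layers have to be independent, and a photo ID check that anyone with a printer and a word processor can defeat contributes $p_i \approx 0$, which multiplies the product by one.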

Noibi is an embarrassment for the TSA and for the airline Virgin America, both of which are supposed to catch this kind of thing. But I’m not worried about the security risk, and neither is the TSA.

Posted on July 6, 2011 at 5:53 AM

The Continuing Incompetence of Terrorists

The Atlantic on stupid terrorists:

Nowhere is the gap between sinister stereotype and ridiculous reality more apparent than in Afghanistan, where it’s fair to say that the Taliban employ the world’s worst suicide bombers: one in two manages to kill only himself. And this success rate hasn’t improved at all in the five years they’ve been using suicide bombers, despite the experience of hundreds of attacks—or attempted attacks. In Afghanistan, as in many cultures, a manly embrace is a time-honored tradition for warriors before they go off to face death. Thus, many suicide bombers never even make it out of their training camp or safe house, as the pressure from these group hugs triggers the explosives in suicide vests. According to several sources at the United Nations, as many as six would-be suicide bombers died last July after one such embrace in Paktika.

Many Taliban operatives are just as clumsy when suicide is not part of the plan. In November 2009, several Talibs transporting an improvised explosive device were killed when it went off unexpectedly. The blast also took out the insurgents’ shadow governor in the province of Balkh.

When terrorists do execute an attack, or come close, they often have security failures to thank, rather than their own expertise. Consider Umar Farouk Abdulmutallab—the Nigerian “Jockstrap Jihadist” who boarded a Detroit-bound jet in Amsterdam with a suicidal plan in his head and some explosives in his underwear. Although the media colored the incident as a sophisticated al-Qaeda plot, Abdulmutallab showed no great skill or cunning, and simple safeguards should have kept him off the plane in the first place. He was, after all, traveling without luggage, on a one-way ticket that he purchased with cash. All of this while being on a U.S. government watch list.

Fortunately, Abdulmutallab, a college-educated engineer, failed to detonate his underpants. A few months later another college grad, Faisal Shahzad, is alleged to have crudely rigged an SUV to blow up in Times Square. That plan fizzled and he was quickly captured, despite the fact that he was reportedly trained in a terrorist boot camp in Pakistan. Indeed, though many of the terrorists who strike in the West are well educated, their plots fail because they lack operational know-how. On June 30, 2007, two men—one a medical doctor, the other studying for his Ph.D.—attempted a brazen attack on Glasgow Airport. Their education did them little good. Planning to crash their propane-and-petrol-laden Jeep Cherokee into an airport terminal, the men instead steered the SUV, with flames spurting out its windows, into a security barrier. The fiery crash destroyed only the Jeep, and both men were easily apprehended; the driver later died from his injuries. (The day before, the same men had rigged two cars to blow up near a London nightclub. That plan was thwarted when one car was spotted by paramedics and the other, parked illegally, was removed by a tow truck. As a bonus for investigators, the would-be bombers’ cell phones, loaded with the phone numbers of possible accomplices, were salvaged from the cars.)

Reminds me of my own “Portrait of the Modern Terrorist as an Idiot.”

Posted on June 18, 2010 at 5:49 AM

Filming the Police

In at least three U.S. states, it is illegal to film an on-duty police officer:

The legal justification for arresting the “shooter” rests on existing wiretapping or eavesdropping laws, with statutes against obstructing law enforcement sometimes cited. Illinois, Massachusetts, and Maryland are among the 12 states in which all parties must consent for a recording to be legal unless, as with TV news crews, it is obvious to all that recording is underway. Since the police do not consent, the camera-wielder can be arrested. Most all-party-consent states also include an exception for recording in public places where “no expectation of privacy exists” (Illinois does not) but in practice this exception is not being recognized.

Massachusetts attorney June Jensen represented Simon Glik who was arrested for such a recording. She explained, “[T]he statute has been misconstrued by Boston police. You could go to the Boston Common and snap pictures and record if you want.” Legal scholar and professor Jonathan Turley agrees, “The police are basing this claim on a ridiculous reading of the two-party consent surveillance law—requiring all parties to consent to being taped. I have written in the area of surveillance law and can say that this is utter nonsense.”

The courts, however, disagree. A few weeks ago, an Illinois judge rejected a motion to dismiss an eavesdropping charge against Christopher Drew, who recorded his own arrest for selling one-dollar artwork on the streets of Chicago. Although the misdemeanor charges of not having a peddler’s license and peddling in a prohibited area were dropped, Drew is being prosecuted for illegal recording, a Class I felony punishable by 4 to 15 years in prison.

This is a horrible idea, and will make us all less secure. I wrote in 2008:

You cannot evaluate the value of privacy and disclosure unless you account for the relative power levels of the discloser and the disclosee.

If I disclose information to you, your power with respect to me increases. One way to address this power imbalance is for you to similarly disclose information to me. We both have less privacy, but the balance of power is maintained. But this mechanism fails utterly if you and I have different power levels to begin with.

An example will make this clearer. You’re stopped by a police officer, who demands to see identification. Divulging your identity will give the officer enormous power over you: He or she can search police databases using the information on your ID; he or she can create a police record attached to your name; he or she can put you on this or that secret terrorist watch list. Asking to see the officer’s ID in return gives you no comparable power over him or her. The power imbalance is too great, and mutual disclosure does not make it OK.

You can think of your existing power as the exponent in an equation that determines the value, to you, of more information. The more power you have, the more additional power you derive from the new data.

Another example: When your doctor says “take off your clothes,” it makes no sense for you to say, “You first, doc.” The two of you are not engaging in an interaction of equals.

This is the principle that should guide decision-makers when they consider installing surveillance cameras or launching data-mining programs. It’s not enough to open the efforts to public scrutiny. All aspects of government work best when the relative power between the governors and the governed remains as small as possible—when liberty is high and control is low. Forced openness in government reduces the relative power differential between the two, and is generally good. Forced openness in laypeople increases the relative power, and is generally bad.
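The exponent metaphor in that excerpt can be written down explicitly, though the formalization is mine, not the original essay’s. Suppose the value of an amount of disclosed information $d$ to a party with existing power $p$ is

$$V(d) = d^{\,p}, \qquad V'(d) = p\, d^{\,p-1}$$

For $d > 1$, the marginal value $V'(d)$ grows with $p$: the same disclosure is worth far more to the powerful party than to the weak one. Identical mutual disclosure therefore leaves the more powerful party even further ahead, which is exactly why showing the officer your ID and seeing his in return does not restore the balance.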

EDITED TO ADD (7/13): Another article. One jurisdiction in Pennsylvania has explicitly ruled the opposite: that it’s legal to record police officers no matter what.

Posted on June 16, 2010 at 1:36 PM

Terrorists Prohibited from Using iTunes

The iTunes Store Terms and Conditions prohibit it:

Notice, as I read this clause not only are terrorists—or at least those on terrorist watch lists—prohibited from using iTunes to manufacture WMD, they are also prohibited from even downloading and using iTunes. So all the Al-Qaeda operatives holed up in the Northwest Frontier Provinces of Pakistan, dodging drone attacks while listening to Britney Spears songs downloaded with iTunes are in violation of the terms and conditions, even if they paid for the music!

And you thought being harassed at airports was bad enough.

Posted on February 10, 2010 at 12:39 PM

Security and Function Creep

Security is rarely static. Technology changes both security systems and attackers. But there’s something else that changes security’s cost/benefit trade-off: how the underlying systems being secured are used. Far too often we build security for one purpose, only to find it being used for another purpose—one it wasn’t suited for in the first place. And then the security system has to play catch-up.

Take driver’s licenses, for example. Originally designed to demonstrate a credential—the ability to drive a car—they looked like other credentials: medical licenses or elevator certificates of inspection. They were wallet-sized, of course, but they didn’t have much security associated with them. Then, slowly, driver’s licenses took on a second application: they became age-verification tokens in bars and liquor stores. Of course the security wasn’t up to the task—teenagers can be extraordinarily resourceful if they set their minds to it—and over the decades driver’s licenses got photographs, tamper-resistant features (once, it was easy to modify the birth year), and technologies that made counterfeiting harder. There was little value in counterfeiting a driver’s license, but a lot of value in counterfeiting an age-verification token.

Today, US driver’s licenses are taking on yet another function: security against terrorists. The Real ID Act—the government’s attempt to make driver’s licenses even more secure—has nothing to do with driving or even with buying alcohol, and everything to do with trying to make that piece of plastic an effective way to verify that someone is not on the terrorist watch list. Whether this is a good idea, or actually improves security, is another matter entirely.

You can see this kind of function creep everywhere. Internet security systems designed for informational Web sites are suddenly expected to provide security for banking Web sites. Security systems that are good enough to protect cheap commodities from being stolen are suddenly ineffective once the price of those commodities rises high enough. Application security systems, designed for locally owned networks, are expected to work even when the application is moved to a cloud computing environment. And cloud computing security, designed for the needs of corporations, is expected to be suitable for government applications as well—maybe even military applications.

Sometimes it’s obvious that security systems designed for one environment won’t work in another. We don’t arm our soldiers the same way we arm our policemen, and we can’t take commercial vehicles and easily turn them into ones outfitted for the military. We understand that we might need to upgrade our home security system if we suddenly come into possession of a bag of diamonds. Yet many think the same security that protects our home computers will also protect voting machines, and the same operating systems that run our businesses are suitable for military uses.

But these are all conscious decisions, and we security professionals often know better. The real problems arise when the changes happen in the background, without any conscious thought. We build a network security system that’s perfectly adequate for the threat and—like a driver’s license becoming an age-verification token—the network accrues more and more functions. But because it has already been pronounced “secure,” we can’t get any budget to re-evaluate and improve the security until after the bad guys have figured out the vulnerabilities and exploited them.

I don’t like having to play catch-up in security, but we seem doomed to keep doing so.

This essay originally appeared in the January/February 2010 issue of IEEE Security and Privacy.

Posted on February 4, 2010 at 6:35 AM

