Entries Tagged "national security policy"

Schneier for TSA Administrator

It’s been suggested. For the record, I don’t want the job.

Since the election, the newspapers and Internet have been flooded with unsolicited advice for President-elect Barack Obama. I’ll go ahead and add mine.

[…]

And by “revamp,” I mean “start over.” Most security experts agree that the rigmarole we go through at the airport is mere security theater, designed not to make us safer, but to make us feel safer by making it increasingly inconvenient to fly. TSA’s approach to security is too reactionary—too set on preventing attacks and attempted attacks that have already happened. And please, whatever you do, resist the temptation to let TSA workers unionize. Security from terror attacks should not be a federal jobs program. You need the authority to fire underperforming screeners quickly and effortlessly. Three game-changing possibilities to head up TSA: security guru Bruce Schneier, Cato Institute security and technology scholar Jim Harper, or Ohio State University’s John Mueller.

Although I’d be happy to see either Jim or John get it.

I don’t want it because it’s too narrow. I think the right thing for the government to do is to give the TSA a lot less money. I’d rather they defend against the broad threat of terrorism than focus on the narrow threat of airplane terrorism, and I’d rather they defend against the myriad of threats that face our society than focus on the singular threat of terrorism. But the head of the TSA can’t have those opinions; he has to take the money he’s given and perform the specific function he’s assigned to perform. Not very much fun, really.

But I’d be happy to advise whoever Obama chooses to head the TSA.

The job of the nation’s CTO would be more interesting, but I don’t think I want it, either. (Have you seen the screening process?)

Posted on November 18, 2008 at 1:46 PM

Data Mining for Terrorists Doesn't Work

According to a massive report from the National Research Council, data mining for terrorists doesn’t work. Here’s a good summary:

The report was written by a committee whose members include William Perry, a professor at Stanford University; Charles Vest, the former president of MIT; W. Earl Boebert, a retired senior scientist at Sandia National Laboratories; Cynthia Dwork of Microsoft Research; R. Gil Kerlikowske, Seattle’s police chief; and Daryl Pregibon, a research scientist at Google.

They admit that far more Americans live their lives online, using everything from VoIP phones to Facebook to RFID tags in automobiles, than a decade ago, and the databases created by those activities are tempting targets for federal agencies. And they draw a distinction between subject-based data mining (starting with one individual and looking for connections) and pattern-based data mining (looking for anomalous activities that could indicate illegal activity).

But the authors conclude the type of data mining that government bureaucrats would like to do—perhaps inspired by watching too many episodes of the Fox series 24—can’t work. “If it were possible to automatically find the digital tracks of terrorists and automatically monitor only the communications of terrorists, public policy choices in this domain would be much simpler. But it is not possible to do so.”

A summary of the recommendations:

  • U.S. government agencies should be required to follow a systematic process to evaluate the effectiveness, lawfulness, and consistency with U.S. values of every information-based program, whether classified or unclassified, for detecting and countering terrorists before it can be deployed, and periodically thereafter.
  • Periodically after a program has been operationally deployed, and in particular before a program enters a new phase in its life cycle, policy makers should carefully review the program before allowing it to continue operations or to proceed to the next phase.
  • To protect the privacy of innocent people, the research and development of any information-based counterterrorism program should be conducted with synthetic population data… At all stages of a phased deployment, data about individuals should be rigorously subjected to the full safeguards of the framework.
  • Any information-based counterterrorism program of the U.S. government should be subjected to robust, independent oversight of the operations of that program, a part of which would entail a practice of using the same data mining technologies to “mine the miners and track the trackers.”
  • Counterterrorism programs should provide meaningful redress to any individuals inappropriately harmed by their operation.
  • The U.S. government should periodically review the nation’s laws, policies, and procedures that protect individuals’ private information for relevance and effectiveness in light of changing technologies and circumstances. In particular, Congress should re-examine existing law to consider how privacy should be protected in the context of information-based programs (e.g., data mining) for counterterrorism.

Here are more news articles on the report. I explained why data mining wouldn’t find terrorists back in 2005.

EDITED TO ADD (10/10): More commentary:

As the NRC report points out, not only is the training data lacking, but the input data that you’d actually be mining has been purposely corrupted by the terrorists themselves. Terrorist plotters actively disguise their activities using operational security measures (opsec) like code words, encryption, and other forms of covert communication. So, even if we had access to a copious and pristine body of training data that we could use to generalize about the “typical terrorist,” the new data that’s coming into the data mining system is suspect.

To return to the credit reporting analogy, credit scores would be worthless to lenders if everyone could manipulate their credit history (e.g., hide past delinquencies) the way that terrorists can manipulate the data trails that they leave as they buy gas, enter buildings, make phone calls, surf the Internet, etc.

So this application of data mining bumps up against the classic GIGO (garbage in, garbage out) problem in computing, with the terrorists deliberately feeding the system garbage. What this means in real-world terms is that the success of our counter-terrorism data mining efforts is completely dependent on the failure of terrorist cells to maintain operational security.

The GIGO problem and the lack of suitable training data combine to make big investments in automated terrorist identification a futile and wasteful effort. Furthermore, these two problems are structural, so they’re not going away. All legitimate concerns about false positives and corrosive effects on civil liberties aside, data mining will never give authorities the ability to identify terrorists or terrorist networks with any degree of confidence.
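The false-positive concern mentioned above is a base-rate problem, and it can be sketched with a toy calculation. All of the numbers below are illustrative assumptions, not figures from the NRC report: even a detector far more accurate than anything plausible drowns in false positives when actual terrorists are a vanishingly small fraction of the population.

```python
# Toy base-rate calculation (all numbers are illustrative assumptions).
population = 300_000_000   # rough U.S. population
terrorists = 1_000         # hypothetical number of actual plotters
tpr = 0.99                 # detector correctly flags 99% of real terrorists
fpr = 0.001                # detector wrongly flags 0.1% of innocent people

true_positives = terrorists * tpr
false_positives = (population - terrorists) * fpr
flagged = true_positives + false_positives

# Probability that a flagged person is actually a terrorist
precision = true_positives / flagged

print(f"People flagged: {flagged:,.0f}")
print(f"Chance a flagged person is a real terrorist: {precision:.2%}")
```

With these generous assumptions, roughly 300,000 innocent people get flagged for every thousand plotters, and a flag is right well under one percent of the time; investigators would spend nearly all their effort chasing false alarms.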

Posted on October 10, 2008 at 6:35 AM

Memo to the Next President

Obama has a cyber security plan.

It’s basically what you would expect: Appoint a national cyber security advisor, invest in math and science education, establish standards for critical infrastructure, spend money on enforcement, establish national standards for securing personal data and data-breach disclosure, and work with industry and academia to develop a bunch of needed technologies.

I could comment on the plan, but with security the devil is always in the details—and, of course, at this point there are few details. But since he brought up the topic—McCain supposedly is “working on the issues” as well—I have three pieces of policy advice for the next president, whoever he is. They’re too detailed for campaign speeches or even position papers, but they’re essential for improving information security in our society. Actually, they apply to national security in general. And they’re things only government can do.

One, use your immense buying power to improve the security of commercial products and services. One property of technological products is that most of the cost is in the development of the product rather than the production. Think software: The first copy costs millions, but the second copy is free.

You have to secure your own government networks, military and civilian. You have to buy computers for all your government employees. Consolidate those contracts, and start putting explicit security requirements into the RFPs. You have the buying power to get your vendors to make serious security improvements in the products and services they sell to the government, and then we all benefit because they’ll include those improvements in the same products and services they sell to the rest of us. We’re all safer if information technology is more secure, even though the bad guys can use it, too.

Two, legislate results and not methodologies. There are a lot of areas in security where you need to pass laws, where the security externalities are such that the market fails to provide adequate security. For example, software companies who sell insecure products are exploiting an externality just as much as chemical plants that dump waste into the river. But a bad law is worse than no law. A law requiring companies to secure personal data is good; a law specifying what technologies they should use to do so is not. Mandating software liabilities for software failures is good, detailing how is not. Legislate for the results you want and implement the appropriate penalties; let the market figure out how—that’s what markets are good at.

Three, broadly invest in research. Basic research is risky; it doesn’t always pay off. That’s why companies have stopped funding it. Bell Labs is gone because nobody could afford it after the AT&T breakup, but the root cause was a desire for higher efficiency and short-term profitability—not unreasonable in an unregulated business. Government research can be used to balance that by funding long-term research.

Spread those research dollars wide. Lately, most research money has been redirected through DARPA to near-term military-related projects; that’s not good. Keep the earmark-happy Congress from dictating how the money is spent. Let the NSF, NIH and other funding agencies decide how to spend the money and don’t try to micromanage. Give the national laboratories lots of freedom, too. Yes, some research will sound silly to a layman. But you can’t predict what will be useful for what, and if funding is really peer-reviewed, the average results will be much better. Compared to corporate tax breaks and other subsidies, this is chump change.

If our research capability is to remain vibrant, we need more science and math students with decent elementary and high school preparation. The declining interest is partly from the perception that scientists don’t get rich like lawyers and dentists and stockbrokers, but also because science isn’t valued in a country full of creationists. One way the president can help is by trusting scientific advisers and not overruling them for political reasons.

Oh, and get rid of those post-9/11 restrictions on student visas that are causing so many top students to do their graduate work in Canada, Europe and Asia instead of in the United States. Those restrictions will hurt us immensely in the long run.

Those are the three big ones; the rest is in the details. And it’s the details that matter. There are lots of serious issues that you’re going to have to tackle: data privacy, data sharing, data mining, government eavesdropping, government databases, use of Social Security numbers as identifiers, and so on. It’s not enough to get the broad policy goals right. You can have good intentions and enact a good law, and have the whole thing completely gutted by two sentences sneaked in during rulemaking by some lobbyist.

Security is both subtle and complex, and—unfortunately—doesn’t readily lend itself to normal legislative processes. You’re used to finding consensus, but security by consensus rarely works. On the internet, security standards are much worse when they’re developed by a consensus body, and much better when someone just does them. This doesn’t always work—a lot of crap security has come from companies that have “just done it”—but nothing but mediocre standards come from consensus bodies. The point is that you won’t get good security without pissing someone off: The information broker industry, the voting machine industry, the telcos. The normal legislative process makes it hard to get security right, which is why I don’t have much optimism about what you can get done.

And if you’re going to appoint a cyber security czar, you have to give him actual budgetary authority. Otherwise he won’t be able to get anything done, either.

This essay originally appeared on Wired.com.

Posted on August 12, 2008 at 6:36 AM

Cost/Benefit Analysis of Airline Security

This report, “Assessing the risks, costs and benefits of United States aviation security measures” by Mark Stewart and John Mueller, is excellent reading:

The United States Office of Management and Budget has recommended the use of cost-benefit assessment for all proposed federal regulations. Since 9/11 government agencies in Australia, United States, Canada, Europe and elsewhere have devoted much effort and expenditure to attempt to ensure that a 9/11 type attack involving hijacked aircraft is not repeated. This effort has come at considerable cost, running in excess of US$6 billion per year for the United States Transportation Security Administration (TSA) alone. In particular, significant expenditure has been dedicated to two aviation security measures aimed at preventing terrorists from hijacking and crashing an aircraft into buildings and other infrastructure: (i) Hardened cockpit doors and (ii) Federal Air Marshal Service. These two security measures cost the United States government and the airlines nearly $1 billion per year. This paper seeks to discover whether aviation security measures are cost-effective by considering their effectiveness, their cost and expected lives saved as a result of such expenditure. An assessment of the Federal Air Marshal Service suggests that the annual cost is $180 million per life saved. This is greatly in excess of the regulatory safety goal of $1-$10 million per life saved. As such, the air marshal program would seem to fail a cost-benefit analysis. In addition, the opportunity cost of these expenditures is considerable, and it is highly likely that far more lives would have been saved if the money had been invested instead in a wide range of more cost-effective risk mitigation programs. On the other hand, hardening of cockpit doors has an annual cost of only $800,000 per life saved, showing that this is a cost-effective security measure.

From the body:

Hardening cockpit doors has the highest risk reduction (16.67%) at lowest additional cost of $40 million. On the other hand, the Federal Air Marshal Service costs $900 million pa but reduces risk by only 1.67%. The Federal Air Marshal Service may be more cost-effective if it is able to show extra benefit over the cheaper measure of hardening cockpit doors. However, the Federal Air Marshal Service seems to have significantly less benefit which means that hardening cockpit doors is the more cost-effective measure.

Cost-benefit analysis is definitely the way to look at these security measures. It’s hard for people to do, because it requires putting a dollar value on a human life—something we can’t possibly do with our own. But as a society, it is something we do again and again: when we raise or lower speed limits, when we ban a certain pesticide, when we enact building codes. Insurance companies do it all the time. We do it implicitly, because we can’t talk about it explicitly. I think there is considerable value in talking about it.
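The core arithmetic behind the paper’s comparison is simple division: cost per life saved is annual cost divided by expected lives saved per year. The sketch below back-derives the implied lives-saved figures from the headline numbers quoted above, so treat those inputs as illustrative rather than as the paper’s own estimates.

```python
# Cost per life saved = annual cost / expected lives saved per year.
# Lives-saved inputs are back-derived from the quoted headline figures
# ($180M and $800k per life saved) for illustration only.

def cost_per_life_saved(annual_cost: float, lives_saved_per_year: float) -> float:
    return annual_cost / lives_saved_per_year

air_marshals = cost_per_life_saved(900e6, 5.0)    # ~$180M per life saved
cockpit_doors = cost_per_life_saved(40e6, 50.0)   # ~$800k per life saved

goal = 10e6  # upper end of the $1-$10 million regulatory safety range
print(f"Air marshals:  ${air_marshals:,.0f} per life saved")
print(f"Cockpit doors: ${cockpit_doors:,.0f} per life saved")
print("Air marshals within regulatory goal:", air_marshals <= goal)
print("Cockpit doors within regulatory goal:", cockpit_doors <= goal)
```

The asymmetry is stark: the air marshal program exceeds the regulatory goal by more than an order of magnitude, while hardened cockpit doors come in well under it.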

(Note the table on page 5 of the report, which lists the cost per lives saved for a variety of safety and security measures.)

The final paper will eventually be published in the Journal of Transportation Security. I never even knew there was such a thing.

EDITED TO ADD (8/13): New York Times op-ed on the subject.

Posted on July 21, 2008 at 5:53 AM

Homeland Security Cost-Benefit Analysis

This is an excellent paper by Ohio State political science professor John Mueller. Titled “The Quixotic Quest for Invulnerability: Assessing the Costs, Benefits, and Probabilities of Protecting the Homeland,” it lays out some common-sense premises and policy implications.

The premises:

1. The number of potential terrorist targets is essentially infinite.

2. The probability that any individual target will be attacked is essentially zero.

3. If one potential target happens to enjoy a degree of protection, the agile terrorist usually can readily move on to another one.

4. Most targets are “vulnerable” in that it is not very difficult to damage them, but invulnerable in that they can be rebuilt in fairly short order and at tolerable expense.

5. It is essentially impossible to make a very wide variety of potential terrorist targets invulnerable except by completely closing them down.

The policy implications:

1. Any protective policy should be compared to a “null case”: do nothing, and use the money saved to rebuild and to compensate any victims.

2. Abandon any effort to imagine a terrorist target list.

3. Consider negative effects of protection measures: not only direct cost, but inconvenience, enhancement of fear, negative economic impacts, reduction of liberties.

4. Consider the opportunity costs, the tradeoffs, of protection measures.

Here’s the abstract:

This paper attempts to set out some general parameters for coming to grips with a central homeland security concern: the effort to make potential targets invulnerable, or at least notably less vulnerable, to terrorist attack. It argues that protection makes sense only when protection is feasible for an entire class of potential targets and when the destruction of something in that target set would have quite large physical, economic, psychological, and/or political consequences. There are a very large number of potential targets where protection is essentially a waste of resources and a much more limited one where it may be effective.

The whole paper is worth reading.

Posted on July 17, 2008 at 6:43 AM

Daniel Solove on the New FISA Law

From his blog:

Future presidents can learn a lot from all this—do exactly what the Bush Administration did! If the law holds you back, don’t first go to Congress and try to work something out. Secretly violate that law, and then when you get caught, staunchly demand that Congress change the law to your liking and then immunize any company that might have illegally cooperated with you. That’s the lesson. You spit in Congress’s face, and they’ll give you what you want.

The past eight years have witnessed a dramatic expansion of Executive Branch power, with a rather anemic push-back from the Legislative and Judicial Branches. We have extensive surveillance on a mass scale by agencies with hardly any public scrutiny, operating mostly in secret, with very limited judicial oversight, and also with very minimal legislative oversight. Most citizens know little about what is going on, and it will be difficult for them to find out, since everything is kept so secret. Secrecy and accountability rarely go well together. The telecomm lawsuits were at least one way that citizens could demand some information and accountability, but now that avenue appears to be shut down significantly with the retroactive immunity grant. There appear to be fewer ways for the individual citizen or citizen advocacy groups to ensure accountability of the government in the context of national security.

That’s the direction we’re heading in—more surveillance, more systemic government monitoring and data mining, and minimal oversight and accountability—with most of the oversight being very general, not particularly rigorous, and nearly always secret—and with the public being almost completely shut out of the process. But don’t worry, you shouldn’t get too upset about all this. You probably won’t know much about it. They’ll keep the dirty details from you, because what you don’t know can’t hurt you.

Posted on July 14, 2008 at 12:08 PM

Pentagon Consulting Social Scientists on Security

This seems like a good idea:

Eager to embrace eggheads and ideas, the Pentagon has started an ambitious and unusual program to recruit social scientists and direct the nation’s brainpower to combating security threats like the Chinese military, Iraq, terrorism and religious fundamentalism.

The article talks a lot about potential conflicts of interest and such, and less on what sorts of insights the social scientists can offer. I think there is a lot of potential value here.

Posted on June 30, 2008 at 12:13 PM
