Entries Tagged "national security policy"

No Funding for Homeland Security

Really interesting article by Robert X. Cringely on the lack of federal funding for security technologies.

After the 9-11 terrorist attacks, the United States threw its considerable fortune into the War on Terror, of which a large component was Homeland Security. We conducted a couple wars abroad, both of which still seem to be going on, and took a vast domestic security bureaucracy and turned it into a different and even more vast domestic security bureaucracy. We could argue all day about whether or not America is more secure as a result of these changes, but we’d all agree that a lot of money has been spent. In fact, from a pragmatic point of view, ALL the money has been spent, and that’s the point of this particular column. For a variety of reasons, there is no money left to spend on homeland security: none, nada, zilch. We’re busted.

I think his assessment is spot on.

Posted on March 21, 2006 at 12:39 PM

Data Mining for Terrorists

In the post 9/11 world, there’s much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn’t tenable for that purpose. We’re not trading privacy for security; we’re giving up privacy and getting no security in return.

Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.

But TIA didn’t die. According to The National Journal, it just changed its name and moved inside the Defense Department.

This shouldn’t be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people’s personal information. This list didn’t include classified programs, like the NSA’s eavesdropping effort, or state-run programs like MATRIX.

The promise of data mining is compelling, and convinces many. But it’s wrong. We’re not going to find terrorist plots through systems like this, and we’re going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.

Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost — in money, liberties, etc. — then the system is a good one. If not, then you’d be better off spending that cost elsewhere.

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining’s success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern — purchase expensive luxury goods, purchase things that can be easily fenced, etc. — and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don’t even resent these phone calls — as long as they’re infrequent — so the cost is just a few minutes of operator time.

Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.

All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn’t one. A false negative is when the system misses an actual terrorist plot. Depending on how you “tune” your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.
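To make that tuning knob concrete, here is a minimal, self-contained sketch. The scores, distributions, and thresholds are hypothetical numbers of my choosing, not anything from this essay: a detector assigns each event a suspicion score, and a single threshold decides what gets flagged. Watch the false positives fall and the missed plots rise as the threshold goes up.

```python
# Hypothetical sketch of the tuning trade-off. A detector scores every
# event; one threshold decides what gets flagged. Lowering the threshold
# misses fewer real plots (fewer false negatives) but flags more innocent
# events (more false positives); raising it does the reverse.
import random

random.seed(0)

# Toy data: innocent events score low on average, real plots score high.
innocent = [random.gauss(0.3, 0.15) for _ in range(100_000)]
plots = [random.gauss(0.8, 0.15) for _ in range(10)]

for threshold in (0.5, 0.6, 0.7, 0.8):
    false_positives = sum(s >= threshold for s in innocent)
    false_negatives = sum(s < threshold for s in plots)
    print(f"threshold {threshold:.1f}: "
          f"{false_positives:6d} false positives, "
          f"{false_negatives} of {len(plots)} plots missed")
```

No threshold makes both numbers small at once; that is the trade-off the next few paragraphs quantify.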

To reduce both those numbers, you need a well-defined profile. And that’s a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it’s much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you’re looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it’s going to be hard to find anything useful.

Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% (10 million) of them are stolen and used fraudulently each year. Terrorism is different. There are trillions of connections between people and events — things that the data mining system will have to “look at” — and very few plots. This rarity makes even accurate identification systems useless.

Let’s look at some numbers. We’ll be optimistic. We’ll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).

Assume one trillion possible indicators to sift through: that’s about ten events — e-mails, phone calls, purchases, web surfings, whatever — per person in the U.S. per day. Also assume that 10 of them actually point to terrorist plots.

This unrealistically accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you’re still chasing 2,750 false alarms per day — but that will inevitably raise your false negatives, and you’re going to miss some of those ten real plots.
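For concreteness, that arithmetic can be checked with a few lines of Python. The constants below are exactly the assumptions above — one trillion events, ten of them real, and the two accuracy figures:

```python
# The essay's back-of-the-envelope numbers, checked in code.
EVENTS_PER_YEAR = 1_000_000_000_000  # one trillion indicators to sift
REAL_PLOT_EVENTS = 10                # ten of them tied to actual plots

def alarms(false_positive_rate, false_negative_rate):
    false_alarms = (EVENTS_PER_YEAR - REAL_PLOT_EVENTS) * false_positive_rate
    plots_caught = REAL_PLOT_EVENTS * (1 - false_negative_rate)
    print(f"FP rate {false_positive_rate:.6%}: "
          f"{false_alarms / 365:,.0f} false alarms/day, "
          f"{false_alarms / plots_caught:,.0f} per real plot caught")

alarms(0.01, 0.001)      # the "optimistic" 99%-accurate system
alarms(0.000001, 0.001)  # the "absurd" 99.9999%-accurate system
```

The first line prints roughly 27 million false alarms per day and a billion per plot caught; the second still prints about 2,700 per day.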

This isn’t anything new. In statistics, it’s called the “base rate fallacy,” and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Terrorist attacks are also rare, so any “test” is going to result in an endless stream of false alarms.
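The medical version of the fallacy works out the same way. A quick Bayes’ theorem sketch, with illustrative numbers of my choosing (a 99%-accurate test, a disease that 1 in 10,000 people have), shows that a positive result still means the patient is almost certainly healthy:

```python
# Base rate fallacy, medical version. Illustrative numbers, not from the
# essay: a 99%-accurate test for a disease with 1-in-10,000 prevalence.
prevalence = 0.0001
sensitivity = 0.99   # P(positive | sick)
specificity = 0.99   # P(negative | healthy)

p_positive = (prevalence * sensitivity
              + (1 - prevalence) * (1 - specificity))
p_sick_given_positive = prevalence * sensitivity / p_positive
print(f"P(sick | positive test) = {p_sick_given_positive:.1%}")  # ~1.0%
```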

This is exactly the sort of thing we saw with the NSA’s eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.

And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.

Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I’m more likely to be interested in. But these are all instances where the cost of false positives is low — a phone call from a Visa operator, or an uninteresting ad — and in systems that have value even if there is a high number of false negatives.

Finding terrorism plots is not a problem that lends itself to data mining. It’s a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.

This essay originally appeared on Wired.com.

Posted on March 9, 2006 at 7:44 AM

Fighting Misuse of the Patriot Act

I like this idea:

I had to sign a tedious business contract the other day. They wanted my corporation number — fair enough — plus my Social Security number — well, if you insist — and also my driver’s license number — hang on, what’s the deal with that?

Well, we e-mailed over a query and they e-mailed back that it was a requirement of the Patriot Act. So we asked where exactly in the Patriot Act could this particular requirement be found and, after a bit of a delay, we got an answer.

And on discovering that there was no mention of driver’s licenses in that particular subsection, I wrote back that we have a policy of reporting all erroneous invocations of the Patriot Act to the Department of Homeland Security on the grounds that such invocations weaken the rationale for the act, and thereby undermine public support for genuine anti-terrorism measures and thus constitute a threat to America’s national security.

And about 10 minutes after that the guy sent back an e-mail saying he didn’t need the driver’s license number after all.

Posted on March 8, 2006 at 7:17 AM

U.S. Port Security and Proxies

My twelfth essay for Wired.com is about U.S. port security, and more generally about trust and proxies:

Pull aside the rhetoric, and this is everyone’s point. There are those who don’t trust the Bush administration and believe its motivations are political. There are those who don’t trust the UAE because of its terrorist ties — two of the 9/11 terrorists and some of the funding for the attack came out of that country — and those who don’t trust it because of racial prejudices. There are those who don’t trust security at our nation’s ports generally and see this as just another example of the problem.

The solution is openness. The Bush administration needs to better explain how port security works, and the decision process by which the sale of P&O was approved. If this deal doesn’t compromise security, voters — at least the particular lawmakers we trust — need to understand that.

Regardless of the outcome of the Dubai deal, we need more transparency in how our government approaches counter-terrorism in general. Secrecy simply isn’t serving our nation well in this case. It’s not making us safer, and it’s properly reducing faith in our government.

Proxies are a natural outgrowth of society, an inevitable byproduct of specialization. But our proxies are not us and they have different motivations — they simply won’t make the same security decisions as we would. Whether a king is hiring mercenaries, an organization is hiring a network security company or a person is asking some guy to watch his bags while he gets a drink of water, successful security proxies are based on trust. And when it comes to government, trust comes through transparency and openness.

Posted on February 23, 2006 at 7:07 AM

DHS Funding Open Source Security

From eWeek:

The U.S. government’s Department of Homeland Security plans to spend $1.24 million over three years to fund an ambitious software auditing project aimed at beefing up the security and reliability of several widely deployed open-source products.

The grant, called the “Vulnerability Discovery and Remediation Open Source Hardening Project,” is part of a broad federal initiative to perform daily security audits of approximately 40 open-source software packages, including Linux, Apache, MySQL and Sendmail.

The plan is to use source code analysis technology from San Francisco-based Coverity Inc. to pinpoint and correct security vulnerabilities and other potentially dangerous defects in key open-source packages.

Software engineers at Stanford University will manage the project and maintain a publicly available database of bugs and defects.

Anti-virus vendor Symantec Corp. is providing guidance as to where security gaps might be in certain open-source projects.

I think this is a great use of public funds. One of the limitations of open-source development is that it’s hard to fund tools like Coverity. This kind of thing improves security for a lot of different organizations against a wide variety of threats, and it increases competition with Microsoft, which will force it to improve its OS as well. Everybody wins.

What’s affected?

In addition to Linux, Apache, MySQL and Sendmail, the project will also pore over the code bases for FreeBSD, Mozilla, PostgreSQL and the GTK (GIMP Tool Kit) library.

And from ZDNet:

The list of open-source projects that Stanford and Coverity plan to check for security bugs includes Apache, BIND, Ethereal, KDE, Linux, Firefox, FreeBSD, OpenBSD, OpenSSL and MySQL, Coverity said.

Posted on January 17, 2006 at 1:04 PM

How Much High Explosive Does Any One Person Need?

Four hundred pounds:

The stolen goods include 150 pounds of C-4 plastic explosive and 250 pounds of thin sheets of explosives that could be used in letter bombs. Also, 2,500 detonators were missing from an explosive storage container, or magazine, in a bunker owned by Cherry Engineering.

The theft was professional:

Thieves apparently used blowtorches to cut through the storage trailers — suggesting they knew what they were after.

Most likely it’s a criminal who will resell the stuff, though it could be a terrorist organization.

By the way, this is in America…

The material was taken from Cherry Engineering, a company owned by Chris Cherry, a scientist at Sandia National Labs.

…where security is an afterthought:

The site, located outside Albuquerque, had no guards and no surveillance cameras.

Or maybe not even an afterthought:

It was the site’s second theft in the past two years.

If anyone is looking for something to spend national security money on that will actually make us safer, securing high-explosive-filled trailers would be high on my list.

EDITED TO ADD (12/29): The explosives were recovered.

Posted on December 20, 2005 at 2:20 PM

Limitations on Police Power Shouldn't Be a Partisan Issue

In response to my op-ed last week, the Minneapolis Star Tribune published this letter:

THE PATRIOT ACT

Where are the abuses?

The Nov. 22 commentary “The erosion of freedom” is yet another example of how liberal hysteria is conspicuously light on details.

While the Patriot Act may allow for potential abuses of power, flaws undoubtedly to be fine-tuned over time, the “erosion of freedom” it may foster absolutely pales in comparison to the freedom it is designed to protect in the new age of global terrorism.

I have yet to read of one incident of infringement of any private citizen’s rights as a direct result of the Patriot Act — nor does this commentary point out any, either.

While I’m a firm believer in the Fourth Amendment, I also want our law enforcement to have the legal tools necessary, unfettered by restrictions to counter liberals’ paranoid fixation on “fascism,” in order to combat the threat that terrorism has on all our freedoms.

I have enough trust in our free democratic society and the coequal branches of government that we won’t evolve into a sinister “police state,” as ominously predicted by this commentary.

CHRIS GARDNER, MINNEAPOLIS

Two things strike me in this letter. The first is his “I have yet to read of one incident of infringement of any private citizen’s rights as a direct result of the Patriot Act….” line. It’s just odd. A simple Googling of “patriot act abuses” comes up with almost 3 million hits, many of them pretty extensive descriptions of Patriot Act abuses. Now, he could decide that none of them are abuses. He could choose not to believe any of them are true. He could choose to believe, as he seems to, that it’s all in some liberal fantasy. But to simply not even bother reading about them… isn’t he just admitting that he’s not qualified to have an opinion on the matter? (There’s also that “direct result” weaseling; I’m not sure what to make of that either. Are infringements that are an indirect result of the Patriot Act somehow better?)

I suppose that’s just being petty, though.

The more important thing that strikes me is how partisan he is. He writes about “liberal hysteria” and “liberals’ paranoid fixation on ‘fascism.'” In his last paragraph, he writes about his trust in government.

Most laws don’t matter when we all trust each other. Contracts are rarely if ever looked at if the parties trust each other. The whole point of laws and contracts is to protect us when the parties don’t trust each other. It’s not enough that this guy, and everyone else with this opinion, trusts the Bush government to judiciously balance his rights with the need to fight global terrorism. This guy has to believe that when the Democrats are in power that his rights are just as protected: that he is just as secure against police and government abuse.

Because that’s how you should think about laws, contracts, and government power. When reading through a contract, don’t think about how much you like the other person who’s signing it; imagine how the contract will protect you if you become enemies. When thinking about a law, imagine how it will protect you when your worst nightmare — Hillary Clinton as President, Janet Reno as Attorney General, Howard Dean as something-or-other, and a Democratic Senate and House — is in power.

Laws and contracts are not written for one political party, or for one side. They’re written for everybody. History teaches us this lesson again and again. In the United States, the Bill of Rights was opposed on the grounds that it wasn’t necessary; the Alien and Sedition Acts of 1798 proved that it was, only nine years later.

It makes no sense to me that this is a partisan issue.

Posted on December 2, 2005 at 6:11 AM
