Entries Tagged "FBI"


Chameleon Weapons

You can’t detect them, because they look normal:

One type is the exact size and shape of a credit card, except that two of the edges are lethally sharp. It’s made of G10 laminate, an ultra-hard material normally employed for circuit boards. You need a diamond file to get an edge on it.

[…]

Another configuration is a stabbing weapon which is indistinguishable from a pen. This one is made from melamine fiber, and can sit snugly inside a Bic casing. You would only find out it was not the real thing if you tried to write with it. It’s sharpened with a blade edge at the tip which Defense Review describes as “scary sharp.”

Also:

The FBI’s extensive Guide to Concealable Weapons has 89 pages of weapons intended to get through security. These are generally variations of a knife blade concealed in a pen, a comb, or a cross — and most of them are pretty obvious on X-ray.

Posted on March 29, 2006 at 6:58 AM

Data Mining for Terrorists

In the post 9/11 world, there’s much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn’t tenable for that purpose. We’re not trading privacy for security; we’re giving up privacy and getting no security in return.

Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.

But TIA didn’t die. According to The National Journal, it just changed its name and moved inside the Defense Department.

This shouldn’t be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people’s personal information. This list didn’t include classified programs, like the NSA’s eavesdropping effort, or state-run programs like MATRIX.

The promise of data mining is compelling, and convinces many. But it’s wrong. We’re not going to find terrorist plots through systems like this, and we’re going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.

Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost — in money, liberties, etc. — then the system is a good one. If not, then you’d be better off spending that cost elsewhere.

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining’s success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern — purchase expensive luxury goods, purchase things that can be easily fenced, etc. — and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don’t even resent these phone calls — as long as they’re infrequent — so the cost is just a few minutes of operator time.

Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.

All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn’t one. A false negative is when the system misses an actual terrorist plot. Depending on how you “tune” your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.
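This tuning trade-off is easy to see with a toy detector. The sketch below is purely illustrative — the score distributions, thresholds, and event counts are invented for the example, not drawn from any real system:

```python
import random

random.seed(0)

# Toy detector: every event gets a "suspicion score." Innocent events
# score low on average, real plots score high, but the two
# distributions overlap -- which is what makes tuning a trade-off.
innocents = [random.gauss(0.3, 0.15) for _ in range(100_000)]
plots = [random.gauss(0.7, 0.15) for _ in range(10)]

def error_rates(threshold):
    """Count both error types at a given alarm threshold."""
    false_positives = sum(s >= threshold for s in innocents)
    false_negatives = sum(s < threshold for s in plots)
    return false_positives, false_negatives

# Lowering the threshold misses fewer plots but flags more innocents;
# raising it does the reverse. You can trade one error for the other,
# but you cannot drive both to zero.
for t in (0.4, 0.5, 0.6):
    fp, fn = error_rates(t)
    print(f"threshold={t}: {fp} false positives, {fn} missed plots")
```

The point of the sketch is structural: as long as the two score distributions overlap, no threshold eliminates both error types at once.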

To reduce both those numbers, you need a well-defined profile. And that’s a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it’s much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you’re looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it’s going to be hard to find anything useful.

Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% (10 million) cards are stolen and fraudulently used each year. Terrorism is different. There are trillions of connections between people and events — things that the data mining system will have to “look at” — and very few plots. This rarity makes even accurate identification systems useless.

Let’s look at some numbers. We’ll be optimistic. We’ll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).

Assume one trillion possible indicators to sift through: that’s about ten events — e-mails, phone calls, purchases, web surfings, whatever — per person in the U.S. per day. Also assume that 10 of them are actually terrorists plotting.

This unrealistically-accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you’re still chasing 2,750 false alarms per day — but that will inevitably raise your false negatives, and you’re going to miss some of those ten real plots.
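The arithmetic behind these figures is easy to check. A quick sketch (the variable names are mine, the assumptions are the ones stated above):

```python
# The essay's assumptions: one trillion indicators per year, ten real
# plots among them, a 1% false positive rate, a 0.1% false negative rate.
events_per_year = 1_000_000_000_000
real_plots = 10
false_positive_rate = 0.01       # 1 in 100 (99% accurate)
false_negative_rate = 0.001      # 1 in 1,000 (99.9% accurate)

false_alarms_per_year = events_per_year * false_positive_rate  # 10 billion
false_alarms_per_plot = false_alarms_per_year / real_plots     # 1 billion
false_alarms_per_day = false_alarms_per_year / 365             # ~27 million

# Even with an absurd 99.9999% false-positive accuracy:
heroic_per_day = events_per_year * 0.000001 / 365              # just under 2,750

print(f"{false_alarms_per_plot:,.0f} false alarms per real plot")
print(f"{false_alarms_per_day:,.0f} potential plots to investigate per day")
print(f"{heroic_per_day:,.0f} daily false alarms even at 99.9999% accuracy")
```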

This isn’t anything new. In statistics, it’s called the “base rate fallacy,” and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Terrorist attacks are similarly rare, so any “test” is going to result in an endless stream of false alarms.
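Applying Bayes’ theorem to the numbers above makes the fallacy concrete. The posterior probability below is my own calculation from the rates assumed earlier, not a figure from the essay:

```python
# P(real plot | system raises an alarm), via Bayes' theorem, using the
# essay's assumed rates: ten plots in a trillion events, 99.9% detection,
# 1% false positives.
base_rate = 10 / 1_000_000_000_000
true_positive_rate = 0.999           # 1 - false negative rate
false_positive_rate = 0.01

p_alarm = (true_positive_rate * base_rate
           + false_positive_rate * (1 - base_rate))
p_plot_given_alarm = true_positive_rate * base_rate / p_alarm

# Roughly one alarm in a billion corresponds to a real plot.
print(f"P(plot | alarm) = {p_plot_given_alarm:.1e}")
```

The 99%/99.9% accuracy figures look impressive in isolation; the tiny base rate is what makes the test useless, exactly as with rare diseases.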

This is exactly the sort of thing we saw with the NSA’s eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.

And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.

Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I’m more likely to be interested in. But these are all instances where the cost of false positives is low — a phone call from a Visa operator, or an uninteresting ad — and in systems that have value even if there is a high number of false negatives.

Finding terrorism plots is not a problem that lends itself to data mining. It’s a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.

This essay originally appeared on Wired.com.

Posted on March 9, 2006 at 7:44 AM

NSA Eavesdropping Yields Dead Ends

All of that extra-legal NSA eavesdropping resulted in a whole lot of dead ends.

In the anxious months after the Sept. 11 attacks, the National Security Agency began sending a steady stream of telephone numbers, e-mail addresses and names to the F.B.I. in search of terrorists. The stream soon became a flood, requiring hundreds of agents to check out thousands of tips a month.

But virtually all of them, current and former officials say, led to dead ends or innocent Americans.

Surely this can’t be a surprise to anyone? And as I’ve been arguing for years, programs like this divert resources from real investigations.

President Bush has characterized the eavesdropping program as a “vital tool” against terrorism; Vice President Dick Cheney has said it has saved “thousands of lives.”

But the results of the program look very different to some officials charged with tracking terrorism in the United States. More than a dozen current and former law enforcement and counterterrorism officials, including some in the small circle who knew of the secret program and how it played out at the F.B.I., said the torrent of tips led them to few potential terrorists inside the country they did not know of from other sources and diverted agents from counterterrorism work they viewed as more productive.

A lot of this article reads like a turf war between the NSA and the FBI, but the “inside baseball” aspects are interesting.

EDITED TO ADD (1/18): Jennifer Granick has written on the topic.

Posted on January 18, 2006 at 6:51 AM

More Erosion of Police Oversight in the U.S.

From EPIC:

Documents obtained by EPIC in a Freedom of Information Act lawsuit reveal FBI agents expressing frustration that the Office of Intelligence Policy and Review, an office that reviews FBI search requests, had not approved applications for orders under Section 215 of the Patriot Act. A subsequent memo refers to “recent changes” allowing the FBI to “bypass” the office. EPIC is expecting to receive further information about this matter.

Some background:

Under Section 215, the FBI must show only “relevance” to a foreign intelligence or terrorism investigation to obtain vast amounts of personal information. It is unclear why the Office of Intelligence Policy and Review did not approve these applications. The FBI has not revealed this information, nor did it explain whether other search methods had failed.

Remember, the issue here is not whether or not the FBI can engage in counterterrorism. The issue is the erosion of judicial oversight — the only check we have on police power. And this power grab is dangerous regardless of which party is in the White House at the moment.

Posted on December 16, 2005 at 10:03 AM

FBI to Approve All Software?

Sounds implausible, I know. But how else do you explain this FCC ruling (from September — I missed it until now):

The Federal Communications Commission thinks you have the right to use software on your computer only if the FBI approves.

No, really. In an obscure “policy” document released around 9 p.m. ET last Friday, the FCC announced this remarkable decision.

According to the three-page document, to preserve the openness that characterizes today’s Internet, “consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement.” Read the last seven words again.

The FCC didn’t offer much in the way of clarification. But the clearest reading of the pronouncement is that some unelected bureaucrats at the commission have decreed that Americans don’t have the right to use software such as Skype or PGPfone if it doesn’t support mandatory backdoors for wiretapping. (That interpretation was confirmed by an FCC spokesman on Monday, who asked not to be identified by name. Also, the announcement came at the same time as the FCC posted its wiretapping rules for Internet telephony.)

Posted on December 2, 2005 at 11:24 AM

Giving the U.S. Military the Power to Conduct Domestic Surveillance

More nonsense in the name of defending ourselves from terrorism:

The Defense Department has expanded its programs aimed at gathering and analyzing intelligence within the United States, creating new agencies, adding personnel and seeking additional legal authority for domestic security activities in the post-9/11 world.

The moves have taken place on several fronts. The White House is considering expanding the power of a little-known Pentagon agency called the Counterintelligence Field Activity, or CIFA, which was created three years ago. The proposal, made by a presidential commission, would transform CIFA from an office that coordinates Pentagon security efforts — including protecting military facilities from attack — to one that also has authority to investigate crimes within the United States such as treason, foreign or terrorist sabotage or even economic espionage.

The Pentagon has pushed legislation on Capitol Hill that would create an intelligence exception to the Privacy Act, allowing the FBI and others to share information gathered about U.S. citizens with the Pentagon, CIA and other intelligence agencies, as long as the data is deemed to be related to foreign intelligence. Backers say the measure is needed to strengthen investigations into terrorism or weapons of mass destruction.

The police and the military have fundamentally different missions. The police protect citizens. The military attacks the enemy. When you start giving police powers to the military, citizens start looking like the enemy.

We gain a lot of security because we separate the functions of the police and the military, and we will all be much less safe if we allow those functions to blur. This kind of thing worries me far more than terrorist threats.

Posted on November 28, 2005 at 2:11 PM

Surveillance and Oversight

Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on New Year’s Eve. In the absence of any real evidence, the FBI tried to compile a real-time database of everyone who was visiting the city. It collected customer data from airlines, hotels, casinos, rental car companies, even storage locker rental companies. All this information went into a massive database — probably close to a million people overall — that the FBI’s computers analyzed, looking for links to known terrorists. Of course, no terrorist attack occurred and no plot was discovered: The intelligence was wrong.

A typical American citizen spending the holidays in Vegas might be surprised to learn that the FBI collected his personal data, but this kind of thing is increasingly common. Since 9/11, the FBI has been collecting all sorts of personal information on ordinary Americans, and it shows no signs of letting up.

The FBI has two basic tools for gathering information on large groups of Americans. Both were created in the 1970s to gather information solely on foreign terrorists and spies. Both were greatly expanded by the USA Patriot Act and other laws, and are now routinely used against ordinary, law-abiding Americans who have no connection to terrorism. Together, they represent an enormous increase in police power in the United States.

The first are FISA warrants (sometimes called Section 215 warrants, after the section of the Patriot Act that expanded their scope). These are issued in secret, by a secret court. The second are national security letters, less well known but much more powerful, and which FBI field supervisors can issue all by themselves. The exact numbers are secret, but a recent Washington Post article estimated that 30,000 letters each year demand telephone records, banking data, customer data, library records, and so on.

In both cases, the recipients of these orders are prohibited by law from disclosing the fact that they received them. And two years ago, Attorney General John Ashcroft rescinded a 1995 guideline that this information be destroyed if it is not relevant to whatever investigation it was collected for. Now, it can be saved indefinitely, and disseminated freely.

September 2005, Rotterdam. The police had already identified some of the 250 suspects in a soccer riot from the previous April, but most of the rioters, though captured on video, remained unidentified. In an effort to help, the police sent text messages to 17,000 phones known to have been in the vicinity of the riots, asking that anyone with information contact them. The result was more evidence, and more arrests.

The differences between the Rotterdam and Las Vegas incidents are instructive. The Rotterdam police needed specific data for a specific purpose. Its members worked with federal justice officials to ensure that they complied with the country’s strict privacy laws. They obtained the phone numbers without any names attached, and deleted them immediately after sending the single text message. And their actions were public, widely reported in the press.

On the other hand, the FBI has no judicial oversight. With only a vague hint that a Las Vegas attack might occur, the bureau vacuumed up an enormous amount of information. First its members tried asking for the data; then they turned to national security letters and, in some cases, subpoenas. There was no requirement to delete the data, and there is every reason to believe that the FBI still has it all. And the bureau worked in secret; the only reason we know this happened is that the operation leaked.

These differences illustrate four principles that should guide the use of personal information by the police. The first is oversight: In order to obtain personal information, the police should be required to show probable cause, and convince a judge to issue a warrant for the specific information needed. The second is minimization: The police should get only the specific information they need, and no more. Nor should they be allowed to collect large blocks of information in order to go on “fishing expeditions,” looking for suspicious behavior. The third is transparency: The public should know, if not immediately then eventually, what information the police are getting and how it is being used. And the fourth is destruction: Any data the police obtain should be destroyed immediately after its court-authorized purpose is achieved. The police should not be able to hold on to it, just in case it might become useful at some future date.

This isn’t about our ability to combat terrorism; it’s about police power. Traditional law already gives police enormous power to peer into the personal lives of people, to use new crime-fighting technologies, and to correlate that information. But unfettered police power quickly resembles a police state, and checks on that power make us all safer.

As more of our lives become digital, we leave an ever-widening audit trail in our wake. This information has enormous social value — not just for national security and law enforcement, but for purposes as mundane as using cell-phone data to track road congestion, and as important as using medical data to track the spread of diseases. Our challenge is to make this information available when and where it needs to be, but also to protect the principles of privacy and liberty our country is built on.

This essay originally appeared in the Minneapolis Star-Tribune.

Posted on November 22, 2005 at 6:06 AM

Reminiscences of a 75-Year-Old Jewel Thief

The amazing story of Doris Payne:

Never did she grab the jewels and run. That wasn’t her way. Instead, she glided in, engaged the clerk in one of her stories, confused them and easily slipped away with a diamond ring, usually to a waiting taxi cab.

Don’t think that she never got caught:

She wasn’t always so lucky. She’s been arrested more times than she can remember. One detective said her arrest report is more than 6 feet long — she’s done time in Ohio, Kentucky, West Virginia, Colorado and Wisconsin. Still, the arrests are really “just the tip of the iceberg,” said FBI supervisory special agent Paul G. Graupmann.

Posted on November 21, 2005 at 3:00 PM

The FBI is Spying on Us

From TalkLeft:

The Washington Post reports that the FBI has been obtaining and reviewing records of ordinary Americans in the name of the war on terror through the use of national security letters that gag the recipients.

Merritt’s entire post is worth reading.

The closing:

The ACLU has been actively litigating the legality of the National Security Letters. Their latest press release is here.

Also, the ACLU is less critical than I am of activity taking place in Congress now where conferees of the Senate and House are working out a compromise version of Patriot Act extension legislation that will resolve differences in versions passed by each in the last Congress. The ACLU reports that the Senate version contains some modest improvements respecting your privacy rights while the House version contains further intrusions. There is still time to contact the conferees. The ACLU provides more information and a sample letter here.

History shows that once new power is granted to the government, it rarely gives it back. Even if you wouldn’t recognize a terrorist if he were standing in front of you, let alone consort with one, now is the time to raise your voice.

EDITED TO ADD: Here’s a good personal story of someone’s FBI file.

EDITED TO ADD: Several people have written to tell me that the CapitolHillBlue website, above, is not reliable. I don’t know one way or the other, but consider yourself warned.

Posted on November 7, 2005 at 3:13 PM
