Entries Tagged "FBI"


Applying CALEA to VoIP

“Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice over IP,” a paper by Steve Bellovin, Matt Blaze, Ernie Brickell, Clint Brooks, Vint Cerf, Whit Diffie, Susan Landau, Jon Peterson, and John Treichler.

Executive Summary

For many people, Voice over Internet Protocol (VoIP) looks like a nimble way of using a computer to make phone calls. Download the software, pick an identifier and then wherever there is an Internet connection, you can make a phone call. From this perspective, it makes perfect sense that anything that can be done with a telephone, including the graceful accommodation of wiretapping, should be able to be done readily with VoIP as well.

The FCC has issued an order requiring all “interconnected” and all broadband access VoIP services to comply with the Communications Assistance for Law Enforcement Act (CALEA)—without specific regulations on what compliance would mean. The FBI has suggested that CALEA should apply to all forms of VoIP, regardless of the technology involved in the VoIP implementation.

Intercepting a VoIP call made from a fixed location with a fixed IP address directly to a large Internet provider’s access router is equivalent to wiretapping a normal phone call, and classical PSTN-style CALEA concepts can be applied directly. In fact, the intercept capabilities can be exactly the same in the VoIP case if the ISP properly secures its infrastructure and wiretap control process, as the PSTN’s central offices are assumed to do.

However, the network architectures of the Internet and the Public Switched Telephone Network (PSTN) are substantially different, and these differences lead to security risks in applying CALEA to VoIP. VoIP, like most Internet communication, is designed for a mobile environment. The feasibility of applying CALEA to more decentralized VoIP services is quite problematic: it is not clear that such a wiretapping regime could be managed, or that it could be made secure against subversion. The real danger is that a CALEA-type regime is likely to introduce serious vulnerabilities through its “architected security breach.”

Potential problems include the difficulty of determining where the traffic is coming from (the VoIP provider enables the connection but may not provide the services for the actual conversation), the difficulty of ensuring safe transport of the signals to the law-enforcement facility, the risk of introducing new vulnerabilities into Internet communications, and the difficulty of ensuring proper minimization. VoIP implementations vary substantially across the Internet, making it impossible to implement CALEA uniformly. Mobility and the ease of creating new identities on the Internet exacerbate the problem.

Building a comprehensive VoIP intercept capability into the Internet appears to require the cooperation of a very large portion of the routing infrastructure, and the fact that packets are carrying voice is largely irrelevant. Indeed, most of the provisions of the wiretap law do not distinguish among different types of electronic communications. Currently the FBI is focused on applying CALEA’s design mandates to VoIP, but there is nothing in wiretapping law that would argue against the extension of intercept design mandates to all types of Internet communications. Indeed, the changes necessary to meet CALEA requirements for VoIP would likely have to be implemented in a way that covered all forms of Internet communication.

In order to extend authorized interception much beyond the easy scenario, it is necessary either to eliminate the flexibility that Internet communications allow, or else introduce serious security risks to domestic VoIP implementations. The former would have significant negative effects on U.S. ability to innovate, while the latter is simply dangerous. The current FBI and FCC direction on CALEA applied to VoIP carries great risks.

Posted on June 28, 2006 at 12:01 PM

Lying to Government Agents

“How to Avoid Going to Jail under 18 U.S.C. Section 1001 for Lying to Government Agents”

Title 18, United States Code, Section 1001 makes it a crime to: 1) knowingly and willfully; 2) make any materially false, fictitious or fraudulent statement or representation; 3) in any matter within the jurisdiction of the executive, legislative or judicial branch of the United States. Your lie does not even have to be made directly to an employee of the national government as long as it is “within the jurisdiction” of the ever-expanding federal bureaucracy. Though the falsehood must be “material,” this requirement is met if the statement has the “natural tendency to influence, or [is] capable of influencing, the decision of the decisionmaking body to which it is addressed.” United States v. Gaudin, 515 U.S. 506, 510 (1995). (In other words, it is not necessary to show that your particular lie ever really influenced anyone.) Although you must know that your statement is false at the time you make it in order to be guilty of this crime, you do not have to know that lying to the government is a crime or even that the matter you are lying about is “within the jurisdiction” of a government agency. United States v. Yermian, 468 U.S. 63, 69 (1984). For example, if you lie to your employer on your time and attendance records and, unbeknownst to you, he submits your records, along with those of other employees, to the federal government pursuant to some regulatory duty, you could be criminally liable.

Posted on June 5, 2006 at 1:24 PM

Man Sues Compaq for False Advertising

Convicted felon Michael Crooker is suing Compaq (now HP) for false advertising. He bought a computer promised to be secure, but the FBI got his data anyway:

He bought it in September 2002, expressly because it had a feature called DriveLock, which freezes up the hard drive if you don’t have the proper password.

The computer’s manual claims that “if one were to lose his Master Password and his User Password, then the hard drive is useless and the data cannot be resurrected even by Compaq’s headquarters staff,” Crooker wrote in the suit.

Crooker has a copy of an ATF search warrant for files on the computer, which includes a handwritten notation: “Computer lock not able to be broken/disabled. Computer forwarded to FBI lab.” Crooker says he refused to give investigators the password, and was told the computer would be broken into “through a backdoor provided by Compaq,” which is now part of HP.

It’s unclear what was done with the laptop, but Crooker says a subsequent search warrant for his e-mail account, issued in January 2005, showed investigators had somehow gained access to his 40 gigabyte hard drive. The FBI had broken through DriveLock and accessed his e-mails (both deleted and not) as well as lists of websites he’d visited and other information. The only files they couldn’t read were ones he’d encrypted using Wexcrypt, a software program freely available on the Internet.

I think this is great. It’s about time that computer companies were held liable for their advertising claims.

But his lawsuit against HP may be a long shot. Crooker appears to face strong counterarguments to his claim that HP is guilty of breach of contract, especially if the FBI made the company provide a backdoor.

“If they had a warrant, then I don’t see how his case has any merit at all,” said Steven Certilman, a Stamford attorney who heads the Technology Law section of the Connecticut Bar Association. “Whatever means they used, if it’s covered by the warrant, it’s legitimate.”

If HP claimed DriveLock was unbreakable when the company knew it was not, that might be a kind of false advertising.

But while documents on HP’s web site do claim that without the correct passwords, a DriveLock’ed hard drive is “permanently unusable,” such warnings may not constitute actual legal guarantees.

According to Certilman and other computer security experts, hardware and software makers are careful not to make themselves liable for the performance of their products.

“I haven’t heard of manufacturers, at least for the consumer market, making a promise of computer security. Usually you buy naked hardware and you’re on your own,” Certilman said. In general, computer warranties are “limited only to replacement and repair of the component, and not to incidental consequential damages such as the exposure of the underlying data to snooping third parties,” he said. “So I would be quite surprised if there were a gaping hole in their warranty that would allow that kind of claim.”

That point meets with agreement from the noted computer security skeptic Bruce Schneier, the chief technology officer at Counterpane Internet Security in Mountain View, Calif.

“I mean, the computer industry promises nothing,” he said last week. “Did you ever read a shrink-wrapped license agreement? You should read one. It basically says, if this product deliberately kills your children, and we knew it would, and we decided not to tell you because it might harm sales, we’re not liable. I mean, it says stuff like that. They’re absurd documents. You have no rights.”

My final quote in the article:

“Unfortunately, this probably isn’t a great case,” Schneier said. “Here’s a man who’s not going to get much sympathy. You want a defendant who bought the Compaq computer, and then, you know, his competitor, or a rogue employee, or someone who broke into his office, got the data. That’s a much more sympathetic defendant.”

Posted on May 3, 2006 at 9:26 AM

Chameleon Weapons

You can’t detect them, because they look normal:

One type is the exact size and shape of a credit card, except that two of the edges are lethally sharp. It’s made of G10 laminate, an ultra-hard material normally employed for circuit boards. You need a diamond file to get an edge on it.

[…]

Another configuration is a stabbing weapon which is indistinguishable from a pen. This one is made from melamine fiber, and can sit snugly inside a Bic casing. You would only find out it was not the real thing if you tried to write with it. It’s sharpened with a blade edge at the tip which Defense Review describes as “scary sharp.”

Also:

The FBI’s extensive Guide to Concealable Weapons has 89 pages of weapons intended to get through security. These are generally variations of a knife blade concealed in a pen, comb, or cross—and most of them are pretty obvious on X-ray.

Posted on March 29, 2006 at 6:58 AM

Data Mining for Terrorists

In the post 9/11 world, there’s much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn’t tenable for that purpose. We’re not trading privacy for security; we’re giving up privacy and getting no security in return.

Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.

But TIA didn’t die. According to The National Journal, it just changed its name and moved inside the Defense Department.

This shouldn’t be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people’s personal information. This list didn’t include classified programs, like the NSA’s eavesdropping effort, or state-run programs like MATRIX.

The promise of data mining is compelling, and convinces many. But it’s wrong. We’re not going to find terrorist plots through systems like this, and we’re going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.

Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost—in money, liberties, etc.—then the system is a good one. If not, then you’d be better off spending that cost elsewhere.
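To make the structure of that trade-off concrete, here is a back-of-the-envelope sketch in Python. Every number in it is a made-up placeholder, chosen only to show the shape of the comparison, not an estimate of anything real:

    # Security trade-off as an expected-value comparison: the system is
    # worthwhile only if the expected benefit of the attacks it stops
    # exceeds the expected cost of running it. All numbers are placeholders.

    attacks_stopped_per_year = 0.5      # hypothetical
    benefit_per_attack_stopped = 1e9    # hypothetical dollar value
    false_alarms_per_year = 1e7         # hypothetical
    cost_per_false_alarm = 1e3          # hypothetical cost of chasing one tip

    benefit = attacks_stopped_per_year * benefit_per_attack_stopped
    cost = false_alarms_per_year * cost_per_false_alarm

    print("worth it" if benefit > cost else "spend the money elsewhere")

Note that the dollar figure for false alarms understates the real cost, since it leaves out the harder-to-quantify cost in liberties.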

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining’s success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern—purchase expensive luxury goods, purchase things that can be easily fenced, etc.—and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don’t even resent these phone calls—as long as they’re infrequent—so the cost is just a few minutes of operator time.

Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.

All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn’t one. A false negative is when the system misses an actual terrorist plot. Depending on how you “tune” your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.

To reduce both those numbers, you need a well-defined profile. And that’s a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it’s much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you’re looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it’s going to be hard to find anything useful.

Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% (10 million) cards are stolen and fraudulently used each year. Terrorism is different. There are trillions of connections between people and events—things that the data mining system will have to “look at”—and very few plots. This rarity makes even accurate identification systems useless.

Let’s look at some numbers. We’ll be optimistic. We’ll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).

Assume one trillion possible indicators to sift through: that’s about ten events—e-mails, phone calls, purchases, web surfings, whatever—per person in the U.S. per day. Also assume that 10 of them are actually terrorists plotting.

This unrealistically-accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you’re still chasing 2,750 false alarms per day—but that will inevitably raise your false negatives, and you’re going to miss some of those ten real plots.
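Readers who want to check that arithmetic can do it in a few lines of Python. This is just the essay’s hypothetical rates plugged in, not data about any real system:

    # The essay's hypothetical numbers: a trillion events per year,
    # ten real plots, 1% false positives, 0.1% false negatives.
    events_per_year = 10**12
    real_plots = 10
    false_positive_rate = 0.01
    false_negative_rate = 0.001

    false_alarms = (events_per_year - real_plots) * false_positive_rate
    plots_found = real_plots * (1 - false_negative_rate)

    print(false_alarms / plots_found)    # ~1 billion false alarms per plot found
    print(false_alarms / 365)            # ~27 million false alarms per day
    print(events_per_year * 1e-6 / 365)  # ~2,740 per day even at 99.9999% accuracy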

This isn’t anything new. In statistics, it’s called the “base rate fallacy,” and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Terrorist attacks are similarly rare, so any “test” is going to result in an endless stream of false alarms.
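Bayes’ theorem makes the fallacy explicit: the same detector accuracy gives wildly different results depending on how common the thing you’re looking for is. A short sketch, again using the essay’s hypothetical rates (and, purely for illustration, treating credit card fraud as a 1% base rate per check):

    # P(real | flagged) by Bayes' theorem, for two very different base rates.
    def posterior(base_rate, hit_rate, false_alarm_rate):
        p_flag = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
        return hit_rate * base_rate / p_flag

    # Credit card fraud, illustrative 1% base rate:
    print(posterior(0.01, 0.999, 0.01))   # ~0.5: half the flags are worth a phone call

    # Terrorist plots: 10 real plots among a trillion events:
    print(posterior(1e-11, 0.999, 0.01))  # ~1e-9: one flag in a billion is real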

This is exactly the sort of thing we saw with the NSA’s eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.

And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.

Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I’m more likely to be interested in. But these are all instances where the cost of false positives is low—a phone call from a Visa operator, or an uninteresting ad—and in systems that have value even if there is a high number of false negatives.

Finding terrorism plots is not a problem that lends itself to data mining. It’s a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.

This essay originally appeared on Wired.com.

Posted on March 9, 2006 at 7:44 AM

NSA Eavesdropping Yields Dead Ends

All of that extra-legal NSA eavesdropping resulted in a whole lot of dead ends.

In the anxious months after the Sept. 11 attacks, the National Security Agency began sending a steady stream of telephone numbers, e-mail addresses and names to the F.B.I. in search of terrorists. The stream soon became a flood, requiring hundreds of agents to check out thousands of tips a month.

But virtually all of them, current and former officials say, led to dead ends or innocent Americans.

Surely this can’t be a surprise to anyone? And as I’ve been arguing for years, programs like this divert resources from real investigations.

President Bush has characterized the eavesdropping program as a “vital tool” against terrorism; Vice President Dick Cheney has said it has saved “thousands of lives.”

But the results of the program look very different to some officials charged with tracking terrorism in the United States. More than a dozen current and former law enforcement and counterterrorism officials, including some in the small circle who knew of the secret program and how it played out at the F.B.I., said the torrent of tips led them to few potential terrorists inside the country they did not know of from other sources and diverted agents from counterterrorism work they viewed as more productive.

A lot of this article reads like a turf war between the NSA and the FBI, but the “inside baseball” aspects are interesting.

EDITED TO ADD (1/18): Jennifer Granick has written on the topic.

Posted on January 18, 2006 at 6:51 AM

More Erosion of Police Oversight in the U.S.

From EPIC:

Documents obtained by EPIC in a Freedom of Information Act lawsuit reveal FBI agents expressing frustration that the Office of Intelligence Policy and Review, an office that reviews FBI search requests, had not approved applications for orders under Section 215 of the Patriot Act. A subsequent memo refers to “recent changes” allowing the FBI to “bypass” the office. EPIC is expecting to receive further information about this matter.

Some background:

Under Section 215, the FBI must show only “relevance” to a foreign intelligence or terrorism investigation to obtain vast amounts of personal information. It is unclear why the Office of Intelligence Policy and Review did not approve these applications. The FBI has not revealed this information, nor did it explain whether other search methods had failed.

Remember, the issue here is not whether or not the FBI can engage in counterterrorism. The issue is the erosion of judicial oversight—the only check we have on police power. And this power grab is dangerous regardless of which party is in the White House at the moment.

Posted on December 16, 2005 at 10:03 AM
