Entries Tagged "email"


The Problems with CALEA-II

The FBI wants a new law that will make it easier to wiretap the Internet. Although its claim is that the new law will only maintain the status quo, it’s really much worse than that. This law will result in less-secure Internet products and create a foreign industry in more-secure alternatives. It will impose costly burdens on affected companies. It will assist totalitarian governments in spying on their own citizens. And it won’t do much to hinder actual criminals and terrorists.

As the FBI sees it, the problem is that people are moving away from traditional communication systems like telephones onto computer systems like Skype. Eavesdropping on telephones used to be easy. The FBI would call the phone company, which would bring agents into a switching room and allow them to literally tap the wires with a pair of alligator clips and a tape recorder. In the 1990s, the government forced phone companies to provide an analogous capability on digital switches; but today, more and more communication happens over the Internet.

What the FBI wants is the ability to eavesdrop on everything. Depending on the system, this ranges from easy to impossible. E-mail systems like Gmail are easy. The mail resides in Google’s servers, and the company has an office full of people who respond to requests for lawful access to individual accounts from governments all over the world. Encrypted voice systems like Silent Circle are impossible to eavesdrop on—the calls are encrypted from one computer to the other, and there’s no central node to eavesdrop from. In those cases, the only way to make the system eavesdroppable is to add a backdoor to the user software. This is precisely the FBI’s proposal. Companies that refuse to comply would be fined $25,000 a day.
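Why is there no central node to tap in an end-to-end system? Because the session keys are derived on the endpoints and never transmitted. A toy sketch of that key agreement (plain Diffie-Hellman, not Silent Circle's actual protocol, with deliberately insecure toy parameters):

```python
import hashlib

# Toy Diffie-Hellman key agreement. These parameters are far too small
# for real use; real systems use vetted 2048-bit groups or elliptic
# curves. This only illustrates the shape of the protocol.
p = 2**127 - 1   # a Mersenne prime, fine for a toy demo
g = 3

alice_secret = 123456789012345   # in practice: large random values
bob_secret = 987654321098765

# Only these public values cross the network; a relay in the middle
# (the would-be wiretap point) sees nothing else.
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Each endpoint derives the same session key locally; the relay cannot.
alice_key = hashlib.sha256(str(pow(bob_public, alice_secret, p)).encode()).digest()
bob_key = hashlib.sha256(str(pow(alice_public, bob_secret, p)).encode()).digest()
assert alice_key == bob_key   # shared secret, never transmitted
```

Mandated eavesdropping on such a system means changing the endpoint software itself, which is exactly what a backdoor is.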

The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else’s eavesdropping. That’s just not possible. It’s impossible to build a communications system that allows the FBI surreptitious access but doesn’t allow similar access by others. When it comes to security, we have two options: We can build our systems to be as secure as possible from eavesdropping, or we can deliberately weaken their security. We have to choose one or the other.

This is an old debate, and one we’ve been through many times. The NSA even has a name for it: the equities issue. In the 1980s, the equities debate was about export control of cryptography. The government deliberately weakened U.S. cryptography products because it didn’t want foreign groups to have access to secure systems. Two things resulted: fewer Internet products with cryptography, to the insecurity of everybody, and a vibrant foreign security industry based on the unofficial slogan “Don’t buy the U.S. stuff—it’s lousy.”

In 1993, the debate was about the Clipper Chip. This was another deliberately weakened security product, an encrypted telephone. The FBI convinced AT&T to add a backdoor that allowed for surreptitious wiretapping. The product was a complete failure. Again, why would anyone buy a deliberately weakened security system?

In 1994, the Communications Assistance for Law Enforcement Act mandated that U.S. companies build eavesdropping capabilities into phone switches. These were sold internationally; some countries liked having the ability to spy on their citizens. Of course, so did criminals, and there were public scandals in Greece (2005) and Italy (2006) as a result.

In 2012, we learned that every phone switch sold to the Department of Defense had security vulnerabilities in its surveillance system. And just this May, we learned that Chinese hackers breached Google’s system for providing surveillance data for the FBI.

The new FBI proposal will fail in all these ways and more. The bad guys will be able to get around the eavesdropping capability, either by building their own security systems—not very difficult—or buying the more-secure foreign products that will inevitably be made available. Most of the good guys, who don’t understand the risks or the technology, will not know enough to bother and will be less secure. The eavesdropping functions will 1) result in more obscure—and less secure—product designs, and 2) be vulnerable to exploitation by criminals, spies, and everyone else. U.S. companies will be forced to compete at a disadvantage; smart customers won’t buy the substandard stuff when there are more-secure foreign alternatives. Even worse, there are lots of foreign governments who want to use these sorts of systems to spy on their own citizens. Do we really want to be exporting surveillance technology to the likes of China, Syria, and Saudi Arabia?

The FBI’s shortsighted agenda also works against the parts of the government that are still working to secure the Internet for everyone. Initiatives within the NSA, the DOD, and DHS to do everything from securing computer operating systems to enabling anonymous web browsing will all be harmed by this.

What to do, then? The FBI claims that the Internet is “going dark,” and that it’s simply trying to maintain the status quo of being able to eavesdrop. This characterization is disingenuous at best. We are entering a golden age of surveillance; there are more electronic communications available for eavesdropping than ever before, including whole new classes of information: location tracking, financial tracking, and vast databases of historical communications such as e-mails and text messages. The FBI’s surveillance department has it better than ever. With regard to voice communications, yes, software phone calls will be harder to eavesdrop upon. (Although there are questions about Skype’s security.) That’s just part of the evolution of technology, and one that on balance is a positive thing.

Think of it this way: We don’t hand the government copies of our house keys and safe combinations. If agents want access, they get a warrant and then pick the locks or bust open the doors, just as a criminal would do. A similar system would work on computers. The FBI, with its increasingly non-transparent procedures and systems, has failed to make the case that this isn’t good enough.

Finally there’s a general principle at work that’s worth explicitly stating. All tools can be used by the good guys and the bad guys. Cars have enormous societal value, even though bank robbers can use them as getaway cars. Cash is no different. Both good guys and bad guys send e-mails, use Skype, and eat at all-night restaurants. But because society consists overwhelmingly of good guys, the good uses of these dual-use technologies greatly outweigh the bad uses. Strong Internet security makes us all safer, even though it helps the bad guys as well. And it makes no sense to harm all of us in an attempt to harm a small subset of us.

This essay originally appeared in Foreign Policy.

Posted on June 4, 2013 at 12:44 PM

E-Mail Security in the Wake of Petraeus

I’ve been reading lots of articles discussing how little e-mail and Internet privacy we actually have in the U.S. This is a good one to start with:

The FBI obliged—apparently obtaining subpoenas for Internet Protocol logs, which allowed them to connect the sender’s anonymous Google Mail account to others accessed from the same computers, accounts that belonged to Petraeus biographer Paula Broadwell. The bureau could then subpoena guest records from hotels, tracking the WiFi networks, and confirm that they matched Broadwell’s travel history. None of this would have required judicial approval—let alone a Fourth Amendment search warrant based on probable cause.

While we don’t know the investigators’ other methods, the FBI has an impressive arsenal of tools to track Broadwell’s digital footprints—all without a warrant. On a mere showing of “relevance,” they can obtain a court order for cell phone location records, providing a detailed history of her movements, as well as all people she called. Little wonder that law enforcement requests to cell providers have exploded—with a staggering 1.3 million demands for user data just last year, according to major carriers.

An order under this same weak standard could reveal all her e-mail correspondents and Web surfing activity. With the rapid decline of data storage costs, an ever larger treasure trove is routinely retained for ever longer time periods by phone and Internet companies.

Had the FBI chosen to pursue this investigation as a counterintelligence inquiry rather than a cyberstalking case, much of that data could have been obtained without even a subpoena. National Security Letters, secret tools for obtaining sensitive financial and telecommunications records, require only the say-so of an FBI field office chief.

And:

While the details of this investigation that have leaked thus far provide us all a fascinating glimpse into the usually sensitive methods used by FBI agents, this should also serve as a warning, by demonstrating the extent to which the government can pierce the veil of communications anonymity without ever having to obtain a search warrant or other court order from a neutral judge.

The guest lists from hotels, IP login records, as well as the creative request to email providers for “information about other accounts that have logged in from this IP address” are all forms of data that the government can obtain with a subpoena. There is no independent review, no check against abuse, and further, the target of the subpoena will often never learn that the government obtained data (unless charges are filed, or, as in this particular case, government officials eagerly leak details of the investigation to the press). Unfortunately, our existing surveillance laws really only protect the “what” being communicated; the government’s powers to determine “who” communicated remain largely unchecked.
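That “other accounts that have logged in from this IP address” request is trivial to act on once a provider hands over its logs. A minimal sketch of the correlation, with entirely made-up account names and addresses:

```python
from collections import defaultdict

# Hypothetical login log of (account, source IP) pairs, of the kind a
# provider could produce in response to a subpoena.
logins = [
    ("anon.tipster", "203.0.113.7"),
    ("anon.tipster", "198.51.100.42"),
    ("p.broadwell",  "203.0.113.7"),   # same hotel Wi-Fi as the anon account
    ("p.broadwell",  "192.0.2.99"),
    ("unrelated",    "192.0.2.1"),
]

ips_by_account = defaultdict(set)
for account, ip in logins:
    ips_by_account[account].add(ip)

# Any account sharing a source IP with the anonymous one is a lead.
target = ips_by_account["anon.tipster"]
leads = [a for a, ips in ips_by_account.items()
         if a != "anon.tipster" and ips & target]
# leads -> ["p.broadwell"]
```

Cross-referencing those shared IPs against hotel guest lists is then just another subpoena.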

This is good, too.

The EFF tries to explain the relevant laws. Summary: they’re confusing, and they don’t protect us very much.

My favorite quote is from the New York Times:

Marc Rotenberg, executive director of the Electronic Privacy Information Center in Washington, said the chain of unexpected disclosures was not unusual in computer-centric cases.

“It’s a particular problem with cyberinvestigations—they rapidly become open-ended because there’s such a huge quantity of information available and it’s so easily searchable,” he said, adding, “If the C.I.A. director can get caught, it’s pretty much open season on everyone else.”

And a day later:

“If the director of central intelligence isn’t able to successfully keep his emails private, what chance do I have?” said Kurt Opsahl, a senior staff attorney at the Electronic Frontier Foundation, a digital-liberties advocacy group.

In more words:

But there’s another, more important lesson to be gleaned from this tale of a biographer run amok. Broadwell’s debacle confirms something that some privacy experts have been warning about for years: Government surveillance of ordinary citizens is now cheaper and easier than ever before. Without needing to go before a judge, the government can gather vast amounts of information about us with minimal expenditure of manpower. We used to be able to count on a certain amount of privacy protection simply because invading our privacy was hard work. That is no longer the case. Our always-on, Internet-connected, cellphone-enabled lives are an open door to Big Brother.

Remember that this problem is bigger than Petraeus. The FBI goes after electronic records all the time:

In Google’s semi-annual transparency report released Tuesday, the company stated that it received 20,938 requests from governments around the world for its users’ private data in the first six months of 2012. Nearly 8,000 of those requests came from the U.S. government, and 7,172 of them were fulfilled to some degree, an increase of 26% from the prior six months, according to Google’s stats.

So what’s the answer? Would they have been safe if they’d used Tor or a regular old VPN? Silent Circle? Something else? This article attempts to give advice; this is the article’s most important caveat:

DON’T MESS UP: It is hard to pull off one of these steps, let alone all of them all the time. It takes just one mistake—forgetting to use Tor, leaving your encryption keys where someone can find them, connecting to an airport Wi-Fi just once—to ruin you.

“Robust tools for privacy and anonymity exist, but they are not integrated in a way that makes them easy to use,” Mr. Blaze warned. “We’ve all made the mistake of accidentally hitting ‘Reply All.’ Well, if you’re trying to hide your e-mails or account or I.P. address, there are a thousand other mistakes you can make.”

In the end, Mr. Kaminsky noted, if the F.B.I. is after your e-mails, it will find a way to read them. In that case, any attempt to stand in its way may just lull you into a false sense of security.

Some people think that if something is difficult to do, “it has security benefits, but that’s all fake—everything is logged,” said Mr. Kaminsky. “The reality is if you don’t want something to show up on the front page of The New York Times, then don’t say it.”

The real answer is to rein in the FBI, of course:

If we don’t take steps to rein in the burgeoning surveillance state now, there’s no guarantee we’ll even be aware of the ways in which control is exercised through this information architecture. We will all remain exposed but the extent of our exposure, and the potential damage done to democracy, is likely to remain invisible.

More here:

“Hopefully this [case] will be a wake-up call for Congress that the Stored Communications Act is old and busted,” Mr. Fakhoury says.

I don’t see any chance of that happening anytime soon.

EDITED TO ADD (12/12): E-mail security might not have mattered.

Posted on November 19, 2012 at 12:40 PM

Webmail as Dead Drop

I noticed this amongst the details of the Petraeus scandal:

Petraeus and Broadwell apparently used a trick, known to terrorists and teenagers alike, to conceal their email traffic, one of the law enforcement officials said.

Rather than transmitting emails to the other’s inbox, they composed at least some messages and instead of transmitting them, left them in a draft folder or in an electronic “dropbox,” the official said. Then the other person could log onto the same account and read the draft emails there. This avoids creating an email trail that is easier to trace.

I remember that the 9/11 terrorists did this.

Posted on November 14, 2012 at 12:28 PM

Cryptocat

I’m late writing about this one. Cryptocat is a web-based encrypted chat application. After Wired published a pretty fluffy profile on the program and its author, security researcher Chris Soghoian wrote an essay criticizing the unskeptical coverage. Ryan Singel, the editor (not the writer) of the Wired piece, responded by defending the original article and attacking Soghoian.

At this point, I would have considered writing a long essay explaining what’s wrong with the whole concept behind Cryptocat, and echoing my complaints about the dangers of uncritically accepting the security claims of people and companies that write security software, but Patrick Ball did a great job:

CryptoCat is one of a whole class of applications that rely on what’s called “host-based security”. The most famous tool in this group is Hushmail, an encrypted e-mail service that takes the same approach. Unfortunately, these tools are subject to a well-known attack. I’ll detail it below, but the short version is that if you use one of these applications, your security depends entirely on the security of the host. This means that in practice, CryptoCat is no more secure than Yahoo chat, and Hushmail is no more secure than Gmail. More generally, your security in a host-based encryption system is no better than having no crypto at all.
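The attack Ball describes is easy to model: the client re-fetches its encryption code from the host on every visit, so a compromised (or compelled) host can serve a version that quietly leaks the key. A toy illustration of that trust problem, with all names invented:

```python
# Toy model of host-based security. The "served code" functions stand
# in for the JavaScript a host ships to the browser each session.

def honest_served_code():
    def encrypt(msg, key):
        ct = bytes(b ^ k for b, k in zip(msg, key))
        return ct, None                 # nothing leaves but ciphertext
    return encrypt

def compromised_served_code():
    def encrypt(msg, key):
        ct = bytes(b ^ k for b, k in zip(msg, key))
        return ct, key                  # quietly exfiltrates the key
    return encrypt

# The user has no way to tell which version the host served today;
# both produce identical ciphertext.
encrypt = compromised_served_code()
ciphertext, leaked_key = encrypt(b"secret", b"k3yk3y")
```

Moving the crypto into software the user installs and verifies once, rather than re-downloading it from the host every session, is what closes this hole, which is why the browser plug-in shift mentioned below matters.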

Sometimes it’s nice to come in late.

EDITED TO ADD (8/14): As a result of this, CryptoCat is moving to a browser plug-in model.

Posted on August 14, 2012 at 6:00 AM

E-Mail Accounts More Valuable than Bank Accounts

This informal survey produced the following result: “45% of the users found their email accounts more valuable than their bank accounts.”

The author believes this is evidence of some sophisticated security reasoning on the part of users:

From a security standpoint, I can’t agree more with these people. Email accounts are used most commonly to reset other websites’ account passwords, so if it gets compromised, the others will fall like dominos.

I disagree. I think something a lot simpler is going on. People believe that if their bank account is hacked, the bank will help them clean up the mess and they’ll get their money back. And in most cases, they will. They know that if their e-mail is hacked, all the damage will be theirs to deal with. I think this is public opinion reflecting reality.
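Whatever the users’ actual reasoning, the domino effect the survey’s author describes is real and easy to model: any account whose password resets through a compromised mailbox falls with it. A toy sketch with hypothetical account names:

```python
# Hypothetical account graph: each service's password can be reset via
# the listed recovery account. None marks a root account.
recovery = {
    "bank":     "email",
    "shopping": "email",
    "social":   "email",
    "work-vpn": "work-email",
    "email":    None,
    "work-email": None,
}

def reachable(compromised, recovery):
    """All accounts an attacker controls after taking `compromised`,
    following password-reset links transitively."""
    owned = {compromised}
    changed = True
    while changed:
        changed = False
        for acct, recovers_via in recovery.items():
            if recovers_via in owned and acct not in owned:
                owned.add(acct)
                changed = True
    return owned

# Compromising "email" yields everything that resets through it.
# reachable("email", recovery) -> {"email", "bank", "shopping", "social"}
```

One mailbox takeover hands the attacker every account downstream of it, which is exactly the “dominos” claim.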

Posted on June 26, 2012 at 1:57 PM

Password Sharing Among American Teenagers

Interesting article from the New York Times on password sharing as a show of affection.

“It’s a sign of trust,” Tiffany Carandang, a high school senior in San Francisco, said of the decision she and her boyfriend made several months ago to share passwords for e-mail and Facebook. “I have nothing to hide from him, and he has nothing to hide from me.”

“That is so cute,” said Cherry Ng, 16, listening in to her friend’s comments to a reporter outside school. “They really trust each other.”

“We do,” said Ms. Carandang, 17. “I know he’d never do anything to hurt my reputation,” she added.

It doesn’t always end so well, of course. Changing a password is simple, but students, counselors and parents say that damage is often done before a password is changed, or that the sharing of online lives can be the reason a relationship falters.

Ethnographer danah boyd discusses what’s happening:

For Meixing, sharing her password with her boyfriend is a way of being connected. But it’s precisely these kinds of narratives that have prompted all sorts of horror by adults over the last week since that NYTimes article came out. I can’t count the number of people who have gasped “How could they!?!” at me. For this reason, I feel the need to pick up on an issue that the NYTimes left out.

The idea of teens sharing passwords didn’t come out of thin air. In fact, it was normalized by adults. And not just any adult. This practice is the product of parental online safety norms. In most households, it’s quite common for young children to give their parents their passwords. With elementary and middle school youth, this is often a practical matter: children lose their passwords pretty quickly. Furthermore, most parents reasonably believe that young children should be supervised online. As tweens turn into teens, the narrative shifts. Some parents continue to require passwords be forked over, using explanations like “because I’m your mother.” But many parents use the language of “trust” to explain why teens should share their passwords with them.

Much more in her post.

Related: a profile of danah boyd.

Posted on January 27, 2012 at 6:39 AM

