Entries Tagged "essays"

Chinese Hacking of the US

Chinese hacking of American computer networks is old news. For years we’ve known about their attacks against U.S. government and corporate targets. We’ve seen detailed reports of how they hacked The New York Times. Google has detected them going after Gmail accounts of dissidents. They’ve built sophisticated worldwide eavesdropping networks. These hacks target both military secrets and corporate intellectual property. They’re perpetrated by a combination of state, state-sponsored and state-tolerated hackers. It’s been going on for years.

On Monday, the Justice Department indicted five Chinese hackers in absentia, all associated with the Chinese military, for stealing corporate secrets from U.S. energy, metals and manufacturing companies. It’s entirely for show; the odds that the Chinese are going to send these people to the U.S. to stand trial are zero. But it does move what had been mostly a technical security problem into the world of diplomacy and foreign policy. By doing this, the U.S. government is taking a very public stand and saying “enough.”

The problem with that stand is that we’ve been doing much the same thing to China. Documents revealed by the whistleblower Edward Snowden show that the NSA has penetrated Chinese government and commercial networks, and is exfiltrating—that’s NSA talk for stealing—an enormous amount of secret data. We’ve hacked the networking hardware of one of their companies, Huawei. We’ve intercepted networking equipment being sent there and installed monitoring devices. We’ve been listening in on their private communications channels.

The only difference between the U.S. and China’s actions is that the U.S. doesn’t engage in direct industrial espionage. That is, we don’t steal secrets from Chinese companies and pass them directly to U.S. competitors. But we do engage in economic espionage; we steal secrets from Chinese companies for an advantage in government trade negotiations, which directly benefits U.S. competitors. We might think this difference is important, but other countries are not as impressed with our nuance.

Already the Chinese are retaliating against the U.S. actions with rhetoric of their own. I don’t know the Chinese expression for ‘pot calling the kettle black,’ but it certainly fits in this case.

Again, none of this is new. The U.S. and the Chinese have been conducting electronic espionage on each other throughout the Cold War, and there’s no reason to think it’s going to change anytime soon. What’s different now is the ease with which the two countries can do this safely and remotely, over the Internet, as well as the massive amount of information that can be stolen with a few computer commands.

On the Internet today, it is much easier to attack systems and break into them than it is to defend those systems against attack, so the advantage is to the attacker. This is true for a combination of reasons: the ability of an attacker to concentrate his attack, the nature of vulnerabilities in computer systems, poor software quality and the enormous complexity of computer systems.

The computer security industry is used to coping with criminal attacks. In general, such attacks are untargeted. Criminals might have broken into Target’s network last year and stolen 40 million credit and debit card numbers, but they would have been happy with any retailer’s large credit card database. If Target’s security had been better than its competitors’, the criminals would have gone elsewhere. In this way, security is relative.

The Chinese attacks are different. For whatever reason, the government hackers wanted certain information inside the networks of Alcoa World Alumina, Westinghouse Electric, Allegheny Technologies, U.S. Steel, United Steelworkers Union and SolarWorld. It wouldn’t have mattered how those companies’ security compared with other companies; all that mattered was whether it was better than the ability of the attackers.

This is a fundamentally different security model—often called APT or Advanced Persistent Threat—and one that is much more difficult to defend against.

In a sense, American corporations are collateral damage in this battle of espionage between the U.S. and China. Taking the battle from the technical sphere into the foreign policy sphere might be a good idea, but it will work only if we have some moral high ground from which to demand that others not spy on us. As long as we run the largest surveillance network in the world and hack computer networks in foreign countries, we’re going to have trouble convincing others not to attempt the same on us.

This essay previously appeared on Time.com.

Posted on June 2, 2014 at 6:37 AM

Disclosing vs. Hoarding Vulnerabilities

There’s a debate going on about whether the US government—specifically, the NSA and United States Cyber Command—should stockpile Internet vulnerabilities or disclose and fix them. It’s a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.

A software vulnerability is a programming mistake that gives an adversary a way into the system running the software. Heartbleed is a recent example, but hundreds are discovered every year.

Unpublished vulnerabilities are called “zero-day” vulnerabilities, and they’re very valuable because no one is protected. Someone with one of those can attack systems worldwide with impunity.
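As a toy illustration of the kind of programming mistake involved (Heartbleed was, roughly, a failure to check a client-supplied length before echoing memory back), here is a Python sketch. It is not the actual OpenSSL code; the buffer contents and function names are invented for illustration:

```python
# Toy Heartbleed-style bug: the server echoes back `payload_len` bytes,
# trusting a client-supplied length instead of checking it.

# Simulated server memory: a 4-byte heartbeat payload followed by
# adjacent secret data that was never meant to leave the server.
MEMORY = b"PING" + b"secret-session-key-abc123"

def heartbeat_vulnerable(payload_len: int) -> bytes:
    # BUG: no check that payload_len <= the real payload size (4 bytes),
    # so an oversized request reads past the payload into the secret.
    return MEMORY[:payload_len]

def heartbeat_patched(payload_len: int) -> bytes:
    actual = 4  # the real payload is just b"PING"
    if payload_len > actual:
        raise ValueError("heartbeat length exceeds payload")
    return MEMORY[:payload_len]

leak = heartbeat_vulnerable(30)  # attacker asks for more than was sent
print(leak)                      # response includes the adjacent secret
```

The patch is a one-line bounds check, which is why such bugs are easy to fix once found but hard to find in millions of lines of code.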

When someone discovers one, he can either use it for defense or for offense. Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn’t make the vulnerability go away, but users who patch their systems regularly protect themselves.

Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn’t even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software’s vendor finds out—the timing depends on how extensively the vulnerability is used—and issues a patch to close the vulnerability.

If an offensive military cyber unit, or a cyber-weapons arms manufacturer, discovers the vulnerability, it keeps that vulnerability secret and uses it to deliver a cyber-weapon. If it is used stealthily, it might remain secret for a long time. If unused, it’ll remain secret until someone else discovers it.

Discoverers can sell vulnerabilities. There’s a rich market in zero-days for attack purposes—both military/commercial and black markets. Some vendors offer bounties for vulnerabilities to encourage defense, but the amounts are much lower.

The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability—or of the vulnerability becoming public and criminals starting to use it.

There is no way to simultaneously defend US networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, “every offensive weapon is a (potential) chink in our defense—and vice versa.”

To make matters even more difficult, there is an arms race going on in cyberspace. The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to. But if we patch all the vulnerabilities we find, we won’t have any cyber-weapons to use against other countries.

Many people have weighed in on this debate. The president’s Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded (recommendation 30) that vulnerabilities should only be hoarded in rare instances and for short times. Cory Doctorow calls it a public health problem. I have said similar things. Dan Geer recommends that the US government corner the vulnerabilities market and fix them all. Both the FBI and the intelligence agencies claim that this amounts to unilateral disarmament.

It seems like an impossible puzzle, but the answer hinges on how vulnerabilities are distributed in software.

If vulnerabilities are sparse, then it’s obvious that every vulnerability we find and fix improves security. We render a vulnerability unusable, even if the Chinese government already knows about it. We make it impossible for criminals to find and use it. We improve the general security of our software, because we can find and fix most of the vulnerabilities.

If vulnerabilities are plentiful—and this seems to be true—the ones the US finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won’t make it appreciably harder for criminals to find the next one. We don’t really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.

But while vulnerabilities are plentiful, they’re not uniformly distributed. There are easier-to-find ones, and harder-to-find ones. Tools that automatically find and fix entire classes of vulnerabilities, and coding practices that eliminate many easy-to-find ones, greatly improve software security. And when a person finds a vulnerability, it is likely that another person soon will, or recently has, found the same vulnerability. Heartbleed, for example, remained undiscovered for two years, and then two independent researchers discovered it within two days of each other. This is why it is important for the government to err on the side of disclosing and fixing.
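A minimal sketch of what a class-based tool looks like, using the well-known class of unbounded C string copies as its target. The function list, the suggested replacements, and the snippet of C being audited are all illustrative, not taken from any real product:

```python
import re

# Toy class-based checker: rather than hunting individual bugs, it flags
# every use of C functions whose entire class (unbounded copies into a
# fixed buffer) is known to produce buffer overflows.
DANGEROUS = {"strcpy": "strlcpy/strncpy", "gets": "fgets", "sprintf": "snprintf"}

def audit(source: str) -> list[str]:
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for func, safer in DANGEROUS.items():
            # \b ensures we match strcpy( but not, say, snprintf(
            if re.search(rf"\b{func}\s*\(", line):
                findings.append(
                    f"line {lineno}: {func}() is unbounded; prefer {safer}()"
                )
    return findings

c_code = """char buf[16];
strcpy(buf, user_input);   /* unchecked copy: flagged */
snprintf(buf, sizeof buf, "%s", user_input);  /* bounded: not flagged */"""
for finding in audit(c_code):
    print(finding)
```

Real tools do far more (data-flow analysis, taint tracking), but the principle is the same: one rule eliminates an entire class of bugs, not one bug at a time.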

The NSA, and by extension US Cyber Command, tries its best to play both ends of this game. Former NSA Director Michael Hayden talks about NOBUS, “nobody but us.” The NSA has a classified process to determine what it should do about vulnerabilities, disclosing and closing most of the ones it finds, but holding back some—we don’t know how many—vulnerabilities that “nobody but us” could find for attack purposes.

This approach seems to be the appropriate general framework, but the devil is in the details. Many of us in the security field don’t know how to make NOBUS decisions, and the recent White House clarification posed more questions than it answered.

Who makes these decisions, and how? How often are they reviewed? Does this review process happen inside the Department of Defense, or is it broader? Surely there needs to be a technical review of each vulnerability, but there should also be policy reviews regarding the sorts of vulnerabilities we are hoarding. Do we hold these vulnerabilities until someone else finds them, or only for a short period of time? How many do we stockpile? The US/Israeli cyberweapon Stuxnet used four zero-day vulnerabilities. Burning four on a single military operation implies that we are not hoarding a small number, but something closer to 100 or more.

There’s one more interesting wrinkle. Cyber-weapons are a combination of a payload—the damage the weapon does—and a delivery mechanism: the vulnerability used to get the payload into the enemy network. Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won’t know about. But if it doesn’t, it’s deliberately leaving the US vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we’re nowhere near that point today.

The implications of US policy can be felt on a variety of levels. The NSA’s actions have resulted in a widespread mistrust of the security of US Internet products and services, greatly affecting American business. If we show that we’re putting security ahead of surveillance, we can begin to restore that trust. And by making the decision process much more public than it is today, we can demonstrate both our trustworthiness and the value of open government.

An unpatched vulnerability puts everyone at risk, but not to the same degree. The US and other Western countries are highly vulnerable, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable—North Korea much less—so they have considerably less incentive to see vulnerabilities fixed. Fixing vulnerabilities isn’t disarmament; it’s making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyber-weapons; and we can decide not to use them even if others do.

Regardless of our policy towards hoarding vulnerabilities, the most important thing we can do is patch vulnerabilities quickly once they are disclosed. And that’s what companies are doing, even without any government involvement, because so many vulnerabilities are discovered by criminals.

We also need more research in automatically finding and fixing vulnerabilities, and in building secure and resilient software in the first place. Research over the last decade or so has resulted in software vendors being able to find and close entire classes of vulnerabilities. Although there are many cases of these security analysis tools not being used, all of our security is improved when they are. That alone is a good reason to continue disclosing vulnerability details, and something the NSA can do to vastly improve the security of the Internet worldwide. Here again, though, they would have to make the tools they have to automatically find vulnerabilities available for defense and not attack.

In today’s cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time. The world’s militaries are investing more money in finding vulnerabilities than the commercial world is investing in fixing them. The vulnerabilities they discover affect the security of us all. No matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security and fix almost all the vulnerabilities we find. But not all, yet.

This essay previously appeared on TheAtlantic.com.

Posted on May 22, 2014 at 6:15 AM

Espionage vs. Surveillance

According to NSA documents published in Glenn Greenwald’s new book No Place to Hide, we now know that the NSA spies on embassies and missions all over the world, including those of Brazil, Bulgaria, Colombia, the European Union, France, Georgia, Greece, India, Italy, Japan, Mexico, Slovakia, South Africa, South Korea, Taiwan, Venezuela and Vietnam.

This will certainly strain international relations, as happened when it was revealed that the U.S. is eavesdropping on German Chancellor Angela Merkel’s cell phone—but is anyone really surprised? Spying on foreign governments is what the NSA is supposed to do. Much more problematic, and dangerous, is that the NSA is spying on entire populations. It’s a mistake to have the same laws and organizations involved with both activities, and it’s time we separated the two.

The former is espionage: the traditional mission of the NSA. It’s an important military mission, both in peacetime and wartime, and something that’s not going to go away. It’s targeted. It’s focused. Decisions of whom to target are decisions of foreign policy. And secrecy is paramount.

The latter is very different. Terrorists are a different type of enemy; they’re individual actors instead of state governments. We know who foreign government officials are and where they’re located: in government offices in their home countries, and embassies abroad. Terrorists could be anyone, anywhere in the world. To find them, the NSA has to look for individual bad actors swimming in a sea of innocent people. This is why the NSA turned to broad surveillance of populations, both in the U.S. and internationally.

If you think about it, this is much more of a law enforcement sort of activity than a military activity. Both involve security, but just as the NSA’s traditional focus was governments, the FBI’s traditional focus was individuals. Before and after 9/11, both the NSA and the FBI were involved in counterterrorism. The FBI did work in the U.S. and abroad. After 9/11, the primary mission of counterterrorist surveillance was given to the NSA because it had existing capabilities, but the decision could have gone the other way.

Because the NSA got the mission, both the military norms and the legal framework from the espionage world carried over. Our surveillance efforts against entire populations were kept as secret as our espionage efforts against governments. And we modified our laws accordingly. The 1978 Foreign Intelligence Surveillance Act (FISA) that regulated NSA surveillance required targets to be “agents of a foreign power.” When the law was amended in 2008 under the FISA Amendments Act, a target could be any foreigner anywhere.

Government-on-government espionage is as old as governments themselves, and is the proper purview of the military. So let the Commander in Chief make the determination on whose cell phones to eavesdrop on, and let the NSA carry those orders out.

Surveillance is a large-scale activity, potentially affecting billions of people, and different rules have to apply: the rules of the police. Any organization doing such surveillance should apply the police norms of probable cause, due process, and oversight to population surveillance activities. It should make its activities much less secret and more transparent. It should be accountable in open courts. This is how we, and the rest of the world, regain trust in the US’s actions.

In January, President Obama gave a speech on the NSA where he said two very important things. He said that the NSA would no longer spy on Angela Merkel’s cell phone. And while he didn’t extend that courtesy to the other 82 million citizens of Germany, he did say that he would extend some of the U.S.’s constitutional protections against warrantless surveillance to the rest of the world.

Breaking up the NSA by separating espionage from surveillance, and putting the latter under a law enforcement regime instead of a military regime, is a step toward achieving that.

This essay originally appeared on CNN.com.

Posted on May 14, 2014 at 12:08 PM

Internet Subversion

In addition to turning the Internet into a worldwide surveillance platform, the NSA has surreptitiously weakened the products, protocols, and standards we all use to protect ourselves. By doing so, it has destroyed the trust that underlies the Internet. We need that trust back.

Trust is inherently social. It is personal, relative, situational, and fluid. It is not uniquely human, but it is the underpinning of everything we have accomplished as a species. We trust other people, but we also trust organizations and processes. The psychology is complex, but when we trust a technology, we basically believe that it will work as intended.

This is how we technologists trusted the security of the Internet. We didn’t have any illusions that the Internet was secure, or that governments, criminals, hackers, and others couldn’t break into systems and networks if they were sufficiently skilled and motivated. We didn’t trust that the programmers were perfect, that the code was bug-free, or even that our crypto math was unbreakable. We knew that Internet security was an arms race, and the attackers had most of the advantages.

What we trusted was that the technologies would stand or fall on their own merits.

We now know that trust was misplaced. Through cooperation, bribery, threats, and compulsion, the NSA—and the United Kingdom’s GCHQ—forced companies to weaken the security of their products and services, then lie about it to their customers.

We know of a few examples of this weakening. The NSA convinced Microsoft to make some unknown changes to Skype in order to make eavesdropping on conversations easier. The NSA also inserted a degraded random number generator into a common standard, then worked to get that generator used more widely.
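The danger of a degraded random number generator can be sketched with a toy example. This is not the actual construction inserted into the standard; it simply shows why predictability is catastrophic: anyone who knows (or can recover) the generator's internal state can reproduce every "random" key it ever produces. The seed value and key size here are invented:

```python
import random

# Toy illustration of a backdoored/predictable generator. Python's
# random.Random is deterministic given its seed, which stands in for
# a generator whose internal state the backdoor holder knows.

def keygen(rng: random.Random) -> bytes:
    # Derive a "128-bit session key" from the generator's output.
    return bytes(rng.randrange(256) for _ in range(16))

victim_rng = random.Random(1337)    # state the backdoor holder knows
attacker_rng = random.Random(1337)  # attacker runs the same generator

victim_key = keygen(victim_rng)
attacker_key = keygen(attacker_rng)

# The eavesdropper derives the identical key without ever seeing it
# on the wire: the "secret" key was never secret.
assert victim_key == attacker_key
```

This is why cryptographic generators are designed so that even someone who sees the output cannot recover the internal state; weaken that property and every key derived from the generator is exposed.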

I have heard engineers working for the NSA, FBI, and other government agencies delicately talk around the topic of inserting a “backdoor” into security products to allow for government access. One of them told me, “It’s like going on a date. Sex is never explicitly mentioned, but you know it’s on the table.” The NSA’s SIGINT Enabling Project has a $250 million annual budget; presumably it has more to show for itself than the fragments that have become public. Reed Hundt calls for the government to support a secure Internet, but given its history of installing backdoors, why would we trust claims that it has turned the page?

We also have to assume that other countries have been doing the same things. We have long believed that networking products from the Chinese company Huawei have been backdoored by the Chinese government. Do we trust hardware and software from Russia? France? Israel? Anywhere?

This mistrust is poison. Because we don’t know, we can’t trust any of them. Internet governance was largely left to the benign dictatorship of the United States because everyone more or less believed that we were working for the security of the Internet instead of against it. But now that system is in turmoil. Foreign companies are fleeing US suppliers because they don’t trust American firms’ security claims. Far worse, governments are using these revelations to push for a more isolationist Internet, giving them more control over what their citizens see and say.

All so we could eavesdrop better.

There is a term in the NSA: “nobus,” short for “nobody but us.” The NSA believes it can subvert security in such a way that only it can take advantage of that subversion. But that is hubris. There is no way to determine if or when someone else will discover a vulnerability. These subverted systems become part of our infrastructure; the harms to everyone, once the flaws are discovered, far outweigh the benefits to the NSA while they are secret.

We can’t both weaken the enemy’s networks and protect our own. Because we all use the same products, technologies, protocols, and standards, we either allow everyone to spy on everyone, or prevent anyone from spying on anyone. By weakening security, we are weakening it against all attackers. By inserting vulnerabilities, we are making everyone vulnerable. The same vulnerabilities used by intelligence agencies to spy on each other are used by criminals to steal your passwords. It is surveillance versus security, and we all rise and fall together.

Security needs to win. The Internet is too important to the world—and trust is too important to the Internet—to squander it like this. We’ll never get every power in the world to agree not to subvert the parts of the Internet they control, but we can stop subverting the parts we control. Most of the high-tech companies that make the Internet work are US companies, so our influence is disproportionate. And once we stop subverting, we can credibly devote our resources to detecting and preventing subversion by others.

This essay previously appeared in the Boston Review.

Posted on May 12, 2014 at 6:26 AM

Ephemeral Apps

Ephemeral messaging apps such as Snapchat, Wickr and Frankly, all of which advertise that your photo, message or update will only be accessible for a short period, are on the rise. Snapchat and Frankly, for example, claim they permanently delete messages, photos and videos after 10 seconds. After that, there’s no record.

This notion is especially popular with young people, and these apps are an antidote to sites such as Facebook where everything you post lasts forever unless you take it down—and taking it down is no guarantee that it isn’t still available.

These ephemeral apps are the first concerted push against the permanence of Internet conversation. We started losing ephemeral conversation when computers began to mediate our communications. Computers naturally produce conversation records, and that data was often saved and archived.

The powerful and famous—from Oliver North back in 1987 to Anthony Weiner in 2011—have been brought down by e-mails, texts, tweets and posts they thought private. Lots of us have been embroiled in more personal embarrassments resulting from things we’ve said either being saved for too long or shared too widely.

People have reacted to this permanent nature of Internet communications in ad hoc ways. We’ve deleted our stuff where possible and asked others not to forward our writings without permission. “Wall scrubbing” is the term used to describe the deletion of Facebook posts.

Sociologist danah boyd has written about teens who systematically delete every post they make on Facebook soon after they make it. Apps such as Wickr just automate the process. And it turns out there’s a huge market in that.

Ephemeral conversation is easy to promise but hard to get right. In 2013, researchers discovered that Snapchat doesn’t delete images as advertised; it merely changes their names so they’re not easy to see. Whether this is a problem for users depends on how technically savvy their adversaries are, but it illustrates the difficulty of making instant deletion actually work.
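What the researchers found can be sketched in a few lines of Python: "deleting" a file by renaming it leaves every byte intact and recoverable; only the name changed. The filenames and the obscuring suffix here are hypothetical, not Snapchat's actual scheme:

```python
import os
import tempfile

# Toy sketch of rename-as-deletion: the file disappears from the app's
# view, but its contents remain on disk under an obscured name.

def fake_delete(path: str) -> str:
    hidden = path + ".nomedia"  # hypothetical obscured name
    os.rename(path, hidden)     # no bytes are erased, only relabeled
    return hidden

with tempfile.TemporaryDirectory() as d:
    photo = os.path.join(d, "snap.jpg")
    with open(photo, "wb") as f:
        f.write(b"embarrassing pixels")

    hidden = fake_delete(photo)

    assert not os.path.exists(photo)  # gone, as far as the app shows
    with open(hidden, "rb") as f:
        assert f.read() == b"embarrassing pixels"  # fully recoverable
```

Genuine deletion is harder than it looks even when attempted honestly: unlinking a file typically frees the blocks without overwriting them, and copies may persist in caches and backups.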

The problem is that these new “ephemeral” conversations aren’t really ephemeral the way a face-to-face unrecorded conversation would be. They’re not ephemeral like a conversation during a walk in a deserted woods used to be before the invention of cell phones and GPS receivers.

At best, the data is recorded, used, saved and then deliberately deleted. At worst, the ephemeral nature is faked. While the apps make the posts, texts or messages unavailable to users quickly, they probably don’t erase them off their systems immediately. They certainly don’t erase them from their backup tapes, if they end up there.

The companies offering these apps might very well analyze their content and make that information available to advertisers. We don’t know how much metadata is saved. In Snapchat, users can see the metadata even though they can’t see the content, and we don’t know what it’s used for. And if the government demanded copies of those conversations—either through a secret NSA demand or a more normal legal process involving an employer or school—the companies would have no choice but to hand them over.

Even worse, if the FBI or NSA demanded that American companies secretly store those conversations and not tell their users, breaking their promise of deletion, the companies would have no choice but to comply.

That last bit isn’t just paranoia.

We know the U.S. government has done this to companies large and small. Lavabit was a small secure e-mail service, with an encryption system designed so that even the company had no access to users’ e-mail. Last year, the U.S. government presented it with a secret court order demanding that it turn over its master key, thereby compromising the security of every user. Lavabit shut down its service rather than comply, but that option isn’t feasible for larger companies. In 2011, Microsoft made some still-unknown changes to Skype to make NSA eavesdropping easier, but the security promises they advertised didn’t change.

This is one of the reasons President Barack Obama’s announcement that he will end one particular NSA collection program under one particular legal authority barely begins to solve the problem: the surveillance state is so robust that anything other than a major overhaul won’t make a difference.

Of course, the typical Snapchat user doesn’t care whether the U.S. government is monitoring his conversations. He’s more concerned about his high school friends and his parents. But if these platforms are insecure, it’s not just the NSA that one should worry about.

Dissidents in Ukraine and elsewhere need security, and if they rely on ephemeral apps, they need to know that their own governments aren’t saving copies of their chats. And even U.S. high school students need to know that their photos won’t be surreptitiously saved and used against them years later.

The need for ephemeral conversation isn’t some weird privacy fetish or the exclusive purview of criminals with something to hide. It represents a basic need for human privacy, and something every one of us had as a matter of course before the invention of microphones and recording devices.

We need ephemeral apps, but we need credible assurances from the companies that they are actually secure and credible assurances from the government that they won’t be subverted.

This essay previously appeared on CNN.com.

EDITED TO ADD (4/14): There are apps to permanently save Snapchat photos.

At Financial Cryptography 2014, Franziska Roesner presented a paper that questions whether users expect ephemeral messaging from Snapchat.

Posted on April 2, 2014 at 5:07 AM

The Continuing Public/Private Surveillance Partnership

If you’ve been reading the news recently, you might think that corporate America is doing its best to thwart NSA surveillance.

Google just announced that it is encrypting Gmail when you access it from your computer or phone, and between data centers. Last week, Mark Zuckerberg personally called President Obama to complain about the NSA using Facebook as a means to hack computers, and Facebook’s Chief Security Officer explained to reporters that the attack technique has not worked since last summer. Yahoo, Google, Microsoft, and others are now regularly publishing “transparency reports,” listing approximately how many government data requests the companies have received and complied with.

On the government side, last week the NSA’s General Counsel Rajesh De seemed to throw those companies under the bus by stating that—despite their denials—they knew all about the NSA’s collection of data under both the PRISM program and some unnamed “upstream” collections on the communications links.

Yes, it may seem like the public/private surveillance partnership has frayed—but, unfortunately, it is alive and well. The main interests of massive Internet companies and government agencies still largely align: to keep us all under constant surveillance. When they bicker, it’s mostly role-playing designed to keep us blasé about what’s really going on.

The U.S. intelligence community is still playing word games with us. The NSA collects our data based on four different legal authorities: the Foreign Intelligence Surveillance Act (FISA) of 1978, Executive Order 12333 of 1981 and modified in 2004 and 2008, Section 215 of the Patriot Act of 2001, and Section 702 of the FISA Amendments Act (FAA) of 2008. Be careful when someone from the intelligence community uses the caveat “not under this program” or “not under this authority”; almost certainly it means that whatever it is they’re denying is done under some other program or authority. So when De said that companies knew about NSA collection under Section 702, it doesn’t mean they knew about the other collection programs.

The big Internet companies know of PRISM—although not under that code name—because that’s how the program works; the NSA serves them with FISA orders. Those same companies did not know about any of the other surveillance of their users conducted under the far more permissive EO 12333. Google and Yahoo did not know about MUSCULAR, the NSA’s secret program to eavesdrop on the trunk connections between their data centers. Facebook did not know about QUANTUMHAND, the NSA’s secret program to attack Facebook users. And none of the targeted companies knew that the NSA was harvesting their users’ address books and buddy lists.

These companies are certainly pissed that the publicity surrounding the NSA’s actions is undermining their users’ trust in their services, and they’re losing money because of it. Cisco, IBM, cloud service providers, and others have announced that they’re losing billions, mostly in foreign sales.

These companies are doing their best to convince users that their data is secure. But they’re relying on their users not understanding what real security looks like. IBM’s letter to its clients last week is an excellent example. The letter lists five “simple facts” that it hopes will mollify its customers, but the items are so qualified with caveats that they do the exact opposite to anyone who understands the full extent of NSA surveillance. And IBM’s spending $1.2 billion on data centers outside the U.S. will only reassure customers who don’t realize that National Security Letters require a company to turn over data, regardless of where in the world it is stored.

Google’s recent actions, and similar actions by many Internet companies, will definitely improve their users’ security against surreptitious government collection programs—both the NSA’s and other governments’—but their assurances deliberately ignore the massive security vulnerability built into their services by design. Google, and by extension the U.S. government, still has access to your communications on Google’s servers.

Google could change that. It could encrypt your e-mail so only you could decrypt and read it. It could provide for secure voice and video so no one outside the conversations could eavesdrop.

It doesn’t. And neither does Microsoft, Facebook, Yahoo, Apple, or any of the others.

Why not? Partly because they want to keep the ability to eavesdrop on your conversations. Surveillance is still the business model of the Internet, and every one of those companies wants access to your communications and your metadata. Your private thoughts and conversations are the product they sell to their customers. We have also learned that they read your e-mail for their own internal investigations.

But even if this were not true, even if—for example—Google were willing to forgo data mining your e-mail and video conversations in exchange for the marketing advantage doing so would give it over Microsoft, it still won’t offer you real security. It can’t.

The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.

This isn’t paranoia. We know that the U.S. government ordered the secure e-mail provider Lavabit to turn over its master keys and compromise every one of its users. We know that the U.S. government convinced Microsoft—either through bribery, coercion, threat, or legal compulsion—to make changes in how Skype operates, to make eavesdropping easier.

We don’t know what sort of pressure the U.S. government has put on Google and the others. We don’t know what secret agreements those companies have reached with the NSA. We do know that the NSA’s BULLRUN program to subvert Internet cryptography was successful against many common protocols. Did the NSA demand Google’s keys, as it did with Lavabit? Did its Tailored Access Operations group break into Google’s servers and steal the keys?

We just don’t know.

The best we have are caveat-laden pseudo-assurances. At SXSW earlier this month, Google’s Eric Schmidt tried to reassure the audience by saying that he was “pretty sure that information within Google is now safe from any government’s prying eyes.” A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.

Google, Facebook, Microsoft, and the others are already on the record as supporting legislative changes to rein in NSA surveillance. It would be better if they openly acknowledged their users’ insecurity and increased their pressure on the government to change, rather than trying to fool their users and customers.

This essay previously appeared on TheAtlantic.com.

Posted on March 31, 2014 at 9:18 AM

Computer Network Exploitation vs. Computer Network Attack

Back when we first started getting reports of the Chinese breaking into U.S. computer networks for espionage purposes, we described it in some very strong language. We called the Chinese actions cyberattacks. We sometimes even invoked the word cyberwar, and declared that a cyberattack was an act of war.

When Edward Snowden revealed that the NSA has been doing exactly the same thing as the Chinese to computer networks around the world, we used much more moderate language to describe U.S. actions: words like espionage, or intelligence gathering, or spying. We stressed that it’s a peacetime activity, and that everyone does it.

The reality is somewhere in the middle, and the problem is that our intuitions are based on history.

Electronic espionage is different today than it was in the pre-Internet days of the Cold War. Eavesdropping isn’t passive anymore. It’s not the electronic equivalent of sitting close to someone and overhearing a conversation. It’s not passively monitoring a communications circuit. It’s more likely to involve actively breaking into an adversary’s computer network—be it Chinese, Brazilian, or Belgian—and installing malicious software designed to take over that network.

In other words, it’s hacking. Cyber-espionage is a form of cyber-attack. It’s an offensive action. It violates the sovereignty of another country, and we’re doing it with far too little consideration of its diplomatic and geopolitical costs.

The abbreviation-happy U.S. military has two related terms for what it does in cyberspace. CNE stands for “computer network exploitation.” That’s spying. CNA stands for “computer network attack.” That includes actions designed to destroy or otherwise incapacitate enemy networks. That’s—among other things—sabotage.

CNE and CNA are not solely in the purview of the U.S.; everyone does it. We know that other countries are building their offensive cyberwar capabilities. We have discovered sophisticated surveillance networks from other countries with names like GhostNet, Red October, The Mask. We don’t know who was behind them—these networks are very difficult to trace back to their source—but we suspect China, Russia, and Spain, respectively. We recently learned of a hacking tool called RCS that’s used by 21 governments: Azerbaijan, Colombia, Egypt, Ethiopia, Hungary, Italy, Kazakhstan, Korea, Malaysia, Mexico, Morocco, Nigeria, Oman, Panama, Poland, Saudi Arabia, Sudan, Thailand, Turkey, UAE, and Uzbekistan.

When the Chinese company Huawei tried to sell networking equipment to the U.S., the government considered that equipment a “national security threat,” rightly fearing that those switches were backdoored to allow the Chinese government both to eavesdrop on and to attack U.S. networks. Now we know that the NSA is doing the exact same thing to American-made equipment sold in China, as well as to those very same Huawei switches.

The problem is that, from the point of view of the target of an attack, CNE and CNA look identical until the end result. Today’s surveillance systems involve breaking into computers and installing malware, just as cybercriminals do when they want your money. And just like Stuxnet, the U.S./Israeli cyberweapon that disabled the Natanz nuclear facility in Iran in 2010.

This is what Microsoft’s General Counsel Brad Smith meant when he said: “Indeed, government snooping potentially now constitutes an ‘advanced persistent threat,’ alongside sophisticated malware and cyber attacks.”

When the Chinese penetrate U.S. computer networks, which they do with alarming regularity, we don’t really know what they’re doing. Are they modifying our hardware and software merely to eavesdrop, or are they leaving “logic bombs” that could be triggered to do real damage at some future time? It can be impossible to tell. As a 2011 EU cybersecurity policy document stated (page 7):

…technically speaking, CNA requires CNE to be effective. In other words, what may be preparations for cyberwarfare can well be cyberespionage initially or simply be disguised as such.

We can’t tell the intentions of the Chinese, and they can’t tell ours, either.

Much of the current debate in the U.S. is over what the NSA should be allowed to do, and whether limiting the NSA somehow empowers other governments. That’s the wrong debate. We don’t get to choose between a world where the NSA spies and one where the Chinese spy. Our choice is between a world where our information infrastructure is vulnerable to all attackers or secure for all users.

As long as cyber-espionage equals cyber-attack, we would be much safer if we focused the NSA’s efforts on securing the Internet from these attacks. True, we wouldn’t get the same level of access to information flows around the world. But we would be protecting the world’s information flows—including our own—from both eavesdropping and more damaging attacks. We would be protecting our information flows from governments, nonstate actors, and criminals. We would be making the world safer.

Offensive military operations in cyberspace, be they CNE or CNA, should be the purview of the military. In the U.S., that’s U.S. Cyber Command. Such operations should be recognized as offensive military actions, approved at the highest levels of the executive branch, and subject to the same international law standards that govern acts of war in the offline world.

If we’re going to attack another country’s electronic infrastructure, we should treat it like any other attack on a foreign country. It’s no longer just espionage, it’s a cyber-attack.

This essay previously appeared on TheAtlantic.com.

Posted on March 10, 2014 at 6:46 AM

Surveillance by Algorithm

Increasingly, we are watched not by people but by algorithms. Amazon and Netflix track the books we buy and the movies we stream, and suggest other books and movies based on our habits. Google and Facebook watch what we do and what we say, and show us advertisements based on our behavior. Google even modifies our web search results based on our previous behavior. Smartphone navigation apps watch us as we drive, and update suggested route information based on traffic congestion. And the National Security Agency, of course, monitors our phone calls, emails and locations, then uses that information to try to identify terrorists.

Documents provided by Edward Snowden and revealed by the Guardian today show that the UK spy agency GCHQ, with help from the NSA, has been collecting millions of webcam images from innocent Yahoo users. And that speaks to a key distinction in the age of algorithmic surveillance: is it really okay for a computer to monitor you online, and for that data collection and analysis only to count as a potential privacy invasion when a person sees it? I say it’s not, and the latest Snowden leaks only make clearer how important this distinction is.

The robots-vs-spies divide is especially important as we decide what to do about NSA and GCHQ surveillance. The spy community and the Justice Department have reported back early on President Obama’s request to change how the NSA “collects” your data, but the potential reforms—FBI monitoring, holding on to your phone records and more—still largely depend on what the meaning of “collects” is.

Indeed, ever since Snowden provided reporters with a trove of top secret documents, we’ve been subjected to all sorts of NSA word games. And the word “collect” has a very special definition, according to the Department of Defense (DoD). A 1982 procedures manual (pdf; page 15) says: “information shall be considered as ‘collected’ only when it has been received for use by an employee of a DoD intelligence component in the course of his official duties.” And “data acquired by electronic means is ‘collected’ only when it has been processed into intelligible form.”

Director of National Intelligence James Clapper likened the NSA’s accumulation of data to a library. All those books are stored on the shelves, but very few are actually read. “So the task for us in the interest of preserving security and preserving civil liberties and privacy,” says Clapper, “is to be as precise as we possibly can be when we go in that library and look for the books that we need to open up and actually read.” Only when an individual book is read does it count as “collection,” in government parlance.

So, think of that friend of yours who has thousands of books in his house. According to the NSA, he’s not actually “collecting” books. He’s doing something else with them, and the only books he can claim to have “collected” are the ones he’s actually read.

This is why Clapper claims—to this day—that he didn’t lie in a Senate hearing when he replied “no” to this question: “Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?”

If the NSA collects—I’m using the everyday definition of the word here—all of the contents of everyone’s e-mail, it doesn’t count it as being collected in NSA terms until someone reads it. And if it collects—I’m sorry, but that’s really the correct word—everyone’s phone records or location information and stores it in an enormous database, that doesn’t count as being collected—NSA definition—until someone looks at it. If the agency uses computers to search those emails for keywords, or correlates that location information for relationships between people, it doesn’t count as collection, either. Only when those computers spit out a particular person has the data—in NSA terms—actually been collected.

If the modern spy dictionary has you confused, maybe dogs can help us understand why this legal workaround, by big tech companies and the government alike, is still a serious invasion of privacy.

Back when Gmail was introduced, this was Google’s defense of its context-sensitive advertising, too. Google’s computers examine each individual email and insert an advertisement nearby, related to the contents of your email. But no person at Google reads any Gmail messages; only a computer does. In the words of one Google executive: “Worrying about a computer reading your email is like worrying about your dog seeing you naked.”

But now that we have an example of a spy agency seeing people naked—there are a surprising number of sexually explicit images in the newly revealed Yahoo image collection—we can more viscerally understand the difference.

To wit: when you’re watched by a dog, you know that what you’re doing will go no further than the dog. The dog can’t remember the details of what you’ve done. The dog can’t tell anyone else. When you’re watched by a computer, that’s not true. You might be told that the computer isn’t saving a copy of the video, but you have no assurance that that’s true. You might be told that the computer won’t alert a person if it perceives something of interest, but you can’t know if that’s true. You do know that the computer is making decisions based on what it receives, and you have no way of confirming that no human being will access that decision.

When a computer stores your data, there’s always a risk of exposure. There’s the risk of accidental exposure, when some hacker or criminal breaks in and steals the data. There’s the risk of purposeful exposure, when the organization that has your data uses it in some manner. And there’s the risk that another organization will demand access to the data. The FBI can serve a National Security Letter on Google, demanding details on your email and browsing habits. There isn’t a court order in the world that can get that information out of your dog.

Of course, any time we’re judged by algorithms, there’s the potential for false positives. You are already familiar with this; just think of all the irrelevant advertisements you’ve been shown on the Internet, based on some algorithm misinterpreting your interests. In advertising, that’s okay. It’s annoying, but there’s little actual harm, and you were busy reading your email anyway, right? But that harm increases as the accompanying judgments become more important: our credit ratings depend on algorithms; how we’re treated at airport security does, too. And most alarming of all, drone targeting is partly based on algorithmic surveillance.

The primary difference between a computer and a dog is that the computer interacts with other people in the real world, and the dog does not. If someone could isolate the computer in the same way a dog is isolated, we wouldn’t have any reason to worry about algorithms crawling around in our data. But we can’t. Computer algorithms are intimately tied to people. And when we think of computer algorithms surveilling us or analyzing our personal data, we need to think about the people behind those algorithms. Whether or not anyone actually looks at our data, the very fact that they even could is what makes it surveillance.

This is why Yahoo called GCHQ’s webcam-image collection “a whole new level of violation of our users’ privacy.” This is why we’re not mollified by attempts from the UK equivalent of the NSA to apply facial recognition algorithms to the data, or to limit how many people viewed the sexually explicit images. This is why Google’s eavesdropping is different than a dog’s eavesdropping, and why the NSA’s definition of “collect” makes no sense whatsoever.

This essay previously appeared on theguardian.com.

Posted on March 5, 2014 at 6:13 AM

Choosing Secure Passwords

As insecure as passwords generally are, they’re not going away anytime soon. Every year you have more and more passwords to deal with, and every year they get easier and easier to break. You need a strategy.

The best way to explain how to choose a good password is to explain how they’re broken. The general attack model is what’s known as an offline password-guessing attack. In this scenario, the attacker gets a file of encrypted passwords from somewhere people want to authenticate to. His goal is to turn that encrypted file into unencrypted passwords he can use to authenticate himself. He does this by guessing passwords, and then seeing if they’re correct. He can try guesses as fast as his computer will process them—and he can parallelize the attack—and gets immediate confirmation if he guesses correctly. Yes, there are ways to foil this attack, and that’s why we can still have four-digit PINs on ATM cards, but it’s the correct model for breaking passwords.
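The attack model above can be sketched in a few lines of Python. This is a toy illustration that assumes the leaked file stores unsalted SHA-256 hashes; real systems should use salted, deliberately slow hashes such as bcrypt or scrypt, which is exactly what makes this attack expensive:

```python
import hashlib

# Toy model of a leaked password file: one unsalted SHA-256 hash per
# account. The attacker holds this file and works offline, with no
# server enforcing rate limits or lockouts.
leaked_hash = hashlib.sha256(b"letmein").hexdigest()

def offline_guess(guesses, target_hash):
    """Try candidate passwords as fast as the CPU allows."""
    for guess in guesses:
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess          # immediate confirmation of a hit
    return None

print(offline_guess(["password", "123456", "letmein"], leaked_hash))  # letmein
```

The key point is that nothing limits the guess rate: the attacker can test candidates on his own hardware, in parallel, for as long as he likes.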

There are commercial programs that do password cracking, sold primarily to police departments. There are also hacker tools that do the same thing. And they’re really good.

The success of password cracking depends on two largely independent things: power and efficiency.

Power is simply computing power. As computers have become faster, they’re able to test more passwords per second; one program advertises eight million per second. These crackers might run for days, on many machines simultaneously. For a high-profile police case, they might run for months.

Efficiency is the ability to guess passwords cleverly. It doesn’t make sense to run through every eight-letter combination from “aaaaaaaa” to “zzzzzzzz” in order. That’s 200 billion possible passwords, most of them very unlikely. Password crackers try the most common passwords first.
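The arithmetic behind those two paragraphs is easy to check, using the eight-million-guesses-per-second figure quoted above:

```python
# Sanity-check the keyspace arithmetic for eight lowercase letters.
keyspace = 26 ** 8
print(keyspace)                   # 208827064576, about 200 billion

# At the advertised eight million guesses per second:
rate = 8_000_000
print(keyspace / rate / 3600)     # roughly 7.25 hours to exhaust the space
```

Even naive brute force over the full lowercase eight-letter space takes well under a day on one machine, which is why cleverness matters most against longer, richer passwords.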

A typical password consists of a root plus an appendage. The root isn’t necessarily a dictionary word, but it’s usually something pronounceable. An appendage is either a suffix (90% of the time) or a prefix (10% of the time). One cracking program I saw started with a dictionary of about 1,000 common passwords, things like “letmein,” “temp,” “123456,” and so on. Then it tested them each with about 100 common suffix appendages: “1,” “4u,” “69,” “abc,” “!,” and so on. It recovered about a quarter of all passwords with just these 100,000 combinations.

Crackers use different dictionaries: English words, names, foreign words, phonetic patterns and so on for roots; two digits, dates, single symbols and so on for appendages. They run the dictionaries with various capitalizations and common substitutions: “$” for “s”, “@” for “a,” “1” for “l” and so on. This guessing strategy quickly breaks about two-thirds of all passwords.
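A minimal sketch of this root-plus-appendage strategy follows, with hypothetical miniature word lists standing in for the dictionaries of millions of roots and hundreds of appendages that real crackers use:

```python
from itertools import product

# Hypothetical miniature word lists, purely for illustration.
roots = ["letmein", "temp", "password"]
suffixes = ["", "1", "123", "4u", "!", "69"]
substitutions = [("s", "$"), ("a", "@"), ("l", "1")]

def candidates(roots, suffixes, substitutions):
    """Generate guesses: root + appendage, plus common mutations."""
    for root, suffix in product(roots, suffixes):
        base = root + suffix
        yield base                      # plain root + suffix
        yield base.capitalize()         # common capitalization
        mangled = base                  # leetspeak substitutions
        for plain, leet in substitutions:
            mangled = mangled.replace(plain, leet)
        yield mangled

guesses = set(candidates(roots, suffixes, substitutions))
print("p@$$word1" in guesses)           # True
```

Each new dictionary or mutation rule multiplies the candidate set, which is how a cracker covers the "clever" variations people believe make their passwords unique.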

Modern password crackers combine different words from their dictionaries:

What was remarkable about all three cracking sessions were the types of plains that got revealed. They included passcodes such as “k1araj0hns0n,” “Sh1a-labe0uf,” “Apr!l221973,” “Qbesancon321,” “DG091101%,” “@Yourmom69,” “ilovetofunot,” “windermere2313,” “tmdmmj17,” and “BandGeek2014.” Also included in the list: “all of the lights” (yes, spaces are allowed on many sites), “i hate hackers,” “allineedislove,” “ilovemySister31,” “iloveyousomuch,” “Philippians4:13,” “Philippians4:6-7,” and “qeadzcwrsfxv1331.” “gonefishing1125” was another password Steube saw appear on his computer screen. Seconds after it was cracked, he noted, “You won’t ever find it using brute force.”

This is why the oft-cited XKCD scheme for generating passwords—string together individual words like “correcthorsebatterystaple”—is no longer good advice. The password crackers are on to this trick.

The attacker will feed any personal information he has access to about the password creator into the password crackers. A good password cracker will test names and addresses from the address book, meaningful dates, and any other personal information it has. Postal codes are common appendages. If it can, the guesser will index the target hard drive and create a dictionary that includes every printable string, including deleted files. If you ever saved an e-mail with your password, or kept it in an obscure file somewhere, or if your program ever stored it in memory, this process will grab it. And it will speed the process of recovering your password.

Last year, Ars Technica gave three experts a 16,000-entry encrypted password file, and asked them to break as many as possible. The winner got 90% of them, the loser 62%—in a few hours. It’s the same sort of thing we saw in 2012, 2007, and earlier. If there’s any news here, it’s that this kind of attack is getting easier faster than people think.

Pretty much anything that can be remembered can be cracked.

There’s still one scheme that works. Back in 2008, I described the “Schneier scheme”:

So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like “This little piggy went to market” might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary. Of course, don’t use this one, because I’ve written about it. Choose your own sentence—something personal.

Here are some examples:

  • WIw7,mstmsritt… = When I was seven, my sister threw my stuffed rabbit in the toilet.
  • Wow…doestcst = Wow, does that couch smell terrible.
  • Ltime@go-inag~faaa! = Long time ago in a galaxy not far away at all.
  • uTVM,TPw55:utvm,tpwstillsecure = Until this very moment, these passwords were still secure.

You get the idea. Combine a personally memorable sentence with some personally memorable tricks to modify that sentence into a password to create a lengthy password. Of course, the site has to accept all of those non-alpha-numeric characters and an arbitrarily long password. Otherwise, it’s much harder.
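One purely mechanical version of the sentence-to-password idea can be sketched as follows. Note that any fixed, public rule like this one could itself be added to a cracker’s repertoire; the scheme’s strength comes from your own private tweaks (odd capitalization, kept punctuation, digit swaps), which this sketch only hints at:

```python
# A deliberately simple, mechanical sketch of turning a sentence into a
# password: take each word's initial, and turn a few number-like words
# into digits. Your own scheme should add personal, unpublished tricks.
NUMBERS = {"two": "2", "to": "2", "seven": "7", "four": "4"}

def sentence_to_password(sentence):
    out = []
    for word in sentence.split():
        bare = word.strip(".,!?").lower()
        if bare in NUMBERS:
            out.append(NUMBERS[bare])   # number words become digits
        else:
            out.append(word[0])         # otherwise keep the initial
    return "".join(out)

print(sentence_to_password("This little piggy went to market"))  # Tlpw2m
```

A nine-or-more-character result built this way from a personal sentence will not appear in any dictionary, which is the whole point of the scheme.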

Even better is to use random unmemorable alphanumeric passwords (with symbols, if the site will allow them), and a password manager like Password Safe to create and store them. Password Safe includes a random password generation function. Tell it how many characters you want—twelve is my default—and it’ll give you passwords like y.)v_|.7)7Bl, B3h4_[%}kgv), and QG6,FN4nFAm_. The program supports cut and paste, so you’re not actually typing those characters very much. I’m recommending Password Safe for Windows because I wrote the first version, know the person currently in charge of the code, and trust its security. There are ports of Password Safe to other OSs, but I had nothing to do with those. There are also other password managers out there, if you want to shop around.
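What a password manager’s generator does is conceptually simple: draw characters uniformly at random from a large alphabet using a cryptographically secure random source. Here is an illustrative sketch (not Password Safe’s actual code):

```python
import secrets
import string

# Full printable alphabet: letters, digits, and symbols.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=12):
    """Generate a password using a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = random_password()
print(pw, len(pw))   # e.g. y.)v_|.7)7Bl (12 random characters)
```

The `secrets` module, rather than `random`, matters here: the ordinary pseudorandom generator is predictable and unsuitable for secrets.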

There’s more to passwords than simply choosing a good one:

  1. Never reuse a password you care about. Even if you choose a secure password, the site it’s for could leak it because of its own incompetence. You don’t want someone who gets your password for one application or site to be able to use it for another.
  2. Don’t bother updating your password regularly. Sites that require 90-day—or whatever—password changes do more harm than good. Unless you think your password might be compromised, don’t change it.
  3. Beware the “secret question.” You don’t want a backup system for when you forget your password to be easier to break than your password. Really, it’s smart to use a password manager. Or to write your passwords down on a piece of paper and secure that piece of paper.
  4. If a site offers two-factor authentication, seriously consider using it. It’s almost certainly a security improvement.

This essay previously appeared on BoingBoing.

Posted on March 3, 2014 at 7:48 AM

