Entries Tagged "trust"

Internet Subversion

In addition to turning the Internet into a worldwide surveillance platform, the NSA has surreptitiously weakened the products, protocols, and standards we all use to protect ourselves. By doing so, it has destroyed the trust that underlies the Internet. We need that trust back.

Trust is inherently social. It is personal, relative, situational, and fluid. It is not uniquely human, but it is the underpinning of everything we have accomplished as a species. We trust other people, but we also trust organizations and processes. The psychology is complex, but when we trust a technology, we basically believe that it will work as intended.

This is how we technologists trusted the security of the Internet. We didn’t have any illusions that the Internet was secure, or that governments, criminals, hackers, and others couldn’t break into systems and networks if they were sufficiently skilled and motivated. We didn’t trust that the programmers were perfect, that the code was bug-free, or even that our crypto math was unbreakable. We knew that Internet security was an arms race, and the attackers had most of the advantages.

What we trusted was that the technologies would stand or fall on their own merits.

We now know that trust was misplaced. Through cooperation, bribery, threats, and compulsion, the NSA—and the United Kingdom’s GCHQ—forced companies to weaken the security of their products and services, then lie about it to their customers.

We know of a few examples of this weakening. The NSA convinced Microsoft to make some unknown changes to Skype in order to make eavesdropping on conversations easier. The NSA also inserted a degraded random number generator into a common standard, then worked to get that generator used more widely.
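
The generator in question is Dual_EC_DRBG, standardized by NIST in SP 800-90A. The trapdoor is easiest to see in a simplified analogue: the toy sketch below replaces the real algorithm’s elliptic-curve points with ordinary modular exponentiation and skips output truncation, but keeps the structure that matters: two public constants, P and Q, whose secret relationship lets whoever chose them recover the generator’s internal state from a single output.

```python
# Toy discrete-log analogue of the Dual_EC_DRBG trapdoor. This is a
# deliberately simplified sketch, not the real algorithm: Dual_EC_DRBG
# uses elliptic-curve points and truncates its output, while this toy
# uses modular exponentiation and leaks the full output.

p = 2**127 - 1        # a Mersenne prime; toy-sized parameters, not secure
Q = 5                 # public constant
d = 123456789         # the designer's secret: P = Q^d mod p
P = pow(Q, d, p)      # the other public constant, secretly related to Q

def step(state):
    """One generator step: emit an output, update the internal state."""
    output = pow(Q, state, p)      # what the application sees
    new_state = pow(P, state, p)   # kept internal
    return output, new_state

# An application draws two outputs from the generator...
state = 0xDEADBEEF
out1, state = step(state)
out2, _ = step(state)

# ...and the trapdoor holder, seeing only out1, recovers the state:
# (Q^s)^d = (Q^d)^s = P^s, which is exactly the next internal state.
recovered = pow(out1, d, p)
predicted, _ = step(recovered)
assert predicted == out2           # all future outputs are now predictable
```

Without the secret d, predicting future outputs from a single output means solving a hard discrete-log-style problem; with d, one exponentiation suffices. That asymmetry is what makes this kind of backdoor tempting to its designer.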

I have heard engineers working for the NSA, FBI, and other government agencies delicately talk around the topic of inserting a “backdoor” into security products to allow for government access. One of them told me, “It’s like going on a date. Sex is never explicitly mentioned, but you know it’s on the table.” The NSA’s SIGINT Enabling Project has a $250 million annual budget; presumably it has more to show for itself than the fragments that have become public. Reed Hundt calls for the government to support a secure Internet, but given its history of installing backdoors, why would we trust claims that it has turned the page?

We also have to assume that other countries have been doing the same things. We have long believed that networking products from the Chinese company Huawei have been backdoored by the Chinese government. Do we trust hardware and software from Russia? France? Israel? Anywhere?

This mistrust is poison. Because we don’t know, we can’t trust any of them. Internet governance was largely left to the benign dictatorship of the United States because everyone more or less believed that we were working for the security of the Internet instead of against it. But now that system is in turmoil. Foreign companies are fleeing US suppliers because they don’t trust American firms’ security claims. Far worse, governments are using these revelations to push for a more isolationist Internet, giving them more control over what their citizens see and say.

All so we could eavesdrop better.

There is a term in the NSA: “nobus,” short for “nobody but us.” The NSA believes it can subvert security in such a way that only it can take advantage of that subversion. But that is hubris. There is no way to determine if or when someone else will discover a vulnerability. These subverted systems become part of our infrastructure; the harms to everyone, once the flaws are discovered, far outweigh the benefits to the NSA while they are secret.

We can’t both weaken the enemy’s networks and protect our own. Because we all use the same products, technologies, protocols, and standards, we either allow everyone to spy on everyone, or prevent anyone from spying on anyone. By weakening security, we are weakening it against all attackers. By inserting vulnerabilities, we are making everyone vulnerable. The same vulnerabilities used by intelligence agencies to spy on each other are used by criminals to steal your passwords. It is surveillance versus security, and we all rise and fall together.

Security needs to win. The Internet is too important to the world—and trust is too important to the Internet—to squander it like this. We’ll never get every power in the world to agree not to subvert the parts of the Internet they control, but we can stop subverting the parts we control. Most of the high-tech companies that make the Internet work are US companies, so our influence is disproportionate. And once we stop subverting, we can credibly devote our resources to detecting and preventing subversion by others.

This essay previously appeared in the Boston Review.

Posted on May 12, 2014 at 6:26 AM

Is Google Too Big to Trust?

Interesting essay about how Google’s lack of transparency is hurting our trust in the company:

The reality is that Google’s business is and has always been about mining as much data as possible to be able to present information to users. After all, it can’t display what it doesn’t know. Google Search has always been an ad-supported service, so it needs a way to sell those users to advertisers—that’s how the industry works. Its Google Now voice-based service is simply a form of Google Search, so it too serves advertisers’ needs.

In the digital world, advertisers want to know more than that 100,000 people might be interested in buying a new car. They now want to know who those people are, so they can reach out to them with custom messages that are more likely to be effective. They may not know you personally, but they know your digital persona—basically, you. Google needs to know about you to satisfy its advertisers’ demands.

Once you understand that, you understand why Google does what it does. That’s simply its business. Nothing is free, so if you won’t pay cash, you’ll have to pay with personal information. That business model has been around for decades; Google didn’t invent that business model, but Google did figure out how to make it work globally, pervasively, appealingly, and nearly instantaneously.

I don’t blame Google for doing that, but I blame it for being nontransparent. Putting unmarked sponsored ads in the “regular” search results section is misleading, because people have been trained by Google to see that section of the search results as neutral. It is in fact not. Once you know that, you never quite trust Google search results again. (Yes, Bing’s results are similarly tainted. But Microsoft never promised to do no evil, and most people use Google.)

Posted on April 24, 2014 at 6:45 AM

Smarter People are More Trusting

Interesting research.

Both vocabulary and question comprehension were positively correlated with generalized trust. Those with the highest vocab scores were 34 percent more likely to trust others than those with the lowest scores, and someone who had a good perceived understanding of the survey questions was 11 percent more likely to trust others than someone with a perceived poor understanding. The correlation stayed strong even when researchers controlled for socio-economic class.

This study, too, found a correlation between trust and self-reported health and happiness. The trusting were 6 percent more likely to say they were “very happy,” and 7 percent more likely to report good or excellent health.

Full study results.

Posted on March 27, 2014 at 6:52 AM

Income Inequality as a Security Issue

This is an interesting way of characterizing income inequality as a security issue:

…growing inequality menaces vigorous societies. It is a proxy for how effectively an elite has constructed institutions that extract value from the rest of society. Professor Sam Bowles, also part of the INET network, goes further. He argues that inequality pulls production away from value creation to protecting and securing the wealthy’s assets: one in five of the British workforce, for example, works as “guard labour”—in security, policing, law, surveillance and forms of IT that control and monitor. The higher inequality, the greater the proportion of a workforce deployed as guard workers, who generate little value and lower overall productivity.

This is an expansion of my notion of security as a tax on the honest. From Liars and Outliers:

Francis Fukuyama wrote: “Widespread distrust in society…imposes a kind of tax on all forms of economic activity, a tax that high-trust societies do not have to pay.” It’s a tax on the honest. It’s a tax imposed on ourselves by ourselves, because, human nature being what it is, too many of us would otherwise become hawks and take advantage of the rest of us. And it’s an expensive tax.

The argument here is that the greater the inequality, the greater the tax. And because much of this security tax protects the wealthy from the poor, it’s a regressive tax.

Posted on January 24, 2014 at 6:51 AM

How the NSA Threatens National Security

Secret NSA eavesdropping is still in the news. Details about once secret programs continue to leak. The Director of National Intelligence has recently declassified additional information, and the President’s Review Group has just released its report and recommendations.

With all this going on, it’s easy to become inured to the breadth and depth of the NSA’s activities. But through the disclosures, we’ve learned an enormous amount about the agency’s capabilities, how it is failing to protect us, and what we need to do to regain security in the Information Age.

First and foremost, the surveillance state is robust. It is robust politically, legally, and technically. I can name three different NSA programs to collect Gmail user data. These programs are based on three different technical eavesdropping capabilities. They rely on three different legal authorities. They involve collaborations with three different companies. And this is just Gmail. The same is true for cell phone call records, Internet chats, and cell phone location data.

Second, the NSA continues to lie about its capabilities. It hides behind tortured interpretations of words like “collect,” “incidentally,” “target,” and “directed.” It cloaks programs in multiple code names to obscure their full extent and capabilities. Officials testify that a particular surveillance activity is not done under one particular program or authority, conveniently omitting that it is done under some other program or authority.

Third, US government surveillance is not just about the NSA. The Snowden documents have given us extraordinary details about the NSA’s activities, but we now know that the CIA, NRO, FBI, DEA, and local police all engage in ubiquitous surveillance using the same sorts of eavesdropping tools, and that they regularly share information with each other.

The NSA’s collect-everything mentality is largely a holdover from the Cold War, when a voyeuristic interest in the Soviet Union was the norm. Still, it is unclear how effective targeted surveillance against “enemy” countries really is. Even when we learn actual secrets, as we did regarding Syria’s use of chemical weapons earlier this year, we often can’t do anything with the information.

Ubiquitous surveillance should have died with the fall of Communism, but it got a new—and even more dangerous—life with the intelligence community’s post-9/11 “never again” terrorism mission. This quixotic goal of preventing something from happening forces us to try to know everything that does happen. This pushes the NSA to eavesdrop on online gaming worlds and on every cell phone in the world. But it’s a fool’s errand; there are simply too many ways to communicate.

We have no evidence that any of this surveillance makes us safer. NSA Director General Keith Alexander responded to these stories in June by claiming that he disrupted 54 terrorist plots. In October, he revised that number downward to 13, and then to “one or two.” At this point, the only “plot” prevented was that of a San Diego man sending $8,500 to support a Somali militant group. We have been repeatedly told that these surveillance programs would have been able to stop 9/11, yet the NSA didn’t detect the Boston bombings—even though one of the two terrorists was on the watch list and the other had a sloppy social media trail. Bulk collection of data and metadata is an ineffective counterterrorism tool.

Not only is ubiquitous surveillance ineffective, it is extraordinarily costly. I don’t mean just the budgets, which will continue to skyrocket. Or the diplomatic costs, as country after country learns of our surveillance programs against their citizens. I’m also talking about the cost to our society. It breaks so much of what our society has built. It breaks our political systems, as Congress is unable to provide any meaningful oversight and citizens are kept in the dark about what government does. It breaks our legal systems, as laws are ignored or reinterpreted, and people are unable to challenge government actions in court. It breaks our commercial systems, as US computer products and services are no longer trusted worldwide. It breaks our technical systems, as the very protocols of the Internet become untrusted. And it breaks our social systems; the loss of privacy, freedom, and liberty is much more damaging to our society than the occasional act of random violence.

And finally, these systems are susceptible to abuse. This is not just a hypothetical problem. Recent history illustrates many episodes where this information was, or would have been, abused: Hoover and his FBI spying, McCarthy, Martin Luther King Jr. and the civil rights movement, anti-war Vietnam protesters, and—more recently—the Occupy movement. Outside the US, there are even more extreme examples. Building the surveillance state makes it too easy for people and organizations to slip over the line into abuse.

It’s not just domestic abuse we have to worry about; it’s the rest of the world, too. The more we choose to eavesdrop on the Internet and other communications technologies, the less we are secure from eavesdropping by others. Our choice isn’t between a digital world where the NSA can eavesdrop and one where the NSA is prevented from eavesdropping; it’s between a digital world that is vulnerable to all attackers, and one that is secure for all users.

Fixing this problem is going to be hard. We are long past the point where simple legal interventions can help. The bill in Congress to limit NSA surveillance won’t actually do much to limit NSA surveillance. Maybe the NSA will figure out an interpretation of the law that will allow it to do what it wants anyway. Maybe it’ll do it another way, using another justification. Maybe the FBI will do it and give the NSA a copy. And when asked, it’ll lie about it.

NSA-level surveillance is like the Maginot Line was in the years before World War II: ineffective and wasteful. We need to openly disclose what surveillance we have been doing, and the known insecurities that make it possible. We need to work toward security, even if other countries like China continue to use the Internet as a giant surveillance platform. We need to build a coalition of free-world nations dedicated to a secure global Internet, and we need to continually push back against bad actors—both state and non-state—that work against that goal.

Securing the Internet requires both laws and technology. It requires Internet technology that secures data wherever it is and however it travels. It requires broad laws that put security ahead of both domestic and international surveillance. It requires additional technology to enforce those laws, and a worldwide enforcement regime to deal with bad actors. It’s not easy, and it has all the problems that other international issues have: nuclear, chemical, and biological weapon non-proliferation; small arms trafficking; human trafficking; money laundering; intellectual property. Global information security and anti-surveillance need to join those difficult global problems, so we can start making progress.

The President’s Review Group recommendations are largely positive, but they don’t go nearly far enough. We need to recognize that security is more important than surveillance, and work towards that goal.

This essay previously appeared on TheAtlantic.com.

Posted on January 13, 2014 at 6:28 AM

Joseph Stiglitz on Trust

Joseph Stiglitz has an excellent essay on the value of trust, and the lack of it in today’s society.

Trust is what makes contracts, plans and everyday transactions possible; it facilitates the democratic process, from voting to law creation, and is necessary for social stability. It is essential for our lives. It is trust, more than money, that makes the world go round.

At the end, he discusses the security mechanisms necessary to restore it:

I suspect there is only one way to really get trust back. We need to pass strong regulations, embodying norms of good behavior, and appoint bold regulators to enforce them. We did just that after the roaring ’20s crashed; our efforts since 2007 have been sputtering and incomplete. Firms also need to do better than skirt the edges of regulations. We need higher norms for what constitutes acceptable behavior, like those embodied in the United Nations’ Guiding Principles on Business and Human Rights. But we also need regulations to enforce these norms: a new version of trust but verify. No rules will be strong enough to prevent every abuse, yet good, strong regulations can stop the worst of it.

This, of course, is what my book Liars and Outliers is about.

Posted on December 30, 2013 at 9:55 AM

NSA Spying: Whom Do You Believe?

On Friday, Reuters reported that RSA entered into a secret contract to make Dual_EC_DRBG the default random number generator in the BSAFE toolkit. Dual_EC_DRBG is now known to have been backdoored by the NSA.

Yesterday, RSA denied it:

Recent press coverage has asserted that RSA entered into a “secret contract” with the NSA to incorporate a known flawed random number generator into its BSAFE encryption libraries. We categorically deny this allegation.

[…]

We made the decision to use Dual EC DRBG as the default in BSAFE toolkits in 2004, in the context of an industry-wide effort to develop newer, stronger methods of encryption. At that time, the NSA had a trusted role in the community-wide effort to strengthen, not weaken, encryption.
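
Whatever the truth of the contract, the technical stakes of a default are worth spelling out: most applications never select a random number generator explicitly, so whatever a toolkit ships as its default sits behind every key those applications generate. A minimal sketch of that dynamic, with invented names rather than BSAFE’s actual API:

```python
# Illustrative only: the class and algorithm names here are invented,
# not BSAFE's real API. The point is that a library default propagates
# to every caller that never overrides it.

class Toolkit:
    DEFAULT_DRBG = "Dual_EC_DRBG"   # whatever the vendor ships

    def __init__(self, drbg=None):
        # Typical application code passes nothing, so the shipped
        # default silently decides which generator backs its keys.
        self.drbg = drbg or self.DEFAULT_DRBG

typical_app = Toolkit()                    # the common case
assert typical_app.drbg == "Dual_EC_DRBG"  # inherited, never chosen

careful_app = Toolkit(drbg="HMAC_DRBG")    # only an explicit opt-out escapes
assert careful_app.drbg == "HMAC_DRBG"
```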

We know from both Mark Klein and Edward Snowden—and pretty much everything else about the NSA—that the NSA directly taps the trunk lines of AT&T (and pretty much every other telecom carrier). On Friday, AT&T denied that:

In its statement, AT&T sought to push back against the notion that it provides the government with such access. “We do not allow any government agency to connect directly to our network to gather, review or retrieve our customers’ information,” said Watts.

I’ve written before about how the NSA has corroded our trust in the Internet and communications technologies. The debates over these companies’ statements, and about exactly how they are using and abusing individual words to lie while claiming they are not lying, are a manifestation of that.

Me again:

This sort of thing can destroy our country. Trust is essential in our society. And if we can’t trust either our government or the corporations that have intimate access into so much of our lives, society suffers. Study after study demonstrates the value of living in a high-trust society and the costs of living in a low-trust one.

Rebuilding trust is not easy, as anyone who has betrayed or been betrayed by a friend or lover knows, but the path involves transparency, oversight and accountability. Transparency first involves coming clean. Not a little bit at a time, not only when you have to, but complete disclosure about everything. Then it involves continuing disclosure. No more secret rulings by secret courts about secret laws. No more secret programs whose costs and benefits remain hidden.

Oversight involves meaningful constraints on the NSA, the FBI and others. This will be a combination of things: a court system that acts as a third-party advocate for the rule of law rather than a rubber-stamp organization, a legislature that understands what these organizations are doing and regularly debates requests for increased power, and vibrant public-sector watchdog groups that analyze and debate the government’s actions.

Accountability means that those who break the law, lie to Congress or deceive the American people are held accountable. The NSA has gone rogue, and while it’s probably not possible to prosecute people for what they did under the enormous veil of secrecy it currently enjoys, we need to make it clear that this behavior will not be tolerated in the future. Accountability also means voting, which means voters need to know what our leaders are doing in our name.

This is the only way we can restore trust. A market economy doesn’t work unless consumers can make intelligent buying decisions based on accurate product information. That’s why we have agencies like the FDA, truth-in-packaging laws and prohibitions against false advertising.

We no longer know whom to trust. This is the greatest damage the NSA has done to the Internet, and will be the hardest to fix.

EDITED TO ADD (12/23): The requested removal of an NSA employee from an IETF group co-chairmanship is another manifestation of this mistrust.

Posted on December 23, 2013 at 6:26 AM

World War II Anecdote about Trust and Security

This is an interesting story from World War II about trust:

Jones notes that the Germans doubted their system because they knew the British could radio false orders to the German bombers with no trouble. As Jones recalls, “In fact we did not do this, but it seemed such an easy countermeasure that the German crews thought that we might, and they therefore began to be suspicious about the instructions that they received.”

The implications of this are perhaps obvious but worth stating nonetheless: a lack of trust can exist even if an adversary fails to exploit a weakness in the system. More importantly, this doubt can become a shadow adversary. According to Jones, “…it was not long before the crews found substance to their theory [that is, their doubt].” In support of this, he offers the anecdote of a German pilot who, returning to base after wandering off course, grumbled that “the British had given him a false order.”

I think about this all the time with respect to our IT systems and the NSA. Even though we don’t know which companies the NSA has compromised—or by what means—knowing that they could have compromised any of them is enough to make us mistrustful of all of them. This is going to make it hard for large companies like Google and Microsoft to get back the trust they lost. Even if they succeed in limiting government surveillance. Even if they succeed in improving their own internal security. The best they’ll be able to say is: “We have secured ourselves from the NSA, except for the parts that we either don’t know about or can’t talk about.”

Posted on December 13, 2013 at 11:20 AM

Can I Be Trusted?

Slashdot asks the question:

I’m a big fan of Bruce Schneier, but just to play devil’s advocate, let’s say, hypothetically, that Schneier is actually in cahoots with the NSA. Who better to reinstate public trust in weakened cryptosystems? As an exercise in security that Schneier himself may find interesting, what methods are available for proving (or at least affirming) that we can trust Bruce Schneier?

So far, I haven’t seen any good reasons why I might be untrustworthy. I’d help, but that seems unfair.

Posted on October 22, 2013 at 11:32 AM

Will Keccak = SHA-3?

Last year, NIST selected Keccak as the winner of the SHA-3 hash function competition. Yes, I would rather my own Skein had won, but it was a good choice.

But last August, John Kelsey announced some changes to Keccak in a talk (slides 44-48 are relevant). Basically, the security levels were reduced and some internal changes to the algorithm were made, all in the name of software performance.

Normally, this wouldn’t be a big deal. But in light of the Snowden documents that reveal that the NSA has attempted to intentionally weaken cryptographic standards, this is a huge deal. There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.

At this point, they simply have to standardize on Keccak as submitted and as selected.

CDT has a great post about this.

Also this Slashdot thread.

EDITED TO ADD (10/5): It’s worth reading the response from the Keccak team on this issue.

I misspoke when I wrote that NIST made “internal changes” to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function’s capacity in the name of performance. One of Keccak’s nice features is that it’s highly tunable.
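
For context, the trade-off can be sketched in sponge terms. Keccak-f[1600] has a 1600-bit state split into a rate r and a capacity c with r + c = 1600; each call to the permutation absorbs or squeezes r bits, so a larger rate means faster hashing, while the generic security of the sponge construction is c/2 bits. The figures below are the textbook sponge relations (including the SHA-3 submission’s c = 2n choices), a sketch rather than a statement of NIST’s exact proposal:

```python
# Generic sponge trade-off in Keccak-f[1600]: throughput scales with the
# rate r = 1600 - c, while generic sponge security is c/2 bits. The
# capacities listed include the SHA-3 submission's c = 2n choices plus a
# reduced c = 256; illustrative relations, not NIST's exact proposal.

STATE_BITS = 1600  # width of the Keccak-f[1600] permutation state

for capacity in (1024, 768, 512, 448, 256):
    rate = STATE_BITS - capacity
    print(f"c = {capacity:4d} bits -> r = {rate:4d} bits per call, "
          f"~{capacity // 2}-bit generic security")
```

Halving the capacity from 512 to 256 bits, for example, raises the rate from 1088 to 1344 bits per permutation call, roughly a 24 percent throughput gain, while the generic security bound drops from 256 to 128 bits.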

I do not believe that the NIST changes were suggested by the NSA. Nor do I believe that the changes make the algorithm easier to break by the NSA. I believe NIST made the changes in good faith, and the result is a better security/performance trade-off. My problem with the changes isn’t cryptographic, it’s perceptual. There is so little trust in the NSA right now, and that mistrust is reflecting on NIST. I worry that the changed algorithm won’t be accepted by an understandably skeptical security community, and that no one will use SHA-3 as a result.

This is a lousy outcome. NIST has done a great job with cryptographic competitions: both a decade ago with AES and now with SHA-3. This is just another effect of the NSA’s actions draining the trust out of the Internet.

Posted on October 1, 2013 at 10:50 AM
