September 15, 2013
by Bruce Schneier
BT Security Futurologist
schneier@schneier.com
http://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1309.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Take Back the Internet
- More on the NSA Commandeering the Internet
- Detaining David Miranda
- Government Secrecy and the Generation Gap
- Conspiracy Theories and the NSA
- The NSA’s Cryptographic Capabilities
- How to Remain Secure Against the NSA
- Protecting Against Leakers
- NSA/Snowden News
- Our Newfound Fear of Risk
- Human-Machine Trust Failures
- Excess Automobile Deaths as a Result of 9/11
- News
- iPhone Fingerprint Authentication
- Hacking Consumer Devices
- Syrian Electronic Army Cyberattacks
- Schneier News
- The Cryptopocalypse
- Measuring Entropy and its Applications to Encryption
Take Back the Internet
Government and industry have betrayed the Internet, and us.
By subverting the Internet at every level to make it a vast, multi-layered and robust surveillance platform, the NSA has undermined a fundamental social contract. The companies that build and manage our Internet infrastructure, the companies that create and sell us our hardware and software, or the companies that host our data: we can no longer trust them to be ethical Internet stewards.
This is not the Internet the world needs, or the Internet its creators envisioned. We need to take it back.
And by we, I mean the engineering community.
Yes, this is primarily a political problem, a policy matter that requires political intervention.
But this is also an engineering problem, and there are several things engineers can—and should—do.
One, we should expose. If you do not have a security clearance, and if you have not received a National Security Letter, you are not bound by any federal confidentiality requirement or gag order. If you have been contacted by the NSA to subvert a product or protocol, you need to come forward with your story. Your obligations to your employer don’t cover illegal or unethical activity. If you work with classified data and are truly brave, expose what you know. We need whistleblowers.
We need to know exactly how the NSA and other agencies are subverting routers, switches, the Internet backbone, encryption technologies, and cloud systems. I already have five stories from people like you, and I’ve just started collecting. I want 50. There’s safety in numbers, and this form of civil disobedience is the moral thing to do.
Two, we can design. We need to figure out how to re-engineer the Internet to prevent this kind of wholesale spying. We need new techniques to prevent communications intermediaries from leaking private information.
We can make surveillance expensive again. In particular, we need open protocols, open implementations, open systems—these will be harder for the NSA to subvert.
The Internet Engineering Task Force, the group that defines the standards that make the Internet run, has a meeting planned for early November in Vancouver. This group needs to dedicate its next meeting to this task. This is an emergency, and demands an emergency response.
Three, we can influence governance. I have resisted saying this up to now, and I am saddened to say it, but the US has proved to be an unethical steward of the Internet. The UK is no better. The NSA’s actions are legitimizing the Internet abuses by China, Russia, Iran and others. We need to figure out new means of Internet governance, ones that make it harder for powerful tech countries to monitor everything. For example, we need to demand transparency, oversight, and accountability from our governments and corporations.
Unfortunately, this is going to play directly into the hands of totalitarian governments that want to control their country’s Internet for even more extreme forms of surveillance. We need to figure out how to prevent that, too. We need to avoid the mistakes of the International Telecommunications Union, which has become a forum to legitimize bad government behavior, and create truly international governance that can’t be dominated or abused by any one country.
Generations from now, when people look back on these early decades of the Internet, I hope they will not be disappointed in us. We can ensure that they won’t be, but only if each of us makes this a priority and engages in the debate. We have a moral duty to do this, and we have no time to lose.
Dismantling the surveillance state won’t be easy. Has any country that engaged in mass surveillance of its own citizens voluntarily given up that capability? Has any mass surveillance country avoided becoming totalitarian? Whatever happens, we’re going to be breaking new ground.
Again, the politics of this is a bigger task than the engineering, but the engineering is critical. We need to demand that real technologists be involved in any key government decision making on these issues. We’ve had enough of lawyers and politicians not fully understanding technology; we need technologists at the table when we build tech policy.
To the engineers, I say this: we built the Internet, and some of us have helped to subvert it. Now, those of us who love liberty have to fix it.
This essay originally appeared in the “Guardian.”
http://www.theguardian.com/commentisfree/2013/sep/…
The need for whistleblowers:
https://www.schneier.com/essay-429.html
The need for transparency, oversight, and accountability:
https://www.schneier.com/essay-435.html
Snowden’s statement on the morality of his actions:
http://wikileaks.org/…
This is presented as disagreeing with what I’ve written, but I agree with it.
http://continuations.com/post/60444129080/…
A rebuttal to this essay:
http://americanscience.blogspot.com/2013/09/…
More on the NSA Commandeering the Internet
If there’s any confirmation that the US government has commandeered the Internet for worldwide surveillance, it is what happened with Lavabit earlier this month.
Lavabit is—well, was—an e-mail service that offered more privacy than the typical large-Internet-corporation services that most of us use. It was a small company, owned and operated by Ladar Levison, and it was popular among the tech-savvy. NSA whistleblower Edward Snowden was among its half-million users.
Last month, Levison reportedly received an order—probably a National Security Letter—to allow the NSA to eavesdrop on everyone’s e-mail accounts on Lavabit. Rather than “become complicit in crimes against the American people,” he turned the service off. Note that we don’t know for sure that he received an NSL—that’s the order authorized by the Patriot Act that doesn’t require a judge’s signature and prohibits the recipient from talking about it—or what it covered, but Levison has said that he had complied with requests for individual e-mail access in the past, but this was very different.
So far, we just have an extreme moral act in the face of government pressure. It’s what happened next that is the most chilling. The government threatened him with arrest, arguing that shutting down this e-mail service was a violation of the order.
There it is. If you run a business, and the FBI or NSA want to turn it into a mass surveillance tool, they believe they can do so, solely on their own initiative. They can force you to modify your system. They can do it all in secret and then force your business to keep that secret. Once they do that, you no longer control that part of your business. You can’t shut it down. You can’t terminate part of your service. In a very real sense, it is not your business anymore. It is an arm of the vast US surveillance apparatus, and if your interest conflicts with theirs then they win. Your business has been commandeered.
For most Internet companies, this isn’t a problem. They are already engaging in massive surveillance of their customers and users—collecting and using this data is the primary business model of the Internet—so it’s easy to comply with government demands and give the NSA complete access to everything. This is what we learned from Edward Snowden. Through programs like PRISM, BLARNEY and OAKSTAR, the NSA obtained bulk access to services like Gmail and Facebook, and to Internet backbone connections throughout the US and the rest of the world. But if it were a problem for those companies, presumably the government would not allow them to shut down.
To be fair, we don’t know if the government can actually convict someone of closing a business. It might just be part of their coercion tactics. Intimidation and retaliation are part of how the NSA does business.
Former Qwest CEO Joseph Nacchio has a story of what happens to a large company that refuses to cooperate. In February 2001—before the 9/11 terrorist attacks—the NSA approached the four major US telecoms and asked for their cooperation in a secret data collection program, the one we now know to be the bulk metadata collection program exposed by Edward Snowden. Qwest was the only telecom to refuse, leaving the NSA with a hole in its spying efforts. The NSA retaliated by canceling a series of big government contracts with Qwest. The company has since been purchased by CenturyLink, which we presume is more cooperative with NSA demands.
That was before the Patriot Act and National Security Letters. Now, presumably, Nacchio would just comply. Protection rackets are easier when you have the law backing you up.
As the Snowden whistleblowing documents continue to be made public, we’re getting further glimpses into the surveillance state that has been secretly growing around us. The collusion of corporate and government surveillance interests is a big part of this, but so is the government’s resorting to intimidation. Every Lavabit-like service that shuts down—and there have been several—gives us consumers less choice, and pushes us into the large services that cooperate with the NSA. It’s past time we demanded that Congress repeal National Security Letters, give us privacy rights in this new information age, and force meaningful oversight on this rogue agency.
This essay previously appeared in “USA Today.”
http://www.usatoday.com/story/opinion/2013/08/27/…
Blog entry URL:
https://www.schneier.com/blog/archives/2013/08/…
The NSA Commandeering the Internet:
https://www.schneier.com/essay-438.html
Lavabit story:
http://www.usatoday.com/story/money/columnist/…
http://www.theguardian.com/technology/2013/aug/08/…
http://www.usatoday.com/story/money/columnist/…
http://www.slashgear.com/…
http://boingboing.net/2013/08/08/…
http://lavabit.com/
http://m.theatlantic.com/politics/archive/2013/08/…
http://rt.com/usa/…
http://investigations.nbcnews.com/_news/2013/08/13/…
http://arstechnica.com/tech-policy/2013/08/…
Patriot Act:
https://www.eff.org/issues/patriot-act
Government pressures Internet companies:
http://news.cnet.com/8301-13578_3-57595202-38/…
Joseph Nacchio’s story:
http://www.businessinsider.com/…
http://usatoday30.usatoday.com/news/washington/…
http://www.businessinsider.com/…
http://www.businessinsider.com/…
http://www.marketwatch.com/story/…
The surveillance state:
https://www.schneier.com/essay-418.html
https://www.schneier.com/essay-436.html
Other shut downs in the face of the NSA:
http://www.usatoday.com/story/money/columnist/…
Detaining David Miranda
On August 18, David Miranda was detained while changing planes at London Heathrow Airport by British authorities for nine hours under a controversial British law—the maximum time allowable without making an arrest. Much has been made of the fact that he’s the partner of Glenn Greenwald, the “Guardian” reporter whom Edward Snowden trusted with many of his NSA documents and the most prolific reporter of the surveillance abuses disclosed in those documents. There’s less discussion of what I feel was the real reason for Miranda’s detention. He was ferrying documents between Greenwald and Laura Poitras, a filmmaker and his co-reporter on Snowden and his information. These documents were on several USB memory sticks he had with him. He had already carried documents from Greenwald in Rio de Janeiro to Poitras in Berlin, and was on his way back with different documents when he was detained.
The memory sticks were encrypted, of course, and Miranda did not know the key. This didn’t stop the British authorities from repeatedly asking for the key, and from confiscating the memory sticks along with his other electronics.
The incident prompted a major outcry in the UK. The UK’s Terrorism Act has always been controversial, and this clear misuse—it was intended to give authorities the right to detain and question suspected terrorists—is prompting new calls for its review. Certainly the UK police will be more reluctant to misuse the law again in this manner.
I have to admit this story has me puzzled. Why would the British do something like this? What did they hope to gain, and why did they think it worth the cost? And—of course—were the British acting on their own under the Official Secrets Act, or were they acting on behalf of the United States? (My initial assumption was that they were acting on behalf of the US, but after the bizarre story of the British GCHQ demanding the destruction of “Guardian” computers last month, I’m not sure anymore.)
We do know the British were waiting for Miranda. It’s reasonable to assume they knew his itinerary, and had good reason to suspect that he was ferrying documents back and forth between Greenwald and Poitras. These documents could be source documents provided by Snowden, new documents that the two were working on either separately or together, or both. That being said, it’s inconceivable that the memory sticks would contain the only copies of these documents. Poitras retained copies of everything she gave Miranda. So the British authorities couldn’t possibly destroy the documents; the best they could hope for is that they would be able to read them.
Is it truly possible that the NSA doesn’t already know what Snowden has? They claim they don’t, but after Snowden’s name became public, the NSA would have conducted the mother of all audits. It would try to figure out what computer systems Snowden had access to, and therefore what documents he could have accessed. Hopefully, the audit information would give more detail, such as which documents he downloaded. I have a hard time believing that its internal auditing systems would be so bad that it wouldn’t be able to discover this.
So if the NSA knows what Snowden has, or what he could have, then the most it could learn from the USB sticks is what Greenwald and Poitras are currently working on, or thinking about working on. But presumably the things the two of them are working on are the things they’re going to publish next. Did the intelligence agencies really do all this simply for a few weeks’ heads-up on what was coming? Given how ham-handedly the NSA has handled PR as each document was exposed, it seems implausible that it wanted advance knowledge so it could work on a response. It’s been two months since the first Snowden revelation, and it still doesn’t have a decent PR story.
Furthermore, the UK authorities must have known that the data would be encrypted. Greenwald might have been a crypto newbie at the start of the Snowden affair, but Poitras is known to be good at security. The two have been communicating securely by e-mail when they do communicate. Maybe the UK authorities thought there was a good chance that one of them would make a security mistake, or that Miranda would be carrying paper documents.
Another possibility is that this was just intimidation. If so, it’s misguided. Anyone who regularly reads Greenwald could have told them that he would not have been intimidated—and, in fact, he expressed the exact opposite sentiment—and anyone who follows Poitras knows that she is even more strident in her views. Going after the loved ones of state enemies is a typically thuggish tactic, but it’s not a very good one in this case. The Snowden documents will get released. There’s no way to put this cat back in the bag, not even by killing the principal players.
It could possibly have been intended to intimidate others who are helping Greenwald and Poitras, or the “Guardian” and its advertisers. This will have some effect. Lavabit, Silent Circle, and now Groklaw have all been successfully intimidated. Certainly others have as well. But public opinion is shifting against the intelligence community. I don’t think it will intimidate future whistleblowers. If the treatment of Chelsea Manning didn’t discourage them, nothing will.
This leaves one last possible explanation—those in power were angry and impulsively acted on that anger. They’re lashing out: sending a message and demonstrating that they’re not to be messed with—that the normal rules of polite conduct don’t apply to people who screw with them. That’s probably the scariest explanation of all. Both the US and UK intelligence apparatuses have enormous money and power, and they have already demonstrated that they are willing to ignore their own laws. Once they start wielding that power unthinkingly, it could get really bad for everyone.
And it’s not going to be good for them, either. They seem to want Snowden so badly that they’ll burn the world down to get him. But every time they act impulsively and aggressively—convincing the governments of Portugal and France to block the plane carrying the Bolivian president because they thought Snowden was on it is another example—they lose a small amount of moral authority around the world, and some ability to act in the same way again. The more pressure Snowden feels, the more likely he is to give up on releasing the documents slowly and responsibly, and publish all of them at once—the same way that WikiLeaks published the US State Department cables.
Just this week, the “Wall Street Journal” reported on some new NSA secret programs that are spying on Americans. It got the information from “interviews with current and former intelligence and government officials and people from companies that help build or operate the systems, or provide data,” not from Snowden. This is only the beginning. The media will not be intimidated. I will not be intimidated. But it scares me that the NSA is so blind that it doesn’t see it.
This essay previously appeared on TheAtlantic.com.
http://www.theatlantic.com/international/archive/…
I’ve been thinking about it, and there’s a good chance that the NSA doesn’t know what Snowden has. He was a sysadmin. He had access. Most of the audits and controls protect against normal users; someone with root access is going to be able to bypass a lot of them. And he had the technical chops to cover his tracks when he couldn’t just evade the auditing systems.
And, to be clear, I didn’t mean to say that intimidation wasn’t the government’s motive. I believe it was, and that it was poorly thought out intimidation: lashing out in anger, rather than from some Machiavellian strategy. If they wanted Miranda’s electronics, they could have confiscated them and sent him on his way in fifteen minutes. Holding him for nine hours—the absolute maximum they could under the current law—was intimidation.
I am reminded of the phone call the “Guardian” received from the British government. The exact quote reported was: “You’ve had your fun. Now we want the stuff back.” That’s something you would tell your child. And that’s the power dynamic that’s going on here.
Miranda’s detainment:
http://www.theguardian.com/world/2013/aug/18/…
http://www.theguardian.com/world/2013/aug/19/…
The reaction from the UK:
http://cpj.org/2013/08/…
http://www.bbc.co.uk/news/world-latin-america-23750289
http://www.theguardian.com/politics//2013/aug/…
http://cpj.org/2013/08/…
Other editors react:
http://www.theguardian.com/theobserver/2013/aug/24/…
http://www.theguardian.com/world/2013/aug/24/…
Did the US direct the operation?:
http://www.bbc.co.uk/news/uk-23769324
The story of GCHQ destroying a “Guardian” computer:
http://www.wired.com/threatlevel/2013/08/…
Claim that the NSA doesn’t know what Snowden has:
http://investigations.nbcnews.com/_news/2013/08/20/…
Commentary on Greenwald’s and Poitras’s operational security:
http://www.nytimes.com/2013/08/18/magazine/…
Greenwald’s reaction:
http://www.theguardian.com/commentisfree/2013/aug/…
Why detaining Miranda is scary:
http://www.theguardian.com/commentisfree/2013/aug/…
Lavabit, Silent Circle, and Groklaw stories:
https://www.schneier.com/blog/archives/2013/08/…
http://silentcircle.wordpress.com/2013/08/09/…
http://www.groklaw.net/article.php?…
Sending a message:
http://www.theguardian.com/commentisfree/2013/aug/…
Blocking the Bolivian presidential plane:
http://edition.cnn.com/2013/07/02/world/americas/…
New Wall Street Journal reporting:
http://online.wsj.com/article/…
A similar view:
http://barryeisler.blogspot.com/2013/08/…
The Guardian story:
http://www.theguardian.com/commentisfree/2013/aug/…
Rosen’s article:
http://pressthink.org/2013/08/…
Government Secrecy and the Generation Gap
Big-government secrets require a lot of secret-keepers. As of October 2012, almost 5 million people in the US have security clearances, with 1.4 million cleared at the top-secret level or higher, according to the Office of the Director of National Intelligence.
Most of these people do not have access to as much information as Edward Snowden, the former National Security Agency contractor turned leaker, or even Chelsea Manning, the former US army soldier previously known as Bradley who was convicted for giving material to WikiLeaks. But a lot of them do—and that may prove the Achilles heel of government. Keeping secrets is an act of loyalty as much as anything else, and that sort of loyalty is becoming harder to find in the younger generations. If the NSA and other intelligence bodies are going to survive in their present form, they are going to have to figure out how to reduce the number of secrets.
As the writer Charles Stross has explained, the old way of keeping intelligence secrets was to make it part of a life-long culture. The intelligence world would recruit people early in their careers and give them jobs for life. It was a private club, one filled with code words and secret knowledge.
You can see part of this in Mr Snowden’s leaked documents. The NSA has its own lingo—the documents are riddled with codenames—its own conferences, its own awards and recognitions. An intelligence career meant that you had access to a new world, one to which “normal” people on the outside were completely oblivious. Membership of the private club meant people were loyal to their organisations, which were in turn loyal back to them.
Those days are gone. Yes, there are still the codenames and the secret knowledge, but a lot of the loyalty is gone. Many jobs in intelligence are now outsourced, and there is no job-for-life culture in the corporate world any more. Workforces are flexible, jobs are interchangeable and people are expendable.
Sure, it is possible to build a career in the classified world of government contracting, but there are no guarantees. Younger people grew up knowing this: there are no employment guarantees anywhere. They see it in their friends. They see it all around them.
Many will also believe in openness, especially the hacker types the NSA needs to recruit. They believe that information wants to be free, and that security comes from public knowledge and debate. Yes, there are important reasons why some intelligence secrets need to be secret, and the NSA culture reinforces secrecy daily. But this is a crowd that is used to radical openness. They have been writing about themselves on the Internet for years. They have said very personal things on Twitter; they have had embarrassing photographs of themselves posted on Facebook. They have been dumped by a lover in public. They have overshared in the most compromising ways—and they have got through it. It is a tougher sell convincing this crowd that government secrecy trumps the public’s right to know.
Psychologically, it is hard to be a whistleblower. There is an enormous amount of pressure to be loyal to our peer group: to conform to their beliefs, and not to let them down. Loyalty is a natural human trait; it is one of the social mechanisms we use to thrive in our complex social world. This is why good people sometimes do bad things at work.
When someone becomes a whistleblower, he or she is deliberately eschewing that loyalty. In essence, they are deciding that allegiance to society at large trumps that to peers at work. That is the difficult part. They know their work buddies by name, but “society at large” is amorphous and anonymous. Believing that your bosses ultimately do not care about you makes that switch easier.
Whistleblowing is the civil disobedience of the information age. It is a way that someone without power can make a difference. And in the information age—the fact that everything is stored on computers and potentially accessible with a few keystrokes and mouse clicks—whistleblowing is easier than ever.
Mr Snowden is 30 years old; Manning 25. They are members of the generation we taught not to expect anything long-term from their employers. As such, employers should not expect anything long-term from them. It is still hard to be a whistleblower, but for this generation it is a whole lot easier.
A lot has been written about the problem of over-classification in US government. It has long been thought of as anti-democratic and a barrier to government oversight. Now we know that it is also a security risk. Organizations such as the NSA need to change their culture of secrecy, and concentrate their security efforts on what truly needs to remain secret. Their default practice of classifying everything is not going to work any more.
Hey, NSA, you’ve got a problem.
This essay previously appeared in the Financial Times.
http://www.ft.com/cms/s/0/…
Security clearances:
http://www.dni.gov/files/documents/…
Charles Stross’s essay:
http://www.antipope.org/charlie/-static/2013/08/…
Good people doing bad things:
http://papers.ssrn.com/sol3/papers.cfm?…
Whistleblowing as civil disobedience:
http://www.zephoria.org/thoughts/archives/2013/07/…
Blog comments on this essay are particularly interesting.
https://www.schneier.com/blog/archives/2013/09/…
Conspiracy Theories and the NSA
I’ve recently seen two articles speculating on the NSA’s capability, and practice, of spying on members of Congress and other elected officials. The evidence is all circumstantial and smacks of conspiracy thinking—and I have no idea whether any of it is true or not—but it’s a good illustration of what happens when trust in a public institution fails.
The NSA has repeatedly lied about the extent of its spying program. James R. Clapper, the director of national intelligence, has lied about it to Congress. Top-secret documents provided by Edward Snowden, and reported on by the “Guardian” and other newspapers, repeatedly show that the NSA’s surveillance systems are monitoring the communications of American citizens. The DEA has used this information to apprehend drug smugglers, then lied about it in court. The IRS has used this information to find tax cheats, then lied about it. It’s even been used to arrest a copyright violator. It seems that every time there is an allegation against the NSA, no matter how outlandish, it turns out to be true.
“Guardian” reporter Glenn Greenwald has been playing this well, dribbling the information out one scandal at a time. It’s looking more and more as if the NSA doesn’t know what Snowden took. It’s hard for someone to lie convincingly if he doesn’t know what the opposition actually knows.
All of this denying and lying results in us not trusting anything the NSA says, anything the president says about the NSA, or anything companies say about their involvement with the NSA. We know secrecy corrupts, and we see that corruption. There’s simply no credibility, and—the real problem—no way for us to verify anything these people might say.
It’s a perfect environment for conspiracy theories to take root: no trust, assuming the worst, no way to verify the facts. Think JFK assassination theories. Think 9/11 conspiracies. Think UFOs. For all we know, the NSA *might* be spying on elected officials. Edward Snowden said that he had the ability to spy on anyone in the US, in real time, from his desk. His remarks were belittled, but it turns out he was right.
This is not going to improve anytime soon. Greenwald and other reporters are still poring over Snowden’s documents, and will continue to report stories about NSA overreach, lawbreaking, abuses, and privacy violations well into next year. The “independent” review that Obama promised of these surveillance programs will not help, because it will lack both the power to discover everything the NSA is doing and the ability to relay that information to the public.
It’s time to start cleaning up this mess. We need a special prosecutor, one not tied to the military, the corporations complicit in these programs, or the current political leadership, whether Democrat or Republican. This prosecutor needs free rein to go through the NSA’s files and discover the full extent of what the agency is doing, as well as enough technical staff who have the capability to understand it. He needs the power to subpoena government officials and take their sworn testimony. He needs the ability to bring criminal indictments where appropriate. And, of course, he needs the requisite security clearance to see it all.
We also need something like South Africa’s Truth and Reconciliation Commission, where both government and corporate employees can come forward and tell their stories about NSA eavesdropping without fear of reprisal.
Yes, this will overturn the paradigm of keeping everything the NSA does secret, but Snowden and the reporters he’s shared documents with have already done that. The secrets are going to come out, and the journalists doing the outing are not going to be sympathetic to the NSA. If the agency were smart, it’d realize that the best thing it could do would be to get ahead of the leaks.
The result needs to be a public report about the NSA’s abuses, detailed enough that public watchdog groups can be convinced that everything is known. Only then can our country go about cleaning up the mess: shutting down programs, reforming the Foreign Intelligence Surveillance Act system, and reforming surveillance law to make it absolutely clear that even the NSA cannot eavesdrop on Americans without a warrant.
Comparisons are springing up between today’s NSA and the FBI of the 1950s and 1960s, and between NSA Director Keith Alexander and J. Edgar Hoover. We never managed to rein in Hoover’s FBI—it took his death for change to occur. I don’t think we’ll get so lucky with the NSA. While Alexander has enormous personal power, much of his power comes from the institution he leads. When he is replaced, that institution will remain.
Trust is essential for society to function. Without it, conspiracy theories naturally take hold. Even worse, without it we fail as a country and as a culture. It’s time to reinstitute the ideals of democracy: The government works for the people, open government is the best way to protect against government abuse, and a government keeping secrets from its people is a rare exception, not the norm.
This essay originally appeared on TheAtlantic.com.
http://www.theatlantic.com/politics/archive/2013/09/…
Speculations that the NSA is spying on Congress:
http://news.firedoglake.com/2013/08/28/…
http://www.nsfwcorp.com/scribble/5695/…
When trust fails:
https://www.schneier.com/essay-435.html
The Director of National Intelligence lying:
http://www.eff.org/deeplinks/2013/06/…
NSA data used for other purposes:
http://www.reuters.com/article/2013/08/05/…
http://www.itnews.com.au/News/…
The NSA doesn’t know what Snowden has:
http://www.cbsnews.com/8301-201_162-57600000/…
Obama’s deceptions:
http://www.washingtonpost.com/s/the-switch/wp/…
http://reason.com//2013/08/27/…
Companies lying:
https://www.eff.org/nsa-spying/wordgames
How Snowden could have spied on anyone from his desk:
http://www.theguardian.com/world/2013/jul/31/…
Comparing Alexander with J. Edgar Hoover:
http://www.forbes.com/sites/jennifergranick/2013/08/…
Trust is essential:
http://www.schneier.com/essay-412.html
The NSA’s Cryptographic Capabilities
The latest Snowden document is the US intelligence “black budget.” There’s a lot of information in the few pages the “Washington Post” decided to publish, including an introduction by Director of National Intelligence James Clapper. In it, he drops a tantalizing hint: “Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit Internet traffic.”
Honestly, I’m skeptical. Whatever the NSA has up its top-secret sleeves, the mathematics of cryptography will still be the most secure part of any encryption system. I worry a lot more about poorly designed cryptographic products, software bugs, bad passwords, companies that collaborate with the NSA to leak all or part of the keys, and insecure computers and networks. Those are where the real vulnerabilities are, and where the NSA spends the bulk of its efforts.
This isn’t the first time we’ve heard this rumor. In a WIRED article last year, longtime NSA-watcher James Bamford wrote:
According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US.
We have no further information from Clapper, Snowden, or this other source of Bamford’s. But we can speculate.
Perhaps the NSA has some new mathematics that breaks one or more of the popular encryption algorithms: AES, Twofish, Serpent, or even triple-DES. It wouldn’t be the first time this happened. Back in the 1970s, the NSA knew of a cryptanalytic technique called “differential cryptanalysis” that was unknown in the academic world. That technique broke a variety of other academic and commercial algorithms that we all thought secure. We learned better in the early 1990s, and now design algorithms to be resistant to that technique.
It’s very probable that the NSA has newer techniques that remain undiscovered in academia. Even so, such techniques are unlikely to result in a practical attack that can break actual encrypted plaintext.
The naive way to break an encryption algorithm is to brute-force the key. The complexity of that attack is 2**n, where n is the key length. All cryptanalytic attacks can be viewed as shortcuts to that method. And since the efficacy of a brute-force attack is a direct function of key length, these attacks effectively shorten the key. So if, for example, the best attack against DES has a complexity of 2**39, that effectively shortens DES’s 56-bit key by 17 bits.
That’s a really good attack, by the way.
Right now the upper practical limit on brute force is somewhere under 80 bits. However, using that as a guide gives us some indication as to how good an attack has to be to break any of the modern algorithms. These days, encryption algorithms have, at a minimum, 128-bit keys. That means any NSA cryptanalytic breakthrough has to reduce the effective key length by at least 48 bits in order to be practical.
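To make the arithmetic concrete, here’s a minimal sketch in Python (illustrative numbers only; the DES figures are the ones above, everything else is my own framing):

    # Illustrative arithmetic only: how a cryptanalytic shortcut translates
    # into an effective key length, measured against the roughly 80-bit
    # practical brute-force limit discussed above.

    PRACTICAL_BRUTE_FORCE_LIMIT_BITS = 80

    def effective_key_bits(key_bits, attack_complexity_bits):
        # An attack of complexity 2**c against a k-bit key leaves an
        # effective strength of min(k, c) bits.
        return min(key_bits, attack_complexity_bits)

    # DES: 56-bit key, best attack around 2**39 -- effectively a 39-bit key,
    # i.e., the attack "shortens" the key by 17 bits.
    print(56 - effective_key_bits(56, 39))           # 17

    # A 128-bit key only becomes brute-forceable in practice if an attack
    # shaves off at least 128 - 80 = 48 bits of effective strength.
    print(128 - PRACTICAL_BRUTE_FORCE_LIMIT_BITS)    # 48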
There’s more, though. That DES attack requires an impractical 70 terabytes of known plaintext encrypted with the key we’re trying to break. Other mathematical attacks require similar amounts of data. In order to be effective in decrypting actual operational traffic, the NSA needs an attack that can be executed with the known plaintext in a common MS-Word header: much, much less.
So while the NSA certainly has symmetric cryptanalysis capabilities that we in the academic world do not, converting that into practical attacks on the sorts of data it is likely to encounter seems so impossible as to be fanciful.
More likely is that the NSA has some mathematical breakthrough that affects one or more public-key algorithms. There are a lot of mathematical tricks involved in public-key cryptanalysis, and absolutely no theory that provides any limits on how powerful those tricks can be.
Breakthroughs in factoring have occurred regularly over the past several decades, allowing us to break ever-larger public keys. Much of the public-key cryptography we use today involves elliptic curves, something that is even more ripe for mathematical breakthroughs. It is not unreasonable to assume that the NSA has some techniques in this area that we in the academic world do not. Certainly the fact that the NSA is pushing elliptic-curve cryptography is some indication that it can break them more easily.
If we think that’s the case, the fix is easy: increase the key lengths.
Assuming the hypothetical NSA breakthroughs don’t totally break public-key cryptography—and that’s a very reasonable assumption—it’s pretty easy to stay a few steps ahead of the NSA by using ever-longer keys. We’re already trying to phase out 1024-bit RSA keys in favor of 2048-bit keys. Perhaps we need to jump even further ahead and consider 3072-bit keys. And maybe we should be even more paranoid about elliptic curves and use key lengths above 500 bits.
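If you want to follow that advice in practice, here’s a minimal sketch using the open-source Python “cryptography” package (my choice of library, nothing to do with any of the documents); it generates a 3072-bit RSA key instead of the usual 2048 bits:

    # Minimal sketch, assuming the Python "cryptography" package is installed.
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.hazmat.primitives import serialization

    # 3072 bits is the "jump even further ahead" size suggested above.
    key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"pick-a-real-passphrase"),
    )
    print(pem.decode().splitlines()[0])   # -----BEGIN ENCRYPTED PRIVATE KEY-----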
One last blue-sky possibility: a quantum computer. Quantum computers are still toys in the academic world, but have the theoretical ability to quickly break common public-key algorithms—regardless of key length—and to effectively halve the key length of any symmetric algorithm. I think it extraordinarily unlikely that the NSA has built a quantum computer capable of performing the magnitude of calculation necessary to do this, but it’s possible. The defense is easy, if annoying: stick with symmetric cryptography based on shared secrets, and use 256-bit keys.
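A minimal sketch of that defense, again using the Python “cryptography” package (my assumption): authenticated encryption with a 256-bit AES key, so that even a halving of the effective key length leaves roughly 128 bits.

    # Minimal sketch, assuming the Python "cryptography" package.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit shared secret
    nonce = os.urandom(12)                      # never reuse a nonce with the same key

    ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", None)
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"attack at dawn"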
There’s a saying inside the NSA: “Cryptanalysis always gets better. It never gets worse.” It’s naive to assume that, in 2013, we have discovered all the mathematical breakthroughs in cryptography that can ever be discovered. There’s a lot more out there, and there will be for centuries.
And the NSA is in a privileged position: It can make use of everything discovered and openly published by the academic world, as well as everything discovered by it in secret.
The NSA has a lot of people thinking about this problem full-time. According to the black budget summary, 35,000 people and $11 billion annually are part of the Department of Defense-wide Consolidated Cryptologic Program. Of that, 4 percent—or $440 million—goes to “Research and Technology.”
That’s an enormous amount of money; probably more than everyone else on the planet spends on cryptography research put together. I’m sure that results in a lot of interesting—and occasionally groundbreaking—cryptanalytic research results, maybe some of it even practical.
Still, I trust the mathematics.
This essay originally appeared on Wired.com, before the news of the NSA hacking cryptographic systems broke.
http://www.wired.com/opinion/2013/09/…
The intelligence “black budget”:
http://www.washingtonpost.com/world/…
Speculation about the NSA’s cryptanalytic capabilities:
http://www.wired.com/threatlevel/2013/08/black-budget/
Bamford article:
http://www.wired.com/threatlevel/2012/03/…
The DES attack:
http://crypto.junod.info/sac01.html
The NSA pushing elliptic curves:
http://www.nsa.gov/business/programs/…
The Economist expresses much the same opinion:
http://www.economist.com/s/babbage/2013/09/…
How to Remain Secure Against the NSA
Now that we have enough details about how the NSA eavesdrops on the Internet, including recent disclosures of the NSA’s deliberate weakening of cryptographic systems, we can finally start to figure out how to protect ourselves.
For the past two weeks, I have been working with the Guardian on NSA stories, and have read hundreds of top-secret NSA documents provided by whistleblower Edward Snowden. I wasn’t part of today’s story—it was in process well before I showed up—but everything I read confirms what the Guardian is reporting.
At this point, I feel I can provide some advice for keeping secure against such an adversary.
The primary way the NSA eavesdrops on Internet communications is in the network. That’s where their capabilities best scale. They have invested in enormous programs to automatically collect and analyze network traffic. Anything that requires them to attack individual endpoint computers is significantly more costly and risky for them, and they will do those things carefully and sparingly.
Leveraging its secret agreements with telecommunications companies—all the US and UK ones, and many other “partners” around the world—the NSA gets access to the communications trunks that move Internet traffic. In cases where it doesn’t have that sort of friendly access, it does its best to surreptitiously monitor communications channels: tapping undersea cables, intercepting satellite communications, and so on.
That’s an enormous amount of data, and the NSA has equivalently enormous capabilities to quickly sift through it all, looking for interesting traffic. “Interesting” can be defined in many ways: by the source, the destination, the content, the individuals involved, and so on. This data is funneled into the vast NSA system for future analysis.
The NSA collects much more metadata about Internet traffic: who is talking to whom, when, how much, and by what mode of communication. Metadata is a lot easier to store and analyze than content. It can be extremely personal to the individual, and is enormously valuable intelligence.
The Systems Intelligence Directorate is in charge of data collection, and the resources it devotes to this are staggering. I read status report after status report about these programs, discussing capabilities, operational details, planned upgrades, and so on. Each individual problem—recovering electronic signals from fiber, keeping up with the terabyte streams as they go by, filtering out the interesting stuff—has its own group dedicated to solving it. Its reach is global.
The NSA also attacks network devices directly: routers, switches, firewalls, etc. Most of these devices have surveillance capabilities already built in; the trick is to surreptitiously turn them on. This is an especially fruitful avenue of attack; routers are updated less frequently, tend not to have security software installed on them, and are generally ignored as a vulnerability.
The NSA also devotes considerable resources to attacking endpoint computers. This kind of thing is done by its TAO—Tailored Access Operations—group. TAO has a menu of exploits it can serve up against your computer—whether you’re running Windows, Mac OS, Linux, iOS, or something else—and a variety of tricks to get them onto your computer. Your anti-virus software won’t detect them, and you’d have trouble finding them even if you knew where to look. These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it’s in. Period.
The NSA deals with any encrypted data it encounters more by subverting the underlying cryptography than by leveraging any secret mathematical breakthroughs. First, there’s a lot of bad cryptography out there. If it finds an Internet connection protected by MS-CHAP, for example, that’s easy to break and recover the key. It exploits poorly chosen user passwords, using the same dictionary attacks hackers use in the unclassified world.
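Here’s a toy version of that kind of dictionary attack (the wordlist and the captured hash are hypothetical; real attacks run against stolen password databases or protocol handshakes with wordlists of hundreds of millions of entries):

    # Toy dictionary attack against an unsalted password hash.
    import hashlib

    wordlist = ["123456", "password", "letmein", "qwerty"]
    captured = hashlib.sha256(b"letmein").hexdigest()   # hash the attacker obtained

    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == captured:
            print("password recovered:", guess)
            break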
As was revealed today, the NSA also works with security product vendors to ensure that commercial encryption products are broken in secret ways that only it knows about. We know this has happened historically: CryptoAG and Lotus Notes are the most public examples, and there is evidence of a back door in Windows. A few people have told me some recent stories about their experiences, and I plan to write about them soon. Basically, the NSA asks companies to subtly change their products in undetectable ways: making the random number generator less random, leaking the key somehow, adding a common exponent to a public-key exchange protocol, and so on. If the back door is discovered, it’s explained away as a mistake. And as we now know, the NSA has enjoyed enormous success from this program.
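To see why “making the random number generator less random” is so devastating, here’s a deliberately broken toy example (not any real product’s code): if key material is secretly derived from something guessable, whoever knows the trick can simply regenerate the key, no matter how strong the cipher that uses it.

    # Toy example of a sabotaged random number generator -- NOT real product code.
    import random, time

    def backdoored_key():
        seed = int(time.time()) // 3600        # secretly only hour-level entropy
        return random.Random(seed).getrandbits(128)

    def attacker_candidate_keys(current_hour):
        # Whoever knows the trick just enumerates the tiny seed space.
        return [random.Random(h).getrandbits(128)
                for h in range(current_hour - 24, current_hour + 1)]

    key = backdoored_key()
    assert key in attacker_candidate_keys(int(time.time()) // 3600)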
TAO also hacks into computers to recover long-term keys. So if you’re running a VPN that uses a complex shared secret to protect your data and the NSA decides it cares, it might try to steal that secret. This kind of thing is only done against high-value targets.
How do you communicate securely against such an adversary? Snowden said it in an online Q&A soon after he made his first document public: “Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on.”
I believe this is true, despite today’s revelations and tantalizing hints of “groundbreaking cryptanalytic capabilities” made by James Clapper, the director of national intelligence, in another top-secret document. Those capabilities involve deliberately weakening the cryptography.
Snowden’s follow-on sentence is equally important: “Unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it.”
Endpoint means the software you’re using, the computer you’re using it on, and the local network you’re using it in. If the NSA can modify the encryption algorithm or drop a Trojan on your computer, all the cryptography in the world doesn’t matter at all. If you want to remain secure against the NSA, you need to do your best to ensure that the encryption can operate unimpeded.
With all this in mind, I have five pieces of advice:
1) Hide in the network. Implement hidden services. Use Tor to anonymize yourself. Yes, the NSA targets Tor users, but it’s work for them. The less obvious you are, the safer you are.
2) Encrypt your communications. Use TLS. Use IPsec. Again, while it’s true that the NSA targets encrypted connections—and it may have explicit exploits against these protocols—you’re much better protected than if you communicate in the clear. (There’s a small TLS sketch after this list.)
3) Assume that while your computer can be compromised, it would take work and risk on the part of the NSA—so it probably isn’t. If you have something really important, use an air gap. Since I started working with the Snowden documents, I bought a new computer that has never been connected to the Internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my Internet computer, using a USB stick. To decrypt something, I reverse the process. This might not be bulletproof, but it’s pretty good.
4) Be suspicious of commercial encryption software, especially from large vendors. My guess is that most encryption products from large US companies have NSA-friendly back doors, and many foreign ones probably do as well. It’s prudent to assume that foreign products also have foreign-installed backdoors. Closed-source software is easier for the NSA to backdoor than open-source software. Systems relying on master secrets are vulnerable to the NSA, through either legal or more clandestine means.
5) Try to use public-domain encryption that has to be compatible with other implementations. For example, it’s harder for the NSA to backdoor TLS than BitLocker, because any vendor’s TLS has to be compatible with every other vendor’s TLS, while BitLocker only has to be compatible with itself, giving the NSA a lot more freedom to make changes. And because BitLocker is proprietary, it’s far less likely those changes will be discovered. Prefer symmetric cryptography over public-key cryptography. Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.
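Here’s the small TLS sketch promised in point 2, using only the Python standard library (the host name is a placeholder; the point is that certificate verification stays on):

    # Minimal sketch: wrap an ordinary TCP connection in TLS with
    # certificate verification, using Python's standard library.
    import socket, ssl

    host = "www.example.com"                        # placeholder host
    context = ssl.create_default_context()          # verifies certificates by default

    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("negotiated:", tls.version(), tls.cipher())
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            print(tls.recv(200).decode(errors="replace"))

This doesn’t protect you from a compromised endpoint or a subverted certificate authority, but it’s strictly better than sending the same bytes in the clear.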
Since I started working with Snowden’s documents, I have been using GPG, Silent Circle, Tails, OTR, TrueCrypt, BleachBit, and a few other things I’m not going to write about. There’s an undocumented encryption feature in my Password Safe program from the command line; I’ve been using that as well.
I understand that most of this is impossible for the typical Internet user. Even I don’t use all these tools for most everything I am working on. And I’m still primarily on Windows, unfortunately. Linux would be safer.
The NSA has turned the fabric of the Internet into a vast surveillance platform, but they are not magical. They’re limited by the same economic realities as the rest of us, and our best defense is to make surveillance of us as expensive as possible.
Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That’s how you can remain secure even in the face of the NSA.
This essay originally appeared in the “Guardian.”
http://www.theguardian.com/world/2013/sep/05/…
NSA links:
http://www.theguardian.com/world/2013/sep/05/…
http://online.wsj.com/article/…
http://www.theguardian.com/business/2013/aug/02/…
http://www.washingtonpost.com/business/technology/…
http://www.theguardian.com/world/2013/jul/31/…
http://www.theguardian.com/world/2013/jun/27/…
http://www.wired.com/threatlevel/2013/09/…
http://www.foreignpolicy.com/articles/2013/06/10/…
http://www.informationweek.com/security/government/…
Other NSA backdoors:
https://www.schneier.com/blog/archives/2008/01/…
http://www.heise.de/tp/artikel/2/2898/1.html
http://www.heise.de/tp/artikel/5/5263/1.html
Snowden’s interview:
http://www.theguardian.com/world/2013/jun/17/…
Clapper’s comments:
http://www.wired.com/threatlevel/2013/08/black-budget/
Surveillance built in to the routers:
https://www.rfc-editor.org/rfc/rfc3924.txt
My tools:
http://www.gnupg.org/
https://silentcircle.com/
https://tails.boum.org/
http://www.cypherpunks.ca/otr/
http://www.truecrypt.org/
http://bleachbit.sourceforge.net/
https://www.schneier.com/passsafe.html
Protecting Against Leakers
Ever since Edward Snowden walked out of a National Security Agency facility in May with electronic copies of thousands of classified documents, the finger-pointing has concentrated on the government’s security failures. Yet the debacle illustrates the challenge of trusting people in any organization.
The problem is easy to describe. Organizations require trusted people, but they don’t necessarily know whether those people are trustworthy. These individuals are essential, and can also betray organizations.
So how does an organization protect itself?
Securing trusted people requires three basic mechanisms (as I describe in my book “Beyond Fear”). The first is compartmentalization. Trust doesn’t have to be all or nothing; it makes sense to give relevant workers only the access, capabilities and information they need to accomplish their assigned tasks. In the military, even if they have the requisite clearance, people are only told what they “need to know.” The same policy occurs naturally in companies.
This isn’t simply a matter of always granting more senior employees a higher degree of trust. For example, only authorized armored-car delivery people can unlock automated teller machines and put money inside; even the bank president can’t do so. Think of an employee as operating within a sphere of trust—a set of assets and functions he or she has access to. Organizations act in their best interest by making that sphere as small as possible.
The idea is that if someone turns out to be untrustworthy, he or she can only do so much damage. This is where the NSA failed with Snowden. As a system administrator, he needed access to many of the agency’s computer systems—and he needed access to everything on those machines. This allowed him to make copies of documents he didn’t need to see.
The second mechanism for securing trust is defense in depth: Make sure a single person can’t compromise an entire system. NSA Director Keith Alexander has said he is doing this inside the agency by instituting what is called two-person control: There will always be two people performing system-administration tasks on highly classified computers.
Defense in depth reduces the ability of a single person to betray the organization. If this system had been in place and Snowden’s superior had been notified every time he downloaded a file, Snowden would have been caught well before his flight to Hong Kong.
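The mechanism itself is simple. Here’s a toy sketch (hypothetical names and a hypothetical “wipe_logs” action) showing the shape of two-person control: a sensitive operation refuses to run unless two different people have signed off.

    # Toy sketch of two-person control; names and actions are hypothetical.
    def run_with_two_person_control(action, approvals):
        if len(set(approvals)) < 2:
            raise PermissionError("two distinct approvers required")
        return action()

    def wipe_logs():
        return "logs wiped (two admins were present)"

    print(run_with_two_person_control(wipe_logs, ["alice", "bob"]))
    # run_with_two_person_control(wipe_logs, ["alice", "alice"])  # raises PermissionError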
The final mechanism is to try to ensure that trusted people are, in fact, trustworthy. The NSA does this through its clearance process, which at high levels includes lie-detector tests (even though they don’t work) and background investigations. Many organizations perform reference and credit checks and drug tests when they hire new employees. Companies may refuse to hire people with criminal records or noncitizens; they might hire only those with a particular certification or membership in certain professional organizations. Some of these measures aren’t very effective—it’s pretty clear that personality profiling doesn’t tell you anything useful, for example—but the general idea is to verify, certify and test individuals to increase the chance they can be trusted.
These measures are expensive. It costs the US government about $4,000 to qualify someone for top-secret clearance. Even in a corporation, background checks and screenings are expensive and add considerable time to the hiring process. Giving employees access to only the information they need can hamper them in an agile organization in which needs constantly change. Security audits are expensive, and two-person control is even more expensive: it can double personnel costs. We’re always making trade-offs between security and efficiency.
The best defense is to limit the number of trusted people needed within an organization. Alexander is doing this at the NSA—albeit too late—by trying to reduce the number of system administrators by 90 percent. This is just a tiny part of the problem; in the US government, as many as 4 million people, including contractors, hold top-secret or higher security clearances. That’s far too many.
More surprising than Snowden’s ability to get away with taking the information he downloaded is that there haven’t been dozens more like him. His uniqueness—along with the few who have gone before him and how rare whistle-blowers are in general—is a testament to how well we normally do at building security around trusted people.
Here’s one last piece of advice, specifically about whistle-blowers. It’s much harder to keep secrets in a networked world, and whistle-blowing has become the civil disobedience of the information age. A public or private organization’s best defense against whistle-blowers is to refrain from doing things it doesn’t want to read about on the front page of the newspaper. This may come as a shock in a market-based system, in which morally dubious behavior is often rewarded as long as it’s legal, and illegal activity is rewarded as long as you can get away with it.
No organization, whether it’s a bank entrusted with the privacy of its customer data, an organized-crime syndicate intent on ruling the world, or a government agency spying on its citizens, wants to have its secrets disclosed. In the information age, though, it may be impossible to avoid.
This essay previously appeared on Bloomberg.com.
http://www.bloomberg.com/news/2013-08-21/…
A commenter on the Bloomberg site added another security measure: pay your people more. Better-paid people are less likely to betray the organization that employs them. I should have added that, especially since I make that exact point in “Liars and Outliers.”
Two-person control for sysadmins inside the NSA:
https://www.schneier.com/blog/archives/2013/08/…
Lie detectors don’t work:
http://www.senseaboutscience.org/news.php/266/…
Cost of a government clearance:
http://news.clearancejobs.com/2011/08/07/…
Reducing the number of sysadmins inside the NSA:
http://www.reuters.com/article/2013/08/09/…
4 million people hold a US security clearance:
http://www.washingtonpost.com/s/worldviews/wp/…
Whistle-blowing:
http://www.zephoria.org/thoughts/archives/2013/07/…
The Brazilian television show “Fantastico” exposed an NSA training presentation that discusses how the agency runs man-in-the-middle attacks on the Internet. The point of the story was that the NSA engages in economic espionage against Petrobras, the giant Brazilian oil company, but I’m more interested in the tactical details.
http://g1.globo.com/fantastico/noticia/2013/09/…
http://www.theguardian.com/world/2013/sep/09/…
The video on the webpage is long, and includes what I assume is a dramatization of an NSA classroom, but a few screen shots are important. The pages from the training presentation describe how the NSA’s MITM attack works:
http://www.slate.com/s/future_tense/2013/09/09/… or http://www.slate.com/s/future_tense/2013/09/09/…
http://www.motherjones.com/politics/2013/09/…
Here’s the page that shows the MITM attack against Google and its users:
https://www.documentcloud.org/documents/…
Another screenshot from the “Fantastico” piece implies that the 2011 DigiNotar hack was either the work of the NSA or was exploited by the NSA.
http://imgur.com/a/g3UGP#1
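If the attack works the way a typical SSL man-in-the-middle does—impersonating the server with a certificate the client will accept—then certificate pinning is one way a client can notice it. Here’s a minimal sketch, not from the leaked slides, just an illustration: the hostname is real, but the pinned fingerprint is a placeholder you would replace with a value recorded over a connection you trust.

    # Minimal illustration (not from the NSA presentation): detect a TLS
    # man-in-the-middle by pinning the SHA-256 fingerprint of the server
    # certificate. The pinned value below is a placeholder, not Google's
    # real fingerprint.
    import hashlib
    import socket
    import ssl

    HOST = "www.google.com"
    PINNED_SHA256 = "0" * 64   # placeholder; record the real value out of band

    def cert_fingerprint(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    fp = cert_fingerprint(HOST)
    print("observed fingerprint:", fp)
    if fp != PINNED_SHA256:
        print("WARNING: certificate does not match the pin -- possible MITM")
    else:
        print("certificate matches the pinned fingerprint")

Pinning trades flexibility for detection: certificates legitimately rotate, so pins have to be maintained, but a rogue certificate signed by a compromised or coerced CA no longer passes silently.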
NSA/Snowden News
Since the end of August, I have been working with Glenn Greenwald on the Snowden documents. I have flown down to Rio, and I have read through a lot of them. None of my reporting has been published yet.
Really good article by Susan Landau on the Snowden documents and what they mean.
http://www.computer.org/cms/Computer.org/…
There’s an article from Wednesday’s “Wall Street Journal” that gives more details about the NSA’s data collection efforts.
http://online.wsj.com/article/…
The NSA seems to have finally found a PR agency with a TS/SI clearance, since there was a response to this story.
http://www.nsa.gov/public_info/_files/…
They’ve also had a conference call with the press.
http://www.reuters.com/article/2013/08/17/…
And the Director of National Intelligence is on Twitter and Tumblr.
https://twitter.com/icontherecord
http://icontherecord.tumblr.com/
Assume it’s really true that the NSA has no idea what documents Snowden took, and that they wouldn’t even know he’d taken anything if he hadn’t gone public. The fact that abuses of their systems by NSA officers were largely discovered through self-reporting substantiates that belief. Given that, why should anyone believe that Snowden is the first person to walk out the NSA’s door with multiple gigabytes of classified documents? He might be the first to release documents to the public, but it’s a reasonable assumption that the previous leakers were working for Russia, or China, or elsewhere.
http://www.cbsnews.com/8301-201_162-57600000/…
http://s.wsj.com/washwire/2013/08/23/…
I don’t like stories about the personalities in the Snowden affair, because they distract from the NSA and the policy issues. But I’m a sucker for operational security, and I just have to post this detail from their first meeting in Hong Kong.
http://www.nytimes.com/2013/08/18/magazine/…
Actually, the whole article is interesting. The author is writing a book about surveillance and privacy, one of probably a half dozen about the Snowden affair that will come out this year.
While we’re on the topic, here’s some really stupid opsec on the part of Greenwald and Poitras:
http://joshuafoust.com/extraordinary-court-statement/
Here’s a 1983 article on the NSA. The moral is that NSA surveillance overreach has been going on for a long, long time.
http://www.nytimes.com/1983/03/27/magazine/…
The new Snowden revelations are explosive. Basically, the NSA is able to decrypt most of the Internet. They’re doing it primarily by cheating, not by mathematics.
http://www.theguardian.com/world/2013/sep/05/…
http://www.nytimes.com/2013/09/06/us/…
http://www.propublica.org/article/…
Remember this: The math is good, but math has no agency. Code has agency, and the code has been subverted.
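To make the “code has agency” point concrete, here’s a toy sketch—purely illustrative, and not any actual NSA technique: the cipher’s key space is fine on paper, but a sabotaged random number generator quietly draws the key from a tiny set, and brute force does the rest.

    # Purely illustrative: a "subverted" generator that draws the seed from
    # only 2**16 values makes this simple XOR stream cipher trivially
    # brute-forceable, even though nothing about the cipher's math changed.
    import hashlib
    import random

    def subverted_seed():
        return random.randrange(2**16)   # should have been drawn from 2**128

    def keystream(seed, length):
        data = hashlib.sha256(seed.to_bytes(16, "big")).digest()
        while len(data) < length:
            data += hashlib.sha256(data).digest()
        return data[:length]

    def encrypt(seed, plaintext):
        ks = keystream(seed, len(plaintext))
        return bytes(a ^ b for a, b in zip(plaintext, ks))

    msg = b"attack at dawn"
    ct = encrypt(subverted_seed(), msg)

    # An attacker who knows about the subversion just tries every seed,
    # using a known-plaintext crib. XOR decryption is re-encryption.
    for guess in range(2**16):
        if encrypt(guess, ct).startswith(b"attack"):
            print("recovered seed:", guess, "plaintext:", encrypt(guess, ct))
            break

The arithmetic of the cipher never changes; the attacker just knows where the randomness actually came from.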
Slashdot and Reddit thread:
http://yro.slashdot.org/story/13/09/06/0148201/…
http://www.reddit.com/r/netsec/comments/1lu6o2/…
Matthew Green wrote a blog post speculating on how the NSA defeats encryption.
http://blog.cryptographyengineering.com/2013/09/…
It’s well worth reading, and not just because Johns Hopkins University asked him to remove it, and then backed down a few hours later.
http://www.techdirt.com/articles/20130909/…
http://news.cnet.com/8301-1009_3-57602345-83/…
http://arstechnica.com/security/2013/09/…
http://www.baltimoresun.com/news/maryland/education/…
Ed Felten has an excellent essay on the damage caused by the NSA secretly breaking the security of Internet systems:
https://freedom-to-tinker.com/blog/felten/…
Not my best quote on the NSA:
https://www.schneier.com/blog/archives/2013/09/…
Our Newfound Fear of Risk
We’re afraid of risk. It’s a normal part of life, but we’re increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren’t free. They cost money, of course, but they cost other things as well. They often don’t provide the security they advertise, and—paradoxically—they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.
Three examples:
* We have allowed the police to turn themselves into a paramilitary organization. They deploy SWAT teams multiple times a day, almost always in nondangerous situations. They tase people with minimal provocation, often when it’s not warranted. Unprovoked shootings are on the rise. One result of these measures is that honest mistakes—a wrong address on a warrant, a misunderstanding—result in the terrorizing of innocent people, and more deaths in what were once nonviolent confrontations with police.
* We accept zero-tolerance policies in schools. This results in ridiculous situations, where young children are suspended for pointing gun-shaped fingers at other students or drawing pictures of guns with crayons, and high-school students are disciplined for giving each other over-the-counter pain relievers. The cost of these policies is enormous, both in the dollars to implement them and in their long-lasting effects on students.
* We have spent over one trillion dollars and thousands of lives fighting terrorism in the past decade—including the wars in Iraq and Afghanistan—money that could have been better used in all sorts of ways. We now know that the NSA has turned into a massive domestic surveillance organization, and that its data is also used by other government organizations, which then lie about it. Our foreign policy has changed for the worse: we spy on everyone, we trample human rights abroad, our drones kill indiscriminately, and our diplomatic outposts have either closed down or become fortresses. In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.
There are lots more examples, but the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.
There’s a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia. Fatal childhood diseases are things of the past, many adult diseases are curable, accidents are rarer and more survivable, buildings collapse less often, death by violence has declined considerably, and so on. All over the world—among the wealthier of us who live in peaceful Western countries—our lives have become safer.
Our notions of risk are not absolute; they’re based more on how far they are from whatever we think of as “normal.” So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.
Some of this fear results from imperfect risk perception. We’re bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are—and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.
Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn’t approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn’t discipline—no matter how benign the infraction—the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.
We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There’s a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they’re arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved—medicine is the big one, but other sciences as well—we become better at mitigating and recovering from those sorts of risks.
Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn’t able to figure out how to topple structures constructed under some new and safer building code, and an automobile won’t invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.
When you implement measures to mitigate the effects of the random risks of the world, you’re safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you’d expect—and you also get more side effects, because we *all* adapt.
We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.
This essay previously appeared on Forbes.com.
https://www.schneier.com/blog/archives/2013/09/…
Accidental SWAT death:
http://www.slate.com/articles/news_and_politics/…
Cost of zero-tolerance in schools:
http://www.texaspolicy.com/sites/default/files/…
http://www.publicinterestprojects.org/wp-content/…
Drone killings:
http://www.wired.com/threatlevel/2013/08/…
Our embassies:
http://www.npr.org/templates/story/story.php?…
Cartoon about how bad we are at assessing risk:
http://xkcd.com/1252/
Slashdot thread:
http://news.slashdot.org/story/13/09/04/0155247/…
Human-Machine Trust Failures
I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.
I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.
The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something.
My point in telling this story is not to demonstrate how I beat the EEOB’s security—I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name—but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.
In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state—how the system can fail and what those failures look like. The machine must be able to communicate its state, and it must be able to alert the humans when an expected state transition doesn’t happen. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time—that requires understanding on both sides. Each time things go wrong and the machine portion doesn’t communicate well, the human portion trusts it a little less.
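Here’s a minimal sketch of what that kind of explicit communication might look like. The badge-gate states and messages are invented for illustration; this is not how the EEOB system works.

    # Sketch: when an expected state transition (the badge drop) doesn't
    # happen, the machine tells the humans *what* went wrong, instead of
    # silently refusing the next action. States and IDs are invented.
    class BadgeGate:
        def __init__(self):
            self.outstanding = set()   # badges checked out and not yet returned

        def check_in(self, badge_id):
            self.outstanding.add(badge_id)

        def exit(self, badge_id, badge_returned):
            if badge_returned:
                self.outstanding.discard(badge_id)
                return "OK: badge %s returned, gate open" % badge_id
            # Expected transition missing: name the failure explicitly.
            return ("ALERT: badge %s exited without being returned; "
                    "flag that visitor, not the next person in line" % badge_id)

    gate = BadgeGate()
    gate.check_in("V-1042")
    gate.check_in("V-1043")
    print(gate.exit("V-1042", badge_returned=False))   # the badge that was kept
    print(gate.exit("V-1043", badge_returned=True))

The point is that the alert names the actual failed transition, so the guards don’t have to guess—and don’t blame the next person through the gate.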
This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system—especially the machine part—better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We’ve all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: “The computer is always right.”
Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they’re easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they’re bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That’s why “if you see something, say something” fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.
The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will necessarily be more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That’s the best way to give humans the trust and understanding they need in the machine part of any security system.
This essay previously appeared in “IEEE Security & Privacy.”
https://www.schneier.com/essay-445.html
Excess Automobile Deaths as a Result of 9/11
People commented about a point I made in a recent essay:
In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.
Yes, that’s wrong. Where I said “months,” I should have said “years.”
I got the sound bite from John Mueller and Mark G. Stewart’s book, “Terror, Security, and Money.” This is footnote 19 from Chapter 1:
The inconvenience of extra passenger screening and added costs at airports after 9/11 cause many short-haul passengers to drive to their destination instead, and, since airline travel is far safer than car travel, this has led to an increase of 500 U.S. traffic fatalities per year. Using DHS-mandated value of statistical life at $6.5 million, this equates to a loss of $3.2 billion per year, or $32 billion over the period 2002 to 2011 (Blalock et al. 2007).
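The arithmetic behind those figures is straightforward; a quick check, using only the numbers from the footnote above:

    # Quick check of the footnote's arithmetic.
    extra_fatalities_per_year = 500
    value_of_statistical_life = 6.5e6    # DHS-mandated value, in dollars
    years = 10                           # 2002 through 2011

    annual_loss = extra_fatalities_per_year * value_of_statistical_life
    print("annual loss:   $%.2f billion" % (annual_loss / 1e9))      # $3.25 billion
    print("ten-year loss: $%.1f billion" % (annual_loss * years / 1e9))
    # The footnote rounds these to $3.2 billion per year and $32 billion total.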
The authors make the same point in this earlier (and shorter) essay:
Increased delays and added costs at U.S. airports due to new security procedures provide incentive for many short-haul passengers to drive to their destination rather than flying, and, since driving is far riskier than air travel, the extra automobile traffic generated has been estimated in one study to result in 500 or more extra road fatalities per year.
The references are:
* Garrick Blalock, Vrinda Kadiyali, and Daniel H. Simon. 2007. “The Impact of Post-9/11 Airport Security Measures on the Demand for Air Travel.” “Journal of Law and Economics” 50(4) November: 731-755.
* Garrick Blalock, Vrinda Kadiyali, and Daniel H. Simon. 2009. “Driving Fatalities after 9/11: A Hidden Cost of Terrorism.” “Applied Economics” 41(14): 1717-1729.
There’s also this reference:
* Michael Sivak and Michael J. Flannagan. 2004. “Consequences for road traffic fatalities of the reduction in flying following September 11, 2001.” “Transportation Research Part F: Traffic Psychology and Behavior” 7 (4).
Abstract: Gigerenzer (Gigerenzer, G. (2004). Dread risk, September 11, and fatal traffic accidents. Psychological Science, 15, 286-287) argued that the increased fear of flying in the U.S. after September 11 resulted in a partial shift from flying to driving on rural interstate highways, with a consequent increase of 353 road traffic fatalities for October through December 2001. We reevaluated the consequences of September 11 by utilizing the trends in road traffic fatalities from 2000 to 2001 for January through August. We also examined which road types and traffic participants contributed most to the increased road fatalities. We conclude that (1) the partial modal shift after September 11 resulted in 1018 additional road fatalities for the three months in question, which is substantially more than estimated by Gigerenzer, (2) the major part of the increased toll occurred on local roads, arguing against a simple modal shift from flying to driving to the same destinations, (3) driver fatalities did not increase more than in proportion to passenger fatalities, and (4) pedestrians and bicyclists bore a disproportionate share of the increased fatalities.
My original quote:
https://www.schneier.com/essay-442.html
“Terror, Security, and Money”:
http://www.amazon.com/…
Business Week makes the same point:
http://www.businessweek.com/articles/2012-11-18/…
Another analysis:
http://skeptics.stackexchange.com/questions/17578/…
News
Terrorist organizations have the same management problems as other organizations, and new ones besides.
http://www.foreignaffairs.com/articles/139817/…
More on the economics of terrorism:
http://www.ted.com/talks/…
http://themonkeycage.org/2008/02/11/…
New survey on teens and privacy:
http://www.pewinternet.org/Reports/2013/…
Interesting paper: “The Banality of Security: The Curious Case of Surveillance Cameras,” by Benjamin Goold, Ian Loader, and Angélica Thumala (full paper is behind a paywall).
http://bjc.oxfordjournals.org/content/early/2013/07/…
Orin Kerr envisions what the Electronic Communications Privacy Act (ECPA) should look like today:
http://papers.ssrn.com/sol3/papers.cfm?…
This research project by Brandon Wiley to evade Internet censorship—the tool is called “Dust”—looks really interesting.
http://www.khanfu.com/m/plain/29/event/2032
http://blanu.net/Dust.pdf
http://github.com/blanu/Dust/
http://blanu.net/Dust.pdf
http://blanu.net/Dust-FOCI.pdf
A company that teaches people how to beat lie detectors is under investigation.
http://www.mcclatchydc.com/2013/08/16/199590/…
New paper on the Federal Trade Commission and its actions to protect privacy:
http://papers.ssrn.com/sol3/papers.cfm?…
Bessemer Venture Partners partner David Cowan has an interesting article on the opportunities for cloud security companies.
http://www.technologyreview.com/view/518771/…
Richard Stiennon, an industry analyst, has a similar article.
http://www.forbes.com/sites/richardstiennon/2013/08/…
And Zscaler comments on a 451 Research report on the cloud security business.
http://www.prosecurityzone.com/…
NIST’s John Kelsey gave an excellent talk on the history, status, and future of the SHA-3 hashing standard. The slides are online.
https://docs.google.com/file/d/…
A write-up of the talk:
http://bristolcrypto.blogspot.co.uk/2013/08/…
I keep getting alerts of new issues of the “Journal of Homeland Security and Emergency Management,” but there are rarely articles I find interesting.
http://www.degruyter.com/view/j/…
Money reduces trust in small groups, but increases it in larger groups. Basically, the introduction of money allows society to scale.
http://www.bbc.co.uk/news/science-environment-23623157
The TSA does not have to tell the truth.
https://www.schneier.com/blog/archives/2013/09/…
iPhone Fingerprint Authentication
NOTE: This essay was written before Apple’s iPhone announcement.
When Apple bought AuthenTec for its biometrics technology—reported as one of its most expensive purchases—there was a lot of speculation about how the company would incorporate biometrics in its product line. Many speculate that the new Apple iPhone to be announced tomorrow will come with a fingerprint authentication system, and there are several ways it could work, such as swiping your finger over a slit-sized reader to have the phone recognize you.
Apple would be smart to add biometric technology to the iPhone. Fingerprint authentication is a good balance between convenience and security for a mobile device.
Biometric systems are seductive, but the reality isn’t that simple. They have complicated security properties. For example, they are not keys. Your fingerprint isn’t a secret; you leave it everywhere you touch.
And fingerprint readers have a long history of vulnerabilities as well. Some are better than others. The simplest ones just check the ridges of a finger; some of those can be fooled with a good photocopy. Others check for pores as well. The better ones verify pulse, or finger temperature. Fooling them with rubber fingers is harder, but often possible. A Japanese researcher had good luck doing this over a decade ago with the gelatin mixture that’s used to make Gummi bears.
The best system I’ve ever seen was at the entry gates of a secure government facility. Maybe you could have fooled it with a fake finger, but a Marine guard with a big gun was making sure you didn’t get the opportunity to try. Disney World uses a similar system at its park gates—but without the Marine guards.
A biometric system that authenticates you and you alone is easier to design than a biometric system that is supposed to identify unknown people. That is, the question “Is this the finger belonging to the owner of this iPhone?” is a much easier question for the system to answer than “Whose finger is this?”
There are two ways an authentication system can fail. It can mistakenly allow an unauthorized person access, or it can mistakenly deny access to an authorized person. In any consumer system, the second failure is far worse than the first. Yes, it can be problematic if an iPhone fingerprint system occasionally allows someone else access to your phone. But it’s much worse if you can’t reliably access your own phone—you’d junk the system after a week.
If it’s true that Apple’s new iPhone will have biometric security, the designers have presumably erred on the side of ensuring that the user can always get in. Failures will be more common in cold weather, when your fingers are shriveled from just getting out of the shower, and so on. But there will certainly still be the traditional PIN system to fall back on.
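To make the trade-off concrete, here’s a toy sketch: a single matching threshold controls both failure modes, and moving it in either direction trades false rejects against false accepts. The scores are invented; real fingerprint matchers are far more sophisticated.

    # Toy illustration of the false-accept/false-reject trade-off.
    # Match scores are invented for illustration only.
    genuine_scores  = [0.91, 0.84, 0.62, 0.88, 0.55, 0.95, 0.79, 0.70]  # owner's finger
    impostor_scores = [0.10, 0.32, 0.45, 0.22, 0.58, 0.15, 0.28, 0.40]  # other fingers

    def rates(threshold):
        false_reject = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
        false_accept = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
        return false_reject, false_accept

    for t in (0.3, 0.5, 0.7):
        frr, far = rates(t)
        print("threshold %.1f: false reject %.0f%%, false accept %.0f%%"
              % (t, frr * 100, far * 100))

A consumer device would sit toward the permissive end of that range, accepting a somewhat higher false-accept rate so the owner is rarely locked out, with the PIN as the fallback.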
So…can biometric authentication be hacked?
Almost certainly. I’m sure that someone with a good enough copy of your fingerprint and some rudimentary materials engineering capability—or maybe just a good enough printer—can authenticate his way into your iPhone. But, honestly, if some bad guy has your iPhone and your fingerprint, you’ve probably got bigger problems to worry about.
The final problem with biometric systems is the database. If the system is centralized, there will be a large database of biometric information that’s vulnerable to hacking. A system by Apple will almost certainly be local—you authenticate yourself to the phone, not to any network—so there’s no requirement for a centralized fingerprint database.
Apple’s move is likely to bring fingerprint readers into the mainstream. But all applications are not equal. It’s fine if your fingers unlock your phone. It’s a different matter entirely if your fingerprint is used to authenticate your iCloud account. The centralized database required for that application would create an enormous security risk.
This essay previously appeared on Wired.com.
http://www.wired.com/opinion/2013/09/…
AuthenTec:
http://www.patentlyapple.com/patently-apple/2013/07/…
http://www.wired.com/business/2012/09/…
iPhone speculation:
http://news.yahoo.com/…
http://www.bloomberg.com/news/2013-08-13/…
My previous essay on biometric security:
https://www.schneier.com/…
Beating fingerprint readers:
http://dsc.discovery.com/tv-shows/mythbusters/…
http://www.theregister.co.uk/2002/05/16/…
Disney’s biometric system:
http://boingboing.net/2008/03/15/…
News on the iPhone’s fingerprint reader:
http://www.forbes.com/sites/andygreenberg/2013/09/…
http://online.wsj.com/article/…
http://www.theverge.com/2013/9/10/4715372/…
http://techcrunch.com/2013/09/10/…
http://www.macworld.com/article/2048514/…
Hacking Consumer Devices
Recently, a Texas couple apparently discovered that the electronic baby monitor in their children’s bedroom had been hacked. According to a local TV station, the couple said they heard an unfamiliar voice coming from the room, went to investigate and found that someone had taken control of the camera monitor remotely and was shouting profanity-laden abuse. The child’s father unplugged the monitor.
What does this mean for the rest of us? How secure are consumer electronic systems, now that they’re all attached to the Internet?
The answer is not very, and it’s been this bad for many years. Security vulnerabilities have been found in all types of webcams and cameras, implanted medical devices, cars, and even smart toilets—not to mention yachts, ATMs, industrial control systems, and military drones.
All of these things have long been hackable. Those of us who work in security are often amazed that most people don’t know about it.
Why are they hackable? Because security is very hard to get right. It takes expertise, and it takes time. Most companies don’t care because most customers buying security systems and smart appliances don’t know enough to care. Why should a baby monitor manufacturer spend all sorts of money making sure its security is good when the average customer won’t even notice?
Even worse, that consumer will look at two competing baby monitors—a more expensive one with better security, and a cheaper one with minimal security—and buy the cheaper one. Without the expertise to make an informed buying decision, cheaper wins.
A lot of hacks happen because the users don’t configure or install their devices properly, but that’s really the fault of the manufacturer. These are supposed to be consumer devices, not specialized equipment for security experts only.
This sort of thing is true in other aspects of society, and we have a variety of mechanisms to deal with it. Government regulation is one of them. For example, few of us can differentiate real pharmaceuticals from snake oil, so the FDA regulates what can be sold and what sorts of claims vendors can make. Independent product testing is another. You and I might not be able to tell a well-made car from a poorly-made one at a glance, but we can both read the reports from a variety of testing agencies.
Computer security has resisted these mechanisms, both because the industry changes so quickly and because this sort of testing is hard and expensive. But the effect is that we’re all being sold a lot of insecure consumer products with embedded computers. And as these computers get connected to the Internet, the problems will get worse.
The moral here isn’t that your baby monitor could be hacked. The moral is that pretty much every “smart” everything can be hacked, and because consumers don’t care, the market won’t fix the problem.
This essay previously appeared on CNN.com. I wrote it in about half an hour, on request, and I’m not really happy with it. I should have talked more about the economics of good security, as well as the economics of hacking. The point is that we don’t have to worry about hackers smart enough to figure out these vulnerabilities, but those dumb hackers who just use software tools written and distributed by the smart hackers. Ah well, next time.
http://www.cnn.com/2013/08/14/opinion/…
Baby monitor hack:
http://us.cnn.com/2013/08/14/tech/web/…
More webcam vulnerabilities:
http://arstechnica.com/tech-policy/2013/03/…
http://www.theregister.co.uk/2013/01/29/cctv_vuln/
Other consumer-device vulnerabilities:
http://www.wired.com/threatlevel/2012/05/cctv-hack/
http://www.forbes.com/sites/ericbasu/2013/08/03/…
http://news.cnet.com/8301-1009_3-57596847-83/…
http://www.bbc.co.uk/news/technology-23575249
http://www.gizmag.com/gps-spoofing-yacht-control/28644/
http://www.technologyreview.com/hack/421410/…
http://www.computerworld.com/s/article/9241293/…
http://security.blogs.cnn.com/2012/07/19/…
Syrian Electronic Army Cyberattacks
The Syrian Electronic Army attacked again last week, compromising the websites of the New York Times, Twitter, the Huffington Post, and others.
Political hacking isn’t new. Hackers were breaking into systems for political reasons long before commerce and criminals discovered the Internet. Over the years, we’ve seen UK vs. Ireland, Israel vs. Arab states, Russia vs. its former Soviet republics, India vs. Pakistan, and US vs. China.
There was a big one in 2007, when the government of Estonia was attacked in cyberspace following a diplomatic incident with Russia. It was hyped as the first cyberwar, but the Kremlin denied any Russian government involvement. The only individuals positively identified were young ethnic Russians living in Estonia.
Poke at any of these international incidents, and what you find are kids playing politics. The Syrian Electronic Army doesn’t seem to be an actual army. We don’t even know if they’re Syrian. And—to be fair—I don’t know their ages. Looking at the details of their attacks, it’s pretty clear they didn’t target the “New York Times” and others directly. They reportedly hacked into an Australian domain name registrar called Melbourne IT, and used that access to disrupt service at a bunch of big-name sites.
We saw this same tactic last year from Anonymous: hack around at random, then retcon a political reason why the sites they successfully broke into deserved it. It makes them look a lot more skilled than they actually are.
This isn’t to say that cyberattacks by governments aren’t an issue, or that cyberwar is something to be ignored. Attacks from China reportedly are a mix of government-executed military attacks, government-sponsored independent attackers, and random hacking groups that work with tacit government approval. The US also engages in active cyberattacks around the world. Together with Israel, the US employed a sophisticated computer virus (Stuxnet) to attack Iran in 2010.
For the typical company, defending against these attacks doesn’t require anything different from what you’ve traditionally been doing to secure yourself in cyberspace. If your network is secure, you’re secure against amateur geopoliticians who just want to help their side.
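One cheap, partial measure against the registrar-hijack technique described above is simply monitoring what your own domains resolve to and comparing that against what you expect. A crude sketch—the domain and addresses are placeholders, and registry locks at your registrar are the real defense; this only shortens the time to detection:

    # Crude monitoring sketch: alert if a domain stops resolving to the
    # addresses you expect. Domain and addresses are placeholders.
    import socket

    EXPECTED = {
        "example.com": {"93.184.216.34"},   # placeholder expected addresses
    }

    def check(domain, expected):
        seen = {info[4][0] for info in socket.getaddrinfo(domain, 80, socket.AF_INET)}
        if not seen <= expected:
            print("ALERT: %s now resolves to %s (expected %s)"
                  % (domain, sorted(seen), sorted(expected)))
        else:
            print("OK: %s" % domain)

    for d, addrs in EXPECTED.items():
        check(d, addrs)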
This essay originally appeared on the “Wall Street Journal’s” website.
http://s.wsj.com/speakeasy/2013/08/29/…
The Syrian Electronic Army:
http://s.wsj.com/digits/2013/08/27/…
The Kremlin’s denial:
http://online.wsj.com/article/SB117944513189906904.html
How the hack worked:
http://online.wsj.com/article/…
Stuxnet:
http://online.wsj.com/article/…
Schneier News
“Schneier on Security” made the list of Wired’s best “Government and Security” blogs.
http://www.wired.com/threatlevel/2013/08/…
I was interviewed by MinnPost.
http://www.minnpost.com/politics-policy/2013/09/…
Various audio and video interviews with me:
http://insidecville.com/nation/bruce-schneier-8-30-13/
http://www.democracynow.org/2013/9/6/…
http://matthewf.net/2013/09/10/…
https://threatpost.com/…
I’m speaking at the Congress on Privacy & Surveillance in Lausanne, Switzerland on 9/30.
http://ic.epfl.ch/privacy-surveillance
You can find my new PGP public key and my OTR key fingerprint here.
https://www.schneier.com/contact.html
Hacker News discusses my PGP key length:
https://news.ycombinator.com/item?id=6376954
The Cryptopocalypse
There was a presentation at Black Hat last month warning us of a “factoring cryptopocalypse”: a moment when factoring numbers and solving the discrete log problem become easy, and both RSA and DH break. This presentation was provocative, and has generated a lot of commentary, but I don’t see any reason to worry.
Yes, breaking modern public-key cryptosystems has gotten easier over the years. This has been true for a few decades now. Back in 1999, I wrote this about factoring:
Factoring has been getting easier. It’s been getting easier faster than anyone has anticipated. I see four reasons why this is so:
* Computers are getting faster.
* Computers are better networked.
* The factoring algorithms are getting more efficient.
* Fundamental advances in mathematics are giving us better factoring algorithms.
I could have said the same thing about the discrete log problem. And, in fact, advances in solving one problem tend to mirror advances in solving the other.
The reasons are arrayed in order of unpredictability. The first two—advances in computing and networking speed—basically follow Moore’s Law (and others), year after year. The third comes in regularly, but in fits and starts: a 2x improvement here, a 10x improvement there. It’s the fourth that’s the big worry. Fundamental mathematical advances only come once in a while, but when they do come, the effects can be huge. If factoring ever becomes “easy” such that RSA is no longer a viable cryptographic algorithm, it will be because of this sort of advance.
The authors base their current warning on some recent fundamental advances in solving the discrete log problem, but the work doesn’t generalize to the types of numbers used for cryptography. And it’s not going to generalize; the results apply only to special cases.
This isn’t to say that solving these problems won’t continue to get easier, but so far it has been trivially easy to increase key lengths to stay ahead of the advances. I expect this to remain true for the foreseeable future.
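One way to see why raising key lengths has stayed ahead of the advances: the best known factoring algorithm, the general number field sieve, is subexponential but nowhere near polynomial, so each doubling of the modulus still buys an enormous increase in the attacker’s work. A rough sketch using the standard heuristic complexity estimate—only the ratios between key sizes mean anything here, not the absolute numbers:

    # Rough comparison of GNFS factoring effort for different RSA modulus
    # sizes, using the heuristic L_n[1/3, (64/9)^(1/3)] estimate. Only the
    # ratios between key sizes are meaningful.
    from math import exp, log

    def gnfs_work(bits):
        n_ln = bits * log(2)                 # ln(n) for an n of `bits` bits
        c = (64 / 9) ** (1 / 3)
        return exp(c * n_ln ** (1 / 3) * log(n_ln) ** (2 / 3))

    base = gnfs_work(1024)
    for bits in (1024, 2048, 3072, 4096):
        print("%4d-bit modulus: about 2^%.0f work, %.1e times a 1024-bit one"
              % (bits, log(gnfs_work(bits), 2), gnfs_work(bits) / base))

By this estimate, moving from 1024-bit to 2048-bit keys multiplies the factoring effort by roughly a billion, which is why decades of incremental algorithmic improvement have been absorbed simply by nudging key lengths upward.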
Cryptopocalypse presentation:
https://www.isecpartners.com/media/105564/…
Commentary:
http://www.technologyreview.com/news/517781/…
http://www.crn.com/slide-shows/security/240159536/…
http://arstechnica.com/security/2013/08/…
http://threatpost.com/…
My 1999 essay:
https://www.schneier.com/crypto-gram-9903.html#RSA140
Recent discrete log advances:
http://eprint.iacr.org/2013/095.pdf
http://eprint.iacr.org/2013/400.pdf
http://www.daemonology.net/blog/…
Measuring Entropy and its Applications to Encryption
There have been a bunch of articles about an information theory paper with vaguely sensational headlines like “Encryption is less secure than we thought” and “Research shakes crypto foundations.” It’s actually not that bad.
Basically, the researchers argue that the traditional measurement of Shannon entropy isn’t the right model to use for cryptography, and that minimum entropy is. This difference may make some ciphertexts easier to decrypt, but not in ways that have practical implications in the general case. It’s the same thinking that leads us to guess passwords from a dictionary rather than randomly—because we know that humans both created the passwords and have to remember them.
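The difference is easy to see with a toy example—the password distribution below is invented: Shannon entropy averages over the entire distribution, while min-entropy is determined entirely by the single most likely value, which is exactly what a guessing attacker tries first.

    # Toy, invented password distribution: one very common password plus
    # many rare ones. Shannon entropy averages over everything; min-entropy
    # looks only at the attacker's single best guess.
    from math import log2

    # "123456" with probability 0.20, plus 8000 rare passwords sharing the rest.
    probs = [0.20] + [0.80 / 8000] * 8000

    shannon = -sum(p * log2(p) for p in probs)
    min_entropy = -log2(max(probs))

    print("Shannon entropy: %.1f bits" % shannon)       # about 11 bits
    print("Min-entropy:     %.1f bits" % min_entropy)   # about 2.3 bits

The distribution looks like eleven bits of entropy on average, but the attacker’s first guess succeeds one time in five—about 2.3 bits. That gap is what the paper formalizes, and it’s why cryptographers already use min-entropy for anything an adversary gets to guess at.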
This isn’t news—lots of cryptography papers make use of minimum entropy instead of Shannon entropy already—and it’s hard to see what the contribution of this paper is. Note that the paper was presented at an information theory conference, and not a cryptography conference. My guess is that there wasn’t enough crypto expertise on the program committee to reject the paper.
So don’t worry; cryptographic algorithms aren’t going to come crumbling down anytime soon. Well, they might—but not because of this result.
The research:
http://arxiv.org/pdf/1301.6356.pdf
Press reaction:
http://phys.org/news/2013-08-encryption-thought.html
http://www.theregister.co.uk/2013/08/14/…
Slashdot thread:
http://it.slashdot.org/story/13/08/14/1821224/…
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Security Futurologist for BT—formerly British Telecom. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2013 by Bruce Schneier.