July 15, 2015
by Bruce Schneier
CTO, Resilient Systems, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2015/…>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Crypto-Gram Is Moving (Please Read!)
- Twitter Followers: Please Use the Correct Feed
- Organizational Doxing
- Why We Encrypt
- The Secrecy of the Snowden Documents
- The Risks of Mandating Backdoors in Encryption Products
- What is the DoD’s Position on Backdoors in Security Systems?
- Office of Personnel Management Data Hack
- Schneier News
- More about the NSA’s XKEYSCORE
- Hayden Mocks NSA Reforms
- NSA French and German Intercepts
- Hacking Team Is Hacked
Sometime between now and the next issue, the Crypto-Gram mailing list will be moving to a new host.
When the move happens, you’ll get an e-mail asking you to confirm your subscription. In the e-mail will be a link that you will have to click in order to join the new list. The link will go to dreamhost.com—that’s the new host—not to schneier.com. It’s just the one click, and you won’t be asked for any additional information.
(Yes, I am asking you all to click on a link you’ll receive in e-mail. The fact that I’m writing about this in Crypto-Gram and posting about it on my blog is the best confirmation I can provide.)
If for any reason you don’t want to receive Crypto-Gram anymore, just don’t click the confirmation link, and you’ll automatically drop off the list.
I’ll post updates on the status of the move on the main list page, here:
Blog post about the move:
The official Twitter feed for my blog is @schneierblog.
The account @Bruce_Schneier also mirrored my blog, but was not mine. I had nothing to do with it, and I never knew who owned it. And it kept following people, who then thought I was following them. So earlier this month I had it shut down.
Recently, WikiLeaks began publishing over half a million previously secret cables and other documents from the Foreign Ministry of Saudi Arabia. It’s a huge trove, and already reporters are writing stories about the highly secretive government.
What Saudi Arabia is experiencing isn’t common, but it is part of a growing trend.
Just last week, unknown hackers broke into the network of the cyber-weapons arms manufacturer Hacking Team and published 400 gigabytes of internal data, describing, among other things, its sale of Internet surveillance software to totalitarian regimes around the world.
Last year, hundreds of gigabytes of Sony’s sensitive data were published on the Internet, including executive salaries, corporate emails and contract negotiations. The attacker in this case was the government of North Korea, which was punishing Sony for producing a movie that made fun of its leader. In 2011, the U.S. cyberweapons arms manufacturer HBGary Federal was a victim; its attackers were members of the loose hacker collective Anonymous, some of whom later formed LulzSec.
Edward Snowden stole a still-unknown number of documents from the National Security Agency in 2013 and gave them to reporters to publish. Chelsea Manning stole three-quarters of a million documents from the U.S. State Department and gave them to WikiLeaks to publish. The person who stole the Saudi Arabian documents might also be a whistleblower and insider but is more likely a hacker who wanted to punish the kingdom.
Organizations are increasingly getting hacked, and not by criminals wanting to steal credit card numbers or account information in order to commit fraud, but by people intent on stealing as much data as they can and publishing it. Law professor and privacy expert Peter Swire refers to “the declining half-life of secrets.” Secrets are simply harder to keep in the information age. This is bad news for all of us who value our privacy, but there’s a hidden benefit when it comes to organizations.
The decline of secrecy means the rise of transparency. Organizational transparency is vital to any open and free society.
Open government laws and freedom of information laws let citizens know what the government is doing, and enable them to carry out their democratic duty to oversee its activities. Corporate disclosure laws perform similar functions in the private sphere. Of course, both corporations and governments have some need for secrecy, but the more they can be open, the more we can knowledgeably decide whether to trust them.
This makes the debate more complicated than simple personal privacy. Publishing someone’s private writings and communications is bad, because in a free and diverse society people should have private space to think and act in ways that would embarrass them if public.
But organizations are not people and, while there are legitimate trade secrets, their information should otherwise be transparent. Holding government and corporate private behavior to public scrutiny is good.
Most organizational secrets are only valuable for a short term: negotiations, new product designs, earnings numbers before they’re released, patents before filing, and so on.
Forever secrets, like the formula for Coca-Cola, are few and far between. The one exception is embarrassments. If an organization had to assume that anything it did would become public in a few years, people within that organization would behave differently.
The NSA would have had to weigh its collection programs against the possibility of public scrutiny. Sony would have had to think about how it would look to the world if it paid its female executives significantly less than its male executives. HBGary would have thought twice before launching an intimidation campaign against a journalist it didn’t like, and Hacking Team wouldn’t have lied to the UN about selling surveillance software to Sudan. Even the government of Saudi Arabia would have behaved differently. Such embarrassment might be the first significant downside of hiring a psychopath as CEO.
I don’t want to imply that this forced transparency is a good thing, though. The threat of disclosure chills all speech, not just illegal, embarrassing, or objectionable speech. There will be less honest and candid discourse. People in organizations need the freedom to write and say things that they wouldn’t want to be made public.
State Department officials need to be able to describe foreign leaders, even if their descriptions are unflattering. Movie executives need to be able to say unkind things about their movie stars. If they can’t, their organizations will suffer.
With few exceptions, our secrets are stored on computers and networks vulnerable to hacking. It’s much easier to break into networks than it is to secure them, and large organizational networks are very complicated and full of security holes. Bottom line: If someone sufficiently skilled, funded and motivated wants to steal an organization’s secrets, they will succeed. This includes hacktivists (HBGary Federal, Hacking Team), foreign governments (Sony), and trusted insiders (State Department and NSA).
It’s not likely that your organization’s secrets will be posted on the Internet for everyone to see, but it’s always a possibility.
Dumping an organization’s secret information is going to become increasingly common as individuals realize its effectiveness for whistleblowing and revenge. While some hackers will use journalists to separate the news stories from mere personal information, not all will.
Both governments and corporations need to assume that their secrets are more likely to be exposed, and exposed sooner, than ever. They should do all they can to protect their data and networks, but have to realize that their best defense might be to refrain from doing things that don’t look good on the front pages of the world’s newspapers.
This essay previously appeared on CNN.com. I didn’t use the term “organizational doxing,” though, because it would be too unfamiliar to that audience.
Saudi Arabia leaks:
Hacking Team leaks:
Encryption protects our data. It protects our data when it’s sitting on our computers and in data centers, and it protects it when it’s being transmitted around the Internet. It protects our conversations, whether video, voice, or text. It protects our privacy. It protects our anonymity. And sometimes, it protects our lives.
This protection is important for everyone. It’s easy to see how encryption protects journalists, human rights defenders, and political activists in authoritarian countries. But encryption protects the rest of us as well. It protects our data from criminals. It protects it from competitors, neighbors, and family members. It protects it from malicious attackers, and it protects it from accidents.
Encryption works best if it’s ubiquitous and automatic. The two forms of encryption you use most often—https URLs on your browser, and the handset-to-tower link for your cell phone calls—work so well because you don’t even know they’re there.
Encryption should be enabled for everything by default, not a feature you turn on only if you’re doing something you consider worth protecting.
This is important. If we only use encryption when we’re working with important data, then encryption signals that data’s importance. If only dissidents use encryption in a country, that country’s authorities have an easy way of identifying them. But if everyone uses it all of the time, encryption ceases to be a signal. No one can distinguish simple chatting from deeply private conversation. The government can’t tell the dissidents from the rest of the population. Every time you use encryption, you’re protecting someone who needs to use it to stay alive.
It’s important to remember that encryption doesn’t magically convey security. There are many ways to get encryption wrong, and we regularly see them in the headlines. Encryption doesn’t protect your computer or phone from being hacked, and it can’t protect metadata, such as e-mail addresses that need to be unencrypted so your mail can be delivered.
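To make the metadata point concrete, here is a toy Python sketch (the addresses, subject, and body are invented for the example): even when a message body is encrypted end-to-end, the envelope fields have to stay readable so the mail can be routed.

```python
# Toy illustration: an e-mail with an encrypted body still exposes its
# routing metadata. Addresses and body are invented for the example.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"   # metadata: readable by every relay
msg["To"] = "bob@example.net"       # metadata: readable by every relay
msg["Subject"] = "hello"            # often travels in the clear as well
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n"
    "...ciphertext only the recipient can decrypt...\n"
    "-----END PGP MESSAGE-----\n"
)

# Every server that relays this message learns who is talking to whom,
# and when, no matter how strong the body encryption is.
print(msg["From"], "->", msg["To"])
```

Traffic-analysis defenses exist, but they are a separate problem from encrypting content.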
But encryption is the most important privacy-preserving technology we have, and one that is uniquely suited to protect against bulk surveillance—the kind done by governments looking to control their populations and criminals looking for vulnerable victims. By forcing both to target their attacks against individuals, we protect society.
Today, we are seeing government pushback against encryption. Many countries, from authoritarian states like China and Russia to more democratic governments like the United States and the United Kingdom, are either talking about or implementing policies that limit strong encryption. This is dangerous, because what they want is technically impossible to do securely, and the attempt will cause incredible damage to the security of the Internet.
There are two morals to all of this. One, we should push companies to offer encryption to everyone, by default. And two, we should resist demands from governments to weaken encryption. Any weakening, even in the name of legitimate law enforcement, puts us all at risk. Even though criminals benefit from strong encryption, we’re all much more secure when we all have strong encryption.
This essay originally appeared in Securing Safe Spaces Online.
It’s a companion document to this report:
Last weekend, the Sunday Times published a front-page story, citing anonymous British sources claiming that both China and Russia have copies of the Snowden documents. It’s a terrible article, filled with factual inaccuracies and unsubstantiated claims about both Snowden’s actions and the damage caused by his disclosure, and others have thoroughly refuted the story. I want to focus on the actual question: Do countries like China and Russia have copies of the Snowden documents?
I believe the answer is certainly yes, but that it’s almost certainly not Snowden’s fault.
Snowden has claimed that he gave nothing to China while he was in Hong Kong, and brought nothing to Russia. He has said that he encrypted the documents in such a way that even he no longer has access to them, and that he did this before the US government stranded him in Russia. I have no doubt he did as he said, because A) it’s the smart thing to do, and B) it’s easy. All he would have had to do was encrypt the file with a long random key, break the encrypted text up into a few parts and mail them to trusted friends around the world, then forget the key. He probably added some security embellishments, but—regardless—the first sentence of the Times story simply makes no sense: “Russia and China have cracked the top-secret cache of files…”
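That scheme is simple enough to sketch in a few lines of Python. This is only an illustration of the idea, not Snowden’s actual procedure: it uses a one-time pad as the “long random key,” and the function name and sample message are my own inventions.

```python
# Sketch of the idea: encrypt with a long random key (here a one-time
# pad), split the ciphertext into parts to send to trusted friends,
# then discard the key. With the key gone, not even the sender can
# recover the plaintext. Illustrative only; a real system would use a
# vetted construction and a proper secret-sharing scheme.
import secrets

def encrypt_and_split(data, n_parts=3):
    key = secrets.token_bytes(len(data))           # long random key
    ciphertext = bytes(a ^ b for a, b in zip(data, key))
    size = -(-len(ciphertext) // n_parts)          # ceiling division
    parts = [ciphertext[i:i + size]
             for i in range(0, len(ciphertext), size)]
    del key                                        # "forget the key"
    return parts

parts = encrypt_and_split(b"the documents")
```

No single part reveals anything, and even all the parts together are useless once the key is discarded, which is the whole point.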
But while cryptography is strong, computer security is weak. The vulnerability is not Snowden; it’s everyone who has access to the files.
First, the journalists working with the documents. I’ve handled some of the Snowden documents myself, and even though I’m a paranoid cryptographer, I know how difficult it is to maintain perfect security. It’s been open season on the computers of the journalists Snowden shared documents with since this story broke in June 2013. And while they have been taking extraordinary pains to secure those computers, it’s almost certainly not enough to keep out the world’s intelligence services.
There is a lot of evidence for this belief. We know from other top-secret NSA documents that as far back as 2008, the agency’s Tailored Access Operations group has had extraordinary capabilities to hack into and “exfiltrate” data from specific computers, even if those computers are highly secured and not connected to the Internet.
These NSA capabilities are not unique, and it’s reasonable to assume both that other countries had similar capabilities in 2008 and that everyone has improved their attack techniques in the seven years since then. Last week, we learned that Israel had successfully hacked a wide variety of networks, including that of a major computer antivirus company. We also learned that China successfully hacked US government personnel databases. And earlier this year, Russia successfully hacked the White House’s network. These sorts of stories are now routine.
Which brings me to the second potential source of these documents to foreign intelligence agencies: the US and UK governments themselves. I believe that both China and Russia had access to all the files that Snowden took well before Snowden took them because they’ve penetrated the NSA networks where those files reside. After all, the NSA has been a prime target for decades.
Those government hacking examples above were against unclassified networks, but the nation-state techniques we’re seeing work against classified and unconnected networks as well. In general, it’s far easier to attack a network than it is to defend the same network. This isn’t a statement about willpower or budget; it’s how computer and network security work today. A former NSA deputy director recently said that if we were to score cyber the way we score soccer, the tally would be 462-456 twenty minutes into the game. In other words, it’s all offense and no defense.
In this kind of environment, we simply have to assume that even our classified networks have been penetrated. Remember that Snowden was able to wander through the NSA’s networks with impunity, and that the agency had so few controls in place that the only way they can guess what has been taken is to extrapolate based on what has been published. Does anyone believe that Snowden was the first to take advantage of that lax security? I don’t.
This is why I find allegations that Snowden was working for the Russians or the Chinese simply laughable. What makes you think those countries waited for Snowden? And why do you think someone working for the Russians or the Chinese would go public with their haul?
I am reminded of a comment made to me in confidence by a US intelligence official. I asked him what he was most worried about, and he replied: “I know how deep we are in our enemies’ networks without them having any idea that we’re there. I’m worried that our networks are penetrated just as deeply.”
Seems like a reasonable worry to me.
The open question is which countries have sophisticated enough cyberespionage operations to mount a successful attack against one of the journalists or against the intelligence agencies themselves. And while I have my own mental list, the truth is that I don’t know. But certainly Russia and China are on the list, and it’s just as certain they didn’t have to wait for Snowden to get access to the files. While it might be politically convenient to blame Snowden because, as the Sunday Times reported an anonymous source saying, “we have now seen our agents and assets being targeted,” the NSA and GCHQ should first take a look into their mirrors.
This essay originally appeared on Wired.com.
Snowden brought nothing to Russia:
Snowden encrypted the documents:
The capabilities of the NSA’s TAO:
Intelligence successes of other countries:
NSA deputy director quote:
NSA still doesn’t know what Snowden took:
Allegations that Snowden was working for the Russians or the Chinese:
I also posted this essay to Lawfare, where I added:
A Twitter user commented: “Surely if agencies accessed computers of people Snowden shared with then is still his fault?”
Yes, that’s right. Snowden took the documents out of the well-protected NSA network and shared them with people who don’t have those levels of computer security. Given what we’ve seen of the NSA’s hacking capabilities, I think the odds are zero that other nations were unable to hack at least one of those journalists’ computers. And yes, Snowden has to own that.
The point I make in the article is that those nations didn’t have to wait for Snowden. More specifically, GCHQ claims that “we have now seen our agents and assets being targeted.” One, agents and assets are not discussed in the Snowden documents. Two, it’s *two years* after Snowden handed those documents to reporters. Whatever is happening, it’s unlikely to be related to Snowden.
Peter Swire, law professor and one of the members of the President’s review group on the NSA, writes about intelligence reform and the USA FREEDOM Act.
New report: “The Tradeoff Fallacy: How marketers are misrepresenting American consumers and opening them up to exploitation.”
When you connect hospital drug pumps to the Internet, they’re hackable. This is only surprising to people who aren’t paying attention.
One of the biggest conceptual problems we have is that something is believed secure until demonstrated otherwise. We need to reverse that: everything should be believed insecure until demonstrated otherwise.
Interesting article on the inner workings of a Facebook account farm, with commentary on fake social media accounts in general.
As we’re all gearing up to fight the Second Crypto War over governments’ demands to be able to backdoor any cryptographic system, it pays for us to remember the history of the First Crypto War. The Open Technology Institute has written the story of those years in the mid-1990s.
The Second Crypto War is going to be harder and nastier, and I am less optimistic that strong cryptography will win in the short term.
Baseball hacking: Cardinals vs. Astros. I think this is the first case of one professional sports team hacking another. No idea if it was an official operation, or a couple of employees doing it on their own initiative.
The Intercept published a new story from the Snowden documents showing that the NSA and GCHQ targeted antivirus companies.
Good articles on the documents.
There are two other Snowden stories about GCHQ: one about its hacking practices, and the other about its propaganda and psychology research. The second is particularly disturbing.
Here’s a comprehensive document on migrating from SHA-1 to SHA-2 in Active Directory certificates.
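Most of the migration work is re-issuing the certificate chain; the cryptographic change itself is just a different digest function. A quick Python illustration of the old and new algorithm families (the sample data is invented):

```python
# SHA-1 produces a 160-bit digest and is being retired for signatures;
# SHA-256 is the most widely used member of the SHA-2 family.
import hashlib

data = b"tbsCertificate bytes"
print("SHA-1  :", hashlib.sha1(data).hexdigest())     # 40 hex chars
print("SHA-256:", hashlib.sha256(data).hexdigest())   # 64 hex chars
```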
There’s a new paper on a low-cost TEMPEST attack against PC cryptography:
Interesting research from 2012: “The Dynamics of Evolving Beliefs, Concerns, Emotions, and Behavioral Avoidance Following 9/11: A Longitudinal Analysis of Representative Archival Samples.”
The Intercept has published a highly detailed two-part article on how the NSA’s XKEYSCORE works, including a huge number of related documents from the Snowden archive.
So much to digest. Please post anything interesting you notice in the blog comments.
It’s the Internet, which means there must be cute animal videos. But this one is different. Watch a mother rabbit beat up a snake to protect her children. It’s impressive the way she keeps attacking the snake until it is far away from her nest, but I worry that she doesn’t know enough to grab the snake by the neck. Maybe there just aren’t any venomous snakes around those parts.
Amazon is analyzing the personal relationships of its reviewers.
India is cracking down on people who use technology to cheat on exams:
I haven’t heard much about this sort of thing in the US or Europe, but I assume it’s happening there too.
Interesting article on the NSA’s use of multi-beam antennas for surveillance. Certainly smart technology; it can eavesdrop on multiple targets per antenna.
I’m surprised by how behind the NSA was on this technology. It’s from at least 1973, and there was some commercialization as far back as 1981. Why did it take the NSA/GCHQ until 2010 to install this?
Here’s a modern supplier.
On the security of nuclear facilities:
Last week, a group of cryptographers and security experts released a major paper outlining the risks of government-mandated back-doors in encryption products: Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications, by Hal Abelson, Ross Anderson, Steve Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Matthew Green, Susan Landau, Peter Neumann, Ron Rivest, Jeff Schiller, Bruce Schneier, Michael Specter, and Danny Weitzner.
Abstract: Twenty years ago, law enforcement organizations lobbied to require data and communication services to engineer their products to guarantee law enforcement access to all data. After lengthy debate and vigorous predictions of enforcement channels going dark, these attempts to regulate the emerging Internet were abandoned. In the intervening years, innovation on the Internet flourished, and law enforcement agencies found new and more effective means of accessing vastly larger quantities of data. Today we are again hearing calls for regulation to mandate the provision of exceptional access mechanisms. In this report, a group of computer scientists and security experts, many of whom participated in a 1997 study of these same topics, has convened to explore the likely effects of imposing extraordinary access mandates. We have found that the damage that could be caused by law enforcement exceptional access requirements would be even greater today than it would have been 20 years ago. In the wake of the growing economic and social cost of the fundamental insecurity of today’s Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse forward secrecy design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.
It’s already had a big impact on the debate. It was mentioned several times during the recent Senate hearing on the issue.
Peter Swire’s Senate testimony:
Good article on these new crypto wars:
In May, Admiral James A. Winnefeld, Jr., vice-chairman of the Joint Chiefs of Staff, gave an address at the Joint Service Academies Cyber Security Summit at West Point. After he spoke for twenty minutes on the importance of Internet security and a good national defense, I was able to ask him a question about security versus surveillance:
Bruce Schneier: I’d like to hear you talk about this need to get beyond signatures and the more robust cyber defense and ask the industry to provide these technologies to make the infrastructure more secure. My question is, the only definition of “us” that makes sense is the world, is everybody. Any technologies that we’ve developed and built will be used by everyone—nation-state and non-nation-state. So anything we do to increase our resilience, infrastructure, and security will naturally make Admiral Rogers’s both intelligence and attack jobs much harder. Are you okay with that?
Admiral James A. Winnefeld: Yes. I think Mike’s okay with that, also. That’s a really, really good question. We call that IGL. Anyone know what IGL stands for? Intel gain-loss. And there’s this constant tension between the operational community and the intelligence community when a military action could cause the loss of a critical intelligence node. We live this every day. In fact, in ancient times, when we were collecting actual signals in the air, we would be on the operational side, “I want to take down that emitter so it’ll make it safer for my airplanes to penetrate the airspace,” and they’re saying, “No, you’ve got to keep that emitter up, because I’m getting all kinds of intelligence from it.” So this is a familiar problem. But I think we all win if our networks are more secure. And I think I would rather live on the side of secure networks and a harder problem for Mike on the intelligence side than very vulnerable networks and an easy problem for Mike. And part of that—it’s not only the right thing to do, but part of that goes to the fact that we are more vulnerable than any other country in the world, on our dependence on cyber. I’m also very confident that Mike has some very clever people working for him. He might actually still be able to get some work done. But it’s an excellent question. It really is.
It’s a good answer, and one firmly on the side of not introducing security vulnerabilities, backdoors, key-escrow systems, or anything that weakens Internet systems. It speaks to what I have seen as a split in the Second Crypto War, between the NSA and the FBI on building secure systems versus building systems with surveillance capabilities.
I have written about this before:
But here’s the problem: technological capabilities cannot distinguish based on morality, nationality, or legality; if the US government is able to use a backdoor in a communications system to spy on its enemies, the Chinese government can use the same backdoor to spy on its dissidents.
Even worse, modern computer technology is inherently democratizing. Today’s NSA secrets become tomorrow’s PhD theses and the next day’s hacker tools. As long as we’re all using the same computers, phones, social networking platforms, and computer networks, a vulnerability that allows us to spy also allows us to be spied upon.
We can’t choose a world where the US gets to spy but China doesn’t, or even a world where governments get to spy and criminals don’t. We need to choose, as a matter of policy, communications systems that are secure for all users, or ones that are vulnerable to all attackers. It’s security or surveillance.
NSA Director Admiral Mike Rogers was in the audience (he spoke earlier), and I saw him nodding at Winnefeld’s answer. Two weeks later, at CyCon in Tallinn, Rogers gave the opening keynote, and he seemed to be saying the opposite.
“Can we create some mechanism where within this legal framework there’s a means to access information that directly relates to the security of our respective nations, even as at the same time we are mindful we have got to protect the rights of our individual citizens?”
Rogers said a framework to allow law enforcement agencies to gain access to communications is in place within the phone system in the United States and other areas, so “why can’t we create a similar kind of framework within the internet and the digital age?”
He added: “I certainly have great respect for those that would argue that the most important thing is to ensure the privacy of our citizens and we shouldn’t allow any means for the government to access information. I would argue that’s not in the nation’s best long term interest, that we’ve got to create some structure that should enable us to do that, mindful that it has to be done in a legal way and mindful that it shouldn’t be something arbitrary.”
Does Winnefeld know that Rogers is contradicting him? Can someone ask JCS about this?
Joint Service Academies Cyber Security Summit:
Winnefeld’s comments (32:42 mark):
Rogers’ opening keynote at CyCon:
I don’t have much to say about the recent hack of the US Office of Personnel Management, which has been attributed to China (and seems to be getting worse all the time). We know that government networks aren’t any more secure than corporate networks, and might even be less secure.
I agree with Ben Wittes here (although not the imaginary double standard he talks about in the rest of the essay):
For the record, I have no problem with the Chinese going after this kind of data. Espionage is a rough business and the Chinese owe as little to the privacy rights of our citizens as our intelligence services do to the employees of the Chinese government. It’s our government’s job to protect this material, knowing it could be used to compromise, threaten, or injure its people—not the job of the People’s Liberation Army to forbear collection of material that may have real utility.
Former NSA Director Michael Hayden says much the same thing:
If Hayden had had the ability to get the equivalent Chinese records when running CIA or NSA, he says, “I would not have thought twice. I would not have asked permission. I’d have launched the star fleet. And we’d have brought those suckers home at the speed of light.” The episode, he says, “is not shame on China. This is shame on us for not protecting that kind of information.” The episode is “a tremendously big deal, and my deepest emotion is embarrassment.”
My question is this: Has anyone thought about the possibility of the attackers *manipulating* data in the database? What are the potential attacks that could stem from adding, deleting, and changing data? I don’t think they can add a person with a security clearance, but I’d like someone who knows more than I do to understand the risks.
Ben Wittes comment:
Michael Hayden comment:
I am speaking at Def Con in Las Vegas on 8/7:
An article on my Movie-Plot Threat Contest:
A video of my presentation at the Norwegian Developers Conference:
My Vice interview on mass surveillance:
Another review of Data and Goliath:
An interview with me on the UK banning strong encryption:
I’ve been reading through the 48 classified documents about the NSA’s XKEYSCORE system released by the Intercept last month. From the article:
The NSA’s XKEYSCORE program, first revealed by The Guardian, sweeps up countless people’s Internet searches, emails, documents, usernames and passwords, and other private communications. XKEYSCORE is fed a constant flow of Internet traffic from fiber optic cables that make up the backbone of the world’s communication network, among other sources, for processing. As of 2008, the surveillance system boasted approximately 150 field sites in the United States, Mexico, Brazil, United Kingdom, Spain, Russia, Nigeria, Somalia, Pakistan, Japan, Australia, as well as many other countries, consisting of over 700 servers.
These servers store “full-take data” at the collection sites—meaning that they captured all of the traffic collected—and, as of 2009, stored content for 3 to 5 days and metadata for 30 to 45 days. NSA documents indicate that tens of billions of records are stored in its database. “It is a fully distributed processing and query system that runs on machines around the world,” an NSA briefing on XKEYSCORE says. “At field sites, XKEYSCORE can run on multiple computers that gives it the ability to scale in both processing power and storage.”
There seem to be no access controls at all restricting how analysts can use XKEYSCORE. Standing queries—called “workflows”—and new fingerprints have an approval process, presumably for load reasons, but individual queries are not approved beforehand; they may only be audited after the fact. Queries are supposed to be low-latency, and you can’t put an approval process in front of low-latency analyst queries. Since a query can reach the recorded raw data, a single query is effectively a retrospective wiretap.
All this means that the Intercept is correct when it writes:
These facts bolster one of Snowden’s most controversial statements, made in his first video interview published by The Guardian on June 9, 2013. “I, sitting at my desk,” said Snowden, could “wiretap anyone, from you or your accountant, to a federal judge to even the president, if I had a personal email.”
You’ll only get the data if it’s in the NSA’s databases, but if it is there you’ll get it.
Honestly, there’s not much in these documents that’s a surprise to anyone who studied the 2013 XKEYSCORE leaks and knows what can be done with a highly customizable Intrusion Detection System. But it’s always interesting to read the details.
One document—”Intro to Context Sensitive Scanning with X-KEYSCORE Fingerprints” (2010)—talks about some of the queries an analyst can run. A sample scenario: “I want to look for people using Mojahedeen Secrets encryption from an iPhone.”
Mujahedeen Secrets is an encryption program written by al Qaeda supporters. It has been around since 2007. Last year, Stuart Baker cited its increased use as evidence that Snowden harmed America. I thought the opposite, that the NSA benefits from al Qaeda using this program. I wrote: “There’s nothing that screams ‘hack me’ more than using specially designed al Qaeda encryption software.”
And now we see how it’s done. In the document, we read about the specific XKEYSCORE queries an analyst can use to search for traffic encrypted by Mujahedeen Secrets. Here are some of the program’s fingerprints:
    encryption/mojahaden2
    encryption/mojahaden2/encodedheader
    encryption/mojahaden2/hidden
    encryption/mojahaden2/hidden2
    encryption/mojahaden2/hidden44
    encryption/mojahaden2/secure_file_cendode
    encryption/mojahaden2/securefile
So if you want to search for all iPhone users of Mujahedeen Secrets:
Or you can search for the program’s use in the encrypted text, because: “…many of the CT Targets are now smart enough not to leave the Mojahedeen Secrets header in the E-mails they send. How can we detect that the E-mail (which looks like junk) is in fact Mojahedeen Secrets encrypted text.” Summary of the answer: there are lots of ways to detect the use of this program that users can’t detect. And you can combine the use of Mujahedeen Secrets with other identifiers to find targets. For example, you can specifically search for the program’s use in extremist forums. (Note that the NSA wrote that comment about Mujahedeen Secrets users increasing their opsec in 2010, two years before Snowden supposedly told them that the NSA was listening on their communications. Honestly, I would not be surprised if the program turned out to have been a US operation to get Islamic radicals to make their traffic stand out more easily.)
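The two detection modes described here, header matching and headerless ciphertext spotting, can be sketched in a few lines. This is an illustrative toy, not anything from the leaked documents: the header marker, the base64-run threshold, and the function name are all made up; only the fingerprint names come from the list above.

```python
import base64
import re

# Placeholder header marker -- the real Mujahedeen Secrets markers are
# not reproduced here; this only illustrates the matching technique.
HEADER_RE = re.compile(r"-----BEGIN MS2 MESSAGE-----")

# Headerless heuristic: a long unbroken run of base64-looking text that
# decodes cleanly, which is what ciphertext with its header stripped
# tends to look like inside an otherwise junk-looking e-mail.
B64_RUN_RE = re.compile(r"[A-Za-z0-9+/=\r\n]{200,}")

def fingerprints(body: str) -> set[str]:
    """Return the set of fingerprint names a message body matches."""
    hits = set()
    if HEADER_RE.search(body):
        hits.add("encryption/mojahaden2/encodedheader")
        return hits
    for run in B64_RUN_RE.findall(body):
        compact = re.sub(r"\s+", "", run)
        try:
            base64.b64decode(compact, validate=True)
            hits.add("encryption/mojahaden2/hidden")
            break
        except ValueError:
            continue
    return hits
```

A real system would run rules like this at line rate over reassembled sessions; the point is only that stripping the header does not make the ciphertext hard to flag.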
It’s not just Mujahedeen Secrets. Nicholas Weaver explains how you can use XKEYSCORE to identify co-conspirators who are all using PGP.
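Weaver’s observation reduces to a simple traffic-analysis step: flag every message carrying a PGP-armored payload, then build a contact graph of who exchanges such mail. A hedged sketch, with a made-up (sender, recipient, body) message format:

```python
import re
from collections import defaultdict

# PGP's ASCII-armor header is a standard, highly distinctive marker.
PGP_RE = re.compile(r"-----BEGIN PGP MESSAGE-----")

def pgp_contact_graph(messages):
    """Return {address: set of addresses it exchanged PGP mail with}."""
    graph = defaultdict(set)
    for sender, recipient, body in messages:
        if PGP_RE.search(body):
            graph[sender].add(recipient)
            graph[recipient].add(sender)
    return dict(graph)
```

The encryption hides the content, but the armor itself clusters the correspondents.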
And these searches are just one example. Other examples from the documents include:
* “Targets using mail.ru from behind a large Iranian proxy.”
* “Usernames and passwords of people visiting gov.ir.”
* “People in Pakistan visiting certain German-language message boards.”
* “HTTP POST traffic from Russia in the middle of the night—useful for finding people trying to steal our data.”
* “People doing web searches on jihadist topics from Kabul.”
E-mails, chats, web-browsing traffic, pictures, documents, voice calls, webcam photos, web searches, advertising analytics traffic, social media traffic, botnet traffic, logged keystrokes, file uploads to online services, Skype sessions and more: if you can figure out how to form the query, you can ask XKEYSCORE for it. For an example of how complex the searches can be, look at this XKEYSCORE query published in March, showing how New Zealand used the system to spy on the World Trade Organization: it automatically tracked any email body with particular WTO-related content in the run-up to an election.
I always read these NSA documents with an assumption that other countries are doing the same thing. The NSA is not made of magic, and XKEYSCORE is not some super-advanced NSA-only technology. It is the same sort of thing that every other country would use with its surveillance data. For example, Russia explicitly requires ISPs to install similar monitors as part of its SORM Internet surveillance system. As a home user, you can build your own XKEYSCORE using the public-domain Bro Network Security Monitor and the related Time Machine attached to a back-end data-storage system. (Lawrence Berkeley National Laboratory uses this system to store three months’ worth of Internet traffic for retrospective surveillance—it used the data to study Heartbleed.) The primary advantage the NSA has is that it sees more of the Internet than anyone else, and spends more money to store the data it intercepts for longer than anyone else. And if these documents explain XKEYSCORE in 2009 and 2010, expect that it’s much more powerful now.
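The retention scheme quoted earlier, content for a few days and metadata for a month or more, is simple enough to model, which is part of why a home-built version is plausible. A toy sketch; the class names and windows are illustrative, not taken from any real system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, Optional

# Illustrative windows, echoing the 2009 figures quoted above.
CONTENT_WINDOW = timedelta(days=3)
METADATA_WINDOW = timedelta(days=30)

@dataclass
class Record:
    ts: datetime
    src: str
    dst: str
    content: Optional[str]  # set to None once past the content window

class FullTakeStore:
    """Toy rolling 'full-take' buffer with tiered retention."""

    def __init__(self) -> None:
        self.records: list[Record] = []

    def ingest(self, rec: Record) -> None:
        self.records.append(rec)

    def expire(self, now: datetime) -> None:
        kept = []
        for r in self.records:
            age = now - r.ts
            if age > METADATA_WINDOW:
                continue  # even the metadata is gone
            if age > CONTENT_WINDOW:
                r.content = None  # keep who/when, drop what
            kept.append(r)
        self.records = kept

    def query(self, pred: Callable[[Record], bool]) -> list[Record]:
        # A retrospective query: any predicate over retained traffic.
        return [r for r in self.records if pred(r)]
```

Everything hard about the real system is scale, not concept: the query model itself is just a filter over whatever the retention policy has kept.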
Back to encryption and Mujahedeen Secrets. If you want to stay secure, whether you’re trying to evade surveillance by Russia, China, the NSA, criminals intercepting large amounts of traffic, or anyone else, try not to stand out. Don’t use some homemade specialized cryptography that can be easily identified by a system like this. Use reasonably strong encryption software on a reasonably secure device—that’s all in common use. If you trust Apple’s claims, use iMessage and FaceTime on your iPhone. I really like Moxie Marlinspike’s Signal for both text and voice, but worry that it’s too obvious because it’s still rare. Ubiquitous encryption is the bane of listeners worldwide, and it’s the best thing we can deploy to make the world safer.
All the links and references associated with this essay are here:
Former NSA Director Michael Hayden recently mocked the NSA reforms in the newly passed USA Freedom Act:
If somebody would come up to me and say, “Look, Hayden, here’s the thing: This Snowden thing is going to be a nightmare for you guys for about two years. And when we get all done with it, what you’re going to be required to do is that little 215 program about American telephony metadata—and by the way, you can still have access to it, but you got to go to the court and get access to it from the companies, rather than keep it to yourself.” I go: “And this is it after two years? Cool!”
The thing is, he’s right. And Peter Swire is also right when he calls the law “the biggest pro-privacy change to U.S. intelligence law since the original enactment of the Foreign Intelligence Surveillance Act in 1978.” I supported the bill not because it was the answer, but because it was a step in the right direction. And Hayden’s comments demonstrate how much more work we have to do.
A few weeks ago, WikiLeaks published three summaries of NSA intercepts of French and German government communications. To me, the most interesting things are not the intercept analyses but the spreadsheets of intelligence targets. Here we learn the specific telephone numbers being targeted, who owns those phone numbers, the office within the NSA that processes the raw communications received, why the target is being spied on (for the Germans, all are designated as “Germany: Political Affairs”), and when we started spying using this particular justification. It’s one of the few glimpses we have into the bureaucracy of surveillance.
Now that we’ve seen a few top secret documents on eavesdropping on German, French, and Brazilian communications, and given what I know of Julian Assange’s tactics, my guess is that there is a lot more where this came from. And this might be yet another leaker—certainly it’s not Snowden.
Counting the leakers:
Someone hacked the cyberweapons arms manufacturer Hacking Team and posted 400 GB of internal company data.
Hacking Team is a pretty sleazy company, selling surveillance software to all sorts of authoritarian governments around the world. Reporters Without Borders calls it one of the enemies of the Internet. Citizen Lab has published many reports about its activities.
It’s a huge trove of data, including a spreadsheet listing every government client, when they first bought the surveillance software, and how much money they have paid the company to date. Not surprisingly, the company has been lying about who its customers are.
The Hacking Team CEO, David Vincenzetti, doesn’t like me:
In another [e-mail], the Hacking Team CEO on 15 May claimed renowned cryptographer Bruce Schneier was “exploiting the Big Brother is Watching You FUD (Fear, Uncertainty and Doubt) phenomenon in order to sell his books, write quite self-promoting essays, give interviews, do consulting etc. and earn his hefty money.”
Meanwhile, Hacking Team has told all of its customers to shut down all uses of its software. They are in “full on emergency mode,” which is perfectly understandable. This is a big deal. It’s one thing to have dissatisfied customers. It’s another to have dissatisfied customers with death squads. I don’t think the company is going to survive this.
EDITED TO ADD: Hacking Team had no exploits for an un-jailbroken iPhone. Seems like the platform of choice if you want to stay secure.
More on Hacking Team:
Hacking Team doesn’t like me:
Hacking Team’s full on emergency mode:
Hacking Team had a signed iOS certificate, which has been revoked:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient Systems, Inc. See <https://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.
Copyright (c) 2015 by Bruce Schneier.