April 15, 2014
by Bruce Schneier
CTO, Co3 Systems, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1404.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Heartbleed
- Seventh Movie-Plot Threat Contest
- MYSTIC: The NSA’s Telephone Call Collection Program
- The Continuing Public/Private Surveillance Partnership
- New Book on Data and Power
- Schneier News
- An Open Letter to IBM’s Open Letter
- Ephemeral Apps
- Details of the Target Credit Card Breach
Heartbleed

Heartbleed is a catastrophic bug in OpenSSL:
“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.”
Basically, an attacker can grab 64K of memory from a server. The attack leaves no trace, and can be done multiple times to grab a different random 64K of memory. This means that anything in memory — SSL private keys, user keys, anything — is vulnerable. And you have to assume that it is all compromised. All of it.
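The mechanics can be sketched in a few lines of Python. The record layout below follows RFC 6520’s HeartbeatMessage (a 1-byte type, a 2-byte claimed payload length, then the payload); the bug was that vulnerable servers echoed back the *claimed* number of bytes without checking it against the actual payload:

```python
import struct

def heartbeat_request(claimed_len: int, payload: bytes = b"") -> bytes:
    """Build a TLS heartbeat request: type (1 = request), claimed
    payload length (big-endian 16 bits), then the actual payload."""
    return struct.pack(">BH", 1, claimed_len) + payload

# An honest request claims its real payload length.
honest = heartbeat_request(5, b"hello")

# A Heartbleed probe claims the maximum (0xFFFF bytes) but sends nothing;
# an unpatched server replies with ~64K of whatever sits next to the
# heartbeat buffer in memory.
probe = heartbeat_request(0xFFFF)
```

The probe itself is only three bytes long; the damage is entirely in the reply.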
“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.
The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.
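As a first-pass check before patching, the affected range can be classified from the output of `openssl version`. This is a simplified sketch: 1.0.2-beta1 was also affected, and some vendors backport fixes without bumping the version letter, so treat it only as a screen:

```python
def heartbleed_status(version_string: str) -> str:
    """Classify an `openssl version` string against the affected range:
    OpenSSL 1.0.1 through 1.0.1f are vulnerable; 1.0.1g is fixed.
    Simplified: 1.0.2-beta1 was also affected, and backported fixes
    may not change the letter, so this is only a first pass."""
    prefix = "OpenSSL 1.0.1"
    if not version_string.startswith(prefix):
        return "branch not affected"
    letter = version_string[len(prefix):len(prefix) + 1]
    return "patched" if letter >= "g" else "vulnerable"

print(heartbleed_status("OpenSSL 1.0.1f 6 Jan 2014"))   # vulnerable
print(heartbleed_status("OpenSSL 1.0.1g 7 Apr 2014"))   # patched
```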
At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.
Seventh Movie-Plot Threat Contest

As you might expect, this year’s contest has the NSA as the villain:
The NSA has won, but how did it do it? How did it use its ability to conduct ubiquitous surveillance, its massive data centers, and its advanced data analytics capabilities to come out on top? Did it take over the world overtly, or is it just pulling the strings behind everyone’s backs? Did it have to force companies to build surveillance into their products, or could it just piggy-back on market trends? How does it deal with liberal democracies and ruthless totalitarian dictatorships at the same time? Is it blackmailing Congress? How does the money flow? What’s the story?
That’s it: an NSA movie-plot threat. (For those who don’t know, a movie-plot threat is a scary-threat story that would make a great movie, but is much too specific to build security policies around.) Nothing too science fictional; today’s technology or presumed technology only.
Entries are limited to 500 words, and should be posted in the comments. In a month, I’ll choose some semifinalists, and we can all vote and pick the winner.
Prize will be something tangible, but primarily the accolades of your peers.
Post your entries here:
MYSTIC: The NSA’s Telephone Call Collection Program

The Washington Post is reporting on an NSA program called MYSTIC, which collects all — that’s 100% — of a country’s telephone calls. Those calls are stored in a database codenamed NUCLEON, and can be retrieved at a later date using a tool codenamed RETRO. This is voice, not metadata.
What’s interesting here is not the particular country whose data is being collected; that information was withheld from the article. It’s not even that the voice data is stored for a month, and then deleted. All of that can change, either at the whim of the NSA or as storage capabilities get larger. What’s interesting is that the capability exists to collect 100% of a country’s telephone calls, and the analysis tools are in place to search them.
Automatic face-recognition software is getting better:
Ross Anderson liveblogged Financial Cryptography 2014. Interesting stuff.
Both Der Spiegel and the New York Times are reporting that the NSA has hacked Huawei pretty extensively, getting copies of the company’s products’ source code and most of the e-mail from the company. Aside from being a pretty interesting story about the operational capabilities of the NSA, it exposes some pretty blatant US government hypocrisy on this issue. As former Bush administration official (and a friend of mine) Jack Goldsmith writes: “The Huawei revelations are devastating rebuttals to hypocritical U.S. complaints about Chinese penetration of U.S. networks, and also make USG protestations about not stealing intellectual property to help U.S. firms’ competitiveness seem like the self-serving hairsplitting that it is. (I have elaborated on these points many times and will not repeat them here.) ‘The irony is that exactly what they are doing to us is what they have always charged that the Chinese are doing through us,’ says a Huawei Executive.” This isn’t to say that the Chinese are not targeting foreign networks through Huawei equipment; they almost certainly are.
There’s a private competition to identify new password hashing schemes. Submissions are due at the end of the month.
Chilean drug trafficker pencil-and-paper code.
Interesting research into figuring out where Twitter users are located, based on similar tweets from other users.
Research shows that smarter people are more trusting.
Five-year-old finds login vulnerability in Microsoft Xbox.
The headline is provocative: “Human biology inspires ‘unbreakable’ encryption.” It’s unlikely to be secure.
Regularly, someone from outside cryptography — who has no idea how crypto works — pops up and says “Hey, I can solve their problems.” Invariably, they make some trivial encryption scheme because they don’t know better. Remember: anyone can create a cryptosystem that he himself cannot break. And this advice from 15 years ago is still relevant.
Police disabling their own voice recorders.
Surveillance of power is one of the most important ways to ensure that power does not abuse its status. But, of course, power does not like to be watched.
Matthew Green has been leading an independent project to audit TrueCrypt. Phase I, a source-code audit by iSEC Partners, is complete. Next up is Phase II: formal cryptanalysis. Quick summary: I’m still using it.
The Continuing Public/Private Surveillance Partnership

If you’ve been reading the news recently, you might think that corporate America is doing its best to thwart NSA surveillance.
Google just announced that it is encrypting Gmail when you access it from your computer or phone, and between data centers. Last week, Mark Zuckerberg personally called President Obama to complain about the NSA using Facebook as a means to hack computers, and Facebook’s Chief Security Officer explained to reporters that the attack technique has not worked since last summer. Yahoo, Google, Microsoft, and others are now regularly publishing “transparency reports,” listing approximately how many government data requests the companies have received and complied with.
On the government side, last week the NSA’s General Counsel Rajesh De seemed to have thrown those companies under a bus by stating that — despite their denials — they knew all about the NSA’s collection of data under both the PRISM program and some unnamed “upstream” collections on the communications links.
Yes, it may seem like the public/private surveillance partnership has frayed — but, unfortunately, it is alive and well. The interests of massive Internet companies and government agencies still largely align: to keep us all under constant surveillance. When they bicker, it’s mostly role-playing designed to keep us blasé about what’s really going on.
The U.S. intelligence community is still playing word games with us. The NSA collects our data based on four different legal authorities: the Foreign Intelligence Surveillance Act (FISA) of 1978, Executive Order 12333 of 1981 as modified in 2004 and 2008, Section 215 of the Patriot Act of 2001, and Section 702 of the FISA Amendments Act (FAA) of 2008. Be careful when someone from the intelligence community uses the caveat “not under this program” or “not under this authority”; almost certainly it means that whatever it is they’re denying is done under some other program or authority. So when De said that companies knew about NSA collection under Section 702, it doesn’t mean they knew about the other collection programs.
The big Internet companies know of PRISM — although not under that code name — because that’s how the program works; the NSA serves them with FISA orders. Those same companies did not know about any of the other surveillance against their users conducted under the far more permissive EO 12333. Google and Yahoo did not know about MUSCULAR, the NSA’s secret program to eavesdrop on their trunk connections between data centers. Facebook did not know about QUANTUMHAND, the NSA’s secret program to attack Facebook users. And none of the target companies knew that the NSA was harvesting their users’ address books and buddy lists.
These companies are certainly pissed that the publicity surrounding the NSA’s actions is undermining their users’ trust in their services, and they’re losing money because of it. Cisco, IBM, cloud service providers, and others have announced that they’re losing billions, mostly in foreign sales.
These companies are doing their best to convince users that their data is secure. But they’re relying on their users not understanding what real security looks like. IBM’s letter to its clients last week is an excellent example. The letter lists five “simple facts” that it hopes will mollify its customers, but the items are so qualified with caveats that they do the exact opposite to anyone who understands the full extent of NSA surveillance. And IBM’s spending $1.2B on data centers outside the U.S. will only reassure customers who don’t realize that National Security Letters require a company to turn over data, regardless of where in the world it is stored.
Google’s recent actions, and the similar actions of many Internet companies, will definitely improve its users’ security against surreptitious government collection programs — both the NSA’s and other governments’ — but those assurances deliberately ignore the massive security vulnerability built into its services by design. Google, and by extension the U.S. government, still has access to your communications on Google’s servers.
Google could change that. It could encrypt your e-mail so only you could decrypt and read it. It could provide for secure voice and video so no one outside the conversations could eavesdrop.
It doesn’t. And neither does Microsoft, Facebook, Yahoo, Apple, or any of the others.
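What real end-to-end security would mean can be sketched in a few lines: the key is generated and kept on the client, and the provider stores only ciphertext. A one-time pad stands in here for a production cipher; the point is who holds the key, not the algorithm:

```python
import secrets

def client_encrypt(message: bytes):
    """Encrypt on the client; the key never leaves the user's device."""
    key = secrets.token_bytes(len(message))               # stays client-side
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return key, ciphertext                                # only ct is uploaded

def client_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, stored_on_server = client_encrypt(b"meet at noon")
# The provider -- and anyone who compels the provider -- holds only the
# ciphertext; without the key, it reveals nothing but the message length.
assert client_decrypt(key, stored_on_server) == b"meet at noon"
```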
Why not? Partly because they want to keep the ability to eavesdrop on your conversations. Surveillance is still the business model of the Internet, and every one of those companies wants access to your communications and your metadata. Your private thoughts and conversations are the product they sell to their customers. We have also learned that they read your e-mail for their own internal investigations.
But even if this were not true, even if — for example — Google were willing to forgo data mining your e-mail and video conversations in exchange for the marketing advantage it would give it over Microsoft, it still won’t offer you real security. It can’t.
The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.
This isn’t paranoia. We know that the U.S. government ordered the secure e-mail provider Lavabit to turn over its master keys and compromise every one of its users. We know that the U.S. government convinced Microsoft — either through bribery, coercion, threat, or legal compulsion — to make changes in how Skype operates, to make eavesdropping easier.
We don’t know what sort of pressure the U.S. government has put on Google and the others. We don’t know what secret agreements those companies have reached with the NSA. We do know the NSA’s BULLRUN program to subvert Internet cryptography was successful against many common protocols. Did the NSA demand Google’s keys, as it did with Lavabit? Did its Tailored Access Operations group break into Google’s servers and steal the keys?
We just don’t know.
The best we have are caveat-laden pseudo-assurances. At SXSW earlier this month, Google CEO Eric Schmidt tried to reassure the audience by saying that he was “pretty sure that information within Google is now safe from any government’s prying eyes.” A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.
Google, Facebook, Microsoft, and the others are already on the record as supporting legislative changes to rein in NSA surveillance. It would be better if they openly acknowledged their users’ insecurity and increased their pressure on the government to change, rather than trying to fool their users and customers.
This essay previously appeared on TheAtlantic.com.
Rajesh De’s comments:
Public/private surveillance partnership:
NSA word games:
The permissiveness of EO 12333:
NSA harvesting address books and buddy lists:
Companies losing money because of NSA activities:
Companies securing their users:
Surveillance is the business model of the Internet:
The value of metadata:
NSA and Lavabit:
NSA and Microsoft Skype:
Tailored Access Operations:
Eric Schmidt’s comments:
Companies on the record as opposing NSA surveillance:
New Book on Data and Power

I’m writing a new book, with the tentative title of “Data and Power.”
While it’s obvious that the proliferation of data affects power, it’s less clear how it does so. Corporations are collecting vast dossiers on our activities on- and off-line — initially to personalize marketing efforts, but increasingly to control their customer relationships. Governments are using surveillance, censorship, and propaganda — both to protect us from harm and to protect their own power. Distributed groups — socially motivated hackers, political dissidents, criminals, communities of interest — are using the Internet to both organize and effect change. And we as individuals are becoming both more powerful and less powerful. We can’t evade surveillance, but we can post videos of police atrocities online, bypassing censors and informing the world. How long we’ll still have those capabilities is unclear.
Understanding these trends involves understanding data. Data is generated by all computing processes. Most of it used to be thrown away, but declines in the prices of both storage and processing mean that more and more of it is now saved and used. Who saves the data, and how they use it, is a matter of extreme consequence, and will continue to be for the coming decades.
“Data and Power” examines these trends and more. The book looks at the proliferation and accessibility of data, and how it has enabled constant surveillance of our entire society. It examines how governments and corporations use that surveillance data, as well as how they control data for censorship and propaganda. The book then explores how data has empowered individuals and less-traditional power blocs, and how the interplay among all of these types of power will evolve in the future. It discusses technical controls on power, and the limitations of those controls. And finally, the book describes solutions to balance power in the future — both general principles for society as a whole, and specific near-term changes in technology, business, laws, and social norms.
There’s a fundamental trade-off we need to make as society. Our data is enormously valuable in aggregate, yet it’s incredibly personal. The powerful will continue to demand aggregate data, yet we have to protect its intimate details. Balancing those two conflicting values is difficult, whether it’s medical data, location data, Internet search data, or telephone metadata. But balancing them is what society needs to do, and is almost certainly the fundamental issue of the Information Age.
As I said, “Data and Power” is just a tentative title. Suggestions for a better one — either a title or a subtitle — are appreciated. Here are some ideas to get you started:
* Data and Power: The Political Science of Information Security
* The Feudal Internet: How Data Affects Power and How Power Affects Data
* Our Data Shadow: The Battles for Power in the Information Society
* Data.Power: The Political Science of Information Security
* Data and Power in the Information Age
* Data and Goliath: The Balance of Power in the Information Age
* The Power of Data: How the Information Society Upsets Power Balances
My plan is to finish the manuscript by the end of October, for publication in February 2015. Norton will be the publisher. I’ll post a table of contents in a couple of months. And, as with my previous books, I will be asking for volunteers to read and comment on a draft version.
If you notice I’m not posting as many blog entries, or writing as many essays, this is what I’m doing instead.
Schneier News

I’m speaking at Stanford Law School on Apr 15:
I’m speaking at the Global Summit for Leaders in Information Technology in Washington DC on May 7.
I’m speaking at the Institute of World Politics on May 8.
I’m speaking at the University of Zurich on May 21:
I’m speaking at IT Security Inside in Zurich on May 22:
I’m speaking at the University of Oregon at Eugene on May 28, and then Portland on May 29:
Here’s a list of articles about me from the last month.
And these are audio/videos of me:
An Open Letter to IBM’s Open Letter

Last week, IBM published an “open letter” about “government access to data,” where it tried to assure its customers that it’s not handing everything over to the NSA. Unfortunately, the letter (quoted in part below) leaves open more questions than it answers.
At the outset, we think it is important for IBM to clearly state some simple facts:
* IBM has not provided client data to the National Security Agency (NSA) or any other government agency under the program known as PRISM.
* IBM has not provided client data to the NSA or any other government agency under any surveillance program involving the bulk collection of content or metadata.
* IBM has not provided client data stored outside the United States to the U.S. government under a national security order, such as a FISA order or a National Security Letter.
* IBM does not put “backdoors” in its products for the NSA or any other government agency, nor does IBM provide software source code or encryption keys to the NSA or any other government agency for the purpose of accessing client data.
* IBM has and will continue to comply with the local laws, including data privacy laws, in all countries in which it operates.
To which I ask:
* We know you haven’t provided data to the NSA under PRISM. It didn’t use that name with you. Even the NSA General Counsel said: “PRISM was an internal government term that as the result of leaks became the public term.” What program *did* you provide data to the NSA under?
* It seems rather obvious that you haven’t provided the NSA with any data under a bulk collection surveillance program. You’re not Google; you don’t have bulk data to that extent. So why the caveat? And again, under what program *did* you provide data to the NSA?
* Okay, so you say that you haven’t provided any data stored outside the US to the NSA under a national security order. Since those national security orders prohibit you from disclosing their existence, would you say anything different if you did receive them? And even if we believe this statement, it implies two questions. Why did you specifically not talk about data stored inside the US? And why did you specifically not talk about providing data under another sort of order?
* Of course you don’t provide your source code to the NSA for the purpose of accessing client data. The NSA isn’t going to tell you that’s why it wants your source code. So, for what purposes *did* you provide your source code to the government? To get a contract? For audit purposes? For what?
* Yes, we know you need to comply with all local laws, including US laws. That’s why we don’t trust you — the current secret interpretations of US law require you to screw your customers. I’d really rather you simply said that, and worked to change those laws, than pretend that you can convince us otherwise.
One more thing. A New York Times article says that you are “spending more than a billion dollars to build data centers overseas to reassure foreign customers that their information is safe from prying eyes in the United States government.” Do you not know that National Security Letters require you to turn over requested data, regardless of where in the world it is stored? Or do you just hope that your customers don’t realize that?
The NY Times article:
Ephemeral Apps

Ephemeral messaging apps such as Snapchat, Wickr and Frankly, all of which advertise that your photo, message or update will only be accessible for a short period, are on the rise. Snapchat and Frankly, for example, claim they permanently delete messages, photos and videos after 10 seconds. After that, there’s no record.
This notion is especially popular with young people, and these apps are an antidote to sites such as Facebook where everything you post lasts forever unless you take it down — and taking it down is no guarantee that it isn’t still available.
These ephemeral apps are the first concerted push against the permanence of Internet conversation. We started losing ephemeral conversation when computers began to mediate our communications. Computers naturally produce conversation records, and that data was often saved and archived.
The powerful and famous — from Oliver North back in 1987 to Anthony Weiner in 2011 — have been brought down by e-mails, texts, tweets and posts they thought private. Lots of us have been embroiled in more personal embarrassments resulting from things we’ve said either being saved for too long or shared too widely.
People have reacted to this permanent nature of Internet communications in ad hoc ways. We’ve deleted our stuff where possible and asked others not to forward our writings without permission. “Wall scrubbing” is the term used to describe the deletion of Facebook posts.
Sociologist danah boyd has written about teens who systematically delete every post they make on Facebook soon after they make it. Apps such as Wickr just automate the process. And it turns out there’s a huge market in that.
Ephemeral conversation is easy to promise but hard to get right. In 2013, researchers discovered that Snapchat doesn’t delete images as advertised; it merely changes their names so they’re not easy to see. Whether this is a problem for users depends on how technically savvy their adversaries are, but it illustrates the difficulty of making instant deletion actually work.
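The gap between hiding and deleting is easy to demonstrate. Here is a toy sketch of that 2013 finding; the path and file contents are invented, but Snapchat’s Android client reportedly hid “expired” images with a “.nomedia” suffix, which stops gallery apps from displaying a file without deleting a single byte:

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
photo = os.path.join(workdir, "snap.jpg")
with open(photo, "wb") as f:
    f.write(b"\xff\xd8 pretend JPEG bytes")    # placeholder image data

# "Deletion" by renaming: gallery apps skip the file, but it is still there.
hidden = photo + ".nomedia"
os.rename(photo, hidden)

with open(hidden, "rb") as f:
    recovered = f.read()                       # trivially recoverable
```

Anyone with filesystem access (a forensics tool, or a mildly curious adversary) reads the “deleted” photo back out unchanged.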
The problem is that these new “ephemeral” conversations aren’t really ephemeral the way a face-to-face unrecorded conversation would be. They’re not ephemeral like a conversation during a walk in a deserted woods used to be before the invention of cell phones and GPS receivers.
At best, the data is recorded, used, saved and then deliberately deleted. At worst, the ephemeral nature is faked. While the apps make the posts, texts or messages unavailable to users quickly, they probably don’t erase them off their systems immediately. They certainly don’t erase them from their backup tapes, if they end up there.
The companies offering these apps might very well analyze their content and make that information available to advertisers. We don’t know how much metadata is saved. In Snapchat, users can see the metadata even though they can’t see the content, and we don’t know what it’s used for. And if the government demanded copies of those conversations — either through a secret NSA demand or a more normal legal process involving an employer or school — the companies would have no choice but to hand them over.
Even worse, if the FBI or NSA demanded that American companies secretly store those conversations and not tell their users, breaking their promise of deletion, the companies would have no choice but to comply.
That last bit isn’t just paranoia.
We know the U.S. government has done this to companies large and small. Lavabit was a small secure e-mail service, with an encryption system designed so that even the company had no access to users’ e-mail. Last year, the U.S. government presented it with a secret court order demanding that it turn over its master key, thereby compromising the security of every user. Lavabit shut down its service rather than comply, but that option isn’t feasible for larger companies. In 2011, Microsoft made some still-unknown changes to Skype to make NSA eavesdropping easier, but the security promises it advertised didn’t change.
This is one of the reasons President Barack Obama’s announcement that he will end one particular NSA collection program under one particular legal authority barely begins to solve the problem: the surveillance state is so robust that anything other than a major overhaul won’t make a difference.
Of course, the typical Snapchat user doesn’t care whether the U.S. government is monitoring his conversations. He’s more concerned about his high school friends and his parents. But if these platforms are insecure, it’s not just the NSA that one should worry about.
Dissidents in the Ukraine and elsewhere need security, and if they rely on ephemeral apps, they need to know that their own governments aren’t saving copies of their chats. And even U.S. high school students need to know that their photos won’t be surreptitiously saved and used against them years later.
The need for ephemeral conversation isn’t some weird privacy fetish or the exclusive purview of criminals with something to hide. It represents a basic need for human privacy, and something every one of us had as a matter of course before the invention of microphones and recording devices.
We need ephemeral apps, but we need credible assurances from the companies that they are actually secure and credible assurances from the government that they won’t be subverted.
This essay previously appeared on CNN.com.
The popularity of ephemeral apps:
Facebook doesn’t erase deleted content:
Systematically deleting every post they make on Facebook:
Snapchat doesn’t delete images as advertised:
Microsoft making changes in Skype:
President Barack Obama’s announcement:
Breaking up the NSA:
The value of privacy:
There are apps to permanently save Snapchat photos:
At Financial Cryptography 2014, Franziska Roesner presented a paper questioning whether users actually expect ephemeral messaging from Snapchat.
Details of the Target Credit Card Breach

Long and interesting article about the Target credit card breach from last year. What’s especially interesting to me is that the attack was preventable: Target messed up its incident response.
In testimony before Congress, Target has said that it was only after the U.S. Department of Justice notified the retailer about the breach in mid-December that company investigators went back to figure out what happened. What it hasn’t publicly revealed: Poring over computer logs, Target found FireEye’s alerts from Nov. 30 and more from Dec. 2, when hackers installed yet another version of the malware. Not only should those alarms have been impossible to miss, they went off early enough that the hackers hadn’t begun transmitting the stolen card data out of Target’s network. Had the company’s security team responded when it was supposed to, the theft that has since engulfed Target, touched as many as one in three American consumers, and led to an international manhunt for the hackers never would have happened at all.
This is *exactly* the sort of thing that my new company, Co3 Systems, solves. All of those next-generation endpoint detection systems, threat intelligence feeds, and so on only matter if you do something in response to them. If Target had had incident response procedures in place, and a system in place to ensure they followed those procedures, it would have been much more likely to have responded to the alerts it received from FireEye.
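The missing piece was procedural rather than technological. A hypothetical sketch of the rule that was absent (the alert names and the SLA are invented; only the two alert dates come from the article): any alert that sits unacknowledged past a deadline escalates automatically instead of scrolling by.

```python
from datetime import datetime, timedelta

def overdue_alerts(alerts, now, sla=timedelta(hours=4)):
    """Return alerts that have sat unacknowledged past the response SLA."""
    return [a for a in alerts if not a["acked"] and now - a["raised"] > sla]

# Two FireEye alerts with the dates reported in the article; everything
# else here (field names, IDs, the 4-hour SLA) is invented for illustration.
alerts = [
    {"id": "fireeye-nov30", "raised": datetime(2013, 11, 30), "acked": False},
    {"id": "fireeye-dec02", "raised": datetime(2013, 12, 2),  "acked": False},
]
escalate = overdue_alerts(alerts, now=datetime(2013, 12, 15))
# By mid-December both alerts are weeks past any reasonable SLA; a process
# that forced escalation would have caught them before the data left.
```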
This is why I believe that incident response is the most underserved area of IT security right now.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books — including “Liars and Outliers: Enabling the Trust Society Needs to Survive” — as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Co3 Systems, Inc. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Co3 Systems, Inc.
Copyright (c) 2014 by Bruce Schneier.