Crypto-Gram

September 15, 2016

by Bruce Schneier
CTO, Resilient, an IBM Company
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2016/...>. These same essays and news items appear in the "Schneier on Security" blog at <http://www.schneier.com/blog>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:


The NSA Is Hoarding Vulnerabilities

The National Security Agency is lying to us. We know that because data stolen from an NSA server was dumped on the Internet. The agency is hoarding information about security vulnerabilities in the products you use, because it wants to use it to hack others' computers. Those vulnerabilities aren't being reported, and aren't getting fixed, making your computers and networks unsafe.

On August 13, a group calling itself the Shadow Brokers released 300 megabytes of NSA cyberweapon code on the Internet. Near as we experts can tell, the NSA network itself wasn't hacked; what probably happened was that a "staging server" for NSA cyberweapons -- that is, a server the NSA was making use of to mask its surveillance activities -- was hacked in 2013.

The NSA inadvertently resecured itself in what was coincidentally the early weeks of the Snowden document release. The people behind the leak used casual hacker lingo, and made a weird, implausible proposal involving holding a bitcoin auction for the rest of the data: "!!! Attention government sponsors of cyber warfare and those who profit from it !!!! How much you pay for enemies cyber weapons?"

Still, most people believe the hack was the work of the Russian government and the data release some sort of political message. Perhaps it was a warning that if the US government exposes the Russians as being behind the hack of the Democratic National Committee -- or other high-profile data breaches -- the Russians will expose NSA exploits in turn.

But what I want to talk about is the data. The sophisticated cyberweapons in the data dump include vulnerabilities and "exploit code" that can be deployed against common Internet security systems. Products targeted include those made by Cisco, Fortinet, TOPSEC, WatchGuard, and Juniper -- systems that are used by both private and government organizations around the world. Some of these vulnerabilities have been independently discovered and fixed since 2013, and some had remained unknown until now.

All of them are examples of the NSA -- despite what it and other representatives of the US government say -- prioritizing its ability to conduct surveillance over our security. Here's one example. Security researcher Mustafa al-Bassam found an attack tool codenamed BENIGNCERTAIN that tricks certain Cisco firewalls into exposing some of their memory, including their authentication passwords. Those passwords can then be used to decrypt virtual private network, or VPN, traffic, completely bypassing the firewalls' security. Cisco hasn't sold these firewalls since 2009, but they're still in use today.

Vulnerabilities like that one could have, and should have, been fixed years ago. And they would have been, if the NSA had made good on its word to alert American companies and organizations when it had identified security holes.

Over the past few years, different parts of the US government have repeatedly assured us that the NSA does not hoard "zero days" -- the term used by security experts for vulnerabilities unknown to software vendors. After we learned from the Snowden documents that the NSA purchases zero-day vulnerabilities from cyberweapons arms manufacturers, the Obama administration announced, in early 2014, that the NSA must disclose flaws in common software so they can be patched (unless there is "a clear national security or law enforcement" use).

Later that year, National Security Council cybersecurity coordinator and special adviser to the president on cybersecurity issues Michael Daniel insisted that the US doesn't stockpile zero-days (except for the same narrow exemption). An official statement from the White House in 2014 said the same thing.

The Shadow Brokers data shows this is not true. The NSA hoards vulnerabilities.

Hoarding zero-day vulnerabilities is a bad idea. It means that we're all less secure. When Edward Snowden exposed many of the NSA's surveillance programs, there was considerable discussion about what the agency does with vulnerabilities in common software products that it finds. Inside the US government, the system of figuring out what to do with individual vulnerabilities is called the Vulnerabilities Equities Process (VEP). It's an inter-agency process, and it's complicated.

There is a fundamental tension between attack and defense. The NSA can keep the vulnerability secret and use it to attack other networks. In such a case, we are all at risk of someone else finding and using the same vulnerability. Alternatively, the NSA can disclose the vulnerability to the product vendor and see it gets fixed. In this case, we are all secure against whoever might be using the vulnerability, but the NSA can't use it to attack other systems.

There are probably some overly pedantic word games going on. Last year, the NSA said that it discloses 91 percent of the vulnerabilities it finds. Leaving aside the question of whether that remaining 9 percent represents 1, 10, or 1,000 vulnerabilities, there's the bigger question of what qualifies in the NSA's eyes as a "vulnerability."

Not all vulnerabilities can be turned into exploit code. The NSA loses no attack capabilities by disclosing the vulnerabilities it can't use, and doing so gets its numbers up; it's good PR. The vulnerabilities we care about are the ones in the Shadow Brokers data dump. We care about them because those are the ones whose existence leaves us all vulnerable.

Because everyone uses the same software, hardware, and networking protocols, there is no way to simultaneously secure our systems while attacking their systems -- whoever "they" are. Either everyone is more secure, or everyone is more vulnerable.

Pretty much uniformly, security experts believe we ought to disclose and fix vulnerabilities. And the NSA continues to say things that appear to reflect that view, too. Recently, the NSA told everyone that it doesn't rely on zero days -- very much, anyway.

Earlier this year at a security conference, Rob Joyce, the head of the NSA's Tailored Access Operations (TAO) organization -- basically the country's chief hacker -- gave a rare public talk, in which he said that credential stealing is a more fruitful method of attack than are zero days: "A lot of people think that nation states are running their operations on zero days, but it's not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive."

The distinction he's referring to is the one between exploiting a technical hole in software and waiting for a human being to, say, get sloppy with a password.

A phrase you often hear in any discussion of the Vulnerabilities Equities Process is NOBUS, which stands for "nobody but us." Basically, when the NSA finds a vulnerability, it tries to figure out if it is unique in its ability to find it, or whether someone else could find it, too. If it believes no one else will find the problem, it may decline to make it public. It's an evaluation prone to both hubris and optimism, and many security experts have cast doubt on the very notion that there is some unique American ability to conduct vulnerability research.

The vulnerabilities in the Shadow Brokers data dump are definitely not NOBUS-level. They are run-of-the-mill vulnerabilities that anyone -- another government, cybercriminals, amateur hackers -- could discover, as evidenced by the fact that many of them *were* discovered between 2013, when the data was stolen, and this summer, when it was published. They are vulnerabilities in common systems used by people and companies all over the world.

So what are all these vulnerabilities doing in a secret stash of NSA code that was stolen in 2013? Assuming the Russians were the ones who did the stealing, how many US companies did they hack with these vulnerabilities? This is what the Vulnerabilities Equities Process is designed to prevent, and it has clearly failed.

If there are any vulnerabilities that -- according to the standards established by the White House and the NSA -- should have been disclosed and fixed, it's these. That they have not been during the three-plus years that the NSA knew about and exploited them -- despite Joyce's insistence that they're not very important -- demonstrates that the Vulnerabilities Equities Process is badly broken.

We need to fix this. This is exactly the sort of thing a congressional investigation is for. This whole process needs a lot more transparency, oversight, and accountability. It needs guiding principles that prioritize security over surveillance. A good place to start is the set of recommendations by Ari Schwartz and Rob Knake in their report: these include a clearly defined and more public process, more oversight by Congress and other independent bodies, and a strong bias toward fixing vulnerabilities instead of exploiting them.

And as long as I'm dreaming, we really need to separate our nation's intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency's mission should be limited to nation state espionage. Individual investigation should be part of the FBI, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS's mission.

I doubt we're going to see any congressional investigations this year, but we're going to have to figure this out eventually. In my book "Data and Goliath," I write that "no matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security by fixing almost all the vulnerabilities we find..." Our nation's cybersecurity is just too important to let the NSA sacrifice it in order to gain a fleeting advantage over a foreign adversary.

This essay previously appeared on Vox.com.
http://www.vox.com/2016/8/24/12615258/...

My original post, when we weren't sure this was real:
https://www.schneier.com/blog/archives/2016/08/...

The Shadow Brokers:
https://twitter.com/shadowbrokerss

The Equation Group:
http://arstechnica.com/security/2015/02/...

News articles:
http://www.wsj.com/articles/...
https://motherboard.vice.com/read/...
http://www.nytimes.com/2016/08/17/us/...
https://www.washingtonpost.com/world/...
https://cybersecpolitics.blogspot.com/2016/08/...

Nicholas Weaver's analysis:
https://www.lawfareblog.com/very-bad-monday-nsa-0

Snowden's comments:
https://twitter.com/Snowden/status/765513662597623808

Exploits in the data dump:
http://blogs.cisco.com/security/shadow-brokers
http://www.forbes.com/sites/thomasbrewster/2016/08/...
https://motherboard.vice.com/read/...
https://motherboard.vice.com/read/...
https://theintercept.com/2016/08/19/...
http://fortiguard.com/advisory/FG-IR-16-023
https://musalbas.com/2016/08/18/...
https://www.cisco.com/c/en/us/products/collateral/...

Someone who played with some of the vulnerabilities:
http://xorcat.net/2016/08/16/...

The vulnerabilities were seen in the wild within 24 hours, demonstrating how important they were to disclose and patch.
https://www.wired.com/2016/08/...

James Bamford thinks this is the work of an insider. I disagree, but he's right that the TAO catalog was not a Snowden document.
http://www.reuters.com/article/...

People are looking at the quality of the code. It's not that good.
https://www.cs.uic.edu/~s/musings/equation-group/
http://news.softpedia.com/news/...

Equities debate:
http://www.nytimes.com/2014/04/13/us/politics/...
https://www.wired.com/2014/11/...
https://www.whitehouse.gov/blog/2014/04/28/...
https://www.wired.com/2014/11/...
https://www.schneier.com/blog/archives/2014/05/...
http://mashable.com/2015/11/09/...
http://www.networkworld.com/article/2462706/...
http://belfercenter.ksg.harvard.edu/publication/...

TAO public talk:
https://www.youtube.com/watch?v=bDJb8WOJYdA

Credential stealing:
https://www.schneier.com/blog/archives/2016/05/...

"Nobody but us":
https://www.washingtonpost.com/blogs/the-switch/wp/...

Me on breaking up the NSA:
http://www.wired.com/2014/08/...


Someone Is Learning How to Take Down the Internet

Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet. These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down. We don't know who is doing this, but it feels like a large nation state. China or Russia would be my first guesses.

First, a little background. If you want to take a network off the Internet, the easiest way to do it is with a distributed denial-of-service attack (DDoS). Like the name says, this is an attack designed to prevent legitimate users from getting to the site. There are subtleties, but basically it means blasting so much data at the site that it's overwhelmed. These attacks are not new: hackers do this to sites they don't like, and criminals have done it as a method of extortion. There is an entire industry, with an arsenal of technologies, devoted to DDoS defense. But largely it's a matter of bandwidth. If the attacker has a bigger fire hose of data than the defender has, the attacker wins.
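The "bigger fire hose" framing can be captured in a toy model. This is a sketch with made-up illustrative numbers, not data from any real attack; the function name and figures are invented here:

```python
# Toy model of a volumetric DDoS: the attack succeeds when the combined
# traffic arriving at the site exceeds the bandwidth the defender can absorb.

def site_is_up(legit_gbps: float, attack_gbps: float, capacity_gbps: float) -> bool:
    """The site stays reachable only while total inbound traffic fits the pipe."""
    return legit_gbps + attack_gbps <= capacity_gbps

capacity = 40.0   # what the defender (plus its DDoS-mitigation service) can absorb
legit = 2.0       # normal user traffic

for attack in (0.0, 10.0, 50.0, 500.0):
    status = "up" if site_is_up(legit, attack, capacity) else "DOWN"
    print(f"attack {attack:6.1f} Gbps -> site {status}")
```

However sophisticated the mitigation, the model's crossover point is the whole game: once the attacker's sustained traffic exceeds the defender's total capacity, nothing downstream of the congested pipe matters.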

Recently, some of the major companies that provide the basic infrastructure that makes the Internet work have seen an increase in DDoS attacks against them. Moreover, they have seen a certain profile of attacks. These attacks are significantly larger than the ones they're used to seeing. They last longer. They're more sophisticated. And they look like probing. One week, the attack would start at a particular level and slowly ramp up before stopping. The next week, it would start at that higher point and continue. And so on, as if the attacker were looking for the exact point of failure.

The attacks are also configured in such a way as to see what the company's total defenses are. There are many different ways to launch a DDoS attack. The more attack vectors you employ simultaneously, the more different defenses the defender has to counter with. These companies are seeing more attacks using three or four different vectors. This means that the companies have to use everything they've got to defend themselves. They can't hold anything back. They're forced to demonstrate their defense capabilities for the attacker.

I am unable to give details, because these companies spoke with me under condition of anonymity. But this all is consistent with what Verisign is reporting. Verisign is the registry for many popular top-level Internet domains, like .com and .net. If it goes down, there's a global blackout of all websites and e-mail addresses in the most common top-level domains. Every quarter, Verisign publishes a DDoS trends report. While its publication doesn't have the level of detail I heard from the companies I spoke with, the trends are the same: "in Q2 2016, attacks continued to become more frequent, persistent, and complex."

There's more. One company told me about a variety of probing attacks in addition to the DDoS attacks: testing the ability to manipulate Internet addresses and routes, seeing how long it takes the defenders to respond, and so on. Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services.

Who would do this? It doesn't seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering; it's not normal practice for companies. Furthermore, the size and scale of these probes -- and especially their persistence -- point to state actors. It feels like a nation's military cybercommand trying to calibrate its weaponry in the event of cyberwar. It reminds me of the US's Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, in order to map their capabilities.

What can we do about this? Nothing, really. We don't know where the attacks come from. The data I see suggests China, an assessment shared by the people I spoke with. On the other hand, it's possible to disguise the country of origin for these sorts of attacks. The NSA, which has more surveillance in the Internet backbone than everyone else combined, probably has a better idea, but unless the US decides to make an international incident over this, we won't see any attribution.

But this is happening. And people should know.

This essay previously appeared on Lawfare.com.
https://www.lawfareblog.com/...

Verisign's report:
https://www.verisign.com/assets/...


News

New research: "Flip Feng Shui: Hammering a Needle in the Software Stack," by Kaveh Razavi, Ben Gras, Erik Bosman, Bart Preneel, Cristiano Giuffrida, and Herbert Bos.
https://www.usenix.org/system/files/conference/...

If you've read my book "Liars and Outliers," you know I like the prisoner's dilemma as a way to think about trust and security. There is an enormous amount of research -- both theoretical and experimental -- about the dilemma, which is why I found this new research so interesting.
http://advances.sciencemag.org/content/2/8/...
https://psmag.com/...

Andrew Appel has a good two-part essay on securing elections.
https://freedom-to-tinker.com/blog/appel/...
https://freedom-to-tinker.com/blog/appel/...
And three organizations -- Verified Voting, EPIC, and Common Cause -- have published a report on the risks of Internet voting. The report is primarily concerned with privacy, and the threats to a secret ballot.
http://secretballotatrisk.org/

Radio noise from a nearby neon-sign transformer made it impossible for people to unlock their cars remotely.
http://www.arrl.org/news/view/...

The detailed accounts of the recent terrorist-shooter false-alarm at Kennedy Airport in New York illustrate how completely and totally unprepared the airport authorities are for any real such event. I have two reactions to this. On the one hand, this is a movie-plot threat -- the sort of overly specific terrorist scenario that doesn't make sense to defend against. On the other hand, police around the world need training in these types of scenarios in general. Panic can easily cause more deaths than terrorists themselves, and we need to think about what responsibilities police and other security guards have in these situations.
http://nymag.com/daily/intelligencer/2016/08/...
http://www.nytimes.com/2016/08/16/nyregion/...

Windows 10 update breaks most webcams.
http://arstechnica.com/information-technology/2016/...

fMRI experiments show that we are more likely to ignore security warnings when they interrupt other tasks.
https://news.byu.edu/news/...
http://pubsonline.informs.org/doi/abs/10.1287/...
http://www.theregister.co.uk/2016/08/18/...

The EFF has a good analysis of all the ways Windows 10 violates your privacy.
https://www.eff.org/deeplinks/2016/08/...

Interesting research that shows we exaggerate the risks of something when we find it morally objectionable.
http://dx.doi.org/10.1525/collabra.33
http://www.npr.org/sections/13.7/2016/08/22/...

We've long known that 64 bits is too small for a block cipher these days. That's why new block ciphers like AES have 128-bit, or larger, block sizes. The insecurity of the smaller block is nicely illustrated by a new attack called "Sweet32." It exploits the ability to find block collisions in Internet protocols to decrypt some traffic, even though the attackers never learn the key.
https://sweet32.info/
https://sweet32.info/SWEET32_CCS16.pdf
http://blog.cryptographyengineering.com/2016/08/...
http://arstechnica.com/security/2016/08/...
http://news.softpedia.com/news/...
http://www.tomshardware.com/news/...
https://news.ycombinator.com/item?id=12351739
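The underlying math is the standard birthday bound: with a 64-bit block there are only 2^64 possible ciphertext blocks, so collisions become likely after roughly the square root of that, 2^32 blocks -- the "32" in the attack's name. A back-of-the-envelope sketch (these are textbook birthday-bound approximations, not figures taken from the Sweet32 paper):

```python
import math

BLOCK_BITS = 64
BLOCK_BYTES = BLOCK_BITS // 8
N = 2 ** BLOCK_BITS  # number of possible ciphertext blocks

# Birthday bound: after k random blocks, P(collision) ~ 1 - exp(-k^2 / (2N)).
def collision_probability(k: float) -> float:
    return 1.0 - math.exp(-(k * k) / (2.0 * N))

k = 2 ** 32  # the "square root" point for a 64-bit block
data_gib = k * BLOCK_BYTES / 2 ** 30
print(f"{k} blocks = {data_gib:.0f} GiB of ciphertext")
print(f"collision probability ~ {collision_probability(k):.2f}")
```

At 8 bytes per block, 2^32 blocks is only 32 GiB of ciphertext under one key, at which point a collision has occurred with probability around 39 percent. That's an amount of traffic an attacker can realistically force through a long-lived TLS or VPN connection; with a 128-bit block, the same calculation puts the collision point at a hopelessly large 2^64 blocks.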

Apple applied for a patent earlier this year on collecting biometric information of an unauthorized device user. The obvious application is taking a copy of the fingerprint and photo of someone using a stolen smartphone.
http://appft.uspto.gov/netacgi/nph-Parser?...

This is interesting research: "Keystroke Recognition Using WiFi Signals." Basically, the user's hand positions as they type distort the Wi-Fi signal in predictable ways.
https://www.sigmobile.org/mobicom/2015/papers/...
http://www.dailydot.com/layer8/...

Another paper on using Wi-Fi for surveillance. This one is on identifying people by their body shape. "FreeSense: Indoor Human Identification with WiFi Signals":
https://motherboard.vice.com/read/...
https://arxiv.org/abs/1608.03430
http://www.cse.unsw.edu.au/~wenh/zhang_dcoss16.pdf

We're starting to see some information on the Israeli cyberweapons arms manufacturer that sold the iPhone zero-day exploit to the United Arab Emirates so they could spy on human rights defenders.
http://www.news.com.au/technology/online/hacking/...
https://www.schneier.com/blog/archives/2016/08/...
I received criticism in the blog comments about calling NSO Group an Israeli company. I was just repeating the news articles, but further research indicates that it is Israeli-founded and Israeli-based, but 100% owned by an American private equity firm.
http://www.bloomberg.com/research/stocks/private/...
http://www.forbes.com/sites/thomasbrewster/2016/08/...

I was reading this 2014 McAfee report on the economic impact of cybercrime, and came across this interesting quote on how security is a tax on the Internet economy: "Another way to look at the opportunity cost of cybercrime is to see it as a share of the Internet economy. Studies estimate that the Internet economy annually generates between $2 trillion and $3 trillion, a share of the global economy that is expected to grow rapidly. If our estimates are right, cybercrime extracts between 15% and 20% of the value created by the Internet, a heavy tax on the potential for economic growth and job creation and a share of revenue that is significantly larger than any other transnational criminal activity." Of course you can argue with the numbers, and there's good reason to believe that the actual costs of cybercrime are much lower. And, of course, those costs are largely indirect costs. It's not that cybercriminals are getting away with all that value; it's largely spent on security products and services from companies like McAfee (and my own IBM Security). In "Liars and Outliers," I talk about security as a tax on the honest.
http://www.mcafee.com/us/resources/reports/...
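For scale, the arithmetic implied by that quote is straightforward (a sketch only; the trillion-dollar figures are McAfee's estimates as quoted above, not independent numbers):

```python
# McAfee's claim: the Internet economy generates $2-3 trillion a year,
# and cybercrime "extracts" 15-20% of that value.
low = 0.15 * 2e12    # low share of the low estimate
high = 0.20 * 3e12   # high share of the high estimate

print(f"implied annual cost: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B")
```

That works out to an implied range of roughly $300 billion to $600 billion per year, which is exactly why the numbers invite skepticism.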

The Intercept has published a 120-page catalog of spy gear from the British defense company Cobham. This is equipment available to police forces. The catalog was leaked by someone inside the Florida Department of Law Enforcement.
https://theintercept.com/2016/09/01/...
https://www.documentcloud.org/documents/...

Yet another leaked catalog of Internet attack services, this one specializing in disinformation:
https://motherboard.vice.com/read/...

The former head of French SIGINT gave a talk in which he discussed a lot of things he probably shouldn't have.
http://www.20minutes.fr/france/...
http://www.lemonde.fr/societe/article/2016/09/03/...
https://youtu.be/s8gCaySejr4
https://www.schneier.com/blog/archives/2016/09/...
http://pastebin.com/CCxtasnn
https://medium.com/@msuiche/...

Brian Krebs reports that the Israeli DDoS service vDOS has earned $600K in the past two years. The information was obtained from a hack and data dump of the company's information.
http://krebsonsecurity.com/2016/09/...
The owners have been arrested.
http://krebsonsecurity.com/2016/09/...

We have a leak from yet another cyberweapons arms manufacturer: the Italian company RCS Labs. Vice Motherboard reports on a surveillance video demo:
https://motherboard.vice.com/read/...
http://www.rcslab.it/en/products/index.html
https://news.slashdot.org/story/16/09/08/0025258/...

The malware "Mal/Miner-C" infects Internet-exposed Seagate Central Network Attached Storage (NAS) devices, and from there takes over connected computers to mine for cryptocurrency. About 77% of these devices have been infected.
http://news.softpedia.com/news/...
https://www.sophos.com/en-us/medialibrary/PDFs/...
http://arstechnica.com/security/2016/09/...
https://news.slashdot.org/story/16/09/11/0028238/...

This $60 USB kill stick destroys anything it's plugged into.
http://www.zdnet.com/article/...
https://www.usbkill.com/download/...
https://www.usbkill.com/usb-killer/8-usb-killer.html
https://hardware.slashdot.org/story/16/09/09/...

The Intercept has published the manuals for Harris Corporation's IMSI catcher: Stingray. It's an impressive surveillance device.
https://theintercept.com/2016/09/12/...


Organizational Doxing and Disinformation

In the past few years, the devastating effects of hackers breaking into an organization's network, stealing confidential data, and publishing everything have been made clear. It happened to the Democratic National Committee, to Sony, to the National Security Agency, to the cyberweapons arms manufacturer Hacking Team, to the online adultery site Ashley Madison, and to the Panamanian tax-evasion law firm Mossack Fonseca.

This style of attack is known as organizational doxing. The hackers, in some cases individuals and in others nation-states, are out to make political points by revealing proprietary, secret, and sometimes incriminating information. And the documents they leak do that, airing the organizations' embarrassments for everyone to see.

In all of these instances, the documents were real: the email conversations, still-secret product details, strategy documents, salary information, and everything else. But what if hackers were to alter documents before releasing them? This is the next step in organizational doxing -- and the effects can be much worse.

It's one thing to have all of your dirty laundry aired in public for everyone to see. It's another thing entirely for someone to throw in a few choice items that aren't real.

Recently, Russia has started using forged documents as part of broader disinformation campaigns, particularly in relation to Sweden's entry into a military partnership with NATO and Russia's invasion of Ukraine.

Forging thousands -- or more -- documents is difficult to pull off, but slipping a single forgery into an otherwise authentic cache is much easier. The attack could be something subtle. Maybe a country that anonymously publishes another country's diplomatic cables wants to influence yet a third country, so it adds some particularly egregious conversations about that third country. Or the next hacker who steals and publishes email from climate change researchers invents a bunch of over-the-top messages to make his political point even stronger. Or it could be personal: someone dumping the email of thousands of users could alter only the messages written by a friend, relative, or lover.

Imagine trying to explain to the press, eager to publish the worst of the details in the documents, that everything is accurate except this particular email. Or that particular memo. That the salary document is correct except that one entry. Or that the secret customer list posted up on WikiLeaks is correct except that there's one inaccurate addition. It would be impossible. Who would believe you? No one. And you couldn't prove it.

It has long been easy to forge documents on the Internet. It's easy to create new ones, and modify old ones. It's easy to change things like a document's creation date, or a photograph's location information. With a little more work, PDF files and images can be altered. These changes will be undetectable. In many ways, it's surprising that this kind of manipulation hasn't been seen before. My guess is that hackers who leak documents don't have the secondary motives to make the data dumps worse than they already are, and nation-states have only just gotten into the document-leaking business.
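How trivial is it to rewrite a document's timestamp metadata? Here is a minimal sketch using only the Python standard library: one call backdates a file's "last modified" time to any date the forger wants. (EXIF location data in a photo is similarly just bytes in the file, editable with commodity tools.)

```python
# Backdating a file's modification timestamp with a single stdlib call.
import os
import tempfile
import time

# Create a scratch file to demonstrate on.
fd, path = tempfile.mkstemp()
os.close(fd)

# Set the file's access and modification times to January 1, 2010.
fake_time = time.mktime((2010, 1, 1, 0, 0, 0, 0, 0, -1))
os.utime(path, (fake_time, fake_time))

# The filesystem now reports the forged date as if it were genuine.
mtime = os.path.getmtime(path)
print(time.strftime("%Y-%m-%d", time.localtime(mtime)))  # prints 2010-01-01

os.remove(path)
```

Nothing in the file records that this happened, which is the point: timestamp metadata is an assertion by whoever last wrote it, not evidence.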

Major newspapers do their best to verify the authenticity of leaked documents they receive from sources. They only publish the ones they know are authentic. The newspapers consult experts, and pay attention to forensics. They have tense conversations with governments, trying to get them to verify secret documents they're not actually allowed to admit even exist. This is only possible because the news outlets have ongoing relationships with the governments, and they care that they get it right. There are lots of instances where neither of these two things are true, and lots of ways to leak documents without any independent verification at all.

No one is talking about this, but everyone needs to be alert to the possibility. Sooner or later, the hackers who steal an organization's data are going to make changes in them before they release them. If these forgeries aren't questioned, the situations of those being hacked could be made worse, or erroneous conclusions could be drawn from the documents. When someone says that a document they have been accused of writing is forged, their arguments at least should be heard.

This essay previously appeared on TheAtlantic.com.
http://www.theatlantic.com/technology/archive/2016/...

Organizational doxing:
https://www.schneier.com/blog/archives/2015/07/...

Russia using forged documents:
http://www.nytimes.com/2016/08/29/world/europe/...


Schneier News

I'm speaking at Sibos in Geneva on September 29.
https://www.sibos.com/

I'm speaking at Les Assises de la Securite in Monaco on October 5.
https://en.lesassisesdelasecurite.com/


iPhone Zero-Day Used by UAE Government

Last week, Apple issued a critical security patch for the iPhone: iOS 9.3.5. The incredible story is that this patch is the result of investigative work by Citizen Lab, which uncovered a zero-day exploit being used by the UAE government against a human rights defender. The UAE spyware was provided by the Israeli cyberweapons arms manufacturer NSO Group.

This is a big deal. iOS vulnerabilities are expensive, and can sell for over $1M. That we can find one used in the wild and patch it, rendering it valueless, is a major win and puts a huge dent in the vulnerabilities market. The more we can do this, the less valuable these zero-days will be to both criminals and governments -- and to criminal governments.

https://deibert.citizenlab.org/2016/08/...
https://citizenlab.org/2016/08/...

http://www.nytimes.com/2016/08/26/technology/...
https://www.wired.com/2016/08/...
http://www.cnet.com/how-to/...


Apple's Cloud Key Vault

Ever since Ivan Krstic, Apple's Head of Security Engineering and Architecture, presented the company's key backup technology at Black Hat 2016, people have been pointing to it as evidence that the company can create a secure backdoor for law enforcement.

It's not. Matthew Green and Steve Bellovin have both explained why not. And the same group of us that wrote the "Keys Under Doormats" paper on why backdoors are a bad idea has also explained why Apple's technology does not enable it to build secure backdoors for law enforcement.

The problem with this argument becomes clearer when you actually try to turn Apple's Cloud Key Vault into an exceptional access mechanism. In that case, Apple would have to replace the HSM with one that accepts an additional message from Apple or the FBI -- or an agency from any of the 100+ countries where Apple sells iPhones -- saying "OK, decrypt," as well as the user's password. In order to do this securely, these messages would have to be cryptographically signed with a second set of keys, which would then have to be used as often as law enforcement access is required. Any exceptional access scheme made from this system would have to have an additional set of keys to ensure authorized use of the law enforcement access credentials.

Managing access by a hundred-plus countries is impractical due to mutual mistrust, so Apple would be stuck keeping a second signing key -- or a database of second signing keys -- that must be accessed to sign each and every law enforcement request. This puts us back in the situation where Apple needs to protect another repeatedly used, high-value public key infrastructure: an equivalent situation to what has already resulted in the theft of Bitcoin wallets, Realtek's code-signing keys, and Certificate Authority failures, among many other disasters.

Repeated access to private keys drastically increases the probability of theft, loss, or inappropriate use. Apple's Cloud Key Vault does not have any Apple-owned private key, and therefore does not indicate that a secure solution to this problem actually exists.

It is worth noting that the exceptional access schemes one can create from Apple's CKV (like the one outlined above) inherently entail the precise issues we warned about in our previous essay on the danger signs for recognizing flawed exceptional access systems. Additionally, the Risks of Key Escrow and Keys Under Doormats papers describe further technical and nontechnical issues with exceptional access schemes that must be addressed. Among the nontechnical hurdles would be the requirement, for example, that Apple run a large legal office to confirm that requests for access from the government of Uzbekistan actually involved a device that was located in that country, and that the request was consistent with both US law and Uzbek law.

My colleagues and I do not argue that the technical community doesn't know how to store high-value encryption keys -- to the contrary, that's the whole point of an HSM. Rather, we assert that safely holding keys in such a way that another party (e.g., law enforcement or Apple itself) can also access them repeatedly, without a high potential for catastrophic loss, is impossible with today's technology, and that the fundamental sociotechnical challenges any such scheme runs into, such as jurisdiction, must be evaluated honestly before any technical implementation is considered.
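The hypothetical mechanism discussed above -- an HSM modified to release key material on a signed "OK, decrypt" order -- can be sketched in a few lines. Everything here is illustrative: Apple's actual Cloud Key Vault has no such path, and an HMAC stands in for the asymmetric signatures a real design would need. The point of the sketch is the structural flaw, not the crypto: the signing key must stay online and be touched on every request.

```python
# Hypothetical exceptional-access HSM. All names are illustrative;
# HMAC stands in for a real signature scheme.
import hashlib
import hmac
import secrets

# The second, high-value signing key Apple would have to keep
# available -- and therefore exposed -- for every request.
LE_SIGNING_KEY = secrets.token_bytes(32)

def sign_decrypt_order(device_id: str) -> bytes:
    """Each law enforcement request forces another online use of
    the stored key -- exactly the repeated access the essay warns
    about."""
    return hmac.new(LE_SIGNING_KEY, b"OK, decrypt:" + device_id.encode(),
                    hashlib.sha256).digest()

def hsm_release_escrowed_secret(device_id: str, order_mac: bytes,
                                user_passcode_ok: bool) -> bool:
    """The modified HSM: releases escrowed key material if either the
    user's passcode checks out OR a validly signed decrypt order
    arrives. The second path is the backdoor; its security is only as
    good as the secrecy of LE_SIGNING_KEY."""
    expected = hmac.new(LE_SIGNING_KEY, b"OK, decrypt:" + device_id.encode(),
                        hashlib.sha256).digest()
    return user_passcode_ok or hmac.compare_digest(order_mac, expected)
```

Note that anyone who steals LE_SIGNING_KEY can mint valid decrypt orders for every device in the system, which is why the comparison to stolen code-signing keys and Certificate Authority failures is apt.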

Krstic's talk:
https://www.blackhat.com/docs/us-16/materials/...

People claiming it demonstrates a backdoor:
https://lawfareblog.com/...
https://twitter.com/pwnallthethings/status/...

Matthew Green's response:
https://blog.cryptographyengineering.com/2016/08/13/...

Steve Bellovin's response:
https://www.cs.columbia.edu/~smb/blog/2016-08/...

Our response:
https://www.lawfareblog.com/...


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of 13 books -- including his latest, "Data and Goliath" -- as well as hundreds of articles, essays, and academic papers. His influential newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient, an IBM Company. See <https://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient, an IBM Company.

Copyright (c) 2016 by Bruce Schneier.
