September 15, 2015

by Bruce Schneier
CTO, Resilient Systems, Inc.

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

You can read this issue on the web at <>. These same essays and news items appear in the "Schneier on Security" blog at <>, along with a lively and intelligent comment section. An RSS feed is available.

In this issue:

The Security Risks of Third-Party Data

Most of us get to be thoroughly relieved that our e-mails weren't in the Ashley Madison database. But don't get too comfortable. Whatever secrets you have, even the ones you don't think of as secret, are more likely than you think to get dumped on the Internet. It's not your fault, and there's largely nothing you can do about it.

Welcome to the age of organizational doxing.

Organizational doxing -- stealing data from an organization's network and indiscriminately dumping it all on the Internet -- is an increasingly popular attack against organizations. Because our data is connected to the Internet, and stored in corporate networks, we are all in the potential blast radius of these attacks. While the risk that any particular bit of data gets published is low, we have to start thinking about what could happen if a larger-scale breach affects us or the people we care about. It's going to get a lot uglier before security improves.

We don't know why anonymous hackers broke into the networks of Avid Life Media, then stole and published 37 million -- so far -- personal records of users. The hackers say it was because of the company's deceptive practices. They expressed indifference to the "cheating dirtbags" who had signed up for the site. The primary target, the hackers said, was the company itself. That philanderers were exposed, marriages were ruined, and people were driven to suicide was apparently a side effect.

Last November, the North Korean government stole and published gigabytes of corporate e-mail from Sony Pictures. This was part of a much larger doxing -- a hack aimed at punishing the company for making a movie parodying the North Korean leader Kim Jong-un. The press focused on Sony's corporate executives, who had sniped at celebrities and made racist jokes about President Obama. But also buried in those e-mails were loves, losses, confidences, and private conversations of thousands of innocent employees. The press didn't bother with those e-mails -- and we know nothing of any personal tragedies that resulted from their friends' searches. They, too, were caught in the blast radius of the larger attack.

The Internet is more than a way for us to get information or connect with our friends. It has become a place for us to store our personal information. Our e-mail is in the cloud. So are our address books and calendars, whether we use Google, Apple, Microsoft, or someone else. We store to-do lists on Remember the Milk and keep our jottings on Evernote. Fitbit and Jawbone store our fitness data. Flickr, Facebook, and iCloud are the repositories for our personal photos. Facebook and Twitter store many of our intimate conversations.

It often feels like everyone is collecting our personal information. Smartphone apps collect our location data. Google can draw a surprisingly intimate portrait of what we're thinking about from our Internet searches. Dating sites (even those less titillating than Ashley Madison), medical-information sites, and travel sites all have detailed portraits of who we are and where we go. Retailers save records of our purchases, and those databases are stored on the Internet. Data brokers have detailed dossiers that can include all of this and more.

Many people don't think about the security implications of this information existing in the first place. They might be aware that it's mined for advertising and other marketing purposes. They might even know that the government can get its hands on such data, with different levels of ease depending on the country. But it doesn't generally occur to people that their personal information might be available to anyone who wants to look.

In reality, all these networks are vulnerable to organizational doxing. Most aren't any more secure than Ashley Madison or Sony were. We could wake up one morning and find detailed information about our Uber rides, our Amazon purchases, our subscriptions to pornographic websites -- anything we do on the Internet -- published and available. It's not likely, but it's certainly possible.

Right now, you can search the Ashley Madison database for any e-mail address, and read that person's details. You can search the Sony data dump and read the personal chatter of people who work for the company. Tempting though it may be, there are many reasons not to search for people you know on Ashley Madison. The one I most want to focus on is context. An e-mail address might be in that database for many reasons, not all of them lascivious. But if you find your spouse or your friend in there, you don't necessarily know the context. It's the same with the Sony employee e-mails, and the data from whatever company is doxed next. You'll be able to read the data, but without the full story, it can be hard to judge the meaning of what you're reading.

Even so, of course people are going to look. Reporters will search for public figures. Individuals will search for people they know. Secrets will be read and passed around. Anguish and embarrassment will result. In some cases, lives will be destroyed.

Privacy isn't about hiding something. It's about being able to control how we present ourselves to the world. It's about maintaining a public face while at the same time being permitted private thoughts and actions. It's about personal dignity.

Organizational doxing is a powerful attack against organizations, and one that will continue because it's so effective. And while the network owners and the hackers might be battling it out for their own reasons, sometimes it's our data that's the prize. Having information we thought private turn out to be public and searchable is what happens when the hackers win. It's a result of the information age that hasn't been fully appreciated, and one that we're still not prepared to face.

This essay previously appeared on the Atlantic.

Organizational doxing:

Ashley Madison stories:

North Korean hack of Sony:

Smartphone apps collecting location data:

Searching the Ashley Madison database:

Why not to search that database:

NSA Plans for a Post-Quantum World

Quantum computing is a novel way to build computers -- one that takes advantage of the quantum properties of particles to perform operations on data in a very different way than traditional computers. In some cases, the algorithm speedups are extraordinary.

Specifically, a quantum computer using something called Shor's algorithm can efficiently factor numbers, breaking RSA. A variant can break Diffie-Hellman and other discrete log-based cryptosystems, including those that use elliptic curves. This could potentially render all modern public-key algorithms insecure. Before you panic, note that the largest number to date that has been factored by a quantum computer is 143. So while a practical quantum computer is still science fiction, it's not *stupid* science fiction. And the computation that factored 143 also accidentally "factored much larger numbers such as 3599, 11663, and 56153, without the awareness of the authors of that work," which shows how weird this all is.
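Those factorizations are easy to verify classically, which is a useful sanity check on the quoted numbers; each is a product of two primes that are close together (and reportedly differ in only a few bit positions), which is what made them susceptible to that particular quantum approach. A minimal trial-division check:

```python
def factor(n):
    """Return the smallest prime factor of n and its cofactor.

    Plain trial division -- fine for numbers this small, and a
    reminder of how far quantum factoring still has to go.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

# The numbers mentioned above: note how close each factor pair is.
for n in [143, 3599, 11663, 56153]:
    p, q = factor(n)
    print(n, "=", p, "*", q)
```

Running this prints 143 = 11 * 13, 3599 = 59 * 61, 11663 = 107 * 109, and 56153 = 233 * 241.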

(Note that this is completely different from quantum cryptography, which is a way of passing bits between two parties that relies on physical quantum properties for security. The only thing quantum computation and quantum cryptography have to do with each other is their first words. It is also completely different from the NSA's QUANTUM program, which is its code name for a packet-injection system that works directly in the Internet backbone.)

Practical quantum computation doesn't mean the end of cryptography. There are lesser-known public-key algorithms such as McEliece and lattice-based algorithms that, while less efficient than the ones we use, are currently secure against a quantum computer. And quantum computation only speeds up a brute-force keysearch by a factor of a square root, so any symmetric algorithm can be made secure against a quantum computer by doubling the key length.
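The key-length arithmetic behind that last claim is worth spelling out: Grover's algorithm searches 2^k keys in roughly 2^(k/2) steps, so a k-bit symmetric key retains only about k/2 bits of security against a quantum attacker, and doubling the key length restores the original margin. A toy sketch (the function name is mine, not any standard API):

```python
def effective_bits(key_bits, quantum=False):
    """Approximate brute-force security of a symmetric key, in bits.

    Classically, a k-bit key costs ~2**k guesses. Grover's algorithm
    finds the key in ~2**(k/2) steps, halving the effective strength.
    """
    return key_bits // 2 if quantum else key_bits

# AES-128 drops to ~64-bit security against a quantum adversary;
# doubling the key length (AES-256) restores ~128 bits.
assert effective_bits(128, quantum=True) == 64
assert effective_bits(256, quantum=True) == 128
```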

We know from the Snowden documents that the NSA is conducting research on both quantum computation and quantum cryptography. It's not a lot of money, and few believe that the NSA has made any real advances in theoretical or applied physics in this area. My guess has previously been that we'll see a practical quantum computer within 30 to 40 years, but not much sooner than that.

This all means that now is the time to think about what living in a post-quantum world would be like. NIST is doing its part, having hosted a conference on the topic earlier this year. And the NSA announced that it is moving towards quantum-resistant algorithms.

Earlier this week, the NSA's Information Assurance Directorate updated its list of Suite B cryptographic algorithms. It explicitly talked about the threat of quantum computers:

IAD will initiate a transition to quantum resistant algorithms in the not too distant future. Based on experience in deploying Suite B, we have determined to start planning and communicating early about the upcoming transition to quantum resistant algorithms. Our ultimate goal is to provide cost effective security against a potential quantum computer. We are working with partners across the USG, vendors, and standards bodies to ensure there is a clear plan for getting a new suite of algorithms that are developed in an open and transparent manner that will form the foundation of our next Suite of cryptographic algorithms.
Until this new suite is developed and products are available implementing the quantum resistant suite, we will rely on current algorithms. For those partners and vendors that have not yet made the transition to Suite B elliptic curve algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition.

Suite B is a family of cryptographic algorithms approved by the NSA. It's all part of the NSA's Cryptographic Modernization Program. Traditionally, NSA algorithms were classified and could only be used in specially built hardware modules. Suite B algorithms are public, and can be used in anything. This is not to say that Suite B algorithms are second class, or breakable by the NSA. They're being used to protect US secrets: "Suite A will be used in applications where Suite B may not be appropriate. Both Suite A and Suite B can be used to protect foreign releasable information, US-Only information, and Sensitive Compartmented Information (SCI)."

The NSA is worried enough about advances in the technology to start transitioning away from algorithms that are vulnerable to a quantum computer. Does this mean that the agency is close to a working prototype in their own classified labs? Unlikely. Does this mean that they envision practical quantum computers sooner than my 30-to-40-year estimate? Certainly.

Unlike most personal and corporate applications, the NSA routinely deals with information it wants kept secret for decades. The agency is acting as if practical quantum computers will exist well within that time frame, and I am deferring to its expertise. We should all follow the NSA's lead and transition our own systems to quantum-resistant algorithms over the next decade or so -- possibly even sooner.

A version of this essay previously appeared on Lawfare.

Quantum computing:

Factoring numbers with a quantum computer:

Factoring 143:

Quantum cryptography:

NSA's QUANTUM program:

NSA quantum research:

NIST post-quantum cryptography conference:

NSA's announcement:

The Suite B family of cryptographic algorithms:

Drone Self-Defense and the Law

Last month, a Kentucky man shot down a drone that was hovering near his backyard.

WDRB News reported that the camera drone's owners soon showed up at the home of the shooter, William H. Merideth: "Four guys came over to confront me about it, and I happened to be armed, so that changed their minds," Merideth said. "They asked me, 'Are you the S-O-B that shot my drone?' and I said, 'Yes I am,'" he said. "I had my 40 mm Glock on me and they started toward me and I told them, 'If you cross my sidewalk, there's gonna be another shooting.'" Police charged Merideth with criminal mischief and wanton endangerment.

This is a trend. People have shot down drones in southern New Jersey and rural California as well. It's illegal, and they get arrested for it.

Technology changes everything. Specifically, it upends long-standing societal balances around issues like security and privacy. When a capability becomes possible, or cheaper, or more common, the changes can be far-reaching. Rebalancing security and privacy after technology changes capabilities can be very difficult, and take years. And we're not very good at it.

The security threats from drones are real, and the government is taking them seriously. In January, a man lost control of his drone, which crashed on the White House lawn. In May, another man was arrested for trying to fly his drone over the White House fence, and another last week for flying a drone into the stadium where the US Open was taking place.

Drones have attempted to deliver drugs to prisons in Maryland, Ohio, and South Carolina -- so far.

There have been many near-misses between drones and airplanes. Many people have written about the possible terrorist uses of drones.

Defenses are being developed. Both Lockheed Martin and Boeing sell anti-drone laser weapons. One company sells shotgun shells specifically designed to shoot down drones.

Other companies are working on technologies to detect and disable them safely. Some of those technologies were used to provide security at this year's Boston Marathon.

Law enforcement can deploy these technologies, but under current law it's illegal to shoot down a drone, even if it's hovering above your own property. In our society, you're generally not allowed to take the law into your own hands. You're expected to call the police and let them deal with it.

There's an alternate theory, though, from law professor Michael Froomkin. He argues that self-defense should be permissible against drones simply because you don't know their capabilities. We know, for example, that people have mounted guns on drones, which means they could pose a threat to life. Note that this legal theory has not been tested in court.

Increasingly, government is regulating drones and drone flights both at the state level and by the FAA. There are proposals to require that drones have an identifiable transponder, or no-fly zones programmed into the drone software.

Still, a large number of security issues remain unresolved. How do we feel about drones with long-range listening devices, for example? Or drones hovering outside our property and photographing us through our windows?

What's going on is that drones have changed how we think about security and privacy within our homes, by removing the protections we used to get from fences and walls. Of course, being spied on and shot at from above is nothing new, but access to those technologies was expensive and largely the purview of governments and some corporations. Drones put these capabilities into the hands of hobbyists, and we don't know what to do about it.

The issues around drones will get worse as we move from remotely piloted aircraft to true drones: aircraft that operate autonomously from a computer program. For the first time, autonomous robots -- with ever-increasing intelligence and capabilities at an ever-decreasing cost -- will have access to public spaces. This will create serious problems for society, because our legal system is largely based on deterring human miscreants rather than their proxies.

Our desire to shoot down a drone hovering nearby is understandable, given its potential threat. Society's need for people not to take the law into their own hands -- and especially not to fire guns into the air -- is also understandable. These two positions are increasingly coming into conflict, and will require increasing government regulation to sort out. But more importantly, we need to rethink our assumptions of security and privacy in a world of autonomous drones, long-range cameras, face recognition, and the myriad other technologies that are increasingly in the hands of everyone.

This essay previously appeared on

Kentucky man shooting down a drone:

Drone shot down in New Jersey:

Drone shot down in rural California:

Drone crashed on the White House lawn:

Drone flying over White House fence:

Drone flying into US Open stadium:

Drones delivering drugs into prisons:

Drone/airplane near misses:

Terrorist uses of drones:

Drone defenses:

Current law:

Froomkin's argument:

Guns mounted on drones:

Regulating drones:

Listening devices on drones:

Drones hovering outside apartment windows:


There's a new article, published jointly by the New York Times and ProPublica, about the NSA's longstanding relationship with AT&T. It's based on the Snowden documents, and there are a bunch of new pages published.
Companion piece:

One of the books confiscated from Chelsea Manning was a copy of Data and Goliath.

I've previously written about mail cover -- the practice of recording data on mail envelopes. Sai has been covering the issue in more detail, and recently received an unredacted copy of a 2014 audit report. The New York Times has an article on it.

Two former Kaspersky employees have accused the company of faking malware to harm rival antivirus products. They say the company would falsely classify legitimate files as malicious, tricking other antivirus companies that blindly copied Kaspersky's data into deleting them from their customers' computers.
Kaspersky denies it.
Here's an October 2013 presentation by Microsoft on the attacks.
And here's a dissenting opinion.

AVA: A Social Engineering Vulnerability Scanner:

Nasty Cisco attack:
There's no indication of who is doing these attacks, but it's exactly the sort of thing you'd expect out of a government attacker. Regardless of which government initially discovered this, assume that they're all exploiting it by now -- and will continue to do so until it's fixed.

The US government has admitted that it uses predictive assessments to put people on the no-fly list.

Snake-oil cryptography competition:

SS7 phone-switch flaw enabled surveillance:

Yet another biometric: your heartbeat.

This research says that data breaches are not getting larger over time: "Hype and Heavy Tails: A Closer Look at Data Breaches," by Benjamin Edwards, Steven Hofmeyr, and Stephanie Forrest.

Here's an interesting research paper that tries to calculate the differential value of privacy-invasive advertising practices. The researchers used data from a mobile ad network and were able to see how different personalized advertising practices affected customer purchasing behavior. The details are interesting, but basically, most personal information had little value. Overall, the ability to target advertising produces a 29% greater return on an advertising budget, mostly by knowing the right time to show someone a particular ad.

Kansas Senator Pat Roberts wins an award for his movie-plot threat: terrorists attacking the maximum-security federal prison at Ft. Leavenworth:
Not just terrorists, but terrorists with a submarine! This is why Ft. Leavenworth, a prison from which no one has ever escaped, is unsuitable for housing Guantanamo detainees. I've never understood the argument that terrorists are too dangerous to house in US prisons. They're just terrorists; it's not like they're Magneto.

Regularities in Android lock patterns:
Similar research on this:

In the wake of the recent averted mass shooting on the French railroads, officials are realizing that there are just too many potential targets to defend.

CitizenLab is reporting on Iranian hacking attempts against activists, which include a real-time man-in-the-middle attack against Google's two-factor authentication. The report quotes my previous writing on the vulnerabilities of two-factor authentication.
More commentary:

The German newspaper Zeit is reporting that the BfV, Germany's domestic intelligence agency, (probably) illegally traded data about Germans to the NSA in exchange for access to XKeyscore.
Note that the documents this story is based on seem not to have been provided by Snowden.

An unofficial blog post from FTC chief technologist Ashkan Soltani on the virtues of strong end-user device controls.

James Mickens on security, for your amusement:

Using Samsung's Internet-enabled refrigerator for man-in-the-middle attacks:
When I think about the security implications of the Internet of Things, this is one of my primary worries. As we connect things to each other, vulnerabilities in one of them affect the security of another. And because so many of the things we connect to the Internet will be poorly designed and low cost, there will be lots of vulnerabilities in them. Expect a lot more of this kind of thing as we move forward.
Dave Barry reblogged me.

An Australian reporter for the ABC, Will Ockenden, published a bunch of his metadata and asked people to derive various elements of his life. They did pretty well, even though they were amateurs, which should give you some idea of what professionals can do.

This Washington Post article uses the history of the L0pht to talk about the broader issues of Internet security.

Several times I've mentioned Peter Swire's concept of "the declining half-life of secrets." He's finally written it up.

Interesting research: "An Analysis of China's 'Great Cannon.'"

Chess player caught cheating at a tournament:
Older posts on this:
A grandmaster caught cheating:

Ashley Madison hashed users' passwords with bcrypt. It's a secure password-hashing function, but two implementation mistakes allowed millions of passwords to be easily cracked.
Ars Technica explains the problems.
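To make the general failure mode concrete: bcrypt's deliberate slowness is worthless if a fast, unsalted hash of the same password is stored alongside it, which is roughly what was reported here -- a login token derived from MD5 of case-flattened credentials. The sketch below is illustrative only; the function name and token format are invented, not Ashley Madison's actual code:

```python
import hashlib

def insecure_login_token(username, password):
    # The reported class of mistake, roughly: a fast, unsalted hash
    # of lowercased credentials stored alongside the slow bcrypt hash.
    data = (username.lower() + "::" + password.lower()).encode()
    return hashlib.md5(data).hexdigest()

# An attacker holding the token can test guesses at MD5 speed
# (billions per second on a GPU) and ignore bcrypt entirely. Once
# the lowercased password is recovered, only its handful of case
# variants need checking against the real bcrypt hash.
token = insecure_login_token("alice", "Tr0ub4dor")
for guess in ["password", "tr0ub4dor", "letmein"]:
    if insecure_login_token("alice", guess) == token:
        print("cracked:", guess)  # prints: cracked: tr0ub4dor
```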

Security vs. privacy: a cartoon.

Hacking Team, Computer Vulnerabilities, and the NSA

When the National Security Agency (NSA) -- or any government agency -- discovers a vulnerability in a popular computer system, should it disclose it or not? The debate exists because vulnerabilities have both offensive and defensive uses. Offensively, vulnerabilities can be exploited to penetrate others' computers and networks, either for espionage or destructive purposes. Defensively, publicly revealing security flaws can be used to make our own systems less vulnerable to those same attacks. The two options are mutually exclusive: either we can help to secure both our own networks and the systems we might want to attack, or we can keep both networks vulnerable. Many, myself included, have long argued that defense is more important than offense, and that we should patch almost every vulnerability we find. Even the President's Review Group on Intelligence and Communications Technologies recommended in 2013 that "U.S. policy should generally move to ensure that Zero Days are quickly blocked, so that the underlying vulnerabilities are patched on U.S. Government and other networks."

Both the NSA and the White House have talked about a secret "vulnerability equities process" they go through when they find a security flaw. Both groups maintain the process is heavily weighted in favor of disclosing vulnerabilities to the vendors and having them patched.

An undated document -- declassified last week with heavy redactions after a year-long Freedom of Information Act lawsuit -- shines some light on the process but still leaves many questions unanswered. An important question is: which vulnerabilities go through the equities process, and which don't?

A real-world example of the ambiguity surrounding the equities process emerged from the recent hacking of the cyberweapons arms manufacturer Hacking Team. The company sells Internet attack and espionage software to countries around the world, including many with reprehensible governments, allowing them to eavesdrop on their citizens, sometimes as a prelude to arrest and torture. Its tools have been used against US journalists.

In July, unidentified hackers penetrated Hacking Team's corporate network and stole almost everything of value, including corporate documents, e-mails, and source code. The hackers proceeded to post it all online.

The NSA was most likely able to penetrate Hacking Team's network and steal the same data. The agency probably did it years ago. They would have learned the same things about Hacking Team's network software that we did in July: how it worked, what vulnerabilities they were using, and which countries were using their cyber weapons. Armed with that knowledge, the NSA could have quietly neutralized many of the company's products. The United States could have alerted software vendors about the zero-day exploits and had them patched. It could have told the antivirus companies how to detect and remove Hacking Team's malware. It could have done a lot. Assuming that the NSA did infiltrate Hacking Team's network, the fact that the United States chose not to reveal the vulnerabilities it uncovered is both revealing and interesting, and the decision provides a window into the vulnerability equities process.

The first question to ask is: why? There are three possible reasons. One, the software was also being used by the United States, and the government did not want to lose its benefits. Two, the NSA was able to eavesdrop on other entities using Hacking Team's software and wanted to continue benefiting from that intelligence. And three, the agency did not want to expose its own hacking capabilities by demonstrating that it had compromised Hacking Team's network. In reality, the decision may have been due to a combination of the three.

How was this decision made? More explicitly, did any vulnerabilities that Hacking Team exploited, and the NSA was aware of, go through the vulnerability equities process? It is unclear. The NSA plays fast and loose when deciding which security flaws go through the procedure. The process document states that it applies to vulnerabilities that are "newly discovered and not publicly known." Does that refer only to vulnerabilities discovered by the NSA, or does the process also apply to zero-day vulnerabilities that the NSA discovers others are using? If vulnerabilities used in others' cyber weapons are excluded, it is very difficult to talk about the process as it is currently formulated.

The US government should close the vulnerabilities that foreign governments are using to attack people and networks. If taking action is as easy as plugging security vulnerabilities in products and making everyone in the world more secure, that should be standard procedure. The fact that the NSA -- we assume -- chose not to suggests that the United States has its priorities wrong.

Undoubtedly, there would be blowback from closing vulnerabilities utilized in others' cyber weapons. Several companies sell information about vulnerabilities to different countries, and if they found that those security gaps were regularly closed soon after they started trying to sell them, they would quickly suspect espionage and take more defensive precautions. The new wariness of sellers and decrease in available security flaws would also raise the price of vulnerabilities worldwide. The United States is one of the biggest buyers, meaning that we benefit from greater availability and lower prices.

If we assume the NSA has penetrated these companies' networks, we should also assume that the intelligence agencies of countries like Russia and China have done the same. Are those countries using Hacking Team's vulnerabilities in their cyber weapons? We are all embroiled in a cyber arms race -- finding, buying, stockpiling, using, and exposing vulnerabilities -- and our actions will affect the actions of all the other players.

It seems foolish that we would *not* take every opportunity to neutralize the cyberweapons of those countries that would attack the United States or use them against their own people for totalitarian gain. Is it truly possible that when the NSA intercepts and reverse-engineers a cyberweapon used by one of our enemies -- whether a Hacking Team customer or a country like China -- we don't close the vulnerabilities that that weapon uses? Does the NSA use knowledge of the weapon to defend the US government networks whose security it maintains, at the expense of everyone else in the country and the world? That seems incredibly dangerous.

In my book Data and Goliath, I suggested breaking apart the NSA's offensive and defensive components, in part to resolve the agency's internal conflict between attack and defense. One part would be focused on foreign espionage, and another on cyberdefense. This Hacking Team discussion demonstrates that even separating the agency would not be enough. The espionage-focused organization that penetrates and analyzes the products of cyberweapons arms manufacturers would regularly learn about vulnerabilities used to attack systems and networks worldwide. Thus, that section of the agency would still have to transfer that knowledge to the defense-focused organization. That is not going to happen as long as the United States prioritizes surveillance over security and attack over defense. The norms governing actions in cyberspace need to be changed, a task far more difficult than any reform of the NSA.

This essay previously appeared in the Georgetown Journal of International Affairs.

Disclosing vulnerabilities:

President's Review Group on Intelligence and Communications Technologies

White House equities process:

Hacking Team:

US buying vulnerabilities:

Cyberweapons arms race:

TSA Master Keys

Someone recently noticed a Washington Post story on the TSA that originally contained a detailed photograph of all the TSA master keys. It's now blurred out of the Washington Post story, but the image is still floating around the Internet. The whole thing neatly illustrates one of the main problems with backdoors, whether in cryptographic systems or physical systems: they're fragile.

Nicholas Weaver wrote:

TSA "Travel Sentry" luggage locks contain a disclosed backdoor which is similar in spirit to what Director Comey desires for encrypted phones. In theory, only the Transportation Security Administration or other screeners should be able to open a TSA lock using one of their master keys. All others, notably baggage handlers and hotel staff, should be unable to surreptitiously open these locks.
Unfortunately for everyone, a TSA agent and the Washington Post revealed the secret. All it takes to duplicate a physical key is a photograph, since it is the pattern of the teeth, not the key itself, that tells you how to open the lock. So by simply including a pretty picture of the complete spread of TSA keys in the Washington Post's paean to the TSA, the Washington Post enabled anyone to make their own TSA keys.
So the TSA backdoor has failed: we must assume any adversary can open any TSA "lock". If you want to at least know your luggage has been tampered with, forget the TSA lock and use a zip-tie or tamper-evident seal instead, or attach a real lock and force the TSA to use their bolt cutters.
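Weaver's point that the pattern of the teeth is the whole secret can be made concrete: decoding a key from a photograph is just measuring each cut's depth and quantizing it into the manufacturer's discrete bitting positions. A toy sketch, with an invented depth specification -- no real keyway is described here:

```python
def decode_bitting(cut_depths_mm, root_depth_mm=6.0, increment_mm=0.5):
    """Map measured cut depths (e.g. scaled off a photo) to bitting digits.

    Hypothetical spec: digit 0 is an uncut blade at root_depth_mm of
    metal, and each successive digit removes another increment_mm.
    """
    return [round((root_depth_mm - d) / increment_mm) for d in cut_depths_mm]

# Depths scaled from a (hypothetical) photo of a 5-pin key:
print(decode_bitting([5.5, 4.0, 6.0, 4.5, 5.0]))  # [1, 4, 0, 3, 2]
```

With the bitting digits recovered, anyone with a blank and a file -- or a 3D printer -- can cut a working copy; the physical key never needs to leave its owner's hands.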

Weaver's comments:

TSA "Travel Sentry" luggage locks:

It's the third photo on this page.
It's reproduced here.
There's also this set of photos.
Someone has published a set of CAD files so you can make your own master keys.

Oracle CSO Rant Against Security Experts

Oracle's CSO Mary Ann Davidson wrote a blog post ranting against security experts finding vulnerabilities in her company's products. The blog post has been taken down by the company, but was saved for posterity by others. There's been lots of commentary.

It's easy to just mock Davidson's stance, but it's a dangerous one for our community. Yes, if researchers don't find vulnerabilities in Oracle products, then the company won't look bad and won't have to patch things. But the real attackers -- whether governments, criminals, or cyberweapons arms manufacturers who sell to governments and criminals -- will continue to find vulnerabilities in her products. And while they won't make a press splash and embarrass her, they will exploit them.

Dangers of restricting vulnerability disclosure:

Schneier News

I'm speaking on the future of privacy at a public seminar sponsored by the Institute for Future Studies, in Stockholm, Sweden, on September 21, 2015.

I'm speaking at Next Generation Threats 2015 in Stockholm, Sweden, on September 22, 2015.

I'm speaking at Next Generation Threats 2015 in Gothenburg, Sweden, on September 23, 2015.

I'm speaking at Free and Safe in Cyberspace in Brussels on September 24, 2015.

I'll be on a panel at Privacy. Security. Risk. 2015 in Las Vegas on September 30, 2015.

I'm speaking at the Privacy + Security Forum, October 21-23, 2015, at The Marvin Center in Washington, DC.

I'm speaking at the Boston Book Festival on October 24, 2015.

I'm speaking at the 4th Annual Cloud Security Congress EMEA in Berlin on November 17, 2015.

I was interviewed by the Social Network Station:

Book review of Data and Goliath:

I was interviewed by Software Engineering Daily:

I was interviewed by Folha de S.Paulo (article is in Portuguese):

Three articles on my video keynote speech at LinuxCon 2015:

FBI and Apple's Encryption

The New York Times is reporting that Apple encryption is hampering an FBI investigation:

In an investigation involving guns and drugs, the Justice Department obtained a court order this summer demanding that Apple turn over, in real time, text messages between suspects using iPhones.
Apple's response: Its iMessage system was encrypted and the company could not comply.
Government officials had warned for months that this type of standoff was inevitable as technology companies like Apple and Google embraced tougher encryption. The case, coming after several others in which similar requests were rebuffed, prompted some senior Justice Department and F.B.I. officials to advocate taking Apple to court, several current and former law enforcement officials said.
While that prospect has been shelved for now, the Justice Department is engaged in a court dispute with another tech company, Microsoft.

Several people have asked me in e-mail if this is the case I was referring to here:

There's a persistent rumor going around that Apple is in the secret FISA Court, fighting a government order to make its platform more surveillance-friendly -- and they're losing. This might explain Apple CEO Tim Cook's somewhat sudden vehemence about privacy. I have not found any confirmation of the rumor.

It's not. The rumor I am hearing is not about access to a particular user and his communications. It is about general access to iOS data and communications. And it's in the FISA court, which means that it's not a domestic criminal matter.

To reiterate: this is a rumor. I have no confirmation. But I know three reporters that are poking around, looking for the story.

EDITED TO ADD (9/11): Nicholas Weaver, Matthew Green, and Ashkan Soltani have all written about how Apple could add a backdoor to iMessage without you ever realizing it. The basic idea is that the user has no way to verify which public keys his messages are being encrypted to, so he can't tell whether an extra key has been added to his account or whether a man-in-the-middle is intercepting his conversations.
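The weakness they describe is structural: iMessage clients fetch a recipient's public keys from Apple's directory and encrypt the message to every key returned, with no way for either party to audit that list. A toy model of such a key-directory service (this is not Apple's actual protocol -- real iMessage uses per-device RSA and EC keys -- just an illustration of the trust problem):

```python
# Toy model of a key-directory messaging service. Clients encrypt to
# every public key the directory returns for a recipient; nothing lets
# the sender or recipient audit that list. Not Apple's real protocol.

directory = {"alice": ["alice-phone-key", "alice-laptop-key"]}

def send(recipient, plaintext):
    """'Encrypt' the message once per key the directory lists.
    (String formatting stands in for real public-key encryption.)"""
    keys = directory[recipient]
    return [(k, "enc[%s](%s)" % (k, plaintext)) for k in keys]

# Normal operation: one ciphertext per legitimate device.
print(len(send("alice", "hi")))  # -> 2

# A compelled directory silently appends a surveillance key...
directory["alice"].append("escrow-key")

# ...and the sender's client dutifully encrypts to it as well.
print([k for k, _ in send("alice", "see you at 8")])
```

The sender's software behaves identically in both cases; only the directory knows the difference. That is why the researchers argue the fix must be client-visible key transparency, not server policy.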

Here's Apple's iOS security guide:

Animals vs. Drones

It's not just humans who dislike the small flying objects. YouTube has videos of drones being stared at quizzically by a moose, harassed by a raven, attacked by a hawk, butted by a ram, knocked out of the sky by a chimpanzee (who planned the whole thing) and a goose, and punched out of the sky by a kangaroo.

And bears hate them, even if they don't actually attack.

Glenn Greenwald Debates Keith Alexander

It's an interesting debate, surprisingly civil.

Alexander seemed to have been okay with Snowden revealing surveillance based on Section 215:

"If he had taken the one court document and said, 'This is what I'm going to do'... I think this would be a whole different discussion," Alexander said. "I do think he had the opportunity [to be] what many could consider an American hero."

And he also spoke in favor of allowing adversarial proceedings in the FISA Court.

On the other hand, I am getting tired of this back-door/front-door nonsense. Alexander said that he's not in favor of backdoors in security systems, but wants some kind of "front door." FBI Director Comey plays this word game, too:

There is a misconception that building a lawful intercept solution into a system requires a so-called "back door," one that foreign adversaries and hackers may try to exploit.
But that isn't true. We aren't seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process -- front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.

They both see a difference here. A backdoor is a secret method of access, one that anyone can discover and use. A front door is a public method of access, one that -- somehow -- no one else can discover and use. But in reality, there's no difference. Technologically, they're the same: a method of third-party data access that works despite the intentions of the data owner.
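The equivalence is easy to see in code. However the access mechanism is labeled, a lawful-intercept design means the session key is recoverable by someone other than the intended recipient. A sketch of additive key escrow (a hypothetical scheme using XOR as a stand-in cipher, for illustration only):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy one-time-pad "cipher"; the point is the key flow, not the crypto.
def encrypt_with_access(message: bytes, recipient_key: bytes,
                        authority_key: bytes):
    session = os.urandom(len(message))
    ciphertext = xor(message, session)
    # The session key is wrapped twice: once for the recipient, once
    # for the "lawful access" authority. Call the second copy a front
    # door or a back door -- the data structure is identical.
    return {
        "ciphertext": ciphertext,
        "wrapped_for_recipient": xor(session, recipient_key),
        "wrapped_for_authority": xor(session, authority_key),
    }

def decrypt(pkg, key, which):
    session = xor(pkg[which], key)
    return xor(pkg["ciphertext"], session)

r_key, a_key = os.urandom(16), os.urandom(16)
pkg = encrypt_with_access(b"meet at noon", r_key, a_key)
assert decrypt(pkg, r_key, "wrapped_for_recipient") == b"meet at noon"
# Anyone who obtains the authority's key -- lawfully or not -- reads it too:
assert decrypt(pkg, a_key, "wrapped_for_authority") == b"meet at noon"
```

Whether the authority's keyhole was installed openly or secretly changes nothing about the cryptographic object itself: it is an extra path to the plaintext, and it works for whoever holds the key.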

In the beginning of the debate, I got the feeling that Alexander was trying to subtly shill his company. (Not that there's anything wrong with that -- I sometimes do the same thing for my own company. But realizing it helped me understand some of Alexander's comments better.) Later, the discussion turned into a recycling of common talking points from both sides.

Comey back-door/front-door word game:

There's no difference:

Alexander's company:

Wanted: Cryptography Products for Worldwide Survey

In 1999, Lance Hoffman, David Balenson, and others published a survey of non-US cryptographic products. The point of the survey was to illustrate that there was a robust international market in these products, and that US-only export restrictions on strong encryption did nothing to prevent its adoption and everything to disadvantage US corporations. This was an important contribution during the First Crypto War, and Hoffman testified before a Senate committee on his findings.

I want to redo that survey for 2015.

Here, at the beginning of the Second Crypto War, we again need to understand which encryption products are outside the reach of US regulation (or UK regulation). Are there so many foreign crypto products that any regulation by only one country will be easily circumvented? Or has the industry consolidated around only a few products made by only a few countries, so that effective regulation of strong encryption is possible? What are the possibilities for encrypted communication and data storage? I honestly don't know the answer -- and I think it's important to find out.

To that end, I am asking for help. Please respond in the comments with the names -- and URLs -- of non-US encryption software and hardware products. I am only interested in those useful for protecting communications and data storage. I don't care about encrypting financial transactions, or anything of that sort.

Thank you for your help. And please forward this blog post to anyone else who might help.

EDITED TO ADD: Thinking about it more, I want to compile a list of domestic (US) encryption products as well. Since right now the FBI seems intent on just pressuring the big companies like Apple and Microsoft, and not regulating cryptography in general, knowing what else is out there in the US will be useful.

1999 Survey:

First Crypto War:

Second Crypto War:

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of 12 books -- including "Liars and Outliers: Enabling the Trust That Society Needs to Thrive" -- as well as hundreds of articles, essays, and academic papers. His influential newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient Systems, Inc. See <>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.

Copyright (c) 2015 by Bruce Schneier.

Photo of Bruce Schneier by Per Ervland.
