June 15, 2016

by Bruce Schneier
CTO, Resilient, an IBM Company

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

You can read this issue on the web at <>. These same essays and news items appear in the "Schneier on Security" blog at <>, along with a lively and intelligent comment section. An RSS feed is available.

In this issue:

The Unfalsifiability of Security Claims

Interesting research paper: Cormac Herley, "Unfalsifiability of security claims":

There is an inherent asymmetry in computer security: things can be declared insecure by observation, but not the reverse. There is no observation that allows us to declare an arbitrary system or technique secure. We show that this implies that claims of necessary conditions for security (and sufficient conditions for insecurity) are unfalsifiable. This in turn implies an asymmetry in self-correction: while the claim that countermeasures are sufficient is always subject to correction, the claim that they are necessary is not. Thus, the response to new information can only be to ratchet upward: newly observed or speculated attack capabilities can argue a countermeasure in, but no possible observation argues one out. Further, when justifications are unfalsifiable, deciding the relative importance of defensive measures reduces to a subjective comparison of assumptions. Relying on such claims is the source of two problems: once we go wrong we stay wrong and errors accumulate, and we have no systematic way to rank or prioritize measures.

This is both true and not true.

Mostly, it's true. It's true in cryptography, where we can never say that an algorithm is secure. We can either show how it's insecure, or say something like: all of these smart people have spent lots of hours trying to break it, and they can't -- but we don't know what a smarter person who spends even more hours analyzing it will come up with. It's true in things like airport security, where we can easily point out insecurities but are unable to similarly demonstrate that some measures are unnecessary. And this does lead to a ratcheting up of security, in the absence of constraints like budget or processing speed. It's easier to demand that everyone take off their shoes for special screening, or that we add another four rounds to the cipher, than to argue the reverse.

But it's not entirely true. It's difficult, but we can analyze the cost-effectiveness of different security measures. We can compare them with each other. We can make estimations and decisions and optimizations. It's just not easy, and often it's more of an art than a science. But all is not lost.
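As a toy illustration of the kind of comparison I mean -- with every number invented for the example, since real analyses need real incident data -- you can rank countermeasures by expected loss avoided per dollar spent:

```python
# Toy cost-effectiveness comparison of hypothetical security measures.
# All costs, probabilities, and loss figures are made up for illustration.

measures = {
    # name: (annual_cost, reduction_in_annual_incident_prob, loss_per_incident)
    "staff training":    (50_000,  0.10, 2_000_000),
    "extra screening":   (400_000, 0.02, 2_000_000),
    "better monitoring": (120_000, 0.06, 2_000_000),
}

def benefit_per_dollar(cost, prob_reduction, loss):
    # Expected annual loss avoided, divided by what the measure costs.
    return (prob_reduction * loss) / cost

ranked = sorted(measures.items(),
                key=lambda kv: benefit_per_dollar(*kv[1]),
                reverse=True)

for name, (cost, dp, loss) in ranked:
    print(f"{name}: ${dp * loss:,.0f} avoided per ${cost:,} spent "
          f"({benefit_per_dollar(cost, dp, loss):.2f}x)")
```

The point isn't the arithmetic, which is trivial; it's that the inputs are estimates, which is why this remains more art than science.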

Still, a very good paper and one worth reading.

Blog entry URL:

Unfalsifiability of security claims:

Arresting People for Walking Away from Airport Security

A proposed law in Albany County, NY, would make it a crime to walk away from airport screening.

Aside from wondering why county lawmakers are getting involved with what should be national policy, you have to ask: what are these people thinking?

They're thinking in stories, of course. They have a movie plot in their heads, and they are imagining how this measure solves it.

The law is intended to cover what Albany County Sheriff Craig Apple described as a soft spot in the current system: passengers can walk away without boarding their flights if security staff flag them for additional scrutiny.
That could include would-be terrorists probing for weaknesses, Apple said, adding that his deputies currently have no legal grounds to question such a person.

Does anyone have any idea what stories these people have in their heads? What sorts of security weaknesses are exposed by walking up to airport security and then walking away?

News

Here's an interesting case of doctored urine-test samples from the Sochi Olympics. Evidence points to someone defeating the tamper resistance of the bottles. Someone figured out how to open the bottles, swap out the liquid, and replace the caps without leaving any visible signs of tampering.

At the last match of the year for Manchester United, someone found a bomb in a toilet, and security evacuated all 75,000 people and canceled the match. Turns out it was a fake bomb left behind after a recent training exercise.

The Intercept is starting to publish a lot more documents from the Snowden archives. Last month, it published the first year of an internal newsletter called SIDtoday, along with several articles based on the documents.
It's also making the archive available to more researchers.

Economists argue that the security needs of various crops help explain the size of civilizations.

Jonathan Mayer, Patrick Mutchler, and John C. Mitchell, "Evaluating the privacy properties of telephone metadata."
New research, but not a new result. There have been several similar studies over the years. This one uses only anonymized call and SMS metadata to identify people who volunteered for the study.

Really interesting article on the difficulties involved with explosive detection at airport security checkpoints.
I disagree with the author's conclusion -- that more explosive-detection technology is needed in more places in society -- but the technical information on explosives detection is really interesting.

Really interesting research: "Online tracking: A 1-million-site measurement and analysis," by Steven Englehardt and Arvind Narayanan:

GCHQ discloses two OS X vulnerabilities to Apple:

Good debate in the Wall Street Journal on whether you should be allowed to prevent drones from flying over your property. This isn't an obvious one; there are good arguments on both sides.

There's a new trend among Silicon Valley startups: companies are not collecting and saving data on their customers. I believe that all this data isn't nearly as valuable as the big-data people are promising. Now that companies are recognizing that it is also a liability, I think we're going to see more rational trade-offs about what to keep -- and for how long -- and what to discard.

The Skein hash function is now part of FreeBSD.

People can be identified from their driving patterns, using the internal computer network of the vehicles. The paper: "Automobile Driver Fingerprinting," by Miro Enev, Alex Takakuwa, Karl Koscher, and Tadayoshi Kohno.

This is a good summary article on the fallibility of DNA evidence. Most interesting to me are the parts on the proprietary algorithms used in DNA matching. It's the same problem as any biometric: we need to know the rates of both false positives and false negatives. And if these algorithms are being used to determine guilt, we have a right to examine them.
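To see why the false-positive rate matters so much, here's a back-of-the-envelope base-rate calculation. The numbers are invented for illustration; they are not from any actual DNA-matching system:

```python
# Base-rate arithmetic for a forensic database search, with made-up numbers.
# Suppose the matching algorithm has a 1-in-10,000 false-positive rate,
# and the sample is searched against a database of 1,000,000 profiles
# containing at most one true source.

false_positive_rate = 1 / 10_000
database_size = 1_000_000

# Expected number of innocent people who will match by chance alone:
expected_false_matches = false_positive_rate * database_size

# If exactly one person in the database is the true source, the chance
# that any given match is actually that person:
p_match_is_source = 1 / (1 + expected_false_matches)

print(f"Expected coincidental matches: {expected_false_matches:.0f}")
print(f"P(a given match is the true source): {p_match_is_source:.3f}")
```

With these assumed numbers, a "match" is overwhelmingly likely to be coincidental -- which is exactly why juries need the error rates, not just the match.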

Stealth Falcon: new malware from (probably) the UAE:

Lockpicking competitions in the 1850s.

There's a new piece of malware called Irongate, which is obviously inspired by Stuxnet. We don't know who is responsible for it.

There's a new report on security vulnerabilities in the PC initialization/update process, allowing someone to hijack it to install malware.

New paper: "Physical Key Extraction Attacks on PCs," by Daniel Genkin, Lev Pachmanov, Itamar Pipman, Adi Shamir, and Eran Tromer. They recover keys acoustically, from the high-frequency "coil whine" from the circuits, from a distance of about ten meters.

This is a long article, based on documents obtained via FOIA, about Edward Snowden's attempts to raise his concerns inside the NSA.

Really good investigative reporting on the automatic algorithms used to predict recidivism rates.

People who don't want Waze routing cars through their neighborhoods are feeding it false data. Interesting story of data poisoning in real life, and the security measures to detect and discount it.
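One standard way to detect and discount poisoned reports -- sketched here with invented data, not Waze's actual algorithm -- is to compare each report against a robust summary of its neighbors and ignore the outliers:

```python
# Robust filtering of crowd-sourced speed reports (toy example).
# Reports far from the median are treated as suspect and discounted.
from statistics import median

def filter_reports(speeds_kmh, max_deviation=20):
    """Keep only reports within max_deviation km/h of the median."""
    m = median(speeds_kmh)
    return [s for s in speeds_kmh if abs(s - m) <= max_deviation]

# Honest traffic flowing at ~60 km/h, plus poisoned reports of a "jam":
reports = [58, 61, 63, 59, 5, 3, 62, 4, 60]
trusted = filter_reports(reports)
print(f"median before: {median(reports)}, after: {median(trusted)}")
```

The median is the key design choice: unlike the mean, it isn't dragged around by a minority of false reports, so the attacker has to control a majority of reports in an area to move it.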

This interesting essay argues that financial risks are generally not systemic risks; they're much smaller. That's certainly been our experience to date.

There's an interesting message in the documents about Snowden that Vice magazine got out of the NSA with a FOIA request. At least in 2012, the NSA was using Word macros internally.

The Washington Post reports that Russian hackers penetrated the network of the Democratic National Committee and stole opposition research on Donald Trump. The evidence is from CrowdStrike. This seems like standard political espionage to me. We certainly don't want it to happen, but we shouldn't be surprised when it does.

Typosquatting is an old trick: register a domain name a typo away from a popular domain name, and use it for various nefarious purposes. Nikolai Philipp Tschacher just published a bachelor's thesis in which he pulls the same trick with the names of popular code libraries, and got 17,000 computers to run his arbitrary code.
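The defensive counterpart of the trick is straightforward to sketch: compare a requested package name against a whitelist of popular names using edit distance, and flag near-misses. The whitelist and the flagged name below are stand-ins, not any registry's actual policy:

```python
# Flag package names one edit away from popular libraries (toy check).

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

POPULAR = {"requests", "numpy", "django"}  # stand-in for a real whitelist

def looks_typosquatted(name):
    # Distance 0 is the legitimate package itself; exactly 1 is suspicious.
    return any(0 < edit_distance(name, p) <= 1 for p in POPULAR)

print(looks_typosquatted("nunpy"))  # one substitution away from "numpy"
```

A real registry would also have to handle transpositions, homoglyphs, and legitimately similar names, which is why this is a screening heuristic and not a verdict.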

Suckfly seems to be another Chinese nation-state espionage tool, first stealing South Korean certificates and now attacking Indian networks.

Symantec has done a good job of explaining how Suckfly works, and there's a lot of good detail in the blog posts. My only complaint is its reluctance to disclose who the targets are. It doesn't name the South Korean companies whose certificates were stolen, and it doesn't name the Indian companies that were hacked.

My guess is that Symantec can't disclose those names, because those are all customers and Symantec has confidentiality obligations towards them. But by leaving this information out, Symantec is harming us all. We have to make decisions on the Internet all the time about who to trust and who to rely on. The more information we have, the better we can make those decisions. And the more companies are publicly called out when their security fails, the more they will try to make security better.

Symantec's motivation in releasing information about Suckfly is marketing, and that's fine. In that respect, its interests and the interests of the research community are aligned. But when it comes to naming victims, the interests diverge -- and that's the value of mandatory disclosure laws.

Schneier News

I'm speaking at the CSISAC Forum in Cancun, Mexico on June 21.

I spoke at Infosecurity Europe in June, and there were several press articles:

Google Moving Forward on Automatic Logins

Google is trying to bring Project Abacus to Android developers by the end of the year:

Today, secure logins -- like those used by banks or in the enterprise environment -- often require more than just a username and password. They tend to also require the entry of a unique PIN, which is generally sent to your phone via SMS or email. This is commonly referred to as two-factor authentication, as it combines something you know (your password) with something you have in your possession, like your phone.
With Project Abacus, users would instead unlock devices or sign into applications based on a cumulative "Trust Score." This score would be calculated using a variety of factors, including your typing patterns, current location, speed and voice patterns, facial recognition, and other things.

Basically, the system replaces traditional authentication -- something you know, have, or are -- with surveillance. So maybe this is a good idea, and maybe it isn't. The devil is in the details.
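Google hasn't published how the Trust Score is computed. As a purely hypothetical sketch -- the signals, weights, and thresholds here are all invented -- such a system might combine weighted per-signal confidences against a per-application threshold:

```python
# Hypothetical "trust score" combining behavioral signals.
# This is NOT Google's actual algorithm; everything here is invented.

WEIGHTS = {
    "typing_pattern": 0.3,
    "location":       0.2,
    "voice_match":    0.3,
    "face_match":     0.2,
}

def trust_score(signals):
    """Weighted average of per-signal confidences in [0, 1]."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def allow(signals, threshold):
    # A bank might demand a higher threshold than a game would.
    return trust_score(signals) >= threshold

signals = {"typing_pattern": 0.9, "location": 1.0,
           "voice_match": 0.8, "face_match": 0.7}
print(trust_score(signals), allow(signals, threshold=0.8))
```

Even in this toy form you can see the surveillance trade-off: every input to the score is a continuous measurement of the user's behavior.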

It's being called creepy. But, as we've repeatedly learned, creepy is subjective. What's creepy now is perfectly normal two years later.

Security and Human Behavior (SHB 2016)

Earlier this month, I was at the ninth Workshop on Security and Human Behavior, hosted at Harvard University.

SHB is a small invitational gathering of people studying various aspects of the human side of security. The fifty or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, philosophers, neuroscientists, lawyers, anthropologists, business school professors, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.

These are the most intellectually stimulating two days of my year; this year someone called it "Bruce's brain in conference form."

The goal is maximum interaction and discussion. We do that by putting everyone on panels. There are eight six-person panels over the course of the two days. Everyone gets to talk for ten minutes about their work, and then there's half an hour of discussion in the room. Then there are lunches, dinners, and receptions -- all designed so people meet each other and talk.

Workshop on Security and Human Behavior:

This page lists the participants and gives links to some of their work.

As usual, Ross Anderson liveblogged the talks.

Here are my posts on previous SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops.

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of 13 books -- including his latest, "Data and Goliath" -- as well as hundreds of articles, essays, and academic papers. His influential newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient, an IBM Company. See <>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient, an IBM Company.

Copyright (c) 2016 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.