January 15, 2017

by Bruce Schneier
CTO, Resilient Systems, Inc.

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

You can read this issue on the web at <>. These same essays and news items appear in the "Schneier on Security" blog at <>, along with a lively and intelligent comment section. An RSS feed is available.

In this issue:

Attributing the DNC Hacks to Russia
Are We Becoming More Moral Faster Than We're Becoming More Dangerous?
Security Risks of TSA PreCheck
Law Enforcement Access to IoT Data
Schneier News
Class Breaks
A Comment on the Trump Dossier

President Barack Obama's public accusation of Russia as the source of the hacks in the US presidential election and the leaking of sensitive e-mails through WikiLeaks and other sources has opened up a debate on what constitutes sufficient evidence to attribute an attack in cyberspace. The answer is both complicated and inherently tied up in political considerations.

The administration is balancing political considerations and the inherent secrecy of electronic espionage with the need to justify its actions to the public. These issues will continue to plague us as more international conflict plays out in cyberspace.

It's true that it's easy for an attacker to hide who he is in cyberspace. We can't positively identify particular pieces of hardware and software around the world. We can't verify the identity of someone sitting in front of a keyboard through computer data alone. Internet data packets don't come with return addresses, and it's easy for attackers to disguise their origins. For decades, hackers have used techniques such as jump hosts, VPNs, Tor, and open relays to obscure their origin, and in many cases those techniques work. I'm sure that many national intelligence agencies route their attacks through China, simply because everyone knows lots of attacks come from China.

On the other hand, there are techniques that can identify attackers with varying degrees of precision. It's rarely just one thing, and you'll often hear the term "constellation of evidence" to describe how a particular attacker is identified. It's analogous to traditional detective work. Investigators collect clues and piece them together with known modes of operation. They look for elements that resemble other attacks and elements that are anomalies. The clues might involve ones and zeros, but the techniques go back to Sir Arthur Conan Doyle.

The University of Toronto-based organization Citizen Lab routinely attributes attacks against the computers of activists and dissidents to particular Third World governments. It took months to identify China as the source of the 2012 attacks against the "New York Times." While it was uncontroversial to say that Russia was the source of a cyberattack against Estonia in 2007, no one knew if those attacks were authorized by the Russian government -- until the attackers explained themselves. And it was the Internet security company CrowdStrike that first attributed the attacks against the Democratic National Committee to Russian intelligence agencies in June, based on multiple pieces of evidence gathered from its forensic investigation.

Attribution is easier if you are monitoring broad swaths of the Internet. This gives the National Security Agency a singular advantage in the attribution game. The problem, of course, is that the NSA doesn't want to publish what it knows.

Regardless of what the government knows and how it knows it, the decision of whether to make attribution evidence public is another matter. When Sony was attacked, many security experts -- myself included -- were skeptical of both the government's attribution claims and the flimsy evidence associated with them. I only became convinced when the "New York Times" ran a story about the government's attribution, which talked about both secret evidence inside the NSA and human intelligence assets inside North Korea. In contrast, when the Office of Personnel Management was breached in 2015, the US government decided not to accuse China publicly, either because it didn't want to escalate the political situation or because it didn't want to reveal any secret evidence.

The Obama administration has been more public about its evidence in the DNC case, but it has not been entirely public.

It's one thing for the government to know who attacked it. It's quite another for it to convince the public who attacked it. As attribution increasingly relies on secret evidence -- as it did with North Korea's attack on Sony in 2014 and almost certainly does regarding Russia and the previous election -- the government is going to have to face the choice of making previously secret evidence public and burning sources and methods, or keeping it secret and facing perfectly reasonable skepticism.

If the government is going to take public action against a cyberattack, it needs to make its evidence public. But releasing secret evidence might get people killed, and it would make any future confidentiality assurances we make to human sources completely non-credible. This problem isn't going away; secrecy helps the intelligence community, but it wounds our democracy.

The constellation of evidence attributing the attacks against the DNC, and subsequent release of information, is comprehensive. It's possible that there was more than one attack. It's possible that someone not associated with Russia leaked the information to WikiLeaks, although we have no idea where that someone else would have obtained the information. We know that the Russian actors who hacked the DNC -- both the FSB, Russia's principal security agency, and the GRU, Russia's military intelligence unit -- are also attacking other political networks around the world.

In the end, though, attribution comes down to whom you believe. When Citizen Lab writes a report outlining how a United Arab Emirates human rights defender was targeted with a cyberattack, we have no trouble believing that it was the UAE government. When Google identifies China as the source of attacks against Gmail users, we believe it just as easily.

Obama decided not to make the accusation public before the election so as not to be seen as influencing the election. Now, afterward, there are political implications in accepting that Russia hacked the DNC in an attempt to influence the US presidential election. But no amount of evidence can convince the unconvinceable.

The most important thing we can do right now is deter any country from trying this sort of thing in the future, and the political nature of the issue makes that harder. Right now, we've told the world that others can get away with manipulating our election process as long as they can keep their efforts secret until after one side wins. Obama has promised both secret retaliations and public ones. We need to hope they're enough.

This essay previously appeared on

Obama's accusation:

Lots of attacks come from China:

Citizen Lab:

2012 attacks against the NY Times:

The Russians who attacked Estonia:

CrowdStrike's report on the DNC:

Evidence pointing to Russia's hacking of the DNC, from July:

The NSA about another Russian hacking attempt:

Me on North Korea and Sony:

NY Times story on North Korea attacking Sony:

China hacking OPM:

The government's evidence that Russia hacked the DNC:

The story of North Korea hacking Sony:

Analyzing the evidence that Russia hacked the DNC:

Citizen Lab report on the UAE:

China attacking Google:

How we can deter Russia in the future:

Obama's sanctions against Russia:

The ODNI released a declassified report on the Russian attacks.

And there were Senate hearings on this issue.

A Washington Post article talks about some of the intelligence behind the assessment.

The UK connection.

Are We Becoming More Moral Faster Than We're Becoming More Dangerous?


In "The Better Angels of Our Nature," Steven Pinker convincingly makes the point that by pretty much every measure you can think of, violence has declined on our planet over the long term. More generally, "the world continues to improve in just about every way." He's right, but there are two important caveats.

One, he is talking about the long term. The trend lines are uniformly positive across the centuries and mostly positive across the decades, but go up and down year to year. While this is an important development for our species, most of us care about changes year to year -- and we can't make any predictions about whether this year will be better or worse than last year in any individual measurement.

The second caveat is both more subtle and more important. In 2013, I wrote about how technology empowers attackers. By this measure, the world is getting more dangerous:

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious... and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.
This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

Pinker's trends are based both on increased societal morality and better technology, and both are based on averages: the average person with the average technology. My increased attack capability trend is based on those two trends as well, but on the outliers: the most extreme person with the most extreme technology. Pinker's trends are noisy, but over the long term they're strongly linear. Mine seem to be exponential.

When Pinker expresses optimism that the overall trends he identifies will continue into the future, he's making a bet. He's betting that his trend lines and my trend lines won't cross. That is, that our society's gradual improvement in overall morality will continue to outpace the potentially exponentially increasing ability of the extreme few to destroy everything. I am less optimistic:

But the problem isn't that these security measures won't work -- even as they shred our freedoms and liberties -- it's that no security is perfect.
Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We'll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.
As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approaches certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn't kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then, someone goes off and destroys us anyway?
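The group-size claim in the excerpt above is ordinary compounding of probability: if each of n independent actors has even a tiny chance p of acting, the chance that at least one does is 1 - (1 - p)^n, which approaches certainty as n grows. A numeric sketch, with a per-actor probability invented purely for illustration:

```python
# P(at least one of n actors causes catastrophe), assuming independence
# and a small per-actor probability p. The value of p below is invented
# purely for illustration; only the shape of the curve matters.

def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 1e-6  # assumed per-actor, per-year probability (not an estimate)
for n in (10**3, 10**6, 10**9):
    print(f"group size {n:>10}: P = {p_at_least_one(p, n):.4f}")
```

With these made-up numbers, a thousand-person group is nearly safe, a million-person group is more likely than not to suffer the event, and a billion-person group is all but certain to.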

Clearly we're not at the point yet where any of these disaster scenarios have come to pass, and Pinker rightly expresses skepticism when he says that historical doomsday scenarios have so far never come to pass. But that's the thing about exponential curves; it's hard to predict the future from the past. So either I have discovered a fundamental problem with any intelligent individualistic species and have therefore explained the Fermi Paradox, or there is some other factor in play that will ensure that the two trend lines won't cross.

Pinker's book: "The Better Angels of Our Nature":

A Q&A with Pinker on current events:

My essay:


Filippo Valsorda wrote an excellent essay on why he's giving up on PGP. I have long believed PGP to be more trouble than it is worth. It's hard to use correctly, and easy to get wrong. More generally, e-mail is inherently difficult to secure because of all the different things we ask of it and use it for. Valsorda has a different complaint, that its long-term secrets are an unnecessary source of risk:
This is a good rebuttal:
I'm on Valsorda's side. If you want to communicate securely with me, use a secure messaging program.
More criticisms of PGP:

The UN is considering a killer-robot ban. This would be a good idea, although I can't imagine countries like the US, China, and Russia going along with it -- at least not right now.

A film student put spyware on a smartphone and then allowed it to be stolen. He made a movie of the results.

Google has released Project Wycheproof -- a test suite designed to test cryptographic libraries against a series of known attacks. The tool has already found over 40 security bugs in cryptographic libraries, which are (all? mostly?) currently being fixed.
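Wycheproof's approach is to ship known attack cases as machine-readable test vectors and run them against a library's API. The sketch below imitates that structure with a deliberately buggy verifier; both the JSON schema and `lenient_verify` are simplified, hypothetical stand-ins, not Wycheproof's actual format or API:

```python
import json

# A simplified, illustrative Wycheproof-style vector file. Real vector
# files are much richer (hex-encoded keys, messages, flags); both this
# schema and the verifier below are hypothetical.
VECTORS = json.loads("""
{
  "testGroups": [{ "tests": [
    { "tcId": 1, "sig": "OK", "result": "valid",
      "comment": "well-formed signature" },
    { "tcId": 2, "sig": "OK\\u0000", "result": "invalid",
      "comment": "trailing NUL appended; must be rejected" }
  ]}]
}
""")

def lenient_verify(sig: str) -> bool:
    # A deliberately buggy verifier: it strips trailing NUL bytes before
    # checking -- the kind of parsing laxness such vectors are built to catch.
    return sig.rstrip("\x00") == "OK"

# Run every vector; flag any case where the verifier's verdict disagrees
# with the vector's expected result.
failures = [t["tcId"]
            for g in VECTORS["testGroups"] for t in g["tests"]
            if lenient_verify(t["sig"]) != (t["result"] == "valid")]
print("vectors the buggy verifier got wrong:", failures)
```

The harness flags test case 2: the lenient verifier accepts a malformed signature that the vectors say must be rejected.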

The Encryption Working Group of the House Judiciary Committee and the House Energy and Commerce Committee has released its annual report. It's pretty good.

This Verge article isn't great, but we are certainly moving into a future where audio and video will be easy to fake, and easier to fake undetectably. This is going to make propaganda easier, with all of the ill effects we've already seen turned up to eleven. I don't have a good solution for this.

NIST is accepting proposals for public-key algorithms immune to quantum computing techniques. Details here. Deadline is the end of November 2017. I applaud NIST for taking the lead on this, and for taking it now when there is no emergency and we have time to do this right.

CrowdStrike has an interesting blog post about how the Russian military is tracking Ukrainian field artillery units by compromising their soldiers' smartphones.

Signal, the encrypted messaging app I prefer, is being blocked in both Egypt and the UAE. Recently, the Signal team developed a workaround: domain fronting. This isn't a new trick (Tor uses it too, for example), but it does work.
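The trick behind domain fronting is that a censor sees only the outer layer of an HTTPS connection: the DNS lookup and the TLS SNI name a permitted domain, while the Host header that selects the real service at the CDN travels encrypted. A minimal sketch of the two layers, with hypothetical domain names and no actual network I/O:

```python
# Domain fronting sketch: the network observer sees only the outer,
# permitted hostname (DNS query and TLS SNI); the inner Host header,
# which selects the blocked service at the CDN, travels encrypted.
# Both domain names are hypothetical placeholders.

FRONT_DOMAIN = "allowed.example.com"    # visible: DNS lookup and TLS SNI
HIDDEN_SERVICE = "blocked.example.com"  # hidden: HTTP Host header

def build_fronted_request(path: str) -> dict:
    """Describe the two layers of a fronted HTTPS request."""
    return {
        "dns_lookup": FRONT_DOMAIN,          # what the censor can see
        "tls_sni": FRONT_DOMAIN,             # what the censor can see
        "http_host_header": HIDDEN_SERVICE,  # encrypted inside the tunnel
        "request_line": f"GET {path} HTTP/1.1",
    }

req = build_fronted_request("/v1/messages")
print(req["tls_sni"], "->", req["http_host_header"])
```

Blocking the hidden service then requires blocking the permitted front domain too, which is why the technique works against censors unwilling to block a large CDN wholesale.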

Nice article on the 2011 DigiNotar attack and how it changed security practices in the CA industry.

A modern photocopier is basically a computer with a scanner and printer attached. This computer has a hard drive, and scans of images are regularly stored on that drive. This means that when a photocopier is thrown away, that hard drive is filled with pages that the machine copied over its lifetime. As you might expect, some of those pages will contain sensitive information. This 2011 report was written by the Inspector General of the National Archives and Records Administration (NARA). It found that the organization did nothing to safeguard its photocopiers.
I don't mean to single this organization out. Pretty much no one thinks about this security threat.

Someone just registered their company name as ; DROP TABLE "COMPANIES";-- LTD.
Obligatory xkcd comic:
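The joke lands because naive code splices names directly into SQL text. A parameterized query keeps the name inert, as this minimal demonstration with Python's built-in sqlite3 shows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE companies (name TEXT)")

hostile_name = '; DROP TABLE "COMPANIES";-- LTD'

# Unsafe: building SQL by string concatenation invites exactly the
# quote-breakout this company name is probing for (left commented out):
#   conn.execute("INSERT INTO companies VALUES ('" + hostile_name + "')")

# Safe: a parameterized query treats the name as pure data.
conn.execute("INSERT INTO companies VALUES (?)", (hostile_name,))
stored = conn.execute("SELECT name FROM companies").fetchone()[0]
print(stored)  # stored verbatim, never executed as SQL
```

The placeholder keeps the semicolons, quotes, and comment marker from ever reaching the SQL parser as syntax.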

The Barbie typewriter doesn't have many cryptographic capabilities, but it has some.

Good article debunking the myth that requiring people to use their real names on the Internet makes them behave better.

I am a co-author on a paper discussing whether elections should be classified as "critical infrastructure" in the US, based on experiences in other countries:
The paper was speculative, but now it's official: US election systems have been classified as critical infrastructure. I am tentatively in favor of this, but what really matters is what happens now. What does this mean? What sorts of increased security will election systems get? Will we finally get rid of computerized touch-screen voting?

The FDA has issued a report giving guidance on medical device computer and network security. There's nothing particularly new or interesting; it reads like standard security advice: write secure software, patch bugs, and so on. Note that these are "non-binding recommendations," so I'm really not sure why they bothered.
Why they bothered:

New paper: "A Simple Power Analysis Attack on the Twofish Key Schedule." This shouldn't be a surprise; these attacks are devastating if you don't take steps to mitigate them.
The general issue is that if an attacker has physical control of the computer performing the encryption, it is very hard to secure the encryption inside that computer. I wrote a paper about this back in 1999.
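Power analysis is a hardware-level channel, but the same principle (secret-dependent work leaks secrets) appears in pure software as timing leakage, and the standard mitigation is the same idea in reverse: make the work independent of the secret. A sketch of the leak and the constant-time fix:

```python
import hmac

# A naive comparison exits at the first mismatched byte, so its running
# time reveals how much of a guess is correct -- a software analogue of
# the data-dependent leakage that power analysis exploits in hardware.

def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # early exit: timing depends on the secret
            return False
    return True

secret = b"correct-mac-value"

# The standard mitigation: compare in constant time, so every guess
# costs the same amount of work regardless of where it first diverges.
print(hmac.compare_digest(secret, b"correct-mac-value"))
print(hmac.compare_digest(secret, b"wrong-mac-value!!"))
```

Python's `hmac.compare_digest` is the stdlib's constant-time comparison; most languages' crypto libraries ship an equivalent.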

President Obama has changed the rules regarding raw intelligence, allowing the NSA to share raw data with the US's other 16 intelligence agencies.
Here are the new procedures.
This rule change has been in the works for a while. Here are two blog posts from April discussing the then-proposed changes.
From a privacy perspective, this feels like a really bad idea to me.

Interesting research: Sebastian Hellmeier, "The Dictator's Digital Toolkit: Explaining Variation in Internet Filtering in Authoritarian Regimes," Politics & Policy, 2016 (full paper is behind a paywall).

Security Risks of TSA PreCheck

Former TSA Administrator Kip Hawley wrote an op-ed pointing out the security vulnerabilities in the TSA's PreCheck program:

The first vulnerability in the system is its enrollment process, which seeks to verify an applicant's identity. We know verification is a challenge: A 2011 Government Accountability Office report on TSA's system for checking airport workers' identities concluded that it was "not designed to provide reasonable assurance that only qualified applicants" got approved. It's not a stretch to believe a reasonably competent terrorist could construct an identity that would pass PreCheck's front end.
The other step in PreCheck's "intelligence-driven, risk-based security strategy" is absurd on its face: The absence of negative information about a person doesn't mean he or she is trustworthy. News reports are filled with stories of people who seemed to be perfectly normal right up to the moment they committed a heinous act. There is no screening algorithm and no database check that can accurately predict human behavior -- especially on the scale of millions. It is axiomatic that terrorist organizations recruit operatives who have clean backgrounds and interview well.

None of this is news.

Back in 2004, I wrote:

Imagine you're a terrorist plotter with half a dozen potential terrorists at your disposal. They all apply for a card, and three get one. Guess which are going on the mission? And they'll buy round-trip tickets with credit cards and have a "normal" amount of luggage with them.
What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.
The Trusted Traveler program is based on the dangerous myth that terrorists match a particular profile and that we can somehow pick terrorists out of a crowd if we only can identify everyone. That's simply not true. Most of the 9/11 terrorists were unknown and not on any watch list. Timothy McVeigh was an upstanding US citizen before he blew up the Oklahoma City Federal Building. Palestinian suicide bombers in Israel are normal, nondescript people. Intelligence reports indicate that Al Qaeda is recruiting non-Arab terrorists for US operations.

I wrote much the same thing in 2007:

Background checks are based on the dangerous myth that we can somehow pick terrorists out of a crowd if we could identify everyone. Unfortunately, there isn't any terrorist profile that prescreening can uncover. Timothy McVeigh could probably have gotten one of these cards. So could have Eric Rudolph, the pipe bomber at the 1996 Olympic Games in Atlanta. There isn't even a good list of known terrorists to check people against; the government list used by the airlines has been the butt of jokes for years.
And have we forgotten how prevalent identity theft is these days? If you think having a criminal impersonating you to your bank is bad, wait until they start impersonating you to the Transportation Security Administration.
The truth is that whenever you create two paths through security -- a high-security path and a low-security path -- you have to assume that the bad guys will find a way to exploit the low-security path. It may be counterintuitive, but we are all safer if the people chosen for more thorough screening are truly random and not based on an error-filled database or a cursory background check.

In a companion blog post, Hawley has more details about why the program doesn't work:

In the sense that PreCheck bars people who were identified by intelligence or law enforcement agencies as possible terrorists, then it was intelligence-driven. But using that standard for PreCheck is ridiculous since those people already get extra screening or are on the No-Fly list. The movie "Patriots Day," out now, reminds us of the tragic and preventable Boston Marathon bombing. The FBI sent agents to talk to the Tsarnaev brothers and investigate them as possible terror suspects. And cleared them. Even they did not meet the "intelligence-driven" definition used in PreCheck.
The other problem with "intelligence-driven" in the PreCheck context is that intelligence actually tells us the *opposite*; specifically that terrorists pick clean operatives. If TSA uses current intelligence to evaluate risk, it would not be out enrolling everybody they can into pre-9/11 security for everybody not flagged by the security services.

Hawley and I may agree on the problem, but we have completely opposite solutions. The op-ed was too short to include details, but they're in a companion blog post. Basically, he wants to screen PreCheck passengers more:

In the interests of space, I left out details of what I would suggest as short- and medium-term solutions. Here are a few ideas:
* Immediately scrub the PreCheck enrollees for false identities. That can probably be accomplished best and most quickly by getting permission from members, and then using, commercial data. If the results show that PreCheck has already been penetrated, the program should be suspended.
* Deploy K-9 teams at PreCheck lanes.
* Use Behaviorally trained officers to interact with and check the credentials of PreCheck passengers.
* Use Explosives Trace Detection cotton swabs on PreCheck passengers at a much higher rate. Same with removing shoes.
* Turn on the body scanners and keep them fully utilized.
* Allow liquids to stay in the carry-on since TSA scanners can detect threat liquids.
* Work with the airlines to keep the PreCheck experience positive.
* Work with airports to place PreCheck lanes away from regular checkpoints so as not to diminish lane capacity for non-PreCheck passengers. Rental Car check-in areas could be one alternative. Also, downtown check-in and screening (with secure transport to the airport) is a possibility.

These solutions completely ignore the data from the real-world experiment PreCheck has been. Hawley writes that PreCheck tells us that "terrorists pick clean operatives." That's exactly wrong. PreCheck tells us that, basically, there are no terrorists. If 1) it's an easier way through airport security that terrorists will invariably use, and 2) there have been no instances of terrorists using it in the 10+ years it and its predecessors have been in operation, then the inescapable conclusion is that the threat is minimal. Instead of screening PreCheck passengers more, we should screen everybody else *less*. This is me in 2012: "I think the PreCheck level of airport screening is what everyone should get, and that the no-fly list and the photo ID check add nothing to security."

I agree with Hawley that we need to overhaul airport security. Me in 2010: "Airport security is the last line of defense, and it's not a very good one." We need to recognize that the actual risk is much lower than we fear, and ratchet airport security down accordingly. And then we need to continue to invest in investigation and intelligence: security measures that work regardless of the tactic or target.

Hawley's op-ed and blog post:

My essays:

Law Enforcement Access to IoT Data

In the first of what will undoubtedly be a large number of battles between companies that make IoT devices and the police, Amazon is refusing to comply with a warrant demanding data on what its Echo device heard at a crime scene.

The particulars of the case are weird. Amazon's Echo does not constantly record; it only listens for its name. So it's unclear that there is any evidence to be turned over. But this general issue isn't going away. We are all under ubiquitous surveillance, but it is surveillance by the companies that control the Internet-connected devices in our lives. The rules by which police and intelligence agencies get access to that data will come under increasing pressure for change.

A newscaster discussed Amazon's Echo on the news, causing devices in the same room as tuned-in televisions to order unwanted products.

This year, the same technology is coming to LG appliances such as refrigerators.

Schneier News

I'm speaking in Stockholm (via Skype) on Jan 23:

I'm speaking at the World Government Summit in Dubai on Feb 12.

I'm speaking at the RSA Conference in San Francisco on Feb 14-15.

This semester, I am teaching Internet security policy at the Harvard Kennedy School.

Class Breaks

There's a concept from computer security known as a class break. It's a particular security vulnerability that breaks not just one system, but an entire class of systems. Examples might be a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs that operating system, or a vulnerability in Internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet.

It's a particular way computer systems can fail, exacerbated by the characteristics of computers and software. It only takes one smart person to figure out how to attack the system. Once he does that, he can write software that automates his attack. He can do it over the Internet, so he doesn't have to be near his victim. He can automate his attack so it works while he sleeps. And then he can pass the ability to someone -- or to lots of people -- without the skill. This changes the nature of security failures, and completely upends how we need to defend against them.

An example: Picking a mechanical door lock requires both skill and time. Each lock is a new job, and success at one lock doesn't guarantee success with another of the same design. Electronic door locks, like the ones you now find in hotel rooms, have different vulnerabilities. An attacker can find a flaw in the design that allows him to create a key card that opens every door. If he publishes his attack software, not just the attacker, but anyone can now open every lock. And if those locks are connected to the Internet, attackers could potentially open door locks remotely -- they could open every door lock remotely at the same time. That's a class break.

It's how computer systems fail, but it's not how we think about failures. We still think about automobile security in terms of individual car thieves manually stealing cars. We don't think of hackers remotely taking control of cars over the Internet. Or, remotely disabling every car over the Internet. We think about voting fraud as unauthorized individuals trying to vote. We don't think about a single person or organization remotely manipulating thousands of Internet-connected voting machines.

In a sense, class breaks are not a new concept in risk management. It's the difference between home burglaries and fires, which happen occasionally to different houses in a neighborhood over the course of the year, and floods and earthquakes, which either happen to everyone in the neighborhood or no one. Insurance companies can handle both types of risk, but they are inherently different. The increasing computerization of everything is moving us from a burglary/fire risk model to a flood/earthquake model, in which a given threat either affects everyone in town or doesn't happen at all.
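The burglary-versus-flood distinction is easy to see in a toy simulation: the two risk models below have the same expected annual loss, but the correlated one concentrates it into rare, total-loss years. All parameters are invented for illustration:

```python
import random

random.seed(0)  # deterministic toy simulation; every number is made up

# Two risk models with the same expected annual loss. "Burglary" losses
# hit houses independently; a "flood" hits every house or none.
HOUSES, P, LOSS, YEARS = 100, 0.05, 100, 2000

def burglary_year() -> int:
    return sum(LOSS for _ in range(HOUSES) if random.random() < P)

def flood_year() -> int:
    return HOUSES * LOSS if random.random() < P else 0

def mean(xs):
    return sum(xs) / len(xs)

b = [burglary_year() for _ in range(YEARS)]
f = [flood_year() for _ in range(YEARS)]

# Similar averages, wildly different worst cases: that is the class break.
print(f"mean annual loss: burglary {mean(b):7.1f}   flood {mean(f):7.1f}")
print(f"worst year:       burglary {max(b):7d}   flood {max(f):7d}")
```

An insurer (or a defender) sizing reserves from the average alone would be fine in the burglary model and ruined in the flood model; the tail, not the mean, is what changes.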

But there's a key difference between floods/earthquakes and class breaks in computer systems: the former are random natural phenomena, while the latter are human-directed. Floods don't change their behavior to maximize their damage based on the types of defenses we build. Attackers do that to computer systems. Attackers examine our systems, looking for class breaks. And once one of them finds one, they'll exploit it again and again until the vulnerability is fixed.

As we move into the world of the Internet of Things, where computers permeate our lives at every level, class breaks will become increasingly important. The combination of automation and action at a distance will give attackers more power and leverage than they have ever had before. Security notions like the precautionary principle -- where the potential of harm is so great that we err on the side of not deploying a new technology without proofs of security -- will become more important in a world where an attacker can open all of the door locks or hack all of the power plants. It's not an inherently less secure world, but it's a differently secure world. It's a world where driverless cars are much safer than people-driven cars, until suddenly they're not. We need to build systems that assume the possibility of class breaks -- and maintain security despite them.

This essay originally appeared on as part of their annual question.

This year it was: "What scientific term or concept ought to be more widely known?"

A Comment on the Trump Dossier

Imagine that you are someone in the CIA, concerned about the future of America. You have this Russian dossier on Donald Trump, which you have some evidence might be true. The smartest thing you can do is to leak it to the public. By doing so, you are eliminating any leverage Russia has over Trump and probably reducing the effectiveness of any other blackmail material any government might have on Trump. I believe you do this regardless of whether you ultimately believe the document's findings or not, and regardless of whether you support or oppose Trump. It's simple game theory.

This document is particularly safe to release. Because it's not a classified report of the CIA, leaking it is not a crime. And you release it now, before Trump becomes president, because doing so afterwards becomes much more dangerous.

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of 12 books -- including "Liars and Outliers: Enabling the Trust Society Needs to Survive" -- as well as hundreds of articles, essays, and academic papers. His influential newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient Systems, Inc. See <>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.

Copyright (c) 2017 by Bruce Schneier.
