Crypto-Gram

January 15, 2017

by Bruce Schneier
CTO, Resilient Systems, Inc.
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2017/…>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:
      Attributing the DNC Hacks to Russia
      Are We Becoming More Moral Faster Than We're Becoming More
        Dangerous?
      News
      Security Risks of TSA PreCheck
      Law Enforcement Access to IoT Data
      Schneier News
      Class Breaks
      A Comment on the Trump Dossier


Attributing the DNC Hacks to Russia

President Barack Obama’s public accusation of Russia as the source of the hacks in the US presidential election and the leaking of sensitive e-mails through WikiLeaks and other sources has opened up a debate on what constitutes sufficient evidence to attribute an attack in cyberspace. The answer is both complicated and inherently tied up in political considerations.

The administration is balancing political considerations and the inherent secrecy of electronic espionage with the need to justify its actions to the public. These issues will continue to plague us as more international conflict plays out in cyberspace.

It’s true that it’s easy for an attacker to hide who he is in cyberspace. We are unable to positively identify particular pieces of hardware and software around the world. We can’t verify the identity of someone sitting in front of a keyboard through computer data alone. Internet data packets don’t come with return addresses, and it’s easy for attackers to disguise their origins. For decades, hackers have used techniques such as jump hosts, VPNs, Tor and open relays to obscure their origin, and in many cases they work. I’m sure that many national intelligence agencies route their attacks through China, simply because everyone knows lots of attacks come from China.
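
To illustrate why return paths are so opaque, here is a toy single-hop relay; all host names and ports are hypothetical, and real attackers chain many of these across jurisdictions:

    # Toy single-connection TCP relay (illustrative sketch). The
    # destination's logs show the relay's IP address, not the real
    # client's; chaining several relays scatters the trail further.
    import socket
    import threading

    def pipe(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def relay(listen_port, dest_host, dest_port):
        srv = socket.socket()
        srv.bind(("", listen_port))
        srv.listen(1)
        client, _ = srv.accept()
        upstream = socket.create_connection((dest_host, dest_port))
        t = threading.Thread(target=pipe, args=(client, upstream), daemon=True)
        t.start()
        pipe(upstream, client)  # copy both directions until one side closes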

On the other hand, there are techniques that can identify attackers with varying degrees of precision. It’s rarely just one thing, and you’ll often hear the term “constellation of evidence” to describe how a particular attacker is identified. It’s analogous to traditional detective work. Investigators collect clues and piece them together with known modes of operation. They look for elements that resemble other attacks and elements that are anomalies. The clues might involve ones and zeros, but the techniques go back to Sir Arthur Conan Doyle.

The University of Toronto-based organization Citizen Lab routinely attributes attacks against the computers of activists and dissidents to particular Third World governments. It took months to identify China as the source of the 2012 attacks against the “New York Times.” While it was uncontroversial to say that Russia was the source of a cyberattack against Estonia in 2007, no one knew if those attacks were authorized by the Russian government—until the attackers explained themselves. And it was the Internet security company CrowdStrike that first attributed the attacks against the Democratic National Committee to Russian intelligence agencies in June, based on multiple pieces of evidence gathered from its forensic investigation.

Attribution is easier if you are monitoring broad swaths of the Internet. This gives the National Security Agency a singular advantage in the attribution game. The problem, of course, is that the NSA doesn’t want to publish what it knows.

Regardless of what the government knows and how it knows it, the decision of whether to make attribution evidence public is another matter. When Sony was attacked, many security experts—myself included—were skeptical of both the government’s attribution claims and the flimsy evidence associated with it. I only became convinced when the “New York Times” ran a story about the government’s attribution, which talked about both secret evidence inside the NSA and human intelligence assets inside North Korea. In contrast, when the Office of Personnel Management was breached in 2015, the US government decided not to accuse China publicly, either because it didn’t want to escalate the political situation or because it didn’t want to reveal any secret evidence.

The Obama administration has been more public about its evidence in the DNC case, but it has not been entirely public.

It’s one thing for the government to know who attacked it. It’s quite another for it to convince the public who attacked it. As attribution increasingly relies on secret evidence—as it did with North Korea’s attack on Sony in 2014 and almost certainly does regarding Russia and the previous election—the government is going to have to face the choice of making previously secret evidence public and burning sources and methods, or keeping it secret and facing perfectly reasonable skepticism.

If the government is going to take public action against a cyberattack, it needs to make its evidence public. But releasing secret evidence might get people killed, and it would make any future confidentiality assurances we make to human sources completely non-credible. This problem isn’t going away; secrecy helps the intelligence community, but it wounds our democracy.

The constellation of evidence attributing the attacks against the DNC, and subsequent release of information, is comprehensive. It’s possible that there was more than one attack. It’s possible that someone not associated with Russia leaked the information to WikiLeaks, although we have no idea where that someone else would have obtained the information. We know that the Russian actors who hacked the DNC—both the FSB, Russia’s principal security agency, and the GRU, Russia’s military intelligence unit—are also attacking other political networks around the world.

In the end, though, attribution comes down to whom you believe. When Citizen Lab writes a report outlining how a United Arab Emirates human rights defender was targeted with a cyberattack, we have no trouble believing that it was the UAE government. When Google identifies China as the source of attacks against Gmail users, we believe it just as easily.

Obama decided not to make the accusation public before the election so as not to be seen as influencing the election. Now, afterward, there are political implications in accepting that Russia hacked the DNC in an attempt to influence the US presidential election. But no amount of evidence can convince the unconvinceable.

The most important thing we can do right now is deter any country from trying this sort of thing in the future, and the political nature of the issue makes that harder. Right now, we’ve told the world that others can get away with manipulating our election process as long as they can keep their efforts secret until after one side wins. Obama has promised both secret retaliations and public ones. We need to hope they’re enough.

This essay previously appeared on CNN.com.
http://www.cnn.com/2017/01/05/opinions/…

Obama’s accusation:
http://www.cnn.com/2016/12/16/politics/…

Lots of attacks come from China:
https://www.cnet.com/news/…

Citizen Lab:
https://citizenlab.org/category/research-news/…

2012 attacks against the NY Times:
http://www.nytimes.com/2013/01/31/technology/…

The Russians who attacked Estonia:
http://www.rferl.org/a/…

CrowdStrike’s report on the DNC:
https://www.crowdstrike.com//…

Evidence pointing to Russia’s hacking of the DNC, from July:
https://www.schneier.com/blog/archives/2016/07/…

The NSA about another Russian hacking attempt:
https://theintercept.com/2016/12/29/…

Me on North Korea and Sony:
https://www.schneier.com/essays/archives/2014/12/…
https://www.schneier.com/essays/archives/2015/01/…

NY Times story on North Korea attacking Sony:
http://www.nytimes.com/2015/01/19/world/asia/…

China hacking OPM:
https://www.wired.com/2016/10/…

The government’s evidence that Russia hacked the DNC:
https://www.us-cert.gov/sites/default/files/…
https://www.washingtonpost.com/world/…

The story of North Korea hacking Sony:
http://fortune.com/sony-hack-part-1/

Analyzing the evidence that Russia hacked the DNC:
https://www.emptywheel.net/2016/12/10/…

Citizen Lab report on the UAE:
https://citizenlab.org/2016/08/…

China attacking Google:
http://www.nytimes.com/2011/06/02/technology/…

How we can deter Russia in the future:
http://prosyn.org/FU2S4eH

Obama’s sanctions against Russia:
http://www.cnn.com/2016/12/28/politics/…
https://www.theguardian.com/us-news/2016/dec/29/…

The ODNI released a declassified report on the Russian attacks.
https://assets.documentcloud.org/documents/3254237/…
http://www.nytimes.com/2017/01/06/us/politics/…

And there were Senate hearings on this issue.
http://www.cnn.com/2016/12/30/politics/…

A Washington Post article talks about some of the intelligence behind the assessment.
https://www.washingtonpost.com/world/…

The UK connection.
https://www.theguardian.com/us-news/2017/jan/07/…


Are We Becoming More Moral Faster Than We’re Becoming More Dangerous?

In “The Better Angels of Our Nature,” Steven Pinker convincingly makes the point that by pretty much every measure you can think of, violence has declined on our planet over the long term. More generally, “the world continues to improve in just about every way.” He’s right, but there are two important caveats.

One, he is talking about the long term. The trend lines are uniformly positive across the centuries and mostly positive across the decades, but go up and down year to year. While this is an important development for our species, most of us care about changes year to year—and we can’t make any predictions about whether this year will be better or worse than last year in any individual measurement.

The second caveat is both more subtle and more important. In 2013, I wrote about how technology empowers attackers. By this measure, the world is getting more dangerous:

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious… and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

Pinker’s trends are based both on increased societal morality and better technology, and both are based on averages: the average person with the average technology. My increased attack capability trend is based on those two trends as well, but on the outliers: the most extreme person with the most extreme technology. Pinker’s trends are noisy, but over the long term they’re strongly linear. Mine seem to be exponential.

When Pinker expresses optimism that the overall trends he identifies will continue into the future, he’s making a bet. He’s betting that his trend lines and my trend lines won’t cross. That is, that our society’s gradual improvement in overall morality will continue to outpace the potentially exponentially increasing ability of the extreme few to destroy everything. I am less optimistic:

But the problem isn’t that these security measures won’t work—even as they shred our freedoms and liberties—it’s that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We’ll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn’t kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then, someone goes off and destroys us anyway?
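
The arithmetic behind that claim is simple. A minimal sketch in Python, with a purely hypothetical per-person probability p:

    # If each of N people independently has probability p (per year) of
    # triggering the catastrophe, the chance that at least one does is
    # 1 - (1 - p)^N, which approaches 1 as N grows. The value of p here
    # is invented solely for illustration.
    def p_at_least_one(p, n):
        return 1 - (1 - p) ** n

    p = 1e-9
    for n in (10**3, 10**6, 10**9, 8 * 10**9):
        print(f"N = {n:>13,}: P(at least one) = {p_at_least_one(p, n):.6f}")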

Clearly we’re not at the point yet where any of these disaster scenarios have come to pass, and Pinker rightly expresses skepticism when he says that historical doomsday scenarios have so far never come to pass. But that’s the thing about exponential curves; it’s hard to predict the future from the past. So either I have discovered a fundamental problem with any intelligent individualistic species and have therefore explained the Fermi Paradox, or there is some other factor in play that will ensure that the two trend lines won’t cross.

Pinker’s book: “The Better Angels of Our Nature”:
https://www.amazon.com/…

A Q&A with Pinker on current events:
http://www.vox.com/science-and-health/2016/12/22/…

My essay:
https://www.schneier.com/essays/archives/2013/03/…


News

Filippo Valsorda wrote an excellent essay on why he’s giving up on PGP. I have long believed PGP to be more trouble than it is worth. It’s hard to use correctly, and easy to get wrong. More generally, e-mail is inherently difficult to secure because of all the different things we ask of it and use it for. Valsorda has a different complaint, that its long-term secrets are an unnecessary source of risk:
http://arstechnica.com/security/2016/12/…
This is a good rebuttal:
http://arstechnica.com/information-technology/2016/…
I’m on Valsorda’s side. If you want to communicate securely with me, use an encrypted messaging program.
More criticisms of PGP:
https://www.schneier.com/blog/archives/2016/12/…
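
Valsorda’s core complaint is that a PGP key is a long-lived secret: steal it once and all recorded traffic falls. A minimal sketch of the alternative, ephemeral key agreement, using the Python “cryptography” package (a simplification for illustration, not the actual Signal protocol):

    # Each side generates a fresh key pair per conversation and discards
    # it afterward, so there is no long-term private key whose later
    # theft retroactively decrypts old sessions (forward secrecy).
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey,
    )

    alice_eph = X25519PrivateKey.generate()
    bob_eph = X25519PrivateKey.generate()

    alice_shared = alice_eph.exchange(bob_eph.public_key())
    bob_shared = bob_eph.exchange(alice_eph.public_key())
    assert alice_shared == bob_shared  # same session secret on both ends

    del alice_eph, bob_eph  # once gone, past sessions cannot be re-derived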

The UN is considering a killer-robot ban. This would be a good idea, although I can’t imagine countries like the US, China, and Russia going along with it—at least not right now.
https://hardware.slashdot.org/story/16/12/17/…

A film student put spyware on a smartphone and then allowed it to be stolen. He made a movie of the results.
https://www.yahoo.com/tech/…
https://www.youtube.com/watch?v=NpN9NzO4Mo8
https://apple.slashdot.org/story/16/12/19/2050218/…

Google has released Project Wycheproof—a test suite designed to test cryptographic libraries against a series of known attacks. The tool has already found over 40 security bugs in cryptographic libraries, which are (all? mostly?) currently being fixed.
https://github.com/google/wycheproof
https://security.googleblog.com/2016/12/…
https://github.com/google/wycheproof/blob/master/…
https://www.onthewire.io/…
https://tech.slashdot.org/story/16/12/19/2120237/…
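
To give the flavor of what such a suite automates, here is one negative test, sketched with the Python “cryptography” package rather than Wycheproof’s actual test vectors: a verifier must fail closed when the signature doesn’t match.

    # A Wycheproof-style negative test. The real suite also probes
    # malformed encodings, edge-case curve points, and similar corners.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    sig = key.sign(b"attack at dawn", ec.ECDSA(hashes.SHA256()))

    try:
        # Same signature, different message: must raise, never pass.
        key.public_key().verify(sig, b"attack at dusk",
                                ec.ECDSA(hashes.SHA256()))
        print("BUG: accepted a signature over the wrong message")
    except InvalidSignature:
        print("OK: verification failed closed")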

The Encryption Working Group of the House Judiciary Committee and the House Energy and Commerce Committee has released its annual report. It’s pretty good.
https://judiciary.house.gov/wp-content/uploads/2016/…

This Verge article isn’t great, but we are certainly moving into a future where audio and video will be easy to fake, and easier to fake undetectably. This is going to make propaganda easier, with all of the ill effects we’ve already seen turned up to eleven. I don’t have a good solution for this.
http://www.theverge.com/2016/12/20/14022958/…

NIST is accepting proposals for public-key algorithms immune to quantum computing techniques. Details here. Deadline is the end of November 2017. I applaud NIST for taking the lead on this, and for taking it now when there is no emergency and we have time to do this right.
https://www.federalregister.gov/documents/2016/12/…
http://csrc.nist.gov/groups/ST/post-quantum-crypto/
https://yro.slashdot.org/story/16/12/21/2334220/…

CrowdStrike has an interesting blog post about how the Russian military is tracking Ukrainian field artillery units by compromising soldiers’ smartphones and tracking them.
https://www.crowdstrike.com//…
https://www.washingtonpost.com/world/…

Signal, the encrypted messaging app I prefer, is being blocked in both Egypt and the UAE. Recently, the Signal team developed a workaround: domain fronting. This isn’t a new trick (Tor uses it too, for example), but it does work.
https://www.wired.com/2016/12/…
https://www.bamsoftware.com/papers/fronting/
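
A minimal sketch of the trick, with hypothetical host names: the censor sees only the outer TLS connection to an innocuous domain, while the Host header inside the encrypted tunnel routes the request to the real service hosted on the same CDN.

    # Domain fronting in one request. Assumes both names are served by
    # the same CDN, which is what makes fronting possible; the domains
    # below are invented for illustration.
    import requests

    r = requests.get(
        "https://innocuous-front.example.com/",           # visible in DNS/SNI
        headers={"Host": "blocked-service.example.com"},  # seen only by the CDN
    )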

Nice article on the 2011 DigiNotar attack and how it changed security practices in the CA industry.
http://www.slate.com/articles/technology/…

A modern photocopier is basically a computer with a scanner and printer attached. This computer has a hard drive, and scans of images are regularly stored on that drive. This means that when a photocopier is thrown away, that hard drive is filled with pages that the machine copied over its lifetime. As you might expect, some of those pages will contain sensitive information. This 2011 report was written by the Inspector General of the National Archives and Records Administration (NARA). It found that the organization did nothing to safeguard its photocopiers.
https://www.schneier.com/blog/archives/2017/01/…
I don’t mean to single this organization out. Pretty much no one thinks about this security threat.
https://www.ftc.gov/tips-advice/business-center/…
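
A minimal sketch of the missing safeguard: overwrite stored images before disposal. This is illustrative only; on flash or wear-leveled drives, per-file overwriting is not sufficient, and real sanitization means full-disk erasure or physical destruction.

    # Overwrite a file's contents with random bytes before deleting it,
    # so the data doesn't linger on the drive after disposal.
    import os

    def shred(path, passes=3):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)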

Someone just registered their company name as ; DROP TABLE "COMPANIES";-- LTD.
https://beta.companieshouse.gov.uk/company/10542519
https://www.reddit.com/r/sysadmin/comments/5l030g/…
Obligatory xkcd comic:
https://xkcd.com/327/
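
For anyone who missed the joke: a program that builds SQL by pasting the company name into a query string will execute the DROP TABLE embedded in the name. A minimal sketch of the bug and the standard fix, using Python’s built-in sqlite3:

    # Parameterized queries treat the name as data, never as SQL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE companies (name TEXT)")

    name = '; DROP TABLE "COMPANIES";-- LTD'

    # Vulnerable pattern (never do this):
    #   conn.executescript(f"INSERT INTO companies VALUES ('{name}')")
    # Safe pattern; the driver escapes the value:
    conn.execute("INSERT INTO companies VALUES (?)", (name,))
    print(conn.execute("SELECT name FROM companies").fetchone())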

The Barbie typewriter doesn’t have many cryptographic capabilities, but it has some.
http://www.cryptomuseum.com/crypto/mehano/barbie/

Good article debunking the myth that requiring people to use their real names on the Internet makes them behave better.
https://blog.coralproject.net/the-real-name-fallacy/

I am co-author on a paper discussing whether elections should be classified as “critical infrastructure” in the US, based on experiences in other countries:
https://papers.ssrn.com/sol3/papers.cfm?…
The paper was speculative, but now it’s official. US election systems have been classified as critical infrastructure. I am tentatively in favor of this, but what really matters is what happens now. What does this mean? What sorts of increased security will election systems get? Will we finally get rid of computerized touch-screen voting?
http://arstechnica.com/tech-policy/2017/01/…
http://thehill.com/s/congress-blog/technology/…
https://www.dhs.gov/news/2017/01/06/…
http://hosted.ap.org/dynamic/stories/U/…

The FDA has issued a report giving medical-device manufacturers guidance on computer and network security. There’s nothing particularly new or interesting; it reads like standard security advice: write secure software, patch bugs, and so on. Note that these are “non-binding recommendations,” so I’m really not sure why they bothered.
http://www.fda.gov/downloads/MedicalDevices/…
Why they bothered:
https://www.schneier.com/blog/archives/2017/01/…

New paper: “A Simple Power Analysis Attack on the Twofish Key Schedule.” This shouldn’t be a surprise; these attacks are devastating if you don’t take steps to mitigate them.
https://arxiv.org/pdf/1611.07109.pdf
The general issue is that if an attacker has physical control of the computer performing the encryption, it is very hard to secure the encryption inside it. I wrote a paper about this back in 1999.
https://www.schneier.com/academic/archives/1999/06/…
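
Power analysis is a hardware measurement, but the root cause generalizes: any secret-dependent behavior (power draw, timing, cache state) leaks. A sketch of the timing variant and its standard mitigation; this illustrates the principle, not the Twofish attack itself.

    import hmac

    def leaky_equal(a, b):
        # Returns at the first mismatch, so running time reveals how many
        # leading bytes of an attacker's guess are correct.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    def constant_time_equal(a, b):
        # Examines every byte regardless of where the mismatch occurs.
        return hmac.compare_digest(a, b)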

President Obama has changed the rules regarding raw intelligence, allowing the NSA to share raw data with the US’s other 16 intelligence agencies.
https://www.nytimes.com/2017/01/12/us/politics/…
Here are the new procedures.
https://www.dni.gov/files/documents/icotr/…
This rule change has been in the works for a while. Here are two blog posts from April discussing the then-proposed changes.
https://www.justsecurity.org/30327/…
https://www.justsecurity.org/30434/…
From a privacy perspective, this feels like a really bad idea to me.

Interesting research: Sebastian Hellmeier, “The Dictator’s Digital Toolkit: Explaining Variation in Internet Filtering in Authoritarian Regimes,” Politics & Policy, 2016 (full paper is behind a paywall).
http://onlinelibrary.wiley.com/doi/10.1111/…


Security Risks of TSA PreCheck

Former TSA Administrator Kip Hawley wrote an op-ed pointing out the security vulnerabilities in the TSA’s PreCheck program:

The first vulnerability in the system is its enrollment process, which seeks to verify an applicant’s identity. We know verification is a challenge: A 2011 Government Accountability Office report on TSA’s system for checking airport workers’ identities concluded that it was “not designed to provide reasonable assurance that only qualified applicants” got approved. It’s not a stretch to believe a reasonably competent terrorist could construct an identity that would pass PreCheck’s front end.

The other step in PreCheck’s “intelligence-driven, risk-based security strategy” is absurd on its face: The absence of negative information about a person doesn’t mean he or she is trustworthy. News reports are filled with stories of people who seemed to be perfectly normal right up to the moment they committed a heinous act. There is no screening algorithm and no database check that can accurately predict human behavior—especially on the scale of millions. It is axiomatic that terrorist organizations recruit operatives who have clean backgrounds and interview well.

None of this is news.

Back in 2004, I wrote:

Imagine you’re a terrorist plotter with half a dozen potential terrorists at your disposal. They all apply for a card, and three get one. Guess which are going on the mission? And they’ll buy round-trip tickets with credit cards and have a “normal” amount of luggage with them.

What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.

The Trusted Traveler program is based on the dangerous myth that terrorists match a particular profile and that we can somehow pick terrorists out of a crowd if we only can identify everyone. That’s simply not true. Most of the 9/11 terrorists were unknown and not on any watch list. Timothy McVeigh was an upstanding US citizen before he blew up the Oklahoma City Federal Building. Palestinian suicide bombers in Israel are normal, nondescript people. Intelligence reports indicate that Al Qaeda is recruiting non-Arab terrorists for US operations.

I wrote much the same thing in 2007:

Background checks are based on the dangerous myth that we can somehow pick terrorists out of a crowd if we could identify everyone. Unfortunately, there isn’t any terrorist profile that prescreening can uncover. Timothy McVeigh could probably have gotten one of these cards. So could have Eric Rudolph, the pipe bomber at the 1996 Olympic Games in Atlanta. There isn’t even a good list of known terrorists to check people against; the government list used by the airlines has been the butt of jokes for years.

And have we forgotten how prevalent identity theft is these days? If you think having a criminal impersonating you to your bank is bad, wait until they start impersonating you to the Transportation Security Administration.

The truth is that whenever you create two paths through security—a high-security path and a low-security path—you have to assume that the bad guys will find a way to exploit the low-security path. It may be counterintuitive, but we are all safer if the people chosen for more thorough screening are truly random and not based on an error-filled database or a cursory background check.

In a companion blog post, Hawley has more details about why the program doesn’t work:

In the sense that PreCheck bars people who were identified by intelligence or law enforcement agencies as possible terrorists, then it was intelligence-driven. But using that standard for PreCheck is ridiculous since those people already get extra screening or are on the No-Fly list. The movie “Patriots Day,” out now, reminds us of the tragic and preventable Boston Marathon bombing. The FBI sent agents to talk to the Tsarnaev brothers and investigate them as possible terror suspects. And cleared them. Even they did not meet the “intelligence-driven” definition used in PreCheck.

The other problem with “intelligence-driven” in the PreCheck context is that intelligence actually tells us the *opposite*; specifically that terrorists pick clean operatives. If TSA uses current intelligence to evaluate risk, it would not be out enrolling everybody they can into pre-9/11 security for everybody not flagged by the security services.

Hawley and I may agree on the problem, but we have completely opposite solutions. The op-ed was too short to include details, but they’re in a companion blog post. Basically, he wants to screen PreCheck passengers more:

In the interests of space, I left out details of what I would suggest as short- and medium-term solutions. Here are a few ideas:

* Immediately scrub the PreCheck enrollees for false identities. That can probably be accomplished best and most quickly by getting permission from members and then using commercial data. If the results show that PreCheck has already been penetrated, the program should be suspended.

* Deploy K-9 teams at PreCheck lanes.

* Use behaviorally trained officers to interact with and check the credentials of PreCheck passengers.

* Use Explosives Trace Detection cotton swabs on PreCheck passengers at a much higher rate. Same with removing shoes.

* Turn on the body scanners and keep them fully utilized.

* Allow liquids to stay in the carry-on since TSA scanners can detect threat liquids.

* Work with the airlines to keep the PreCheck experience positive.

* Work with airports to place PreCheck lanes away from regular checkpoints so as not to diminish lane capacity for non-PreCheck passengers. Rental Car check-in areas could be one alternative. Also, downtown check-in and screening (with secure transport to the airport) is a possibility.

These solutions completely ignore the data from the real-world experiment PreCheck has been. Hawley writes that PreCheck tells us that “terrorists pick clean operatives.” That’s exactly wrong. PreCheck tells us that, basically, there are no terrorists. If 1) it’s an easier way through airport security that terrorists will invariably use, and 2) there have been no instances of terrorists using it in the 10+ years it and its predecessors have been in operation, then the inescapable conclusion is that the threat is minimal. Instead of screening PreCheck passengers more, we should screen everybody else *less*. This is me in 2012: “I think the PreCheck level of airport screening is what everyone should get, and that the no-fly list and the photo ID check add nothing to security.”

I agree with Hawley that we need to overhaul airport security. Me in 2010: “Airport security is the last line of defense, and it’s not a very good one.” We need to recognize that the actual risk is much lower than we fear, and ratchet airport security down accordingly. And then we need to continue to invest in investigation and intelligence: security measures that work regardless of the tactic or target.

Hawley’s op-ed and blog post:
http://www.latimes.com/opinion/op-ed/…
http://kiphawley.typed.com//…

My essays:
https://www.schneier.com/essays/archives/2004/08/…
https://www.schneier.com/essays/archives/2007/01/…
https://www.schneier.com/blog/archives/2012/10/…
http://www.nytimes.com/roomfordebate/2010/11/22/…
https://www.schneier.com/essays/archives/2015/06/…
https://www.schneier.com/essays/archives/2009/11/…


Law Enforcement Access to IoT Data

In the first of what will undoubtedly be a large number of battles between companies that make IoT devices and the police, Amazon is refusing to comply with a warrant demanding data on what its Echo device heard at a crime scene.

The particulars of the case are weird. Amazon’s Echo does not constantly record; it only listens for its name. So it’s unclear that there is any evidence to be turned over. But this general issue isn’t going away. We are all under ubiquitous surveillance, but it is surveillance by the companies that control the Internet-connected devices in our lives. The rules by which police and intelligence agencies get access to that data will come under increasing pressure for change.

http://www.nytimes.com/2016/12/28/business/…
https://www.washingtonpost.com/news/the-switch/wp/…
http://nymag.com/selectall/2016/12/…
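
This hypothetical sketch (not Amazon’s implementation) shows why “it only listens for its name” can mean there is nothing responsive to a warrant: audio is inspected locally and discarded unless the wake word fires.

    # Keep a short rolling buffer; upload audio only on a local wake-word
    # match. Everything else falls out of the buffer and is never stored.
    from collections import deque

    BUFFER_FRAMES = 50  # rolling pre-roll, discarded unless triggered

    def listen(frames, wake_word_detected, upload):
        preroll = deque(maxlen=BUFFER_FRAMES)
        for frame in frames:
            preroll.append(frame)
            if wake_word_detected(frame):
                upload(list(preroll))  # ship only this short window
                preroll.clear()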

A newscaster discussed Amazon’s Echo on the news, causing devices in the same room as tuned-in televisions to order unwanted products.
http://www.cw6sandiego.com/…

This year, the same technology is coming to LG appliances such as refrigerators.
http://arstechnica.com/gadgets/2017/01/…


Schneier News

I’m speaking in Stockholm (via Skype) on Jan 23:
http://teknikochsakerhet.se/cyberforsvarsdagen-2017/

I’m speaking at the World Government Summit in Dubai on Feb 12.
https://worldgovernmentsummit.org/

I’m speaking at the RSA Conference in San Francisco on Feb 14-15.
https://www.rsaconference.com/events/us17

This semester, I am teaching Internet security policy at the Harvard Kennedy School.
https://www.hks.harvard.edu/degrees/…
Syllabus:
https://www.hks.harvard.edu/syllabus/IGA-236.pdf


Class Breaks

There’s a concept from computer security known as a class break. It’s a particular security vulnerability that breaks not just one system, but an entire class of systems. Examples might be a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs that software. Or a vulnerability in Internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet.

It’s a particular way computer systems can fail, exacerbated by the characteristics of computers and software. It only takes one smart person to figure out how to attack the system. Once he does that, he can write software that automates his attack. He can do it over the Internet, so he doesn’t have to be near his victim. He can automate his attack so it works while he sleeps. And then he can pass the ability to someone—or to lots of people—without the skill. This changes the nature of security failures, and completely upends how we need to defend against them.

An example: Picking a mechanical door lock requires both skill and time. Each lock is a new job, and success at one lock doesn’t guarantee success with another of the same design. Electronic door locks, like the ones you now find in hotel rooms, have different vulnerabilities. An attacker can find a flaw in the design that allows him to create a key card that opens every door. If he publishes his attack software, not just the attacker, but anyone can now open every lock. And if those locks are connected to the Internet, attackers could potentially open door locks remotely—they could open every door lock remotely at the same time. That’s a class break.
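
A toy illustration of the lock example, with a hypothetical design that does not describe any real product: every unit derives its unlock code from one factory-wide secret, so a single hardware teardown breaks the entire class.

    # One extracted secret compromises every lock ever shipped. After
    # that, the attack is just software, and anyone can run it.
    import hashlib
    import hmac

    FACTORY_SECRET = b"shared-across-the-whole-product-line"

    def unlock_code(lock_id):
        mac = hmac.new(FACTORY_SECRET, lock_id.encode(), hashlib.sha256)
        return mac.hexdigest()[:8]

    print(unlock_code("room-1412"))  # works for any door, anywhere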

It’s how computer systems fail, but it’s not how we think about failures. We still think about automobile security in terms of individual car thieves manually stealing cars. We don’t think of hackers remotely taking control of cars over the Internet. Or, remotely disabling every car over the Internet. We think about voting fraud as unauthorized individuals trying to vote. We don’t think about a single person or organization remotely manipulating thousands of Internet-connected voting machines.

In a sense, class breaks are not a new concept in risk management. It’s the difference between home burglaries and fires, which happen occasionally to different houses in a neighborhood over the course of the year, and floods and earthquakes, which either happen to everyone in the neighborhood or to no one. Insurance companies can handle both types of risk, but they are inherently different. The increasing computerization of everything is moving us from a burglary/fire risk model to a flood/earthquake model, in which a given threat either affects everyone in town or doesn’t happen at all.

But there’s a key difference between floods/earthquakes and class breaks in computer systems: the former are random natural phenomena, while the latter is human-directed. Floods don’t change their behavior to maximize their damage based on the types of defenses we build. Attackers do that to computer systems. Attackers examine our systems, looking for class breaks. And once one of them finds one, they’ll exploit it again and again until the vulnerability is fixed.

As we move into the world of the Internet of Things, where computers permeate our lives at every level, class breaks will become increasingly important. The combination of automation and action at a distance will give attackers more power and leverage than they have ever had before. Security notions like the precautionary principle—where the potential of harm is so great that we err on the side of not deploying a new technology without proofs of security—will become more important in a world where an attacker can open all of the door locks or hack all of the power plants. It’s not an inherently less secure world, but it’s a differently secure world. It’s a world where driverless cars are much safer than people-driven cars, until suddenly they’re not. We need to build systems that assume the possibility of class breaks—and maintain security despite them.

This essay originally appeared on Edge.org as part of their annual question.
https://www.edge.org/annual-question/2017/response/…

This year it was: “What scientific term or concept ought to be more widely known?”
https://www.edge.org/annual-question/…


A Comment on the Trump Dossier

Imagine that you are someone in the CIA, concerned about the future of America. You have this Russian dossier on Donald Trump, which you have some evidence might be true. The smartest thing you can do is to leak it to the public. By doing so, you are eliminating any leverage Russia has over Trump and probably reducing the effectiveness of any other blackmail material any government might have on Trump. I believe you do this regardless of whether you ultimately believe the document’s findings or not, and regardless of whether you support or oppose Trump. It’s simple game theory.

This document is particularly safe to release. Because it’s not a classified report of the CIA, leaking it is not a crime. And you release it now, before Trump becomes president, because doing so afterwards becomes much more dangerous.

https://www.nytimes.com/2017/01/11/us/politics/…


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient Systems, Inc. See <https://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.

Copyright (c) 2017 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.