Crypto-Gram

January 15, 2020

by Bruce Schneier

Fellow and Lecturer, Harvard Kennedy School

schneier@schneier.com

https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. Security Vulnerabilities in the RCS Texting Protocol
  2. Iranian Attacks on Industrial Control Systems
  3. Attacker Causes Epileptic Seizure over the Internet
  4. Lousy IoT Security
  5. ToTok Is an Emirati Spying Tool
  6. Chinese Hackers Bypassing Two-Factor Authentication
  7. Hacking School Surveillance Systems
  8. Mysterious Drones Are Flying over Colorado
  9. Chrome Extension Stealing Cryptocurrency Keys and Passwords
  10. Mailbox Master Keys
  11. USB Cable Kill Switch for Laptops
  12. New SHA-1 Attack
  13. Police Surveillance Tools from Special Services Group
  14. Artificial Personas and Public Discourse
  15. 5G Security
  16. Upcoming Speaking Engagements

Security Vulnerabilities in the RCS Texting Protocol

[2019.12.16] Interesting research:

SRLabs founder Karsten Nohl, a researcher with a track record of exposing security flaws in telephony systems, argues that RCS is in many ways no better than SS7, the decades-old phone system carriers still use for calling and texting, which has long been known to be vulnerable to interception and spoofing attacks. While using end-to-end encrypted internet-based tools like iMessage and WhatsApp obviates many of those SS7 issues, Nohl says that flawed implementations of RCS make it not much safer than the SMS system it hopes to replace.


Iranian Attacks on Industrial Control Systems

[2019.12.17] New details:

At the CyberwarCon conference in Arlington, Virginia, on Thursday, Microsoft security researcher Ned Moran plans to present new findings from the company’s threat intelligence group that show a shift in the activity of the Iranian hacker group APT33, also known by the names Holmium, Refined Kitten, or Elfin. Microsoft has watched the group carry out so-called password-spraying attacks over the past year that try just a few common passwords across user accounts at tens of thousands of organizations. That’s generally considered a crude and indiscriminate form of hacking. But over the last two months, Microsoft says APT33 has significantly narrowed its password spraying to around 2,000 organizations per month, while increasing the number of accounts targeted at each of those organizations almost tenfold on average.

[…]

The hackers’ motivation—and which industrial control systems they’ve actually breached—remains unclear. Moran speculates that the group is seeking to gain a foothold to carry out cyberattacks with physically disruptive effects. “They’re going after these producers and manufacturers of control systems, but I don’t think they’re the end targets,” says Moran. “They’re trying to find the downstream customer, to find out how they work and who uses them. They’re looking to inflict some pain on someone’s critical infrastructure that makes use of these control systems.”

It’s unclear whether the attackers are causing any actual damage, or just gaining access for some future use.
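
Password spraying is worth spelling out, because it inverts the usual brute-force loop: instead of hammering one account with many passwords (which trips lockout thresholds), the attacker tries one or two common passwords against many accounts, then waits and repeats. A minimal sketch of the pattern; try_login() is a hypothetical stand-in for the target's authentication endpoint:

    # Password spraying: few passwords, many accounts, paced to stay
    # under lockout thresholds. Illustrative sketch only; try_login()
    # is a hypothetical stand-in for the target's endpoint.
    import time

    def try_login(username, password):
        raise NotImplementedError  # hypothetical target endpoint

    def spray(usernames, passwords, pause_seconds=1800):
        hits = []
        for password in passwords:        # outer loop: one password per round
            for username in usernames:    # inner loop: every account
                if try_login(username, password):
                    hits.append((username, password))
            time.sleep(pause_seconds)     # wait out lockout counters
        return hits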


Attacker Causes Epileptic Seizure over the Internet

[2019.12.18] This isn’t a first, but I think it will be the first conviction:

The GIF set off a highly unusual court battle that is expected to equip those in similar circumstances with a new tool for battling threatening trolls and cyberbullies. On Monday, the man who sent Eichenwald the moving image, John Rayne Rivello, was set to appear in a Dallas County district court. A last-minute rescheduling delayed the proceeding until Jan. 31, but Rivello is still expected to plead guilty to aggravated assault. And he may be the first of many.

The Epilepsy Foundation announced on Monday it lodged a sweeping slate of criminal complaints against a legion of copycats who targeted people with epilepsy and sent them an onslaught of strobe GIFs—a frightening phenomenon that unfolded in a short period of time during the organization’s marking of National Epilepsy Awareness Month in November.

[…]

Rivello’s supporters—among them, neo-Nazis and white nationalists, including Richard Spencer—have also argued that the issue is about freedom of speech. But in an amicus brief to the criminal case, the First Amendment Clinic at Duke University School of Law argued Rivello’s actions were not constitutionally protected.

“A brawler who tattoos a message onto his knuckles does not throw every punch with the weight of First Amendment protection behind him,” the brief stated. “Conduct like this does not constitute speech, nor should it. A deliberate attempt to cause physical injury to someone does not come close to the expression which the First Amendment is designed to protect.”

Another article.

EDITED TO ADD (12/19): More articles.

EDITED TO ADD (1/14): There was a similar case in Germany in 2012—that attacker was convicted.


Lousy IoT Security

[2019.12.19] DTEN makes smart screens and whiteboards for videoconferencing systems. Forescout found that their security is terrible:

In total, our researchers discovered five vulnerabilities of four different kinds:

  • Data exposure: PDF files of shared whiteboards (e.g., meeting notes) and other sensitive files (e.g., OTA—over-the-air updates) were stored in a publicly accessible AWS S3 bucket that also lacked TLS encryption (CVE-2019-16270, CVE-2019-16274).
  • Unauthenticated web server: a web server running Android OS on port 8080 discloses all whiteboards stored locally on the device (CVE-2019-16271).
  • Arbitrary code execution: unauthenticated root shell access through Android Debug Bridge (ADB) leads to arbitrary code execution and system administration (CVE-2019-16273).
  • Access to Factory Settings: provides full administrative access and thus a covert ability to capture Windows host data from Android, including the Zoom meeting content (audio, video, screenshare) (CVE-2019-16272).

These aren’t subtle vulnerabilities. These are stupid design decisions made by engineers who had no idea how to create a secure system. And this, in a nutshell, is the problem with the Internet of Things.

From a Wired article:

One issue that jumped out at the researchers: The DTEN system stored notes and annotations written through the whiteboard feature in an Amazon Web Services bucket that was exposed on the open internet. This means that customers could have accessed PDFs of each other’s slides, screenshots, and notes just by changing the numbers in the URL they used to view their own. Or anyone could have remotely nabbed the entire trove of customers’ data. Additionally, DTEN hadn’t set up HTTPS web encryption on the customer web server to protect connections from prying eyes. DTEN fixed both of these issues on October 7. A few weeks later, the company also fixed a similar whiteboard PDF access issue that would have allowed anyone on a company’s network to access all of its stored whiteboard data.

[…]

The researchers also discovered two ways that an attacker on the same network as DTEN devices could manipulate the video conferencing units to monitor all video and audio feeds and, in one case, to take full control. DTEN hardware runs Android primarily, but uses Microsoft Windows for Zoom. The researchers found that they can access a development tool known as “Android Debug Bridge,” either wirelessly or through USB ports or ethernet, to take over a unit. The other bug also relates to exposed Android factory settings. The researchers note that attempting to implement both operating systems creates more opportunities for misconfigurations and exposure. DTEN says that it will push patches for both bugs by the end of the year.
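
The whiteboard exposure is a textbook insecure-direct-object-reference bug: sequentially numbered objects, no authorization check, and (here) no TLS either. A minimal sketch of how such a bucket gets enumerated; the URL pattern is hypothetical, since the real DTEN paths weren't published:

    # Walk sequentially numbered objects in an unauthenticated bucket.
    # Hypothetical URL pattern; the point is that only the number
    # separates one customer's files from another's.
    import requests

    BASE = "http://example-bucket.s3.amazonaws.com/whiteboards/{n}.pdf"

    for n in range(1, 100):
        url = BASE.format(n=n)
        resp = requests.get(url, timeout=10)
        if resp.status_code == 200:
            print(f"exposed: {url} ({len(resp.content)} bytes)")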

Boing Boing article.


ToTok Is an Emirati Spying Tool

[2019.12.24] The smartphone messaging app ToTok is actually an Emirati spying tool:

But the service, ToTok, is actually a spying tool, according to American officials familiar with a classified intelligence assessment and a New York Times investigation into the app and its developers. It is used by the government of the United Arab Emirates to try to track every conversation, movement, relationship, appointment, sound and image of those who install it on their phones.

ToTok, introduced only months ago, was downloaded millions of times from the Apple and Google app stores by users throughout the Middle East, Europe, Asia, Africa and North America. While the majority of its users are in the Emirates, ToTok surged to become one of the most downloaded social apps in the United States last week, according to app rankings and App Annie, a research firm.

Apple and Google have removed it from their app stores. If you have it on your phone, delete it now.


Chinese Hackers Bypassing Two-Factor Authentication

[2019.12.26] Interesting story of how a Chinese state-sponsored hacking group is bypassing the RSA SecurID two-factor authentication system.

How they did it remains unclear, although the Fox-IT team has a theory. They said APT20 stole an RSA SecurID software token from a hacked system, which the Chinese actor then used on its computers to generate valid one-time codes and bypass 2FA at will.

Normally, this wouldn’t be possible. To use one of these software tokens, the user would need to connect a physical (hardware) device to their computer. The device and the software token would then generate a valid 2FA code. If the device was missing, the RSA SecurID software would generate an error.

The Fox-IT team explains how hackers might have gone around this issue:

The software token is generated for a specific system, but of course this system-specific value can easily be retrieved by the actor once they have access to the victim’s system.

As it turns out, the actor does not actually need to go through the trouble of obtaining the victim’s system-specific value, because this value is only checked when importing the SecurID Token Seed, and has no relation to the seed used to generate actual two-factor tokens. This means the actor can simply patch the check that verifies whether the imported soft token was generated for this system, and does not need to bother with stealing the system-specific value at all.

In short, all the actor has to do to make use of the two-factor authentication codes is to steal an RSA SecurID Software Token and patch a single instruction, which results in the generation of valid tokens.
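
RSA’s actual token algorithm is proprietary, but the general point holds for any seed-based one-time-password scheme: the codes are a pure function of the seed and the clock, so anyone holding the seed can generate valid codes on any machine. A sketch using the standard RFC 6238 TOTP construction as a stand-in, with the device-binding check shown as the single branch an attacker would patch out:

    # One-time codes are a function of (seed, time) only. RFC 6238
    # TOTP shown here as a stand-in for RSA's proprietary algorithm.
    import hmac, hashlib, struct, time

    def totp(seed, step=30, digits=6):
        counter = int(time.time() // step)
        digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    def generate_code(seed, bound_to_this_machine):
        # Device binding is a separate check, not an input to the code
        # itself -- which is why patching this one branch defeats it.
        if not bound_to_this_machine:
            raise RuntimeError("token not generated for this system")
        return totp(seed)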


Hacking School Surveillance Systems

[2019.12.30] Lance Vick is suggesting that students hack their schools’ surveillance systems.

“This is an ethical minefield that I feel students would be well within their rights to challenge, and if needed, undermine,” he said.

Of course, there are a lot more laws in place against this sort of thing than there were in—say—the 1980s, but it’s still worth thinking about.

EDITED TO ADD (1/2): Another essay on the topic.


Mysterious Drones Are Flying over Colorado

[2020.01.02] No one knows who they belong to. (Well, of course someone knows. And my guess is that it’s likely that we will know soon.)

EDITED TO ADD (1/3): Another article.


Chrome Extension Stealing Cryptocurrency Keys and Passwords

[2020.01.03] A malicious Chrome extension surreptitiously steals Ethereum keys and passwords:

According to Denley, the extension is dangerous to users in two ways. First, any funds (ETH coins and ERC20-based tokens) managed directly inside the extension are at risk.

Denley says that the extension sends the private keys of all wallets created or managed through its interface to a third-party website located at erc20wallet[.]tk.

Second, the extension also actively injects malicious JavaScript code when users navigate to five well-known and popular cryptocurrency management platforms. This code steals login credentials and private keys, data that is then sent to the same erc20wallet[.]tk third-party website.

Another example of how blockchain requires many single points of trust in order to be secure.


Mailbox Master Keys

[2020.01.06] Here’s a physical-world example of why master keys are a bad idea. It’s a video of two postal thieves using a master key to open apartment building mailboxes.

Changing the master key for physical mailboxes is a logistical nightmare, which is why this problem won’t be fixed anytime soon.


USB Cable Kill Switch for Laptops

[2020.01.07] BusKill is designed to wipe your laptop (Linux only) if it is snatched from you in a public place:

The idea is to connect the BusKill cable to your Linux laptop on one end, and to your belt on the other end. When someone yanks your laptop from your lap or table, the USB cable disconnects from the laptop and triggers a udev script [1, 2, 3] that executes a series of preset operations.

These can be something as simple as activating your screensaver or shutting down your device (forcing the thief to bypass your laptop’s authentication mechanism before accessing any data), but the script can also be configured to wipe the device or delete certain folders (to prevent thieves from retrieving any sensitive data or accessing secure business backends).
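
BusKill is wired up with udev rules, but the event loop is easy to see in Python. A rough sketch using the pyudev library; the vendor and model IDs are placeholders for whatever cable you tether with:

    # React to removal of a specific USB device -- a rough Python
    # equivalent of BusKill's udev-rule trigger. Vendor/model IDs
    # are placeholders.
    import subprocess
    import pyudev

    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem="usb")

    for device in iter(monitor.poll, None):
        if (device.action == "remove"
                and device.get("ID_VENDOR_ID") == "1234"     # placeholder
                and device.get("ID_MODEL_ID") == "5678"):    # placeholder
            # Mildest response: lock the session. A harsher script
            # could shut down or wipe instead.
            subprocess.run(["loginctl", "lock-session"])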

Clever idea, but I—and my guess is most people—would be much more likely to stand up from the table, forget that the cable was attached, and yank it out. My problem with pretty much all systems like this is the likelihood of false alarms.

Slashdot article.

EDITED TO ADD (1/14): There are Bluetooth devices that will automatically encrypt a laptop when the device isn’t in proximity. That’s a much better interface than a cable.


New SHA-1 Attack

[2020.01.08] There’s a new, practical, collision attack against SHA-1:

In this paper, we report the first practical implementation of this attack, and its impact on real-world security with a PGP/GnuPG impersonation attack. We managed to significantly reduce the complexity of collision attacks against SHA-1: on an Nvidia GTX 970, identical-prefix collisions can now be computed with a complexity of 2^61.2 rather than 2^64.7, and chosen-prefix collisions with a complexity of 2^63.4 rather than 2^67.1. When renting cheap GPUs, this translates to a cost of 11k US$ for a collision, and 45k US$ for a chosen-prefix collision, within the means of academic researchers. Our actual attack required two months of computations using 900 Nvidia GTX 1060 GPUs (we paid 75k US$ because GPU prices were higher, and we wasted some time preparing the attack).
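
Those figures hang together on the back of an envelope, assuming roughly the GPU throughput and rental prices the authors report:

    # Sanity check of the paper's chosen-prefix numbers. Throughput
    # and price are rough assumptions, not measured values.
    hashes = 2 ** 63.4            # chosen-prefix complexity
    rate = 2.5e9                  # ~2.5 GH/s of SHA-1 per GTX 1060 (assumed)
    gpu_seconds = hashes / rate   # total GPU time needed

    print(f"{gpu_seconds / (900 * 86400 * 30):.1f} months on 900 GPUs")  # ~2.1
    print(f"${gpu_seconds / 3600 * 0.035:,.0f} at $0.035/GPU-hour")      # ~$47,000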

It has practical applications:

We chose the PGP/GnuPG Web of Trust as demonstration of our chosen-prefix collision attack against SHA-1. The Web of Trust is a trust model used for PGP that relies on users signing each other’s identity certificate, instead of using a central PKI. For compatibility reasons the legacy branch of GnuPG (version 1.4) still uses SHA-1 by default for identity certification.

Using our SHA-1 chosen-prefix collision, we have created two PGP keys with different UserIDs and colliding certificates: key B is a legitimate key for Bob (to be signed by the Web of Trust), but the signature can be transferred to key A which is a forged key with Alice’s ID. The signature will still be valid because of the collision, but Bob controls key A with the name of Alice, and signed by a third party. Therefore, he can impersonate Alice and sign any document in her name.

From a news article:

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from being fully deprecated. It’s still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to the PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world’s most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.
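
Git’s dependence is easy to see concretely: an object’s identity is nothing but the SHA-1 of a short header plus its contents, so two colliding files would get the same object ID. Reproducing a blob ID in a few lines:

    # How git names a blob: sha1("blob <size>\0" + contents). The
    # output matches `git hash-object` on the same bytes.
    import hashlib

    def git_blob_id(contents):
        header = b"blob %d\x00" % len(contents)
        return hashlib.sha1(header + contents).hexdigest()

    print(git_blob_id(b"hello world\n"))
    # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad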


Police Surveillance Tools from Special Services Group

[2020.01.10] Special Services Group, a company that sells surveillance tools to the FBI, DEA, ICE, and other US government agencies, has had its secret sales brochure published. Motherboard received the brochure as part of a FOIA request to the Irvine Police Department in California.

“The Tombstone Cam is our newest video concealment offering the ability to conduct remote surveillance operations from cemeteries,” one section of the Black Book reads. The device can also capture audio, its battery can last for two days, and “the Tombstone Cam is fully portable and can be easily moved from location to location as necessary,” the brochure adds. Another product is a video and audio capturing device that looks like an alarm clock, suitable for “hotel room stings,” and other cameras are designed to appear like small tree trunks and rocks, the brochure reads.

The “Shop-Vac Covert DVR Recording System” is essentially a camera and 1TB hard drive hidden inside a vacuum cleaner. “An AC power connector is available for long-term deployments, and DC power options can be connected for mobile deployments also,” the brochure reads. The description doesn’t say whether the vacuum cleaner itself works.

[…]

One of the company’s “Rapid Vehicle Deployment Kits” includes a camera hidden inside a baby car seat. “The system is fully portable, so you are not restricted to the same drop car for each mission,” the description adds.

[…]

The so-called “K-MIC In-mouth Microphone & Speaker Set” is a tiny Bluetooth device that sits on a user’s teeth and allows them to “communicate hands-free in crowded, noisy surroundings” with “near-zero visual indications,” the Black Book adds.

Other products include more traditional surveillance cameras and lenses as well as tools for surreptitiously gaining entry to buildings. The “Phantom RFID Exploitation Toolkit” lets a user clone an access card or fob, and the so-called “Shadow” product can “covertly provide the user with PIN code to an alarm panel,” the brochure reads.

The Motherboard article also reprints the scary emails Motherboard received from Special Services Group, when asked for comment. Of course, Motherboard published the information anyway.


Artificial Personas and Public Discourse

[2020.01.13] Presidential campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial intelligence-driven text generation and social media chatbots. These computer-generated “people” will drown out actual human discussions on the Internet.

Text-generation software is already good enough to fool most people most of the time. It’s writing news stories, particularly in sports and finance. It’s talking with customers on merchant websites. It’s writing convincing op-eds on topics in the news (though there are limitations). And it’s being used to bulk up “pink-slime journalism”—websites meant to appear like legitimate local news outlets but that publish propaganda instead.

There’s a record of algorithmic content pretending to be from individuals, as well. In 2017, the Federal Communications Commission had an online public-commenting period for its plans to repeal net neutrality. A staggering 22 million comments were received. Many of them—maybe half—were fake, using stolen identities. These comments were also crude; 1.3 million were generated from the same template, with some words altered to make them appear unique. They didn’t stand up to even cursory scrutiny.
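
Generation that crude is a few lines of code, and its output is just as easy to cluster, which is why those comments failed scrutiny. A toy version, with a made-up template:

    # Template-plus-synonym generation of the crude 2017 variety.
    # Every output shares one skeleton, which is what makes the
    # campaign trivial to detect. The template text is invented.
    import random

    TEMPLATE = "I {adv} {verb} the plan to {action} net neutrality."
    SLOTS = {
        "adv": ["strongly", "firmly", "absolutely"],
        "verb": ["oppose", "object to", "reject"],
        "action": ["repeal", "roll back", "undo"],
    }

    def fake_comment():
        return TEMPLATE.format(**{k: random.choice(v) for k, v in SLOTS.items()})

    print(fake_comment())
    # Only 3 * 3 * 3 = 27 distinct outputs are possible: 1.3 million
    # comments collapse into a couple dozen clusters.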

These efforts will only get more sophisticated. In a recent experiment, Harvard senior Max Weiss used a text-generation program to create 1,000 comments in response to a government call on a Medicaid issue. These comments were all unique, and sounded like real people advocating for a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. This being research, Weiss subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. The next group to try this won’t be so honorable.

Chatbots have been skewing social-media discussions for years. About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote. An Oxford Internet Institute report from last year found evidence of bots being used to spread propaganda in 50 countries. These tended to be simple programs mindlessly repeating slogans: a quarter million pro-Saudi “We all have trust in Mohammed bin Salman” tweets following the 2018 murder of Jamal Khashoggi, for example. Detecting many bots with a few followers each is harder than detecting a few bots with lots of followers. And measuring the effectiveness of these bots is difficult. The best analyses indicate that they did not affect the 2016 US presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos—sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people, based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.
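
The “patterns of posting” that give today’s bots away can be embarrassingly simple; machine-regular timing is the crudest tell. A toy detector built on that one signal (real systems combine many more):

    # Toy heuristic: crude bots post on schedulers, so their
    # inter-post gaps are near-constant. Flag low relative spread.
    import statistics

    def looks_scheduled(post_times, max_rel_spread=0.05):
        gaps = [b - a for a, b in zip(post_times, post_times[1:])]
        if len(gaps) < 2:
            return False
        return statistics.stdev(gaps) / statistics.mean(gaps) < max_rel_spread

    print(looks_scheduled([0, 3600, 7200, 10800, 14400]))  # True: hourly
    print(looks_scheduled([0, 1800, 7300, 9000, 16200]))   # False: ragged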

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinizing them. They will be able to pose as individuals on social media and send personalized texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they’ll be able to drown out any actual debate on the Internet. Not just on social media, but everywhere there’s commentary.

Maybe these persona bots will be controlled by foreign actors. Maybe it’ll be domestic political groups. Maybe it’ll be the candidates themselves. Most likely, it’ll be everybody. The most important lesson from the 2016 election about misinformation isn’t that misinformation occurred; it is how cheap and easy misinforming people was. Future technological improvements will make it all even more affordable.

Our future will consist of boisterous political debate, mostly bots arguing with other bots. This is not what we think of when we laud the marketplace of ideas, or any democratic political process. Democracy requires two things to function properly: information and agency. Artificial personas can starve people of both.

Solutions are hard to imagine. We can regulate the use of bots—a proposed California law would require bots to identify themselves—but that is effective only against legitimate influence campaigns, such as advertising. Surreptitious influence operations will be much harder to detect. The most obvious defense is to develop and standardize better authentication methods. If social networks verify that an actual person is behind each account, then they can better weed out fake personas. But fake accounts are already regularly created for real people without their knowledge or consent, and anonymous speech is essential for robust political debate, especially when speakers are from disadvantaged or marginalized communities. We don’t have an authentication system that both protects privacy and scales to the billions of users.

We can hope that our ability to identify artificial personas keeps up with our ability to disguise them. If the arms race between deep fakes and deep-fake detectors is any guide, that’ll be hard as well. The technologies of obfuscation always seem one step ahead of the technologies of detection. And artificial personas will be designed to act exactly like real people.

In the end, any solutions have to be nontechnical. We have to recognize the limitations of online political conversation, and again prioritize face-to-face interactions. These are harder to automate, and we know the people we’re talking with are actual people. This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.

Misinformation efforts are now common around the globe, conducted in more than 70 countries. This is the normal way to push propaganda in countries with authoritarian leanings, and it’s becoming the way to run a political campaign, for either a candidate or an issue.

Artificial personas are the future of propaganda. And while they may not be effective in tilting debate to one side or another, they easily drown out debate entirely. We don’t know the effect of that noise on democracy, only that it’ll be pernicious, and that it’s inevitable.

This essay previously appeared in TheAtlantic.com.

EDITED TO ADD: Jamie Susskind wrote a similar essay.


5G Security

[2020.01.14] The security risks inherent in Chinese-made 5G networking equipment are easy to understand. Because the companies that make the equipment are subservient to the Chinese government, they could be forced to include backdoors in the hardware or software to give Beijing remote access. Eavesdropping is also a risk, although efforts to listen in would almost certainly be detectable. More insidious is the possibility that Beijing could use its access to degrade or disrupt communications services in the event of a larger geopolitical conflict. Since the internet, especially the “internet of things,” is expected to rely heavily on 5G infrastructure, potential Chinese infiltration is a serious national security threat.

But keeping untrusted companies like Huawei out of Western infrastructure isn’t enough to secure 5G. Neither is banning Chinese microchips, software, or programmers. Security vulnerabilities in the standards—the protocols and software for 5G—ensure that vulnerabilities will remain, regardless of who provides the hardware and software. These insecurities are a result of market forces that prioritize costs over security and of governments, including the United States, that want to preserve the option of surveillance in 5G networks. If the United States is serious about tackling the national security threats related to an insecure 5G network, it needs to rethink the extent to which it values corporate profits and government espionage over security.

To be sure, there are significant security improvements in 5G over 4G—in encryption, authentication, integrity protection, privacy, and network availability. But the enhancements aren’t enough.

The 5G security problems are threefold. First, the standards are simply too complex to implement securely. This is true for all software, but the 5G protocols offer particular difficulties. Because of how it is designed, the system blurs the wireless portion of the network connecting phones with base stations and the core portion that routes data around the world. Additionally, much of the network is virtualized, meaning that it will rely on software running on dynamically configurable hardware. This design dramatically increases the points vulnerable to attack, as does the expected massive increase in both things connected to the network and the data flying about it.

Second, there’s so much backward compatibility built into the 5G network that older vulnerabilities remain. 5G is an evolution of the decade-old 4G network, and most networks will mix generations. Without the ability to do a clean break from 4G to 5G, it will simply be impossible to improve security in some areas. Attackers may be able to force 5G systems to use more vulnerable 4G protocols, for example, and 5G networks will inherit many existing problems.

Third, the 5G standards committees missed many opportunities to improve security. Many of the new security features in 5G are optional, and network operators can choose not to implement them. The same happened with 4G; operators even ignored security features defined as mandatory in the standard because implementing them was expensive. But even worse, for 5G, development, performance, cost, and time to market were all prioritized over security, which was treated as an afterthought.

Already problems are being discovered. In November 2019, researchers published vulnerabilities that allow 5G users to be tracked in real time, be sent fake emergency alerts, or be disconnected from the 5G network altogether. And this wasn’t the first reporting to find issues in 5G protocols and implementations.

Chinese, Iranians, North Koreans, and Russians have been breaking into U.S. networks for years without having any control over the hardware, the software, or the companies that produce the devices. (And the U.S. National Security Agency, or NSA, has been breaking into foreign networks for years without having to coerce companies into deliberately adding backdoors.) Nothing in 5G prevents these activities from continuing, even increasing, in the future.

Solutions are few and far between and not very satisfying. It’s really too late to secure 5G networks. Susan Gordon, then-U.S. principal deputy director of national intelligence, had it right when she said last March: “You have to presume a dirty network.” Indeed, the United States needs to accept 5G’s insecurities and build secure systems on top of it. In some cases, doing so isn’t hard: Adding encryption to an iPhone or a messaging system like WhatsApp provides security from eavesdropping, and distributed protocols provide security from disruption—regardless of how insecure the network they operate on is. In other cases, it’s impossible. If your smartphone is vulnerable to a downloaded exploit, it doesn’t matter how secure the networking protocols are. Often, the task will be somewhere in between these two extremes.
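
This is just the end-to-end argument applied to 5G: encrypt and authenticate at the endpoints and treat everything in between as hostile. A minimal sketch with the PyNaCl library (a libsodium wrapper), standing in for whatever the real application-layer protocol is:

    # End-to-end encryption over an untrusted transport: the network,
    # 5G or otherwise, sees only ciphertext it can neither read nor
    # alter, because the Box authenticates as well as encrypts.
    from nacl.public import PrivateKey, Box

    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")
    assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at noon"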

5G security is just one of the many areas in which near-term corporate profits prevailed against broader social good. In a capitalist free market economy, the only solution is to regulate companies, and the United States has not shown any serious appetite for that.

What’s more, U.S. intelligence agencies like the NSA rely on inadvertent insecurities for their worldwide data collection efforts, and law enforcement agencies like the FBI have even tried to introduce new ones to make their own data collection efforts easier. Again, near-term self-interest has so far triumphed over society’s long-term best interests.

In turn, rather than mustering a major effort to fix 5G, what’s most likely to happen is that the United States will muddle along with the problems the network has, as it has done for decades. Maybe things will be different with 6G, which is starting to be discussed in technical standards committees. The U.S. House of Representatives just passed a bill directing the State Department to participate in the international standards-setting process so that it is not just run by telecommunications operators and more interested countries, but there is no chance of that measure becoming law.

The geopolitics of 5G are complicated, involving a lot more than security. China is subsidizing the purchase of its companies’ networking equipment in countries around the world. The technology will quickly become critical national infrastructure, and security problems will become life-threatening. Both criminal attacks and government cyber-operations will become more common and more damaging. Eventually, Washington will have to do something. That something will be difficult and expensive—let’s hope it won’t also be too late.

This essay previously appeared in Foreign Policy.


Upcoming Speaking Engagements

[2020.01.14] This is a current list of where and when I am scheduled to speak:

  • I’m speaking at Indiana University Bloomington on January 30, 2020.
  • I’ll be at RSA Conference 2020 in San Francisco. On Wednesday, February 26, at 2:50 PM, I’ll be part of a panel on “How to Reduce Supply Chain Risk: Lessons from Efforts to Block Huawei.” On Thursday, February 27, at 9:20 AM, I’m giving a keynote on “Hacking Society.”
  • I’m speaking at SecIT by Heise in Hannover, Germany on March 26, 2020.

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org.

Copyright © 2020 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.