Crypto-Gram

November 15, 2019

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. Cracking the Passwords of Early Internet Pioneers
  2. Using Machine Learning to Detect IP Hijacking
  3. Adding a Hardware Backdoor to a Networked Computer
  4. Why Technologists Need to Get Involved in Public Policy
  5. Details of the Olympic Destroyer APT
  6. Calculating the Benefits of the Advanced Encryption Standard
  7. Public Voice Launches Petition for an International Moratorium on Using Facial Recognition for Mass Surveillance
  8. NordVPN Breached
  9. Mapping Security and Privacy Research across the Decades
  10. Dark Web Site Taken Down without Breaking Encryption
  11. Former FBI General Counsel Jim Baker Chooses Encryption Over Backdoors
  12. ICT Supply-Chain Security
  13. WhatsApp Sues NSO Group
  14. A Broken Random Number Generator in AMD Microcode
  15. Resources for Measuring Cybersecurity
  16. Homemade TEMPEST Receiver
  17. Obfuscation as a Privacy Tool
  18. Details of an Airbnb Fraud
  19. Eavesdropping on SMS Messages inside Telco Networks
  20. xHelper Malware for Android
  21. Fooling Voice Assistants with Lasers
  22. Identifying and Arresting Ransomware Criminals
  23. NTSB Investigation of Fatal Driverless Car Accident
  24. Technology and Policymakers
  25. Upcoming Speaking Engagements

Cracking the Passwords of Early Internet Pioneers

[2019.10.15] Lots of them weren’t very good:

Unix co-inventor Dennis Ritchie, for instance, used “dmac” (his middle name was MacAlistair); Stephen R. Bourne, creator of the Bourne shell command line interpreter, chose “bourne”; Eric Schmidt, an early developer of Unix software and former executive chairman of Google parent company Alphabet, relied on “wendy!!!” (the name of his wife); and Stuart Feldman, author of Unix automation tool make and the first Fortran 77 compiler, used “axolotl” (the name of a Mexican salamander).

Weakest of all was the password for Unix contributor Brian W. Kernighan: “/.,/.,” representing a three-character string repeated twice using adjacent keys on a QWERTY keyboard. (None of the passwords included the quotation marks.)

I don’t remember any of my early passwords, but they probably weren’t much better.
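
To see why such passwords were doomed, here is a toy brute-force sketch. The leaked hashes reportedly used the old DES-based crypt(3); this sketch substitutes SHA-256 purely for portability, since the point is the search-space arithmetic: four lowercase letters is only 26^4 ≈ 457,000 candidates.

```python
import hashlib
import itertools
import string

def hash_pw(pw):
    # Stand-in for the DES-based crypt(3) those old systems used;
    # SHA-256 here only to keep the sketch self-contained.
    return hashlib.sha256(pw.encode()).hexdigest()

def brute_force(target, length=4):
    # Try every lowercase string of the given length: 26^4 = 456,976 candidates,
    # which a modern machine exhausts in well under a second.
    for candidate in itertools.product(string.ascii_lowercase, repeat=length):
        pw = "".join(candidate)
        if hash_pw(pw) == target:
            return pw
    return None

print(brute_force(hash_pw("dmac")))  # Ritchie's four-letter password falls instantly
```

Against the genuinely slow-by-design hashes of the era this took 1970s hardware much longer, but the keyspace itself was always tiny.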


Using Machine Learning to Detect IP Hijacking

[2019.10.17] This is interesting research:

In a BGP hijack, a malicious actor convinces nearby networks that the best path to reach a specific IP address is through their network. That’s unfortunately not very hard to do, since BGP itself doesn’t have any security procedures for validating that a message is actually coming from the place it says it’s coming from.

[…]

To better pinpoint serial attacks, the group first pulled data from several years’ worth of network operator mailing lists, as well as historical BGP data taken every five minutes from the global routing table. From that, they observed particular qualities of malicious actors and then trained a machine-learning model to automatically identify such behaviors.

The system flagged networks that had several key characteristics, particularly with respect to the nature of the specific blocks of IP addresses they use:

  • Volatile changes in activity: Hijackers’ address blocks seem to disappear much faster than those of legitimate networks. The average duration of a flagged network’s prefix was under 50 days, compared to almost two years for legitimate networks.
  • Multiple address blocks: Serial hijackers tend to advertise many more blocks of IP addresses, also known as “network prefixes.”
  • IP addresses in multiple countries: Most networks don’t have foreign IP addresses. In contrast, the address blocks that serial hijackers advertised were much more likely to be registered in different countries and continents.
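
The three flagged characteristics can be mocked up as a toy score. The 50-day threshold comes from the quoted description; the other cutoffs and all the field names are invented for illustration, and this is not the paper's actual machine-learning model:

```python
# Hypothetical heuristic mirroring the three features described above.
def hijacker_score(prefixes):
    """prefixes: list of dicts with 'duration_days' and 'country' per advertised block."""
    avg_duration = sum(p["duration_days"] for p in prefixes) / len(prefixes)
    n_blocks = len(prefixes)
    n_countries = len({p["country"] for p in prefixes})

    score = 0
    if avg_duration < 50:   # volatile: flagged prefixes averaged under 50 days
        score += 1
    if n_blocks > 10:       # advertises many network prefixes (cutoff invented)
        score += 1
    if n_countries > 1:     # registrations span countries and continents
        score += 1
    return score  # 0 = looks legitimate, 3 = matches all three hijacker traits

suspect = [{"duration_days": 12, "country": c} for c in ("US", "RU", "BR") for _ in range(5)]
print(hijacker_score(suspect))  # 3
```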

Note that this is much more likely to detect criminal attacks than nation-state activities. But it’s still good work.

Academic paper.


Adding a Hardware Backdoor to a Networked Computer

[2019.10.18] Interesting proof of concept:

At the CS3sthlm security conference later this month, security researcher Monta Elkins will show how he created a proof-of-concept version of that hardware hack in his basement. He intends to demonstrate just how easily spies, criminals, or saboteurs with even minimal skills, working on a shoestring budget, can plant a chip in enterprise IT equipment to offer themselves stealthy backdoor access…. With only a $150 hot-air soldering tool, a $40 microscope, and some $2 chips ordered online, Elkins was able to alter a Cisco firewall in a way that he says most IT admins likely wouldn’t notice, yet would give a remote attacker deep control.


Why Technologists Need to Get Involved in Public Policy

[2019.10.18] Last month, I gave a 15-minute talk in London titled: “Why technologists need to get involved in public policy.”

In it, I try to make the case for public-interest technologists. (I also maintain a public-interest tech resources page, which has pretty much everything I can find in this space. If I’m missing something, please let me know.)

Boing Boing post.

EDITED TO ADD (10/29): Twitter summary.


Details of the Olympic Destroyer APT

[2019.10.21] Interesting details on Olympic Destroyer, the nation-state cyberattack against the 2018 Winter Olympic Games in South Korea. Wired’s Andy Greenberg presents evidence that the perpetrator was Russia, and not North Korea or China.

EDITED TO ADD (11/13): Attribution to Russia is not new.


Calculating the Benefits of the Advanced Encryption Standard

[2019.10.22] NIST has completed a study—it was published last year, but I just saw it recently—calculating the costs and benefits of the Advanced Encryption Standard.

From the conclusion:

The result of performing that operation on the series of cumulated benefits extrapolated for the 169 survey respondents finds that present value of benefits from today’s perspective is approximately $8.9 billion. On the other hand, the present value of NIST’s costs from today’s perspective is $127 million. Thus, the NPV from today’s perspective is $8,772,000,000; the B/C ratio is therefore 70.2/1; and a measure (explained in detail in Section 6.1) of the IRR for the alternative investment perspective is 31%; all are indicators of a substantial economic impact.

Extending the approach of looking back from 2017 to the larger national economy required the selection of economic sectors best represented by the 169 survey respondents. The economic sectors represented by ten or more survey respondents include the following: agriculture; construction; manufacturing; retail trade; transportation and warehousing; information; real estate rental and leasing; professional, scientific, and technical services; management services; waste management; educational services; and arts and entertainment. Looking at the present value of benefits and costs from 2017’s perspective for these economic sectors finds that the present value of benefits rises to approximately $251 billion while the present value of NIST’s costs from today’s perspective remains the same at $127 million. Therefore, the NPV of the benefits of the AES program to the national economy from today’s perspective is $250,473,200,000; the B/C ratio is roughly 1976/1; and the appropriate, alternative (explained in Section 6.1) IRR and investing proceeds at the social rate of return is 53.6%.
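
The headline arithmetic in the quoted conclusion can be sanity-checked from the rounded figures alone; the report's unrounded inputs account for the small residual differences:

```python
# Reproducing the report's arithmetic from the rounded figures quoted above.
benefits = 8.9e9   # present value of benefits, survey extrapolation
costs = 127e6      # present value of NIST's costs

npv = benefits - costs
bc_ratio = benefits / costs

print(f"NPV ≈ ${npv/1e9:.3f}B")   # report: $8,772,000,000 (from unrounded inputs)
print(f"B/C ≈ {bc_ratio:.1f}:1")  # report: 70.2/1

# Same exercise for the national-economy extrapolation:
benefits_national = 251e9
print(f"B/C ≈ {benefits_national/costs:.0f}:1")  # report: roughly 1976/1
```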

The report contains lots of facts and figures relevant to crypto policy debates, including the chaotic nature of crypto markets in the mid-1990s, the number of approved devices and libraries of various kinds since then, other standards that invoke AES, and so on.

There’s a lot to argue with about the methodology and the assumptions. I don’t know if I buy that the benefits of AES to the economy are in the billions of dollars, mostly because we in the cryptographic community would have come up with alternative algorithms to triple-DES that would have been accepted and used. Still, I like seeing this kind of analysis about security infrastructure. Security is an enabling technology; it doesn’t do anything by itself, but instead allows all sorts of things to be done. And I certainly agree that the benefits of a standardized encryption algorithm that we all trust and use outweigh the cost by orders of magnitude.

And this isn’t the first time NIST has conducted economic impact studies. It released a study of the economic impact of DES in 2001.


Public Voice Launches Petition for an International Moratorium on Using Facial Recognition for Mass Surveillance

[2019.10.22] Coming out of the Privacy Commissioners’ Conference in Albania, Public Voice is launching a petition for an international moratorium on using facial recognition software for mass surveillance.

You can sign on as an individual or an organization. I did. You should as well. No, I don’t think that countries will magically adopt this moratorium. But it’s important for us all to register our dissent.


NordVPN Breached

[2019.10.23] There was a successful attack against NordVPN:

Based on the command log, another of the leaked secret keys appeared to secure a private certificate authority that NordVPN used to issue digital certificates. Those certificates might be issued for other servers in NordVPN’s network or for a variety of other sensitive purposes. The name of the third certificate suggested it could also have been used for many different sensitive purposes, including securing the server that was compromised in the breach.

The revelations came as evidence surfaced suggesting that two rival VPN services, TorGuard and VikingVPN, also experienced breaches that leaked encryption keys. In a statement, TorGuard said a secret key for a transport layer security certificate for *.torguardvpnaccess.com was stolen. The theft happened in a 2017 server breach. The stolen data related to a squid proxy certificate.

TorGuard officials said on Twitter that the private key was not on the affected server and that attackers “could do nothing with those keys.” Monday’s statement went on to say TorGuard didn’t remove the compromised server until early 2018. TorGuard also said it learned of VPN breaches last May, “and in a related development we filed a legal complaint against NordVPN.”

The breach happened nineteen months ago, but the company is only just disclosing it to the public. We don’t know exactly what was stolen and how it affects VPN security. More details are needed.

VPNs are a shadowy world. We use them to protect our Internet traffic when we’re on a network we don’t trust, but we’re forced to trust the VPN instead. Recommendations are hard. NordVPN’s website says that the company is based in Panama. Do we have any reason to trust it at all?

I’m curious what VPNs others use, and why they should be believed to be trustworthy.


Mapping Security and Privacy Research across the Decades

[2019.10.24] This is really interesting: “A Data-Driven Reflection on 36 Years of Security and Privacy Research,” by Aniqua Baset and Tamara Denning:

Abstract: Meta-research—research about research—allows us, as a community, to examine trends in our research and make informed decisions regarding the course of our future research activities. Additionally, overviews of past research are particularly useful for researchers or conferences new to the field. In this work we use topic modeling to identify topics within the field of security and privacy research using the publications of the IEEE Symposium on Security & Privacy (1980-2015), the ACM Conference on Computer and Communications Security (1993-2015), the USENIX Security Symposium (1993-2015), and the Network and Distributed System Security Symposium (1997-2015). We analyze and present data via the perspective of topic trends and authorship. We believe our work serves to contextualize the academic field of computer security and privacy research via one of the first data-driven analyses. An interactive visualization of the topics and corresponding publications is available at https://secprivmeta.net.

I like seeing how our field has morphed over the years.


Dark Web Site Taken Down without Breaking Encryption

[2019.10.25] The US Department of Justice unraveled a dark web child-porn website, leading to the arrest of 337 people in at least 18 countries. This was all accomplished not through any backdoors in communications systems, but by analyzing the bitcoin transactions and following the money:

Welcome to Video made money by charging fees in bitcoin, and gave each user a unique bitcoin wallet address when they created an account. Son operated the site as a Tor hidden service, a dark web site with a special address that helps mask the identity of the site’s host and its location. But Son and others made mistakes that allowed law enforcement to track them. For example, according to the indictment, very basic assessments of the Welcome to Video website revealed two unconcealed IP addresses managed by a South Korean internet service provider and assigned to an account that provided service to Son’s home address. When agents searched Son’s residence, they found the server running Welcome to Video.

To “follow the money,” as officials put it in Wednesday’s press conference, law enforcement agents sent fairly small amounts of bitcoin—roughly equivalent at the time to $125 to $290—to the bitcoin wallets Welcome to Video listed for payments. Since the bitcoin blockchain leaves all transactions visible and verifiable, they could observe the currency in these wallets being transferred to another wallet. Law enforcement learned from a bitcoin exchange that the second wallet was registered to Son with his personal phone number and one of his personal email addresses.
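
The investigative technique is essentially graph traversal over a public ledger: every transaction is an edge, so tracing funds from a known wallet is a breadth-first walk. A toy sketch with invented wallet names:

```python
# Toy illustration of "following the money" on a public blockchain.
# The transaction data is invented; real analysis parses the actual chain.
from collections import deque

transactions = [
    ("site_wallet_1", "hop_wallet"),
    ("site_wallet_2", "hop_wallet"),
    ("hop_wallet", "exchange_wallet"),  # the exchange knows the account holder
]

def downstream(start, txs):
    """All wallets reachable from `start` by following outgoing transactions."""
    edges = {}
    for src, dst in txs:
        edges.setdefault(src, []).append(dst)
    seen, queue = set(), deque([start])
    while queue:
        w = queue.popleft()
        for nxt in edges.get(w, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(downstream("site_wallet_1", transactions))
```

Once the walk reaches a wallet held at a regulated exchange, a subpoena turns a pseudonymous address into a name, which is exactly what happened here.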

Remember this the next time some law enforcement official tells us that they’re powerless to investigate crime without breaking cryptography for everyone.

More news articles. The indictment is here. Some of it is pretty horrifying to read.


Former FBI General Counsel Jim Baker Chooses Encryption Over Backdoors

[2019.10.28] In an extraordinary essay, the former FBI general counsel Jim Baker makes the case for strong encryption over government-mandated backdoors:

In the face of congressional inaction, and in light of the magnitude of the threat, it is time for governmental authorities—including law enforcement—to embrace encryption because it is one of the few mechanisms that the United States and its allies can use to more effectively protect themselves from existential cybersecurity threats, particularly from China. This is true even though encryption will impose costs on society, especially victims of other types of crime.

[…]

I am unaware of a technical solution that will effectively and simultaneously reconcile all of the societal interests at stake in the encryption debate, such as public safety, cybersecurity and privacy as well as simultaneously fostering innovation and the economic competitiveness of American companies in a global marketplace.

[…]

All public safety officials should think of protecting the cybersecurity of the United States as an essential part of their core mission to protect the American people and uphold the Constitution. And they should be doing so even if there will be real and painful costs associated with such a cybersecurity-forward orientation. The stakes are too high and our current cybersecurity situation too grave to adopt a different approach.

Basically, he argues that the security value of strong encryption greatly outweighs the security value of encryption that can be bypassed. He endorses a “defense dominant” strategy for Internet security.

Keep in mind that Baker led the FBI’s legal case against Apple regarding the San Bernardino shooter’s encrypted iPhone. In writing this piece, Baker joins the growing list of former law enforcement and national security senior officials who have come out in favor of strong encryption over backdoors: Michael Hayden, Michael Chertoff, Richard Clarke, Ash Carter, William Lynn, and Mike McConnell.

Edward Snowden also agrees.

EDITED TO ADD: Good commentary from Cory Doctorow.


ICT Supply-Chain Security

[2019.10.29] The Carnegie Endowment for International Peace published a comprehensive report on ICT (information and communication technologies) supply-chain security and integrity. It’s a good read, but nothing that those who are following this issue don’t already know.


WhatsApp Sues NSO Group

[2019.10.30] WhatsApp is suing the Israeli cyberweapons arms manufacturer NSO Group in California court:

WhatsApp’s lawsuit, filed in a California court on Tuesday, has demanded a permanent injunction blocking NSO from attempting to access WhatsApp computer systems and those of its parent company, Facebook.

It has also asked the court to rule that NSO violated US federal law and California state law against computer fraud, breached their contracts with WhatsApp and “wrongfully trespassed” on Facebook’s property.

This could be interesting.

EDITED TO ADD: Citizen Lab has a research paper on the technology involved in this case. WhatsApp has an op-ed on their actions. And this is a good news article on how the attack worked.

EDITED TO ADD: Facebook is deleting the accounts of NSO Group employees.

EDITED TO ADD (11/13): Details on the vulnerability.


A Broken Random Number Generator in AMD Microcode

[2019.10.31] Interesting story.

I always recommend using a random number generator like Fortuna, even if you’re using a hardware random source. It’s just safer.
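
A toy sketch of the underlying idea, not Fortuna itself (Fortuna adds multiple pools, a block-cipher generator, and key rotation): hash everything the hardware source emits into a pool and generate output from the pool, so a broken source degrades gracefully as long as some other input carried real entropy. All names here are illustrative.

```python
# Sketch of pool-based RNG hygiene; NOT a faithful or secure Fortuna implementation.
import hashlib

class PooledRNG:
    def __init__(self):
        self.pool = hashlib.sha256()

    def feed(self, raw):
        # Accumulate output from a possibly-flawed source into the pool.
        self.pool.update(raw)

    def read(self, n):
        # Derive output by hashing the pool state with a counter.
        # (Fortuna additionally rekeys after every request; this sketch doesn't.)
        out, counter = b"", 0
        while len(out) < n:
            h = hashlib.sha256(self.pool.digest() + counter.to_bytes(8, "big"))
            out += h.digest()
            counter += 1
        return out[:n]

rng = PooledRNG()
rng.feed(b"\xff" * 32)  # a broken HWRNG returning constant bytes, as in the AMD bug
rng.feed(b"entropy from other sources")
print(rng.read(16).hex())
```

The constant `\xff` input contributes nothing, but it also poisons nothing: the output stays unpredictable because the second input did carry entropy. That is the property a bare hardware source lacks.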


Resources for Measuring Cybersecurity

[2019.11.01] Kathryn Waldron at R Street has collected all of the different resources and methodologies for measuring cybersecurity.


Homemade TEMPEST Receiver

[2019.11.04] Tom’s Guide writes about home brew TEMPEST receivers:

Today, dirt-cheap technology and free software make it possible for ordinary citizens to run their own Tempest programs and listen to what their own—and their neighbors’—electronic devices are doing.

Elliott, a researcher at Boston-based security company Veracode, showed that an inexpensive USB dongle TV tuner costing about $10 can pick up a broad range of signals, which can be “tuned” and interpreted by software-defined radio (SDR) applications running on a laptop computer.
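
The signal-processing step those SDR applications perform isn't exotic. Here is a minimal sketch of amplitude demodulation on raw IQ samples; the "leak" is synthetic, and a real TEMPEST capture needs an actual tuner plus far more filtering:

```python
# Synthetic demo of envelope detection, the basic step SDR software applies
# to a tuner's raw IQ samples. All signal parameters are invented.
import numpy as np

fs = 1_000_000                     # 1 MS/s sample rate, typical for a cheap dongle
t = np.arange(0, 0.01, 1 / fs)

leak = 0.5 * (1 + np.sign(np.sin(2 * np.pi * 1_000 * t)))  # 1 kHz square "data"
carrier = np.exp(2j * np.pi * 100_000 * t)                 # unintended RF carrier
iq = (0.2 + 0.8 * leak) * carrier                          # emissions modulate amplitude

envelope = np.abs(iq)                # AM demodulation: magnitude of complex samples
bits = (envelope > 0.5).astype(int)  # threshold back to the leaked waveform

print(bits[:5], bits[600:605])       # [1 1 1 1 1] [0 0 0 0 0]
```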


Obfuscation as a Privacy Tool

[2019.11.05] This essay discusses the futility of opting out of surveillance, and suggests data obfuscation as an alternative.

We can apply obfuscation in our own lives by using practices and technologies that make use of it, including:

  • The secure browser Tor, which (among other anti-surveillance technologies) muddles our Internet activity with that of other Tor users, concealing our trail in that of many others.
  • The browser plugins TrackMeNot and AdNauseam, which explore obfuscation techniques by issuing many fake search requests and loading and clicking every ad, respectively.
  • The browser extension Go Rando, which randomly chooses your emotional “reactions” on Facebook, interfering with their emotional profiling and analysis.
  • Playful experiments like Adam Harvey’s “HyperFace” project, finding patterns on textiles that fool facial recognition systems—not by hiding your face, but by creating the illusion of many faces.
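
The TrackMeNot approach in the first plugin reduces to a few lines: generate plausible decoys locally and hide the real query among them. The term list and function names below are illustrative, and nothing here actually sends network traffic:

```python
# Sketch of the decoy-query idea behind TrackMeNot; terms and names invented.
import random

DECOY_TERMS = ["weather radar", "pasta recipe", "flight status", "movie times",
               "used bikes", "tax deadline", "hiking trails", "guitar chords"]

def obfuscated_batch(real_query, n_decoys=7, rng=random):
    batch = rng.sample(DECOY_TERMS, n_decoys) + [real_query]
    rng.shuffle(batch)  # the real query's position carries no information
    return batch

batch = obfuscated_batch("symptoms of measles")
print(batch)  # 8 queries, one real; the observer sees them all equally
```

The open question, as discussed below, is whether an adversary with good models of "plausible decoys" can filter the noise back out.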

I am generally skeptical about obfuscation tools. I think of this basically as a signal-to-noise problem, and that adding random noise doesn’t do much to obfuscate the signal. But against broad systems of financially motivated corporate surveillance, it might be enough.


Details of an Airbnb Fraud

[2019.11.06] This is a fascinating article about a bait-and-switch Airbnb fraud. The article focuses on one particular group of scammers and how they operate, using the fact that Airbnb as a company doesn’t do much to combat fraud on its platform. But I am more interested in how the fraudsters essentially hacked the complex sociotechnical system that is Airbnb.

The whole article is worth reading.


Eavesdropping on SMS Messages inside Telco Networks

[2019.11.07] Fireeye reports on a Chinese-sponsored espionage effort to eavesdrop on text messages:

FireEye Mandiant recently discovered a new malware family used by APT41 (a Chinese APT group) that is designed to monitor and save SMS traffic from specific phone numbers, IMSI numbers and keywords for subsequent theft. Named MESSAGETAP, the tool was deployed by APT41 in a telecommunications network provider in support of Chinese espionage efforts. APT41’s operations have included state-sponsored cyber espionage missions as well as financially-motivated intrusions. These operations have spanned from as early as 2012 to the present day. For an overview of APT41, see our August 2019 blog post or our full published report.

Yet another example that demonstrates why end-to-end message encryption is so important.


xHelper Malware for Android

[2019.11.08] xHelper is not interesting because of its infection mechanism; the user has to side-load an app onto his phone. It’s not interesting because of its payload; it seems to do nothing more than show unwanted ads. It’s interesting because of its persistence:

Furthermore, even if users spot the xHelper service in the Android operating system’s Apps section, removing it doesn’t work, as the trojan reinstalls itself every time, even after users perform a factory reset of the entire device.

How xHelper survives factory resets is still a mystery; however, both Malwarebytes and Symantec said xHelper doesn’t tamper with system services or system apps. In addition, Symantec also said that it was “unlikely that Xhelper comes preinstalled on devices.”

In some cases, users said that even when they removed the xHelper service and then disabled the “Install apps from unknown sources” option, the setting kept turning itself back on, and the device was reinfected in a matter of minutes after being cleaned.

From Symantec:

We first began seeing Xhelper apps in March 2019. Back then, the malware’s code was relatively simple, and its main function was visiting advertisement pages for monetization purposes. The code has changed over time. Initially, the malware’s ability to connect to a C&C server was written directly into the malware itself, but later this functionality was moved to an encrypted payload, in an attempt to evade signature detection. Some older variants included empty classes that were not implemented at the time, but the functionality is now fully enabled. As described previously, Xhelper’s functionality has expanded drastically in recent times.

We strongly believe that the malware’s source code is still a work in progress.

It’s a weird piece of malware. That level of persistence speaks to a nation-state actor. The continuous evolution of the malware implies an organized actor. But sending unwanted ads is far too noisy for any serious use. And the infection mechanism is pretty random. I just don’t know.


Fooling Voice Assistants with Lasers

[2019.11.11] Interesting:

Siri, Alexa, and Google Assistant are vulnerable to attacks that use lasers to inject inaudible—and sometimes invisible—commands into the devices and surreptitiously cause them to unlock doors, visit websites, and locate, unlock, and start vehicles, researchers report in a research paper published on Monday. Dubbed Light Commands, the attack works against Facebook Portal and a variety of phones.

Shining a low-powered laser into these voice-activated systems allows attackers to inject commands of their choice from as far away as 360 feet (110m). Because voice-controlled systems often don’t require users to authenticate themselves, the attack can frequently be carried out without the need of a password or PIN. Even when the systems require authentication for certain actions, it may be feasible to brute force the PIN, since many devices don’t limit the number of guesses a user can make. Among other things, light-based commands can be sent from one building to another and penetrate glass when a vulnerable device is kept near a closed window.


Identifying and Arresting Ransomware Criminals

[2019.11.12] The Wall Street Journal has a story about how two people were identified as the perpetrators of a ransomware scheme. They were found because—as generally happens—they made mistakes covering their tracks. They were investigated because they had the bad luck of locking up Washington, DC’s video surveillance cameras a week before the 2017 inauguration.

EDITED TO ADD (11/13): Link without a paywall.


NTSB Investigation of Fatal Driverless Car Accident

[2019.11.13] Autonomous systems are going to have to do much better than this.

The Uber car that hit and killed Elaine Herzberg in Tempe, Ariz., in March 2018 could not recognize all pedestrians, and was being driven by an operator likely distracted by streaming video, according to documents released by the U.S. National Transportation Safety Board (NTSB) this week (https://dms.ntsb.gov/pubdms/search/hitlist.cfm?docketID=62978&CurrentPage=1&EndRow=15&StartRow=1&order=1&sort=0&TXTSEARCHT=).

But while the technical failures and omissions in Uber’s self-driving car program are shocking, the NTSB investigation also highlights safety failures that include the vehicle operator’s lapses, lax corporate governance of the project, and limited public oversight.

The details of what happened in the seconds before the collision are worth reading. They describe a cascading series of issues that led to the collision and the fatality.

As computers continue to become part of things, and affect the world in a direct physical manner, this kind of thing will become even more important.


Technology and Policymakers

[2019.11.14] Technologists and policymakers largely inhabit two separate worlds. It’s an old problem, one that the British scientist CP Snow identified in a 1959 essay entitled The Two Cultures. He called them sciences and humanities, and pointed to the split as a major hindrance to solving the world’s problems. The essay was influential—but 60 years later, nothing has changed.

When Snow was writing, the two cultures theory was largely an interesting societal observation. Today, it’s a crisis. Technology is now deeply intertwined with policy. We’re building complex socio-technical systems at all levels of our society. Software constrains behavior with an efficiency that no law can match. It’s all changing fast; technology is literally creating the world we all live in, and policymakers can’t keep up. Getting it wrong has become increasingly catastrophic. Surviving the future depends on bringing technologists and policymakers together.

Consider artificial intelligence (AI). This technology has the potential to augment human decision-making, eventually replacing notoriously subjective human processes with something fairer, more consistent, faster and more scalable. But it also has the potential to entrench bias and codify inequity, and to act in ways that are unexplainable and undesirable. It can be hacked in new ways, giving attackers, from criminals to nation-states, new capabilities to disrupt and harm. How do we avoid the pitfalls of AI while benefiting from its promise? Or, more specifically, where and how should government step in and regulate what is largely a market-driven industry? The answer requires a deep understanding of both the policy tools available to modern society and the technologies of AI.

But AI is just one of many technological areas that need policy oversight. We also need to tackle the increasingly critical cybersecurity vulnerabilities in our infrastructure. We need to understand both the role of social media platforms in disseminating politically divisive content, and what technology can and cannot do to mitigate its harm. We need policy around the rapidly advancing technologies of bioengineering, such as genome editing and synthetic biology, lest advances cause problems for our species and planet. We’re barely keeping up with regulations on food and water safety—let alone energy policy and climate change. Robotics will soon be a common consumer technology, and we are not ready for it at all.

Addressing these issues will require policymakers and technologists to work together from the ground up. We need to create an environment where technologists get involved in public policy, one where there is a viable career path for what has come to be called “public-interest technologists.”

The concept isn’t new, even if the phrase is. There are already professionals who straddle the worlds of technology and policy. They come from the social sciences and from computer science. They work in data science, or tech policy, or public-focused computer science. They worked in the Bush and Obama White Houses, or in academia and NGOs. The problem is that there are too few of them; they are all exceptions and they are all exceptional. We need to find them, support them, and scale up whatever the process is that creates them.

There are two aspects to creating a scalable career path for public-interest technologists, and you can think of them as the problems of supply and demand. In the long term, supply will almost certainly be the bigger problem. There simply aren’t enough technologists who want to get involved in public policy. This will only become more critical as technology further permeates our society. We can’t begin to calculate the number of them that our society will need in the coming years and decades.

Fixing this supply problem requires changes in educational curricula, from childhood through college and beyond. Science and technology programs need to include mandatory courses in ethics, social science, policy and human-centered design. We need joint degree programs to provide even more integrated curricula. We need ways to involve people from a variety of backgrounds and capabilities. We need to foster opportunities for technologists to do public-interest work on the side, as part of their more traditional jobs, or for a few years mid-career through designed sabbaticals or fellowships. Public service needs to be part of an academic career. We need to create, nurture and compensate people who aren’t entirely technologists or policymakers, but instead an amalgamation of the two. Public-interest technology needs to be a respected career choice, even if it will never pay what a technologist can make at a tech firm.

But while the supply side is the harder problem, the demand side is the more immediate problem. Right now, there aren’t enough places to go for scientists or technologists who want to do public policy work, and the ones that exist tend to be underfunded and in environments where technologists are unappreciated. There aren’t enough positions on legislative staffs, in government agencies, at NGOs or in the press. There aren’t enough teaching positions and fellowships at colleges and universities. There aren’t enough policy-focused technological projects. In short, not enough policymakers realize that they need scientists and technologists—preferably those with some policy training—as part of their teams.

To make effective tech policy, policymakers need to better understand technology. For some reason, ignorance about technology isn’t seen as a deficiency among our elected officials, and this is a problem. It is no longer okay to not understand how the internet, machine learning—or any other core technologies—work.

This doesn’t mean policymakers need to become tech experts. We have long expected our elected officials to regulate highly specialized areas of which they have little understanding. It’s been manageable because those elected officials have people on their staff who do understand those areas, or because they trust other elected officials who do. Policymakers need to realize that they need technologists on their policy teams, and to accept well-established scientific findings as fact. It is also no longer okay to discount technological expertise merely because it contradicts your political biases.

The evolution of public health policy serves as an instructive model. Health policy is a field that includes both policy experts who know a lot about the science and keep abreast of health research, and biologists and medical researchers who work closely with policymakers. Health policy is often a specialization at policy schools. We live in a world where the importance of vaccines is widely accepted and well-understood by policymakers, and is written into policy. Our policies on global pandemics are informed by medical experts. This serves society well, but it wasn’t always this way. Health policy was not always part of public policy. People lived through a lot of terrible health crises before policymakers figured out how to actually talk and listen to medical experts. Today we are facing a similar situation with technology.

Another parallel is public-interest law. Lawyers work in all parts of government and in many non-governmental organizations, crafting policy or just lawyering in the public interest. Every attorney at a major law firm is expected to devote some time to public-interest cases; it’s considered part of a well-rounded career. No law firm looks askance at an attorney who takes two years out of their career to work in a public-interest capacity. A tech career needs to look more like that.

In his book Future Politics, Jamie Susskind writes: “Politics in the twentieth century was dominated by a central question: how much of our collective life should be determined by the state, and what should be left to the market and civil society? For the generation now approaching political maturity, the debate will be different: to what extent should our lives be directed and controlled by powerful digital systems—and on what terms?”

I teach cybersecurity policy at the Harvard Kennedy School of Government. Because that question is fundamentally one of economics—and because my institution is a product of both the 20th century and that question—its faculty is largely staffed by economists. But because today’s question is a different one, the institution is now hiring policy-focused technologists like me.

If we’re honest with ourselves, it was never okay for technology to be separate from policy. But today, amid what we’re starting to call the Fourth Industrial Revolution, the separation is much more dangerous. We need policymakers to recognize this danger, and to welcome a new generation of technologists from every persuasion to help solve the socio-technical policy problems of the 21st century. We need to create ways to speak tech to power—and power needs to open the door and let technologists in.

This essay previously appeared on the World Economic Forum blog.


Upcoming Speaking Engagements

[2019.11.14] This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org.

Copyright © 2019 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.