Entries Tagged "essays"


Attacking Machine Learning Systems

The field of machine learning (ML) security—and corresponding adversarial ML—is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal the ML model or data. It’s a heady time; because we know so little about the security of these systems, there are many opportunities for new researchers to publish in this field. In many ways, this circumstance reminds me of the cryptanalysis field in the 1990s. And there is a lesson in that similarity: the complex mathematical attacks make for good academic papers, but we mustn’t lose sight of the fact that insecure software will be the likely attack vector for most ML systems.

We are amazed by real-world demonstrations of adversarial attacks on ML systems, such as a 3D-printed object that looks like a turtle but is recognized (from any orientation) by the ML system as a gun. Or adding a few stickers that look like smudges to a stop sign so that it is recognized by a state-of-the-art system as a 45 mi/h speed limit sign. But what if, instead, somebody hacked into the system and just switched the labels for “gun” and “turtle” or swapped “stop” and “45 mi/h”? Systems can only match images with human-provided labels, so the software would never notice the switch. That is far easier and will remain a problem even if systems are developed that are robust to those adversarial attacks.
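
As a toy illustration of that point, here is a minimal, hypothetical Python sketch (the label table and function are invented for this example, not taken from any real system): the model’s numeric output is unchanged, but the mapping from numbers to human-readable labels has been tampered with.

```python
# Hypothetical example: the trained model outputs a class index; a separate
# table translates that index into a human-readable label.
labels = {0: "turtle", 1: "gun", 2: "stop sign", 3: "speed limit 45"}

def classify(model_output_index: int) -> str:
    """Turn the model's numeric prediction into a label for downstream systems."""
    return labels[model_output_index]

# The "attack": no adversarial math at all, just swap two entries in the label table.
labels[0], labels[1] = labels[1], labels[0]

print(classify(0))  # the model correctly detected a turtle, but this prints "gun"
```

No amount of robustness in the model itself detects this; the tampering happens in the ordinary software around it.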

At their core, modern ML systems have complex mathematical models that use training data to become competent at a task. And while there are new risks inherent in the ML model, all of that complexity still runs in software. Training data are still stored in memory somewhere. And all of that is on a computer, on a network, and attached to the Internet. Like everything else, these systems will be hacked through vulnerabilities in those more conventional parts of the system.

This shouldn’t come as a surprise to anyone who has been working with Internet security. Cryptography has similar vulnerabilities. There is a robust field of cryptanalysis: the mathematics of code breaking. Over the last few decades, we in the academic world have developed a variety of cryptanalytic techniques. We have broken ciphers we previously thought secure. This research has, in turn, informed the design of cryptographic algorithms. The classified world of the NSA and its foreign counterparts has been doing the same thing for far longer. But aside from some special cases and unique circumstances, that’s not how encryption systems are exploited in practice. Outside of academic papers, cryptosystems are largely bypassed because everything around the cryptography is much less secure.

I wrote this in my book, Data and Goliath:

The problem is that encryption is just a bunch of math, and math has no agency. To turn that encryption math into something that can actually provide some security for you, it has to be written in computer code. And that code needs to run on a computer: one with hardware, an operating system, and other software. And that computer needs to be operated by a person and be on a network. All of those things will invariably introduce vulnerabilities that undermine the perfection of the mathematics…

This remains true even for pretty weak cryptography. It is much easier to find an exploitable software vulnerability than it is to find a cryptographic weakness. Even cryptographic algorithms that we in the academic community regard as “broken”—meaning there are attacks that are more efficient than brute force—are usable in the real world because the difficulty of breaking the mathematics repeatedly and at scale is much greater than the difficulty of breaking the computer system that the math is running on.

ML systems are similar. Systems that are vulnerable to model stealing through the careful construction of queries are more vulnerable to model stealing by hacking into the computers they’re stored in. Systems that are vulnerable to model inversion—this is where attackers recover the training data through carefully constructed queries—are much more vulnerable to attacks that take advantage of unpatched vulnerabilities.

But while security is only as strong as the weakest link, this doesn’t mean we can ignore either cryptography or ML security. Here, our experience with cryptography can serve as a guide. Cryptographic attacks have different characteristics than software and network attacks, something largely shared with ML attacks. Cryptographic attacks can be passive. That is, attackers who can recover the plaintext from nothing other than the ciphertext can eavesdrop on the communications channel, collect all of the encrypted traffic, and decrypt it on their own systems at their own pace, perhaps in a giant server farm in Utah. This is bulk surveillance and can easily operate on this massive scale.

On the other hand, computer hacking has to be conducted one target computer at a time. Sure, you can develop tools that can be used again and again. But you still need the time and expertise to deploy those tools against your targets, and you have to do so individually. This means that any attacker has to prioritize. So while the NSA has the expertise necessary to hack into everyone’s computer, it doesn’t have the budget to do so. Most of us are simply too low on its priorities list to ever get hacked. And that’s the real point of strong cryptography: it forces attackers like the NSA to prioritize.

This analogy only goes so far. ML is not anywhere near as mathematically sound as cryptography. Right now, it is a sloppy, misunderstood mess: hack after hack, kludge after kludge, built on top of each other with some data dependency thrown in. Directly attacking an ML system with a model inversion attack or a perturbation attack isn’t as passive as eavesdropping on an encrypted communications channel, but it’s using the ML system as intended, albeit for unintended purposes. It’s much safer than actively hacking the network and the computer that the ML system is running on. And while it doesn’t scale as well as cryptanalytic attacks can—and there likely will be a far greater variety of ML systems than encryption algorithms—it has the potential to scale better than one-at-a-time computer hacking does. So here again, good ML security denies attackers all of those attack vectors.

We’re still in the early days of studying ML security, and we don’t yet know the contours of ML security techniques. There are really smart people working on this and making impressive progress, and it’ll be years before we fully understand it. Attacks come easy, and defensive techniques are regularly broken soon after they’re made public. It was the same with cryptography in the 1990s, but eventually the science settled down as people better understood the interplay between attack and defense. So while Google, Amazon, Microsoft, and Tesla have all faced adversarial ML attacks on their production systems in the last three years, that’s not going to be the norm going forward.

All of this also means that our security for ML systems depends largely on the same conventional computer security techniques we’ve been using for decades. This includes writing vulnerability-free software, designing user interfaces that help resist social engineering, and building computer networks that aren’t full of holes. It’s the same risk-mitigation techniques that we’ve been living with for decades. That we’re still mediocre at it is cause for concern, with regard to both ML systems and computing in general.

I love cryptography and cryptanalysis. I love the elegance of the mathematics and the thrill of discovering a flaw—or even of reading and understanding a flaw that someone else discovered—in the mathematics. It feels like security in its purest form. Similarly, I am starting to love adversarial ML and ML security, and its tricks and techniques, for the same reasons.

I am not advocating that we stop developing new adversarial ML attacks. That research teaches us about the systems being attacked and how they actually work. These attacks are, in a sense, mechanisms for algorithmic understandability. Building secure ML systems is important research and something we in the security community should continue to do.

There is no such thing as a pure ML system. Every ML system is a hybrid of ML software and traditional software. And while ML systems bring new risks that we haven’t previously encountered, we need to recognize that the majority of attacks against these systems aren’t going to target the ML part. Security is only as strong as the weakest link. As bad as ML security is right now, it will improve as the science improves. And from then on, as in cryptography, the weakest link will be in the software surrounding the ML system.

This essay originally appeared in the May 2020 issue of IEEE Computer. I forgot to reprint it here.

Posted on February 6, 2023 at 6:02 AM

AIs as Computer Hackers

Hacker “Capture the Flag” has been a mainstay at hacker gatherings since the mid-1990s. It’s like the outdoor game, but played on computer networks. Teams of hackers defend their own computers while attacking other teams’. It’s a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others’. It’s the software vulnerability lifecycle.

These days, dozens of teams from around the world compete in weekend-long marathon events held all over the world. People train for months. Winning is a big deal. If you’re into this sort of thing, it’s pretty much the most fun you can possibly have on the Internet without committing multiple felonies.

In 2016, DARPA ran a similarly styled event for artificial intelligence (AI). One hundred teams entered their systems into the Cyber Grand Challenge. After completing qualifying rounds, seven finalists competed at the DEFCON hacker convention in Las Vegas. The competition occurred in a specially designed test environment filled with custom software that had never been analyzed or tested. The AIs were given 10 hours to find vulnerabilities to exploit against the other AIs in the competition and to patch themselves against exploitation. A system called Mayhem, created by a team of Carnegie Mellon computer security researchers, won. The researchers have since commercialized the technology, which is now busily defending networks for customers like the U.S. Department of Defense.

There was a traditional human–team capture-the-flag event at DEFCON that same year. Mayhem was invited to participate. It came in last overall, but it didn’t come in last in every category all of the time.

I figured it was only a matter of time. It would be the same story we’ve seen in so many other areas of AI: the games of chess and go, X-ray and disease diagnostics, writing fake news. AIs would improve every year because all of the core technologies are continually improving. Humans would largely stay the same because we remain humans even as our tools improve. Eventually, the AIs would routinely beat the humans. I guessed that it would take about a decade.

But now, five years later, I have no idea if that prediction is still on track. Inexplicably, DARPA never repeated the event. Research on the individual components of the software vulnerability lifecycle does continue. There’s an enormous amount of work being done on automatic vulnerability finding. Going through software code line by line is exactly the sort of tedious problem at which machine learning systems excel, if they can only be taught how to recognize a vulnerability. There is also work on automatic vulnerability exploitation and lots on automatic update and patching. Still, there is something uniquely powerful about a competition that puts all of the components together and tests them against others.

To see that in action, you have to go to China. Since 2017, China has held at least seven of these competitions—called Robot Hacking Games—many with multiple qualifying rounds. The first included one team each from the United States, Russia, and Ukraine. The rest have been Chinese only: teams from Chinese universities, teams from companies like Baidu and Tencent, teams from the military. Rules seem to vary. Sometimes human–AI hybrid teams compete.

Details of these events are few. They’re Chinese language only, which naturally limits what the West knows about them. I didn’t even know they existed until Dakota Cary, a research analyst at the Center for Security and Emerging Technology and a Chinese speaker, wrote a report about them a few months ago. And they’re increasingly hosted by the People’s Liberation Army, which presumably controls how much detail becomes public.

Some things we can infer. In 2016, none of the Cyber Grand Challenge teams used modern machine learning techniques. Certainly most of the Robot Hacking Games entrants are using them today. And the competitions encourage collaboration as well as competition between the teams. Presumably that accelerates advances in the field.

None of this is to say that real robot hackers are poised to attack us today, but I wish I could predict with some certainty when that day will come. In 2018, I wrote about how AI could change the attack/defense balance in cybersecurity. I said that it is impossible to know which side would benefit more but predicted that the technologies would benefit the defense more, at least in the short term. I wrote: “Defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation.”

Unfortunately, it’s the People’s Liberation Army and not DARPA that will be the first to learn if I am right or wrong and how soon it matters.

This essay originally appeared in the January/February 2022 issue of IEEE Security & Privacy.

Posted on February 2, 2023 at 6:59 AM

Decarbonizing Cryptocurrencies through Taxation

Maintaining bitcoin and other cryptocurrencies causes about 0.3 percent of global CO2 emissions. That may not sound like a lot, but it’s more than the emissions of Switzerland, Croatia, and Norway combined. As many cryptocurrencies crash and the FTX bankruptcy moves into the litigation stage, regulators are likely to scrutinize the cryptocurrency world more than ever before. This presents a perfect opportunity to curb their environmental damage.

The good news is that cryptocurrencies don’t have to be carbon intensive. In fact, some have near-zero emissions. To encourage polluting currencies to reduce their carbon footprint, we need to force buyers to pay for their environmental harms through taxes.

The difference in emissions among cryptocurrencies comes down to how they create new coins. Bitcoin and other high emitters use a system called “proof of work”: to generate coins, participants, or “miners,” have to solve math problems that demand extraordinary computing power. This allows currencies to maintain their decentralized ledger—the blockchain—but requires enormous amounts of energy.
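
To make concrete why proof of work is so energy hungry, here is a minimal Python sketch of the kind of puzzle miners race to solve (heavily simplified; real bitcoin mining hashes block headers with double SHA-256 against a far harder difficulty target): the only way to win more often is to perform more hashes, which means burning more electricity.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Brute-force a nonce until SHA-256(block_data + nonce) falls below a target.

    Raising difficulty_bits shrinks the target and forces exponentially more
    hashing on average -- that repeated guessing is where the energy goes.
    """
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce  # the nonce is the "proof" that the work was done
        nonce += 1

# Toy difficulty: roughly 2**16 guesses on average, solvable in well under a second.
print(mine("example block", difficulty_bits=16))
```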

Greener alternatives exist. Most notably, the “proof of stake” system enables participants to maintain their blockchain by depositing cryptocurrency holdings in a pool. When the second-largest cryptocurrency, Ethereum, switched from proof of work to proof of stake earlier this year, its energy consumption dropped by more than 99.9% overnight.

Bitcoin and other cryptocurrencies probably won’t follow suit unless forced to, because proof of work offers massive profits to miners—and they’re the ones with power in the system. Multiple legislative levers could be used to entice them to change.

The most blunt solution is to ban cryptocurrency mining altogether. China did this in 2018, but it only made the problem worse; mining moved to other countries with even less efficient energy generation, and emissions went up. The only way for a mining ban to meaningfully reduce carbon emissions is to enact it across most of the globe. Achieving that level of international consensus is, to say the least, unlikely.

A second solution is to prohibit the buying and selling of proof-of-work currencies. The European Parliament’s Committee on Economic and Monetary Affairs considered making such a proposal, but voted against it in March. This is understandable; as with a mining ban, it would be both viewed as paternalistic and difficult to implement politically.

Employing a tax instead of an outright ban would largely skirt these issues. As with taxes on gasoline, tobacco, plastics, and alcohol, a cryptocurrency tax could reduce real-world harm by making consumers pay for it.

Most ways of taxing cryptocurrencies would be inefficient, because they’re easy to circumvent and hard to enforce. To avoid these pitfalls, the tax should be levied as a fixed percentage of each proof-of-work-cryptocurrency purchase. Cryptocurrency exchanges should collect the tax, just as merchants collect sales taxes from customers before passing the sum on to governments. To make it harder to evade, the tax should apply regardless of how the proof-of-work currency is being exchanged—whether for a fiat currency or another cryptocurrency. Most important, any state that implements the tax should target all purchases by citizens in its jurisdiction, even if they buy through exchanges with no legal presence in the country.
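
As a back-of-the-envelope sketch of how an exchange could apply such a levy at checkout (the coin list, the 5 percent rate, and the function below are illustrative assumptions, not part of any actual proposal):

```python
# Illustrative assumptions only: which coins count as proof of work, and the rate.
PROOF_OF_WORK_COINS = {"BTC", "LTC", "XMR"}
TAX_RATE = 0.05  # hypothetical 5% surcharge on proof-of-work purchases

def purchase_total(coin: str, amount_usd: float) -> float:
    """Return what the buyer pays; the exchange remits the surcharge to the state."""
    surcharge = amount_usd * TAX_RATE if coin.upper() in PROOF_OF_WORK_COINS else 0.0
    return amount_usd + surcharge

print(purchase_total("BTC", 1000.0))  # 1050.0 -- proof-of-work coin, taxed
print(purchase_total("ETH", 1000.0))  # 1000.0 -- proof-of-stake coin, untaxed
```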

This sort of tax would be transparent and easy to enforce. Because most people buy cryptocurrencies from one of only a few large exchanges—such as Binance, Coinbase, and Kraken—auditing them should be cheap enough that it pays for itself. If an exchange fails to comply, it should be banned.

Even a small tax on proof-of-work currencies would reduce their damage to the planet. Imagine that you’re new to cryptocurrency and want to become a first-time investor. You’re presented with a range of currencies to choose from: bitcoin, ether, litecoin, monero, and others. You notice that all of them except ether add an environmental tax to your purchase price. Which one do you buy?

Countries don’t need to coordinate across borders for a proof-of-work tax on their own citizens to be effective. But early adopters should still consider ways to encourage others to come on board. This has precedent. The European Union is trying to influence global policy with its carbon border adjustments, which are designed to discourage people from buying carbon-intensive products abroad in order to skirt taxes. Similar rules for a proof-of-work tax could persuade other countries to adopt one.

Of course, some people will try to evade the tax, just as people evade every other tax. For example, people might buy tax-free coins on centralized exchanges and then swap them for polluting coins on decentralized exchanges. To some extent, this is inevitable; no tax is perfect. But the effort and technical know-how needed to evade a proof-of-work tax will be a major deterrent.

Even if only a few countries implement this tax—and even if some people evade it—the desirability of bitcoin will fall globally, and the environmental benefit will be significant. A high enough tax could also cause a self-reinforcing cycle that will drive down these cryptocurrencies’ prices. Because the value of many cryptocurrencies relies largely on speculation, they are dependent on future buyers. When speculators are deterred by the tax, the lack of demand will cause the price of bitcoin to fall, which could prompt more current holders to sell—further lowering prices and accelerating the effect. Declining prices will pressure the bitcoin community to abandon proof of work altogether.

Taxing proof-of-work exchanges might hurt them in the short run, but it would not hinder blockchain innovation. Instead, it would redirect innovation toward greener cryptocurrencies. This is no different than how government incentives for electric vehicles encourage carmakers to improve green alternatives to the internal combustion engine. These incentives don’t restrict innovation in automobiles—they promote it.

Taxing environmentally harmful cryptocurrencies can gain support across the political spectrum, from people with varied interests. It would benefit blockchain innovators and cryptocurrency researchers by shifting focus from environmental harm to beneficial uses of the technology. It has the potential to make our planet significantly greener. It would increase government revenues.

Even bitcoin maximalists have reason to embrace the proposal: it would offer the bitcoin community a chance to prove it can survive and grow sustainably.

This essay was written with Christos Porios, and previously appeared in the Atlantic.

Posted on January 4, 2023 at 7:17 AM

Regulating DAOs

In August, the US Treasury’s Office of Foreign Assets Control (OFAC) sanctioned the cryptocurrency platform Tornado Cash, a virtual currency “mixer” designed to make it harder to trace cryptocurrency transactions—and a worldwide favorite money-laundering platform. Americans are now forbidden from using it. According to the US government, Tornado Cash was sanctioned because it allegedly laundered over $7 billion in cryptocurrency, $455 million of which was stolen by a North Korean state-sponsored hacking group.

Tornado Cash is not a traditional company run by human beings, but instead a series of “smart contracts”: self-executing code that exists only as software. Critics argue that prohibiting Americans from using Tornado Cash is a restraint of free speech, pointing to court rulings in the 1990s that established that computer language is a form of language, and that software programs are a form of speech. They also suggest that the Treasury Department has the authority to sanction only humans and not software.

We think that the most useful way to understand the speech issues involved with regulating Tornado Cash and other decentralized autonomous organizations (DAOs) is through an analogy: the golem. There are many versions of the Jewish golem legend, but in most of them, a person-like clay statue comes to life after someone writes the word “truth” in Hebrew on its forehead, and eventually starts doing terrible things. The golem stops only when a rabbi erases one of those letters, turning “truth” into the Hebrew word for “death,” and the golem ceases to function.

The analogy between DAOs and golems is quite precise, and has important consequences for the relationship between free speech and code. Ultimately, just as the golem needed the intervention of a rabbi to stop wreaking havoc on the world, so too do DAOs need to be subject to regulation.

The equivalency of code and free speech was established during the first “crypto wars” of the 1990s, which were about cryptography, not cryptocurrencies. US agencies tried to use export control laws to prevent sophisticated cryptography software from being exported outside the US. Activists and lawyers cleverly showed how code could be transformed into speech and vice versa, turning the source code for a cryptographic product into a printed book and daring US authorities to prevent its export. In 1996, US District Judge Marilyn Hall Patel ruled that computer code is a language, just like German or French, and that coded programs deserve First Amendment protection. That such code is also functional, instructing a computer to do something, was irrelevant to its expressive capabilities, according to Patel’s ruling. However, both a concurring and dissenting opinion argued that computer code also has the “functional purpose of controlling computers and, in that regard, does not command protection under the First Amendment.”

This disagreement highlights the awkward distinction between ordinary language and computer code. Language does not change the world, except insofar as it persuades, informs, or compels other people. Code, however, is a language where words have inherent power. Type the appropriate instructions and the computer will implement them without hesitation, second-guessing, or independence of will. They are like the words inscribed on a golem’s forehead (or the written instructions that, in some versions of the folklore, are placed in its mouth). The golem has no choice, because it is incapable of making choices. The words are code, and the golem is no different from a computer.

Unlike ordinary organizations, DAOs don’t rely on human beings to carry out many of their core functions. Instead, those functions have been translated into a set of instructions that are implemented in software. In the case of Tornado Cash, its code exists as part of Ethereum, a widely used cryptocurrency that can also run arbitrary computer code.

Cryptocurrency zealots thought that DAOs would allow them to place their trust in secure computer code, which would do exactly what they wanted it to do, rather than fallible human beings who might fail or cheat. Humans could still have input, but under rules that were enshrined in self-running software. The past several years of DAO activity have taught these zealots a series of painful and expensive lessons on the limits of both computer security and incomplete contracts: Software has bugs, and contracts may do weird things under unanticipated circumstances. The combination frequently results in multimillion-dollar frauds and thefts.

Further complicating the matter is that individual DAOs can have very different rules. DAOs were supposed to create truly decentralized services that could never turn into a source of state power and coercion. Today, some DAOs talk a big game about decentralization, but provide power to founders and big investors like Andreessen Horowitz. Others are deliberately set up to frustrate outside control. Indeed, the creators of Tornado Cash explicitly wanted to create a golem-like entity that would be immune from law. In doing so, they were following in a long libertarian tradition.

In 2014, Gavin Wood, one of Ethereum’s core developers, gave a talk on what he called the “allegality” of decentralized software services. Wood’s argument was very simple. Companies like PayPal employ real people and real lawyers. That meant that “if they provide a service to you that is deemed wrong or illegal … then they get fucked … maybe [go] to prison.” But cryptocurrencies like Bitcoin “had no operator.” By using software running on blockchains rather than people to run your organization, you could do an end-run around normal, human law. You could create services that “cannot be shut down. Not by a court, not by a police force, not by a nation state.” People would be able to set whatever rules they wanted, regardless of what any government prohibited.

Wood’s speech helped inspire the first DAO (The DAO), and his ideas live on in Tornado Cash. Tornado Cash was designed, in its founder’s words, “to be unstoppable.” The way the protocol is “designed, decentralized and autonomous …[,] there’s nobody in charge.” The people who ran Tornado Cash used a decentralized protocol running on the Ethereum computing platform, which is itself radically decentralized. But they used indelible ink. The protocol was deliberately instructed never to accept an update command.

Other elements of Tornado Cash—its website, and the GitHub repository where its source code was stored—have been taken down. But the protocol that actually mixes cryptocurrency is still available through the Ethereum network, even if it doesn’t have a user-friendly front end. Like a golem that has been set in motion, it will just keep on going, taking in, processing, and returning cryptocurrency according to its original instructions.

This gets us to the argument that the US government, by sanctioning a software program, is restraining free speech. Not only is it more complicated than that, but it’s complicated in ways that undercut this argument. OFAC’s actions aren’t aimed against free speech and the publication of source code, as its clarifications have made clear. Researchers are not prohibited from copying, posting, “discussing, teaching about, or including open-source code in written publications, such as textbooks.” GitHub could potentially still host the source code and the project. OFAC’s actions are aimed at preventing persons from using software applications that undercut one of the most basic functions of government: regulating activities that it deems endanger national security.

The question is whether the First Amendment covers golems. When your words are used not to persuade or argue, but to animate a mindless entity that will exist as long as the Ethereum blockchain exists and will carry out your final instructions no matter what, should your golem be immune from legal action?

When Patel issued her famous ruling, she caustically dismissed the argument that “even one drop of ‘direct functionality’” overwhelmed people’s expressive rights. Arguably, the question with Tornado Cash is whether a possibly notional droplet of free speech expressivity can overwhelm the direct functionality of running code, especially code designed to refuse any further human intervention. The Tornado Cash protocol will accept and implement the routine commands described by its protocol: It will still launder cryptocurrency. But the protocol itself is frozen.

We certainly don’t think that the US government should ban DAOs or code running on Ethereum or other blockchains, or demand any universal right of access to their workings. That would be just as sweeping—and wrong—as the general claim that encrypted messaging results in a “lawless space,” or the contrary notion that regulating code is always a prior restraint on free speech. There is wide scope for legitimate disagreement about government regulation of code and its legal authorities over distributed systems.

However, it’s hard not to sympathize with OFAC’s desire to push back against a radical effort to undermine the very idea of government authority. What would happen if the Tornado Cash approach to the law prevailed? That is, what would be the outcome if judges and politicians decided that entities like Tornado Cash could not be regulated, on free speech or any other grounds?

Likely, anyone who wanted to facilitate illegal activities would have a strong incentive to turn their operation into a DAO—and then throw away the key. Ethereum’s programming language is Turing-complete. That means, as Wood argued back in 2014, that one could turn all kinds of organizational rules into software, whether or not they were against the law.

In practice, it wouldn’t be so easy. Turning business principles into running code is hard, and doing it without creating bugs or loopholes is much harder still. Ethereum and other blockchains still have hard limits on computing power. But human ingenuity can accomplish many things when there’s a lot of money at stake.

People have legitimate reasons for seeking anonymity in their financial transactions, but these reasons need to be weighed against other harms to society. As privacy advocate Cory Doctorow wrote recently: “When you combine anonymity with finance—not the right to speak anonymously, but the right to run an investment fund anonymously—you’re rolling out the red carpet for serial scammers, who can run a scam, get caught, change names, and run it again, incorporating the lessons they learned.”

It’s a mistake to defend DAOs on the grounds that code is free speech. Some code is speech, but not all code is speech. And code can also directly affect the world. DAOs, which are in essence autonomous golems, made from code rather than clay, make this distinction especially stark.

This will become even more important as robots become more capable and prevalent. Robots are even more obviously golems than DAOs are, performing actions in the physical world. Should their code enjoy a safe harbor from the law? What if robots, like DAOs, are designed to obey only their initial instructions, however unlawful—and refuse all further updates or commands? Assuming that code is free speech and only free speech, and ignoring its functional purpose, will at best tangle the law up in knots.

Tying free speech arguments to the cause of DAOs like Tornado Cash imperils some of the important free speech victories that were won in the past. But the risks for everyone might be even greater if that argument wins. A world where democratic governments are unable to enforce their laws is not a world where civic spaces or civil liberties will thrive.

This essay was written with Henry Farrell, and previously appeared on Lawfare.com.

EDITED TO ADD (10/26): Peter Van Valkenburgh wrote a rebuttal to our essay. My co-author responds. And Evan Geer, who started this whole conversation, responds to Henry.

Posted on October 14, 2022 at 9:08 AM

On the Dangers of Cryptocurrencies and the Uselessness of Blockchain

Earlier this month, I and others wrote a letter to Congress, basically saying that cryptocurrencies are a complete and total disaster, and urging them to regulate the space. Nothing in that letter is out of the ordinary, and is in line with what I wrote about blockchain in 2019. In response, Matthew Green has written—not really a rebuttal—but “a general response to some of the more common spurious objections…people make to public blockchain systems.” In it, he makes several broad points:

  1. Yes, current proof-of-work blockchains like bitcoin are terrible for the environment. But there are other modes like proof-of-stake that are not.
  2. Yes, a blockchain is an immutable ledger making it impossible to undo specific transactions. But that doesn’t mean there can’t be some governance system on top of the blockchain that enables reversals.
  3. Yes, bitcoin doesn’t scale and the fees are too high. But that’s nothing inherent in blockchain technology—that’s just a bunch of bad design choices bitcoin made.
  4. Blockchain systems can have a little or a lot of privacy, depending on how they are designed and implemented.

There’s nothing on that list that I disagree with. (We can argue about whether proof-of-stake is actually an improvement. I am skeptical of systems that enshrine a “they who have the gold make the rules” system of governance. And to the extent any of those scaling solutions work, they undo the decentralization blockchain claims to have.) But I also think that these defenses largely miss the point. To me, the problem isn’t that blockchain systems can be made slightly less awful than they are today. The problem is that they don’t do anything their proponents claim they do. In some very important ways, they’re not secure. They don’t replace trust with code; in fact, in many ways they are far less trustworthy than non-blockchain systems. They’re not decentralized, and their inevitable centralization is harmful because it’s largely emergent and ill-defined. They still have trusted intermediaries, often with more power and less oversight than non-blockchain systems. They still require governance. They still require regulation. (These things are what I wrote about here.) The problem with blockchain is that it’s not an improvement to any system—and often makes things worse.

In our letter, we write: “By its very design, blockchain technology is poorly suited for just about every purpose currently touted as a present or potential source of public benefit. From its inception, this technology has been a solution in search of a problem and has now latched onto concepts such as financial inclusion and data transparency to justify its existence, despite far better solutions to these issues already in use. Despite more than thirteen years of development, it has severe limitations and design flaws that preclude almost all applications that deal with public customer data and regulated financial transactions and are not an improvement on existing non-blockchain solutions.”

Green responds: “‘Public blockchain’ technology enables many stupid things: today’s cryptocurrency schemes can be venal, corrupt, overpromised. But the core technology is absolutely not useless. In fact, I think there are some pretty exciting things happening in the field, even if most of them are further away from reality than their boosters would admit.” I have yet to see one. More specifically, I can’t find a blockchain application whose value has anything to do with the blockchain part, that wouldn’t be made safer, more secure, more reliable, and just plain better by removing the blockchain part. I postulate that no one has ever said “Here is a problem that I have. Oh look, blockchain is a good solution.” In every case, the order has been: “I have a blockchain. Oh look, there is a problem I can apply it to.” And in no cases does it actually help.

Someone, please show me an application where blockchain is essential. That is, a problem that could not have been solved without blockchain that can now be solved with it. (And “ransomware couldn’t exist because criminals are blocked from using the conventional financial networks, and cash payments aren’t feasible” does not count.)

For example, Green complains that “credit card merchant fees are similar, or have actually risen in the United States since the 1990s.” This is true, but has little to do with technological inefficiencies or existing trust relationships in the industry. It’s because pretty much everyone who can and is paying attention gets 1% back on their purchases: in cash, frequent flier miles, or other affinity points. Green is right about how unfair this is. It’s a regressive subsidy, “since these fees are baked into the cost of most retail goods and thus fall heavily on the working poor (who pay them even if they use cash).” But that has nothing to do with the lack of blockchain, and solving it isn’t helped by adding a blockchain. It’s a regulatory problem; with a few exceptions, credit card companies have successfully pressured merchants into charging the same prices, whether someone pays in cash or with a credit card. Peer-to-peer payment systems like PayPal, Venmo, MPesa, and AliPay all get around those high transaction fees, and none of them use blockchain.

This is my basic argument: blockchain does nothing to solve any existing problem with financial (or other) systems. Those problems are inherently economic and political, and have nothing to do with technology. And, more importantly, technology can’t solve economic and political problems. Which is good, because adding blockchain causes a whole slew of new problems and makes all of these systems much, much worse.

Green writes: “I have no problem with the idea of legislators (intelligently) passing laws to regulate cryptocurrency. Indeed, given the level of insanity and the number of outright scams that are happening in this area, it’s pretty obvious that our current regulatory framework is not up to the task.” But when you remove the insanity and the scams, what’s left?

EDITED TO ADD: Nicholas Weaver is also adamant about this. David Rosenthal is good, too.

EDITED TO ADD (7/8/2022): This post has been translated into German.

EDITED TO ADD (4/10/2023): This post has been translated into Italian.

Posted on June 24, 2022 at 6:13 AM

Corporate Involvement in International Cybersecurity Treaties

The Paris Call for Trust and Security in Cyberspace is an initiative launched by French President Emmanuel Macron during UNESCO’s 2018 Internet Governance Forum. It’s an attempt by the world’s governments to come together and create a set of international norms and standards for a reliable, trustworthy, safe, and secure Internet. It’s not an international treaty, but it does impose obligations on the signatories. It’s a major milestone for global Internet security and safety.

Corporate interests are all over this initiative, sponsoring and managing different parts of the process. As part of the Call, the French company Cigref and the Russian company Kaspersky chaired a working group on cybersecurity processes, along with French research center GEODE. Another working group on international norms was chaired by US company Microsoft and Finnish company F-Secure, along with a University of Florence research center. A third working group’s participant list includes more corporations than any other group.

As a result, this process has become very different from previous international negotiations. Instead of governments coming together to create standards, it is being driven by the very corporations that the new international regulatory climate is supposed to govern. This is wrong.

The companies making the tools and equipment being regulated shouldn’t be the ones negotiating the international regulatory climate, and their executives shouldn’t be named to key negotiation roles without appointment and confirmation. It’s an abdication of responsibility by the US government for something that is too important to be treated this cavalierly.

On the one hand, this is no surprise. The notions of trust and stability in cyberspace are about much more than international safety and security. They’re about market share and corporate profits. And corporations have long led policymakers in the fast-moving and highly technological battleground that is cyberspace.

The international Internet has always relied on what is known as a multistakeholder model, where those who show up and do the work can be more influential than those in charge of governments. The Internet Engineering Task Force, the group that agrees on the technical protocols that make the Internet work, is largely run by volunteer individuals. This worked best during the Internet’s era of benign neglect, where no one but the technologists cared. Today, it’s different. Corporate and government interests dominate, even if the individuals involved use the polite fiction of their own names and personal identities.

However, we are a far cry from decades past, when the Internet was something that governments didn’t understand and largely ignored. Today, the Internet is an essential infrastructure that underpins much of society, and its governance structure is something that nations care about deeply. Having for-profit tech companies run the Paris Call process on regulating tech is analogous to putting the defense contractors Northrop Grumman or Boeing in charge of the 1970s SALT nuclear agreements between the US and the Soviet Union.

This also isn’t the first time that US corporations have led what should be an international relations process regarding the Internet. Since he first gave a speech on the topic in 2017, Microsoft President Brad Smith has become almost synonymous with the term “Digital Geneva Convention.” It’s not just that corporations in the US and elsewhere are taking a lead on international diplomacy, they’re framing the debate down to the words and the concepts.

Why is this happening? Different countries have their own problems, but we can point to three that currently plague the US.

First and foremost, “cyber” still isn’t taken seriously by much of the government, specifically the State Department. It’s not real to the older military veterans, or to the even older politicians who confuse Facebook with TikTok and use the same password for everything. It’s not even a topic area for negotiations for the US Trade Representative. Nuclear disarmament is “real geopolitics,” while the Internet is still, even now, seen as vaguely magical, and something that can be “fixed” by having the nerds yank plugs out of a wall.

Second, the State Department was gutted during the Trump years. It lost many of the up-and-coming public servants who understood the way the world was changing. The work of previous diplomats to increase the visibility of the State Department’s cyber efforts was abandoned. There are few left on staff to do this work, and even fewer to decide if they’re any good. It’s hard to hire senior information security professionals in the best of circumstances; it’s why charlatans so easily flourish in the cybersecurity field. The built-up skill set of the people who poured their effort and time into this work during the Obama years is gone.

Third, there’s a power struggle at the heart of the US government involving cyber issues, between the White House, the Department of Homeland Security (represented by CISA), and the military (represented by US Cyber Command). Trying to create another cyber center of power within the State Department threatens those existing powers. It’s easier to leave it in the hands of private industry, which does not affect those government organizations’ budgets or turf.

We don’t want to go back to the era when only governments set technological standards. The governance model from the days of the telephone is another lesson in how not to do things. The International Telecommunication Union is an agency run out of the United Nations. It is moribund and ponderous precisely because it is run by national governments, with civil society and corporations largely alienated from the decision-making processes.

Today, the Internet is fundamental to global society. It’s part of everything. It affects national security and will be a theater in any future war. How individuals, corporations, and governments act in cyberspace is critical to our future. The Internet is critical infrastructure. It provides and controls access to healthcare, space, the military, water, energy, education, and nuclear weaponry. How it is regulated isn’t just something that will affect the future. It is the future.

Since the Paris Call was finalized in 2018, it has been signed by 81 countries—including the US in 2021—36 local governments and public authorities, 706 companies and private organizations, and 390 civil society groups. The Paris Call isn’t the first international agreement that puts companies on an equal signatory footing with governments. The Global Internet Forum to Combat Terrorism and the Christchurch Call to eliminate extremist content online do the same thing. But the Paris Call is different. It’s bigger. It’s more important. It’s something that should be the purview of governments and not a vehicle for corporate power and profit.

When something as important as the Paris Call comes along again, perhaps in UN negotiations for a cybercrime treaty, we call for actual State Department officials with technical expertise to be sitting at the table with the interests of the entire US in their pocket…not people with equity shares to protect.

This essay was written with Tarah Wheeler, and previously published on The Cipher Brief.

Posted on May 6, 2022 at 6:01 AM

Why Vaccine Cards Are So Easily Forged

My proof of COVID-19 vaccination is recorded on an easy-to-forge paper card. With little trouble, I could print a blank form, fill it out, and snap a photo. Small imperfections wouldn’t pose any problem; you can’t see whether the paper’s weight is right in a digital image. When I fly internationally, I have to show a negative COVID-19 test result. That, too, would be easy to fake. I could change the date on an old test, or put my name on someone else’s test, or even just make something up on my computer. After all, there’s no standard format for test results; airlines accept anything that looks plausible.

After a career spent in cybersecurity, this is just how my mind works: I find vulnerabilities in everything I see. When it comes to the measures intended to keep us safe from COVID-19, I don’t even have to look very hard. But I’m not alarmed. The fact that these measures are flawed is precisely why they’re going to be so helpful in getting us past the pandemic.

Back in 2003, at the height of our collective terrorism panic, I coined the term security theater to describe measures that look like they’re doing something but aren’t. We did a lot of security theater back then: ID checks to get into buildings, even though terrorists have IDs; random bag searches in subway stations, forcing terrorists to walk to the next station; airport bans on containers with more than 3.4 ounces of liquid, which can be recombined into larger bottles on the other side of security. At first glance, asking people for photos of easily forged pieces of paper or printouts of readily faked test results might look like the same sort of security theater. There’s an important difference, though, between the most effective strategies for preventing terrorism and those for preventing COVID-19 transmission.

Security measures fail in one of two ways: Either they can’t stop a bad actor from doing a bad thing, or they block an innocent person from doing an innocuous thing. Sometimes one is more important than the other. When it comes to attacks that have catastrophic effects—say, launching nuclear missiles—we want the security to stop all bad actors, even at the expense of usability. But when we’re talking about milder attacks, the balance is less obvious. Sure, banks want credit cards to be impervious to fraud, but if the security measures also regularly prevent us from using our own credit cards, we would rebel and banks would lose money. So banks often put ease of use ahead of security.

That’s how we should think about COVID-19 vaccine cards and test documentation. We’re not looking for perfection. If most everyone follows the rules and doesn’t cheat, we win. Making these systems easy to use is the priority. The alternative just isn’t worth it.

I design computer security systems for a living. Given the challenge, I could design a system of vaccine and test verification that makes cheating very hard. I could issue cards that are as unforgeable as passports, or create phone apps that are linked to highly secure centralized databases. I could build a massive surveillance apparatus and enforce the sorts of strict containment measures used in China’s zero-COVID-19 policy. But the costs—in money, in liberty, in privacy—are too high. We can get most of the benefits with some pieces of paper and broad, but not universal, compliance with the rules.

It also helps that many of the people who break the rules are so very bad at it. Every story of someone getting arrested for faking a vaccine card, or selling a fake, makes it less likely that the next person will cheat. Every traveler arrested for faking a COVID-19 test does the same thing. When a famous athlete such as Novak Djokovic gets caught lying about his past COVID-19 diagnosis when trying to enter Australia, others conclude that they shouldn’t try lying themselves.

Our goal should be to impose the best policies that we can, given the trade-offs. The small number of cheaters isn’t going to be a public-health problem. I don’t even care if they feel smug about cheating the system. The system is resilient; it can withstand some cheating.

Last month, I visited New York City, where restrictions that are now being lifted were then still in effect. Every restaurant and cocktail bar I went to verified the photo of my vaccine card that I keep on my phone, and at least pretended to compare the name on that card with the one on my photo ID. I felt a lot safer in those restaurants because of that security theater, even if a few of my fellow patrons cheated.

This essay previously appeared in the Atlantic.

Posted on March 18, 2022 at 6:12 AM

Vulnerabilities in Weapons Systems

“If you think any of these systems are going to work as expected in wartime, you’re fooling yourself.”

That was Bruce’s response at a conference hosted by US Transportation Command in 2017, after learning that their computerized logistical systems were mostly unclassified and on the Internet. That may be necessary to keep in touch with civilian companies like FedEx in peacetime or when fighting terrorists or insurgents. But in a new era facing off with China or Russia, it is dangerously complacent.

Any twenty-first century war will include cyber operations. Weapons and support systems will be successfully attacked. Rifles and pistols won’t work properly. Drones will be hijacked midair. Boats won’t sail, or will be misdirected. Hospitals won’t function. Equipment and supplies will arrive late or not at all.

Our military systems are vulnerable. We need to face that reality by halting the purchase of insecure weapons and support systems and by incorporating the realities of offensive cyberattacks into our military planning.

Over the past decade, militaries have established cyber commands and developed cyberwar doctrine. However, much of the current discussion is about offense. Increasing our offensive capabilities without being able to secure them is like having all the best guns in the world, and then storing them in an unlocked, unguarded armory. They just won’t be stolen; they’ll be subverted.

During that same period, we’ve seen increasingly brazen cyberattacks by everyone from criminals to governments. Everything is now a computer, and those computers are vulnerable. Cars, medical devices, power plants, and fuel pipelines have all been targets. Military computers, whether they’re embedded inside weapons systems or on desktops managing the logistics of those weapons systems, are similarly vulnerable. We could see effects as stodgy as making a tank impossible to start up, or as sophisticated as retargeting a missile midair.

Military software is unlikely to be any more secure than commercial software. Although sensitive military systems rely on domestically manufactured chips as part of the Trusted Foundry program, many military systems contain the same foreign chips and code that commercial systems do: just like everyone around the world uses the same mobile phones, networking equipment, and computer operating systems. For example, there has been serious concern over Chinese-made 5G networking equipment that might be used by China to install “backdoors” that would allow the equipment to be controlled. This is just one of many risks to our normal civilian computer supply chains. And since military software is vulnerable to the same cyberattacks as commercial software, military supply chains have many of the same risks.

This is not speculative. A 2018 GAO report expressed concern regarding the lack of secure and patchable US weapons systems. The report observed that “in operational testing, the [Department of Defense] routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic.” It’s a similar attitude to corporate executives who believe that they can’t be hacked—and equally naive.

An updated GAO report from earlier this year found some improvements, but the basic problem remained: “DOD is still learning how to contract for cybersecurity in weapon systems, and selected programs we reviewed have struggled to incorporate systems’ cybersecurity requirements into contracts.” While DOD now appears aware of the lack of cybersecurity requirements, it is still not sure how to fix it, and in three of the five cases GAO reviewed, DOD simply chose not to include the requirements at all.

Militaries around the world are now exploiting these vulnerabilities in weapons systems to carry out operations. When Israel in 2007 bombed a Syrian nuclear reactor, the raid was preceded by what is believed to have been a cyber attack on Syrian air defenses that resulted in radar screens showing no threat as bombers zoomed overhead. In 2018, a 29-country NATO exercise, Trident Juncture, that included cyberweapons was disrupted by Russian GPS jamming. NATO does try to test cyberweapons outside such exercises, but has limited scope in doing so. In May, Jens Stoltenberg, the NATO secretary-general, said that “NATO computer systems are facing almost daily cyberattacks.”

The war of the future will not only be about explosions, but will also be about disabling the systems that make armies run. It’s not (solely) that bases will get blown up; it’s that some bases will lose power, data, and communications. It’s not that self-driving trucks will suddenly go mad and begin rolling over friendly soldiers; it’s that they’ll casually roll off roads or into water where they sit, rusting, and in need of repair. It’s not that targeting systems on guns will be retargeted to 1600 Pennsylvania Avenue; it’s that many of them could simply turn off and not turn back on again.

So, how do we prepare for this next war? First, militaries need to introduce a little anarchy into their planning. Let’s have wargames where essential systems malfunction or are subverted, not all of the time but randomly. To help combat siloed military thinking, include some civilians as well. Allow their ideas into the room when predicting potential enemy action. And militaries need to have well-developed backup plans, for when systems are subverted. In Joe Haldeman’s 1975 science-fiction novel The Forever War, he postulated a “stasis field” that forced his space marines to rely on nothing more than Roman military technologies, like javelins. We should be thinking in the same direction.

NATO does not yet allow civilians who aren’t employed by NATO or associated military contractors access to the training cyber ranges where vulnerabilities could be discovered and remediated before battlefield deployment. Last year, one of us (Tarah) was listening to a NATO briefing after the end of the 2020 Cyber Coalition exercises and asked how she and other information security researchers could volunteer to test the cyber ranges used to train its cyber incident response force. She was told that including civilians would be a “welcome thought experiment in the tabletop exercises,” but that including them in reality wasn’t considered. There is a rich opportunity here: outside researchers could provide transparency into where improvements need to be made.

Second, it’s time to take cybersecurity seriously in military procurement, from weapons systems to logistics and communications contracts. In the three-year span from the original 2018 GAO report to this year’s report, cybersecurity audit compliance went from 0% to 40% (the two of the five programs mentioned earlier that did include cybersecurity requirements). We need to get much better. DOD requires that its contractors and suppliers follow the Cybersecurity Maturity Model Certification process; it should abide by the same standards. Making those standards both more rigorous and mandatory would be an obvious next step.

Gone are the days when we can pretend that our technologies will work in the face of a military cyberattack. Securing our systems will make everything we buy more expensive—maybe a lot more expensive. But the alternative is no longer viable.

The future of war is cyberwar. If your weapons and systems aren’t secure, don’t even bother bringing them onto the battlefield.

This essay was written with Tarah Wheeler, and previously appeared in Brookings TechStream.

Posted on June 8, 2021 at 5:32 AM

The Misaligned Incentives for Cloud Security

Russia’s Sunburst cyberespionage campaign, discovered late last year, impacted more than 100 large companies and US federal agencies, including the Treasury, Energy, Justice, and Homeland Security departments. A crucial part of the Russians’ success was their ability to move through these organizations by compromising cloud and local network identity systems to then access cloud accounts and pilfer emails and files.

Hackers said by the US government to have been working for the Kremlin targeted a widely used Microsoft cloud service that synchronizes user identities. The hackers stole security certificates to create their own identities, which allowed them to bypass safeguards such as multifactor authentication and gain access to Office 365 accounts, impacting thousands of users at the affected companies and government agencies.
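The mechanics here are worth a brief pause, because they explain why multifactor authentication didn’t help: a relying service trusts any identity assertion that carries a valid signature from the identity provider’s token-signing key, and it has no independent way to check how, or whether, the user actually authenticated. The sketch below is purely illustrative; it uses the generic Python cryptography library and made-up names, not anything from the actual incident, but it shows why possession of the signing key alone is enough.

```python
# Illustrative sketch only: why a stolen token-signing key defeats downstream
# safeguards. The relying service checks that an identity assertion carries a
# valid signature from the trusted key -- it cannot tell whether multifactor
# authentication ever took place.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the identity provider's token-signing key pair. In the real
# incident the private key was stolen; here we simply generate one.
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
trusted_public_key = signing_key.public_key()  # what the cloud service trusts


def mint_assertion(key, subject: str) -> tuple[bytes, bytes]:
    """Anyone holding the signing key can mint an assertion for any subject."""
    claims = json.dumps({"sub": subject, "mfa": "satisfied"}).encode()
    signature = key.sign(claims, padding.PKCS1v15(), hashes.SHA256())
    return claims, signature


def service_accepts(claims: bytes, signature: bytes) -> bool:
    """The relying service verifies the signature -- and nothing else."""
    try:
        trusted_public_key.verify(signature, claims, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


claims, sig = mint_assertion(signing_key, "admin@example.com")
print(service_accepts(claims, sig))  # True: the forged assertion is indistinguishable
```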

It wasn’t the first time cloud services were the focus of a cyberattack, and it certainly won’t be the last. Cloud weaknesses were also critical in a 2019 breach at Capital One. There, an Amazon Web Services cloud vulnerability, compounded by Capital One’s own struggle to properly configure a complex cloud service, led to the disclosure of tens of millions of customer records, including credit card applications, Social Security numbers, and bank account information.

This trend of attacks on cloud services by criminals, hackers, and nation states is growing as cloud computing takes over worldwide as the default model for information technologies. Leaked data is bad enough, but disruption to the cloud, even an outage at a single provider, could quickly cost the global economy billions of dollars a day.

Cloud computing is an important source of risk both because it has quickly supplanted traditional IT and because it concentrates ownership of design choices at a very small number of companies. First, cloud is increasingly the default mode of computing for organizations, meaning ever more users and critical data from national intelligence and defense agencies ride on these technologies. Second, cloud computing services, especially those supplied by the world’s four largest providers—Amazon, Microsoft, Alibaba, and Google—concentrate key security and technology design choices inside a small number of organizations. The consequences of bad decisions or poorly made trade-offs can quickly scale to hundreds of millions of users.

The cloud is everywhere. Some cloud companies provide software as a service, support your Netflix habit, or carry your Slack chats. Others provide computing infrastructure like business databases and storage space. The largest cloud companies provide both.

The cloud can be deployed in several different ways, each of which shifts the balance of responsibility for the security of this technology. But the cloud provider plays an important role in every case. Choices the provider makes in how these technologies are designed, built, and deployed influence the user’s security—yet the user has very little influence over them. And if Google or Amazon has a vulnerability in its servers—which you are unlikely to know about and have no control over—you suffer the consequences.

The problem is one of economics. On the surface, it might seem that competition between cloud companies gives them an incentive to invest in their users’ security. But several market failures get in the way of that ideal. First, security is largely an externality for these cloud companies, because the losses due to data breaches are largely borne by their users. As long as a cloud provider isn’t losing customers in droves—which generally doesn’t happen after a security incident—it is incentivized to underinvest in security. Additionally, data shows that investors don’t punish the cloud service companies either: Stock price dips after a public security breach are both small and temporary.

Second, public information about cloud security generally doesn’t share the design trade-offs involved in building these cloud services or provide much transparency about the resulting risks. While cloud companies have to publicly disclose copious amounts of security design and operational information, it can be impossible for consumers to understand which threats the cloud services are taking into account, and how. This lack of understanding makes it hard to assess a cloud service’s overall security. As a result, customers and users aren’t able to differentiate between secure and insecure services, so they don’t base their buying and use decisions on it.

Third, cybersecurity is complex—and even more complex when the cloud is involved. For a customer like a company or government agency, the security dependencies of various cloud and on-premises network systems and services can be subtle and hard to map out. This means that users can’t adequately assess the security of cloud services or how they will interact with their own networks. This is a classic “lemons market” in economics, and the result is that cloud providers offer variable levels of security, as documented by Dan Geer, the chief information security officer for In-Q-Tel, and Wade Baker, a professor at Virginia Tech’s College of Business, when they looked at the prevalence of severe security findings at the top 10 largest cloud providers. Yet most consumers are none the wiser.

The result is a market failure where cloud service providers don’t compete to provide the best security for their customers and users at the lowest cost. Instead, cloud companies take the chance that they won’t get hacked, and past experience tells them they can weather the storm if they do. This kind of decision-making and priority-setting takes place at the executive level, of course, and doesn’t reflect the dedication and technical skill of product engineers and security specialists. The effect of this underinvestment is pernicious, however, by piling on risk that’s largely hidden from users. Widespread adoption of cloud computing carries that risk to an organization’s network, to its customers and users, and, in turn, to the wider internet.

This aggregation of cybersecurity risk creates a national security challenge. Policymakers can help address the challenge by setting clear expectations for the security of cloud services—and for making decisions and design trade-offs about that security transparent. The Biden administration, including newly nominated National Cyber Director Chris Inglis, should lead an interagency effort to work with cloud providers to review their threat models and evaluate the security architecture of their various offerings. This effort to require greater transparency from cloud providers and exert more scrutiny of their security engineering efforts should be accompanied by a push to modernize cybersecurity regulations for the cloud era.

The Federal Risk and Authorization Management Program (FedRAMP), which is the principal US government program for assessing the risk of cloud services and authorizing them for use by government agencies, would be a prime vehicle for these efforts. A recent executive order outlines several steps to make FedRAMP faster and more responsive. But the program is still focused largely on the security of individual services rather than the cloud vendors’ deeper architectural choices and threat models. Congressional action should reinforce and extend the executive order by adding new obligations for vendors to provide transparency about design trade-offs, threat models, and resulting risks. These changes could help transform FedRAMP into a more effective tool of security governance even as it becomes faster and more efficient.

Cloud providers have become important national infrastructure. Not since the heights of the mainframe era between the 1960s and early 1980s has the world witnessed computing systems of such complexity used by so many but designed and created by so few. The security of this infrastructure demands greater transparency and public accountability—if only to match the consequences of its failure.

This essay was written with Trey Herr, and previously appeared in Foreign Policy.

Posted on May 28, 2021 at 6:20 AM

AIs and Fake Comments

This month, the New York state attorney general issued a report on a scheme by “U.S. Companies and Partisans [to] Hack Democracy.” This wasn’t another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of US democracy—the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.

This attack was detected because it was relatively crude. But artificial intelligence technologies are making it possible to generate genuine-seeming comments at scale, drowning out the voices of real citizens in a tidal wave of fake ones.

As political scientists like Paul Pierson have pointed out, what happens between elections is important to democracy. Politicians shape policies and they make laws. And citizens can approve or condemn what politicians are doing, through contacting their representatives or commenting on proposed rules.

That’s what should happen. But as the New York report shows, it often doesn’t. The big telecommunications companies paid millions of dollars to specialist “AstroTurf” companies to generate public comments. These companies then stole people’s names and email addresses from old files and from hacked data dumps and attached them to 8.5 million public comments and half a million letters to members of Congress. All of them said that they supported the corporations’ position on something called “net neutrality,” the idea that telecommunications companies must treat all Internet content equally and not prioritize any company or service. Three AstroTurf companies—Fluent, Opt-Intelligence, and React2Media—agreed to pay nearly $4 million in fines.

The fakes were crude. Many of them were identical, while others were patchworks of simple textual variations: substituting “Federal Communications Commission” and “FCC” for each other, for example.

Next time, though, we won’t be so lucky. New technologies are about to make it far easier to generate enormous numbers of convincing personalized comments and letters, each with its own word choices, expressive style and pithy examples. The people who create fake grass-roots organizations have always been enthusiastic early adopters of technology, weaponizing letters, faxes, emails and Web comments to manufacture the appearance of public support or public outrage.

Take Generative Pre-trained Transformer 3, or GPT-3, an AI model created by OpenAI, a San Francisco-based start-up. With minimal prompting, GPT-3 can generate convincing-seeming newspaper articles, résumé cover letters, even Harry Potter fan fiction in the style of Ernest Hemingway. It is trivially easy to use these techniques to compose large numbers of public comments or letters to lawmakers.
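The barrier to entry really is that low. As a purely illustrative sketch (it uses the freely downloadable GPT-2 model via the Hugging Face transformers library rather than GPT-3 itself, and the prompt and settings are made up), a few lines of Python turn one prompt into several fluent, differently worded continuations:

```python
# Illustrative only: how little effort fluent text generation takes, using the
# openly available GPT-2 model from the Hugging Face transformers library.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the run repeatable

prompt = "I am writing to comment on the proposed rule because"
# One call returns several continuations, each fluent and worded differently.
outputs = generator(prompt, max_length=60, num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
    print("---")
```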

OpenAI restricts access to GPT-3, but in a recent experiment, researchers used a different text-generation program to submit 1,000 comments in response to a government request for public input on a Medicaid issue. They all sounded unique, like real people advocating a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. The researchers subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. Others won’t be so ethical.

When the floodgates open, democratic speech is in danger of drowning beneath a tide of fake letters and comments, tweets and Facebook posts. The danger isn’t just that fake support can be generated for unpopular positions, as happened with net neutrality. It is that public commentary will be completely discredited. This would be bad news for specialist AstroTurf companies, which would have no business model if there isn’t a public that they can pretend to be representing. But it would empower still further other kinds of lobbyists, who at least can prove that they are who they say they are.

We may have a brief window to shore up the flood walls. The most effective response would be to regulate what UCLA sociologist Edward Walker has described as the “grassroots for hire” industry. Organizations that deliberately fabricate citizen voices shouldn’t just be subject to civil fines, but to criminal penalties. Businesses that hire these organizations should be held liable for failures of oversight. It’s impossible to prove or disprove whether telecommunications companies knew their subcontractors would create bogus citizen voices, but a liability standard would at least give such companies an incentive to find out. This is likely to be politically difficult to put in place, though, since so many powerful actors benefit from the status quo.

This essay was written with Henry Farrell, and previously appeared in the Washington Post.

EDITED TO ADD: CSET published an excellent report on AI-generated partisan content. Short summary: it’s pretty good, and will continue to get better. Renée DiResta has also written about this.

This paper is about a lower-tech version of this threat. Also this.

EDITED TO ADD: Another essay on the same topic.

Posted on May 24, 2021 at 6:20 AM
