The Internet was going to set us all free. At least, that is what U.S. policy makers, pundits, and scholars believed in the 2000s. The Internet would undermine authoritarian rulers by reducing the government’s stranglehold on debate, helping oppressed people realize how much they all hated their government, and simply making it easier and cheaper to organize protests.
Today, we live in darker times. Authoritarians are using these same technologies to bolster their rule. Even worse, the Internet seems to be undermining democracy by allowing targeted disinformation, turning public debate into a petri dish for bots and propagandists, and spreading general despair. A new consensus is emerging that democracy is less a resilient political system than a free-fire zone in a broader information war.
This despairing, technologically determinist response is premature. Yes, the Arab Spring wasn’t the twilight of dictatorship, but today isn’t the twilight of democracy, either. Still, we agree that to the extent democracy has revealed systemic weaknesses, we should be working overtime to repair them.
To pursue this project of repair, we need a better understanding of democracy’s resiliency in the face of information attacks. Building that understanding is harder than it might seem. Our theories have mostly assumed that democracies are better off when there is less control over information. The central assumption, which owes much to John Stuart Mill and Louis Brandeis, is that the answer to bad speech is more and better speech.
We need new frameworks to understand the limits of this optimistic view. Changes in technology have made speech cheap, and the bad guys have figured out that more speech can be countered with even more bad speech. In this world, the easy flow of information can cause trouble for democracy.
To understand the informational weaknesses of democracy, we propose to start in what may be a surprising place: the parallel weaknesses of autocracies. Consider what is called the “Dictator’s Dilemma,” the tradeoff autocrats face between political stability and open information flows.
On the one hand, accurate and freely available information helps governments, including authoritarian ones, to run better: by alerting government officials to what is happening among their citizens, allowing markets to function properly, and identifying corrupt low-level officials who stymie policy and make citizens unhappy. A government without accurate information about its country and its people risks enacting unpopular and ineffective policies it might otherwise have avoided.
On the other hand, freely available information can also undermine an autocracy. It allows citizens to figure out how deeply unpopular a detested regime is, builds up their confidence in shared collective action, and makes it easier for regime opponents to turn that confidence into popular protests. A despot who allows information to flow freely risks losing power. Authoritarian regimes constantly need to think about how to protect their own power while maintaining some economic and political efficiency. They thus view the open Internet as threatening their stability, but they also worry about how restricting information can hurt economic growth and damage the ability of the higher levels of government to keep track of what the lower levels are doing.
Autocrats have addressed this dilemma with a variety of mitigating strategies that strike tradeoffs between the risks and benefits of free information. Traditionally, authoritarian governments have tried to restrict access to most information and limit speech through a variety of censorship mechanisms. More recently, Margaret Roberts has explained how the Chinese government uses “flooding” techniques to maintain domestic stability. Instead of just censoring people, they seed public debate with nonsense, disinformation, distractions, vexatious opinions and counter-arguments, making it harder for their opponents to mobilize against them.
What we need now is to understand the corresponding Democracy’s Dilemma. Democracies depend on the free flow of accurate information more fundamentally than autocracies do, not only for functioning markets and better public policy, but also to allow citizens to make informed voting decisions, provide policy input, and hold officials accountable. At the same time, information flows can be manipulated to undermine democracy by allowing the unchecked spread of propaganda and pseudo-facts, all made more efficient by the Internet, automation, and machine learning. This is Democracy’s Dilemma: the open forms of input and exchange that it relies on can be weaponized to inject falsehood and misinformation that erode democratic debate.
Understanding Democracy’s Dilemma will require that social scientists—who try to explain democratic legitimacy and why people accept election outcomes that aren’t in their short-term interests—think more carefully about the information and knowledge that democracy requires. It will require, too, information security specialists—who model information systems and their vulnerabilities—to think more carefully about how the more complex systems underpinning legitimacy and shared beliefs can create serious vulnerabilities. Focusing on Democracy’s Dilemma may help us to cut the problem down to size. We can begin to understand what the fundamental threats are, and how best to respond to them. Political science questions about what citizens need to know (or believe they know) for democracy to be stable must be translated into information security questions about the attack surface and threat models of democracy, and vice versa.
Over the past few decades, political scientists such as Adam Przeworski and Barry Weingast have explained democratic stability as a kind of institutional equilibrium: it depends on rules that citizens and politicians respect. In Przeworski’s understanding, the rules tend to channel uncertainty in stability-enhancing ways. Losers accept electoral outcomes rather than trying to overturn democracy because they know they have a chance of doing better later. In Weingast’s framework, rules coordinate shared expectations about how the political system works, which then allow people with diverse perspectives to engage peacefully and fruitfully with each other in civil society and commercial exchange.
Such models help us understand what can go wrong with democracy. Steven Levitsky and Daniel Ziblatt’s excellent best-selling book, How Democracies Die (2018), depicts the decline of U.S. democracy as resulting from the decay of norms that previously stabilized political competition. This analysis may be right up to a point, but it also leaves a lot out. Some instability is precisely what democracy is good for. As their critics point out, democracies often make progress by tearing up old norms—about gender and race, for example—that oppress groups or that just don’t make sense anymore. Norm erosion, per se, is not a bad thing.
More fundamentally, theories that focus on shared rules and norms don’t sufficiently capture the central place of disagreement in democracy. For sure, shared expectations are being eroded by disagreement—over whether voting is fair, whether the other party is committed to democracy, whether the institutional knowledge located in traditional authorities such as government, journalism, and science can be trusted. Yet some shared expectations—again, such as oppressive race and gender norms or simple deference to lazy centrist shibboleths—might richly deserve to be destabilized. Preserving democracy means preserving the space for democratic disagreement and the capacity to question, challenge, and demolish institutions that are no longer fit for purpose.
What we really need is what complex systems theorists would call a “dynamical” model of democracy, which would capture how democratic systems can remain stable in the face of deep disagreements and the changing needs of a complex environment.
Building such a model is hard, but one advantage of a well-working democracy over autocracy is that it can draw on the differences of perspective among its population, harnessing disagreement to solve problems. The open-endedness of democracy doesn’t just provide greater stability. It also can turn dispute into an engine of creativity. As political theorist Nancy Rosenblum argues, partisan competition has great virtues. Disagreement can lead to parties organizing over the key problems of society and how to solve them, winning or losing power according to their ability to convince voters. If one party claims that climate change is a problem and that the best way to solve it is through large-scale government action, the other may advocate market measures instead.
Such disagreement can lead to better policy making, as long as it is anchored in a roughly shared understanding of the common problems that confront society. Disagreement over how to tackle global warming will only be useful when people accept that global warming is a real problem. If voters are truly irrational, then efforts to appeal to voters may lead to misguided policy. Of course, the reality of politics is more complicated than this, but we have no hope of getting the politics right if we get the facts wrong.
This creative aspect of democratic disagreement can also be a source of institutional dynamism, provoking parties to relentlessly probe the deficiencies of government and to argue for institutional improvements. For example, most of the opposition to partisan gerrymandering is driven by the party or parties that are disadvantaged by gerrymandering. Another example is the civil rights movement, where people angry at the whole system of discriminatory rules and informal norms fought for substantial, if still grossly imperfect, institutional changes.
Yet this kind of competition can also be the source of bad policy. When one party does not want to acknowledge that climate change is a problem, because the plausible solutions would damage the interests of its backers, it may try (as U.S. Republicans are trying) to undermine the science by telling its supporters that scientists are lying, or by suppressing findings. It might even start dismantling the institutional architecture that supports such knowledge.
To take another example, the newly Republican-majority U.S. Congress got rid of its Office of Technology Assessment in 1995 in part because its scientists told inconvenient truths about Republican policies, including the feasibility of Ronald Reagan’s Strategic Defense Initiative. Parties struggling to compete with each other will always find it tempting to rig the game in their favor, making democratic competition a potential source of democratic instability.
Democratic policy success and stability thus involves a dynamic, rather than a static, equilibrium. But if partisan disputes within democracy become so polarized that one party pursues institutional changes that lock in its advantages, the equilibrium may break down.
Maintaining this dynamic equilibrium is at the crux of Democracy’s Dilemma: How do you maintain democratic rules and preserve deep disagreement? If democracy is to succeed in responding creatively to new problems, it needs to be able to draw upon the diverse beliefs of its citizens. Yet it also has to ensure that these differences don’t destabilize it.
This suggests that democracy can go wrong in two ways. First, if it suppresses disagreement or diversity among its citizens, either by censoring particular perspectives or by drowning them out by amplifying others, it starts to lose its advantage and drift towards autocracy. Illiberal democrats such as Viktor Orbán in Hungary are on such a route. Second, if it is so overwhelmed by the differences it contains that different factions or parties no longer believe in each other’s commitment to democracy, it will become liable to disruption, dysfunction, and disarray.
These risks aren’t new. Scholars have long realized that open forms of communication and exchange could be weaponized. What has changed is technology. Previously, people couldn’t game the system on any serious scale. But new technologies—especially centralized social media, automation, and now machine learning—have bridged that gap. Now individual propagandists can have outsized effects while disguising their own origins, and machine-generated opinionating will soon be able to overwhelm conversations online.
Such changes may make democracy less stable. It is plausible that there are self-equilibrating pressures within a well-functioning democracy that help to prevent it from sinking into chaos or autocracy. But it is also possible to have a self-reinforcing cycle of failure. A faltering illiberal democracy could give way to chaos and factionalism. A chaotic democracy will have weaker equilibrating forces and hence provide greater opportunities for an autocrat to take and cement power. A chaotic democracy is also more vulnerable to outside attack, especially by regimes wanting to exacerbate the chaos in order to serve their own strategic objectives.
This description of Democracy’s Dilemma helps us to identify what information security specialists would describe as the “attack surface” of democracy. Democracy is vulnerable to attacks that create positive feedback loops of self-reinforcing and damaging expectations. Such attacks succeed when they exacerbate already existing political and social divisions, transforming disagreements that might otherwise be an engine for valuable policy and institutional change into disruptive spirals of mutual distrust.
These spirals are most likely to develop where distrust—the sense that others might be able to rig the game—is mutually reinforcing. The most obvious target is elections, but there are other targets as well, such as legislative hearings and public comment processes—forms of public consultation that guide the policy making process. Other institutions, such as the U.S. census, shape politics by determining the allocation of congressional seats and shape policy by determining the facts on which spending decisions are based. All of these mechanisms are crucial to democracy, and all are vulnerable to disruption.
Consider a hypothetical example. If I believe that Donald Trump is Vladimir Putin’s catspaw, or that socialists are plotting to remove the president, I won’t simply accept it when my preferred candidates lose elections. Similarly, if I discover that my opponents aren’t committed to democracy and will cheat with a good chance of getting away with it, then I may be tempted to cheat myself. This could create a self-reinforcing dynamic of fear and distrust that might start to undermine general confidence in elections and lead to general refusal to comply with electoral results.
Such dynamics explain why Russian influence operations didn’t just try to exacerbate disagreements between conservatives and liberals, but also between Black Lives Matter demonstrators and their opponents and between Bernie Sanders supporters and Hillary Clinton supporters. The effort was to turn pre-existing tensions into outright distrust. These dynamics also suggest that Russian influence operations weren’t primarily aimed at getting Trump elected, but instead at ensuring maximum chaos in the event of a Hillary Clinton victory. Just before the election, “Guccifer 2.0,” a front for Russian military intelligence, was claiming that the election was rigged, thus helping build a case for angry Trump supporters to try to bring U.S. politics to a standstill.
These examples illustrate how international actors can weaponize information flows to target the domestic politics of democracies. Moreover, they can do so by repurposing the same tools of confusion and disarray that they have developed to shore up their own domestic security. The very Internet information operations that autocracies use to survive are used to destabilize democracies.
Domestic political actors, however, may also disrupt collective political knowledge over process, either deliberately (where they believe that they will benefit from weaker democratic institutions) or as a side product of short-term goals (where, for example, they disingenuously contest electoral results, with possible longer term implications for their supporters’ trust in the electoral process).
This means we can’t focus solely on external manipulation efforts such as those from Russia. Its 2016 campaigns are primarily important as specific evidence of a more general set of informational vulnerabilities, where domestic actors are likely to cause bigger and more immediate problems.
Consider, for example, three apparently unconnected recent controversies: efforts by groups supporting Democrat Doug Jones in the 2017 Alabama special Senate election to manipulate social media against conservatives; problems in the Federal Communications Commission (FCC) 2017 commenting process on net neutrality; and disagreements over the 2020 U.S. census. None of these examples involved direct foreign attacks on U.S. democracy—the kind of attacks that most of the news media focuses on. But they do illustrate different potential attack vectors against the common political knowledge that helps to stabilize democracy.
Take the Doug Jones case first. Although there is disagreement over circumstances, extent, aims, and lines of authority, it appears that groups supporting Jones—the Democratic candidate in a heated 2017 Alabama special Senate election—used manipulative techniques on Facebook to target conservatives. Specifically, they created a fake Facebook page aimed at dividing Republicans. This page amplified the reports that Roy Moore, the Republican candidate, had pursued teenage girls. It also suggested that he was supported on social media by Russian bots.
The effort appears, in part, to have been spurred by the desire to fight fire with fire: in this case, the belief that the Russians had manipulated social media to help Trump get elected led some Democrats to experiment with comparable tactics. If Republicans in turn consider escalating their own use of such techniques, the likely consequence of this spiral will be increased disagreement and contestation over the legitimacy of elections. This consequence is a by-product, rather than direct aim, of the actors involved. The news that Republican consultants are setting up fake local newspapers ahead of 2020 provides just one example suggesting that this escalation is underway.
Now consider efforts to game the FCC’s comment process in the 2017 fight over net neutrality. Net neutrality is highly popular with a mobilized and technically literate population of Internet users, but it is universally disliked by large telecommunications companies, since it restrains them from using their privileged position to extract rents. When the FCC, under new Trump-appointed chairman Ajit Pai, proposed abandoning net neutrality, it had to accept public comments. The result was a flood of legitimate comments from identifiable citizens, who were overwhelmingly in favor of keeping net neutrality, as well as a far larger flood of automatically generated comments—millions of which had erroneous or stolen email addresses—that supported Pai’s plans to abandon it. The resulting numbers apparently favored getting rid of net neutrality, but only because of a deliberate generalized attack on the FCC’s public commenting system. The attack undermined the legitimacy of the commenting process, robbing the supporters of net neutrality of what would otherwise have been an overwhelming demonstration of apparent public support. Ongoing investigations by the New York attorney general’s office and the FBI have targeted telecommunications trade groups, lobbyists, and advocacy organizations with subpoenas. The attack, and the prospect that it can be repeated, calls into question the integrity of all future public input processes.
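To make the mechanics of such a flood concrete, here is a minimal sketch, entirely our own illustration rather than any technique the FCC or investigators actually used, of how clustering near-identical text can expose form-letter comments that dominate raw tallies. The function names and sample comments are hypothetical:

```python
import re
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so near-identical
    form-letter comments collapse to the same key."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def flag_bulk_comments(comments, threshold=3):
    """Return the set of normalized texts appearing at least
    `threshold` times -- a crude signal of scripted submission."""
    counts = Counter(normalize(c) for c in comments)
    return {text for text, n in counts.items() if n >= threshold}

# Hypothetical docket: three trivially varied copies of one template.
comments = [
    "Please keep net neutrality.",
    "The FCC should repeal Title II regulation!",
    "The FCC should repeal Title II regulation.",
    "the fcc should repeal title ii regulation",
    "I support an open internet.",
]
flagged = flag_bulk_comments(comments)
```

A real analysis would need fuzzier matching (bots now vary their wording), but even this crude counting shows why raw comment totals are meaningless once submission is automated.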
Finally, the 2020 U.S. census process is emerging as another new information battleground. The census plays a key role both in determining the distribution of seats in the House of Representatives and in the allocation of spending in a wide variety of government programs. It is an example of common political knowledge: citizens should generally agree on its accuracy, but that agreement is now being undermined.
The Trump administration wants to include a new question about citizenship status, which would likely depress the willingness of noncitizens, especially those with complex visa situations, to respond to census officials. This response bias would lead to a substantial undercount of noncitizens. This particular problem is exacerbated by activists on social media who are circulating information aimed at convincing noncitizens and minorities not to fill out the census. They are doing so in order to “protect” those who could be targeted for deportation, of course, but at the same time they are undermining an important source of shared political knowledge in pursuit of particular political goals. These actions, combined with other problems of funding and mismanagement, may result in many believing that the census is inaccurate, further bolstering beliefs that the government, representational votes, and funding allocations are rigged.
These three examples illustrate the vulnerabilities of democracy to specific informational effects and techniques. They all disrupt common political knowledge. Defending against them requires a very different approach than the information war perspective—which emphasizes deterrence, counterattack, and active defense—that currently dominates public argument. It also means avoiding any easy equivalence between the informational problems of autocracies and the informational problems of democracies. It is cheaper for autocracies to resort to censorship and information control because they rely less on decentralized choice and distributed public disagreement as engines of innovation.
Instead, we need to think about how to build negative feedback loops that pull democracy back closer to its dynamic equilibrium. What do negative feedback loops look like? Let us go back to the three examples discussed above.
First, elections. The Doug Jones case spiraled so easily because Democrats felt the 2016 election had been unfairly manipulated. It is not just fake news: Russian hackers were known to have probed voter rolls in several states. While there was no evidence of vote tampering in the U.S. presidential election, the climate was one of distrust. Additionally, Russia attempted and failed to manipulate the public announcement process of the 2014 Ukraine election. If this had succeeded, it would have influenced public willingness to accept the result.
Security experts already have a well-established consensus on how to protect the core function of voting: voter-verifiable paper ballots, random post-election auditing, and better coordination between the federal government and state and local authorities could alleviate mistrust and lead to better policies. But stopping voting abuses—or even the perception of voting abuses—from spiraling out of control also requires more vigorous enforcement of the law. Election security is about much more than voting systems. First Amendment rights should not provide a general license for actively deceitful political communications strategies. Unchecked, these will cause a cycle of deceptive action, counteraction, and reaction to counteraction.
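The random post-election auditing mentioned above can be sketched in a few lines. This is a deliberately simplified toy, assuming hypothetical precinct data; real risk-limiting audits use statistical escalation rules rather than a fixed sample size:

```python
import random

def audit_sample(reported, paper, sample_size, seed=None):
    """Hand-count a random sample of precincts and compare each to the
    machine-reported tally; any mismatch is grounds to escalate.

    `reported` and `paper` map precinct id -> winner according to the
    machine count and the paper ballots respectively.
    """
    rng = random.Random(seed)
    sampled = rng.sample(sorted(reported), sample_size)
    return [p for p in sampled if reported[p] != paper[p]]

# Hypothetical election: 100 precincts, one tampered machine tally.
reported = {f"precinct-{i}": "A" for i in range(100)}
paper = dict(reported)
paper["precinct-7"] = "B"
mismatches = audit_sample(reported, paper, sample_size=100, seed=1)
```

The point of the design is that the paper ballots, not the machines, are the authoritative record; random sampling makes it expensive to tamper with tallies without a detectable discrepancy.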
Second, we need to recognize that the public policy commenting process is broken. Awareness of the problem defends against attacks aimed at misrepresenting public opinion, but not against attacks aimed at further destabilizing confidence in public commenting as a whole. The problems are only going to become worse as machine-learning systems are deployed to generate realistic comments and synthetic media. Soon representatives will not be able to tell whether they are hearing from an actual constituent or a propaganda-spewing bot. Some combination of commenter authentication with after-the-fact random sampling and auditing would likely mitigate the problem, albeit at the cost of weakening or preventing anonymous commenting. Michael Neblo, Kevin Esterling, and David Lazer have proposed exciting new forms of online town hall deliberation, though building these at scale will require serious security design.
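The after-the-fact random sampling and auditing we have in mind can also be sketched. This is our own toy illustration, with hypothetical numbers and function names, not any agency's actual procedure: verify a random sample of submissions and report the estimated genuine fraction with a confidence bound, rather than trying to verify every commenter:

```python
import math
import random

def estimate_genuine_fraction(submissions, verify, sample_size, seed=None):
    """Verify a random sample of submissions and return the estimated
    fraction that are genuine, plus a ~95% margin of error
    (normal approximation to the binomial)."""
    rng = random.Random(seed)
    sample = rng.sample(submissions, sample_size)
    genuine = sum(1 for s in sample if verify(s))
    p = genuine / sample_size
    margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Hypothetical docket: 10,000 submissions, 40% from verifiable
# constituents and 60% from a scripted flood.
submissions = list(range(10_000))
is_genuine = lambda sid: sid >= 6_000
p, margin = estimate_genuine_fraction(
    submissions, is_genuine, sample_size=500, seed=42)
```

Auditing a sample preserves the option of anonymous submission for most commenters while still letting officials say, with stated uncertainty, how much of a flooded docket reflects real constituents.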
Lastly, protecting institutions such as the census from manipulation will require significant institutional reforms. Steve Ballmer has suggested applying the same kinds of political protections to the census office as exist for the office of the comptroller general—such as a lengthy term of office and onerous appointment requirements. This would be a good start, but combating disinformation will also require greater transparency of process, so that the public understands both how the census is carried out and how decisions about the census are made. Additionally, the agency needs a budget sufficient both to carry out the census at an appropriate level of sophistication and to provide public education that encourages participation and builds broad trust in the results. More attention to the cybersecurity of the systems that collect and aggregate census information is also essential, especially as people will be able to fill out the 2020 census online.
The goal of these proposals is not to eliminate dissent about democratic institutions and policies. Maintaining dynamic stability does not mean trapping political actors in existing institutions like flies in amber, but instead guiding dynamic forces so that they help reinforce the equilibrium, strengthening democracy rather than undermining it.
Our goal is to channel dissent in ways that reinforce democracy and make it stronger. The best way to tackle Democracy’s Dilemma is to build appropriate and justified confidence that democratic processes indeed work as they ought. This isn’t a call to make democracy fairer. While that is important, it is different from the related problem that we describe here: to make democracy more resilient to attempts to game it. As long as powerful actors can advance their interests at the expense of the body politic, democracies are vulnerable to information attacks. We need to redesign political systems and institutions with security against gaming in mind. While this implies better and more visible public communication, it also implies the existence of institutions and systems that work more or less as they are supposed to.
Our claim that the best way to shore up democracy’s vulnerabilities is to strengthen democratic institutions is unsurprising, but it is surprisingly uncommon in contemporary debate. A strong democracy is more likely to be a secure one, and the vulnerabilities of U.S. democracy to informational attacks reflect problems in democratic institutions, rather than the unerring skill and craftiness of the attackers.
Addressing Democracy’s Dilemma will involve figuring out new tradeoffs between openness and vulnerability. This has always been a core problem of democracy, but solving it today will require new techniques and the development of new kinds of knowledge. If democracy is a complex system, helping it work better is a complex problem that will benefit from the varying perspectives not only of political scientists and information security specialists but also sociologists, economists, lawyers, engineers, media scholars, and data scientists. In the long run, we need a new set of disciplinary debates to emerge organically from these arguments: a democratic counterpart to the political technologists of authoritarian and semi-authoritarian regimes.
The risks of inaction are serious. In an optimistic scenario, we would experience weaker and more chaotic democracies, with poorer economic and social outcomes for citizens. In a pessimistic one, we could see a backsliding of democracies towards authoritarianism. In either case, democracies will become more vulnerable to manipulation by foreign authoritarian governments—and domestic authoritarian influences—who are better able to wield information power to advance their own objectives.
Radical changes to make democracy work better aren’t simply important in themselves; they are also justified by security concerns. Such changes are not easy to accomplish in a political system that seems almost purposely designed to stymie such reforms, but we hope we have provided a broader rationale for them, as well as the beginnings of a systematic approach for thinking through what they might involve.
This essay originally appeared as part of a Boston Review forum. Responses can be found on the Boston Review site. Our final response follows:
In our lead essay, we argued that Democracy’s Dilemma is that the information flows that democracy relies on can also be weaponized against it. This means that we need to start to understand democracy as an information system to be defended, analyzing its attack surfaces and modeling the threats against it.
We are enormously grateful to everyone for their sharp, insightful responses. We recognize that we’re at the beginning of a long intellectual process, and we do not believe that everything we wrote will turn out to be right. Our goal is to provide a framework to think about the issues of influence operations and their effects on democracy, and—more importantly—to stir debate and disagreement. Other people, with different knowledge and perspectives than ours, are an important part of this. It is heartening that such a smart and varied bunch of people have come up with so many interesting things to say.
Some commentaries start from a similar way of thinking to ours but emphasize different tradeoffs. Riana Pfefferkorn highlights the benefits of anonymity and pushes back against our suggestion that some kinds of identity authentication might help address the abuse of public comment systems. We agree that anonymity plays an important part in democratic speech and that many problems, such as fake news, propaganda, and hate speech, do not magically go away once people are identified. What we would say (and we suspect she would very likely agree) is that the key first step is to recognize that there are tradeoffs. We are only beginning to think about how flooding attacks—overwhelming torrents of speech—can overwhelm democratic systems, and figuring out the appropriate responses will require practical experimentation rather than the appeals to abstract principle that have often dominated debate.
Joe Nye draws a helpful distinction between “soft power” (a concept he has spent decades developing) and “sharp power.” He usefully highlights the costs of defenses against information attacks that lurch in the direction of increased authoritarianism. “We had to destroy democracy in order to save it” is not a winning strategy, and democracies should not employ propagandistic attacks. This said, we don’t think we should focus on whether a particular information operation involves soft or sharp power. We think time is better spent on understanding the nature of the political information systems it is attacking. The relative brittleness of different information systems vis-à-vis different kinds of attacks is an important empirical question, as are the differences between autocracies and democracies. This is why we emphasize the different common knowledge requirements of autocratic and democratic systems, which help explain why media outlets such as Russia Today can play a political, stabilizing role in autocracies such as Russia (as Peter Pomerantsev and others have argued), while at the same time destabilizing democracies.
Allison Berke is chiefly worried about foreign attacks, which she sees as fundamentally different from domestic information campaigns. Our way of thinking about the problem leads us to disagree. Just as traditional information systems must guard against insider attacks, since authorized users can do far more harm than outsiders, democratic systems can be undermined by their own politicians and citizens. That said, changes made to secure democracy against insider attacks may do more harm than good, for example by closing off democratic openness. This, too, needs to be part of the discussion.
Berke's disquiet speaks to a broader question: is it appropriate for outside actors to have any influence within a democracy? We think that it is. Since U.S. actions have consequences for billions of human beings who happen to have been born without U.S. citizenship, it seems wrong to us to deny them any voice. Here, Nye's distinctions may be useful. There are wide-ranging debates in political theory about cosmopolitanism. Philosophers such as John Dewey argue that as problems become more global, democratic institutions need to shift to accommodate the people whom those problems afflict.
We owe a particular intellectual debt to Anna Grzymala-Busse—critical parts of our argument come from conversations with her—and she identifies an important disagreement. Grzymala-Busse says we need to be more respectful of norms as the "crucial underpinning" of democracy, taking Steven Levitsky and Daniel Ziblatt's side in our friendly disagreement with them. Levitsky and Ziblatt argue that the current problems of democracy are in large part a result of the decay of stabilizing norms, with the implication that we need to return to them. This is an important debate, perhaps the most consequential theoretical debate about democracy right now. We don't have space to provide the full response that her argument deserves, but we can at least sketch it out.
Our approach, which stresses the importance of technological change and democratic disagreement, is clearly incomplete—but so too is that of Levitsky and Ziblatt. It is hard to take arguments that stress the stabilizing force of institutions or norms and combine them with arguments that explain how these institutions or norms may themselves change in a dynamic process. This is one reason why people who were unhappy with the state of democratic politics even before the recent ascent of Donald Trump, Viktor Orbán, and others tend to be more skeptical of norm-based accounts: they think that we need to destabilize some norms in order to strengthen other aspects of democracy or to respond to genuinely existential threats, such as global warming.
Yet whatever your position in this broader dispute, there is a more pragmatic question: are the problems we are discussing the result of decaying norms, or of technological changes that make certain actions easier and cheaper? When we look at how politicians game elections or lobbyists rig public commenting systems, we suspect that the major problem isn't that they have stopped obeying political norms; rather, they were never bound by norms so much as by technological limits. The technological changes that have made online commenting easier have also made flooding attacks, which are nothing new, easier and more effective: lobbyists tried to flood commenting processes back in the era of fax machines. Similarly, what is new in election shenanigans is not people's willingness to manipulate the process, but their access to much more effective means of manipulation. This is why we focus on the relationship between technology and disagreement in these cases. Norms can surely help people who disagree radically to live together in a democratic society, but technology-enhanced spirals both drive normative change and act as an independent cause of behavioral change.
Jason Healey's comments about the mismatch between the rapid acceleration in the speed of change and the far slower ability of politics to respond are very well taken. The question he asks is how best to slow things down. One possible response is the one that he raises: regulation like the European Union's GDPR. More broadly, the European Union has often adopted a version of the "precautionary principle." Under this principle, rather than letting things happen and trying to deal with problems afterwards, regulators should try to anticipate problems in advance and regulate to prevent them from occurring. This approach is widely disliked by Silicon Valley firms that have adopted "move fast and break things" as a credo. But when the things they are breaking potentially include democracy and society itself, the scales necessarily tip towards precaution.
Precaution may not be nearly enough. There is a lot of common ground between Astra Taylor’s and danah boyd’s essays. Both of them politely but emphatically suggest that our framework isn’t nearly ambitious enough. Securing democracy against attack will require nothing less than what Taylor describes as a remaking of the “underlying political economy of the Internet” and reforms that in boyd’s words would “prevent financialized interests from controlling our information ecosystem.” The problem that both of them identify is that our information system is driven by commercial interests and shaped by gross inequalities of power, rather than by and for democratic needs.
We think that boyd and Taylor are right. What we have now is not the "freely available information" that democracy requires, but corporate-curated information designed to maximize engagement and profit. This helps explain why it was so easy for Russia to weaponize social media: it exploited an ecosystem designed to maximize engagement above all else and to connect would-be influencers, commercial or otherwise, with an audience. This shouldn't lead one to overestimate the efficacy of these attacks; most advertising is ineffective, whether corporate or propagandistic. However, it does highlight how our current information ecosystem is simply not well suited to democracy. And, as Berke notes, this is only likely to get worse as new generations of commercial and political influencers start to employ generative adversarial networks to skew debate further.
In any case, fixing the political economy of social media wouldn't provide a complete solution to Democracy's Dilemma. We would still face internal and external challenges to democracy's dynamic stability and problem-solving ability. If one examines the attack surface of most democracies, the current social media architecture creates a multitude of vulnerabilities. As always, it is far harder to identify solutions than problems, and we are certainly going to make mistakes.
We will come up with better solutions and make fewer mistakes if specialists from a variety of perspectives come together to think, to argue with each other, to provide practical suggestions, and to engage with a public that has its own understanding of the issues. We think that the kind of frank, useful, and goodhearted debate that has happened over the last couple of weeks is an excellent start, and—again—we are extraordinarily grateful to the participants for engaging in it.