AI Has a Democracy Problem—Here’s Why

A thorough examination of artificial intelligence’s promise in politics rests on a thorny premise: democracy is an information system.

  • Virginia Eubanks
  • Nature
  • November 18, 2025

Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. Bruce Schneier and Nathan E. Sanders. MIT Press (2025).

The tsunami of writing on artificial intelligence tends towards either bald hype or panicked dystopianism. Proponents say that AI will revolutionize health care, drive business growth and become our new best friend. But for its critics, AI could cause massive unemployment, perpetuate fake news and pose an extinction risk to humankind.

In Rewiring Democracy, cybersecurity expert Bruce Schneier and data scientist Nathan Sanders offer a welcome middle path by focusing on practical politics. In a heartfelt, if workmanlike, way, they craft a framework for maximizing the democratic potential of AI. Yet, by shrinking and distorting the vexing political challenges that the world faces today to fit a single solution—AI—they short-change the frustrating glories of living together as human beings.

Structured clearly enough even for readers who know little about AI, the book is rich with concrete examples and absorbing speculation. Schneier and Sanders propose that every aspect of democratic governance—such as negotiating procurement contracts, drafting legal briefs, producing local news or facilitating conversation across political divides—could be enhanced by the thoughtful application of AI developed under public control for public benefit.

“Entrenched elected officials, political movements with authoritarian tendencies, and the billionaire class all regard AI as a new tool to consolidate and centralize power,” they write. “But the rest of us, the public, can harness it as a tool to distribute power instead.”

For example, a personal “army of AI minions” could extend individual power by making it easier to speak out. AI agents could make political decisions on our behalf, guessing our preferences on the basis of past behaviour and communicating them swiftly to legislators.

AI for the people

AI, the authors write, could either enrich or undercut different modes of political participation: campaigns, legislation, public administration, courts, organizing and advocacy. AI as is, designed by corporate players and supercharging an already unequal political system, might exacerbate discrimination and harm, allow lobbyists to concentrate their power and leave the most vulnerable people with shoddy digital attorneys rather than pricier human ones. Public AI—designed to enrich democracy—could target government resources more effectively, lower the cost and expertise needed to generate legislation, and simplify the drafting of complaints.

However, the democratic potential of AI will not be realized unless four conditions are met. First, the commercial AI ecosystem must be reformed, by providing robust public alternatives to technologies that are privately owned and controlled. Second, authoritarian or unethical uses of AI must be resisted by ensuring that “democratic principles … govern its development and deployment”. Third, responsible government AI must consider social impacts, weigh the risks and benefits carefully and ensure accountability. Fourth, Schneier and Sanders think, social movements could help to renovate democracy by galvanizing “their constituents to respond to the long-standing democratic threats magnified by AI”.

In the final section, Schneier and Sanders describe their vision for nurturing “robust noncorporate alternatives” to commercial tools. Imagining public AI as a “universal economic infrastructure”, like publicly run schools and highways, the authors favour multilateral, anti-monopoly regulation that might allow citizen AI to compete against the products of ‘big tech’—an approach that the United Nations is discussing. They offer organizing principles for a shared vision of democratic AI, one that is broadly capable, widely available, transparent, meaningfully responsive, actively stripped of its biases, reasonably secure and non-exploitative.

Many readers will be sympathetic to these goals, which is why it is so disappointing that the book’s fundamental premise—that democracy can best be understood as an information system—fatally undermines those objectives.

Government is not a machine

Think of democracy “as an information system for turning individual preferences into group policy decisions, and then executing those decisions through society”, Schneier and Sanders write. But democracy isn’t a flow chart or an executable piece of computer code. Preferences are formed in the process of governance, not just counted. And as any first-year student of political science can affirm, implementation, particularly at the street level, creates, shapes and modifies policy as often as it executes it.

Governments need formal rules and copious, reliable data to function efficiently and fairly. But democracies are also human, value-laden institutions for negotiating the contradictions that come with living together well. The model of government as information system has no place for the sophistication and difficulty of what democracies attempt to accomplish, or the standards to which they are held.

For example, Schneier and Sanders argue that the purposes of democracy include “peaceful transition of power, majority rule, fair decision-making, better outcomes”. They fail to mention equally crucial aims, such as preventing tyranny, protecting minority rights, balancing conflicting values, stewarding collective wealth and nurturing human capacity.

The premise that democracy is an information system obscures the thorniest challenges that AI poses to governance. It assumes that, if everyone has equal access to reliable, accurate and timely data, our disagreements will dissolve. But we live in a world of genuinely incommensurate interests.

Policy processes don’t simply aggregate preferences, as Schneier and Sanders suggest. What’s best for the individual, or even for the majority of individuals, is not necessarily what’s best for the collective. Statistical models, no matter their sophistication, tend to default to the mean, working best for those who live under the meatiest part of the bell curve. Automated decision-making is notoriously ineffective for outliers and edge cases.

Changing norms

More troublingly, the ‘democracy is an information system’ story forecloses ethical growth in human beings. Schneier and Sanders argue, for example, that the brain is a black box and that “no one can open it and examine how you reached any particular decision”. Later, they continue, “To a greater extent than humans, AIs can be studied. Their biases can be known, and—in theory at least—corrected.”

The idea that human biases are nearly impossible to understand or alter ignores vast evidence to the contrary. Although none of us is completely transparent to ourselves, every field in the humanities and social sciences rests on the assumption that human motivations can be studied, understood and even shifted. Attitudes change; societies morph; people—and nations—grow.

Whether talking about race, gender and sexuality, or more mundane issues such as smoking and wearing seatbelts, norms change, sometimes surprisingly quickly. So many of the democratic challenges that people face today are in fact about social practices and cultural understandings, not laws or rules.

Shaping governance to align with the capacities of AI—or any technological innovation—fundamentally distorts democracy, because it discounts what is gained from the difficult, inefficient, costly, but ultimately transformative, work of human politics. Imagining an information-processing machine, Schneier and Sanders lop off pieces of the democratic project. For example, they claim that direct democracy could be improved if AI proxies could vote on our behalf without consulting us. This prioritizes outputs over the messy, but ultimately productive, processes that form political opinions and bind communities together.

At another moment, this might be excusable over-optimism. But where I sit in the United States, people are experiencing political attacks on federal employees, the dismantling of government programmes that identify and address bias and discrimination, the collapse of AI regulation, and an unprecedented global wave of what political scientists call ‘democratic backsliding’. In this context, the idea that democracy is an information system is not just empirically incorrect—it is dangerous.

Former US Supreme Court justice William Brennan famously argued that the dynamism of democratic institutions requires a complex “internal dialogue of reason and passion”. Logic, rules, evidence and data fertilize good governance. But passion is the soil in which democracy grows. Passion is necessary to understand the real-world impacts of government decisions on stakeholders, to affirm the essential dignity and worth of each individual, and to acknowledge our common humanity.

Rewiring Democracy left me feeling that, at some level, Schneier and Sanders find the human work of politics distasteful and rather grubby. But, as the writer George Orwell famously suggested in The Road to Wigan Pier (1937), “the price of liberty is not so much eternal vigilance as eternal dirt”.

