Third Interdisciplinary Workshop on Reimagining Democracy (IWORD 2024)

Last month, Henry Farrell and I convened the Third Interdisciplinary Workshop on Reimagining Democracy (IWORD 2024) at Johns Hopkins University’s Bloomberg Center in Washington DC. This is a small, invitational workshop on the future of democracy. As with the previous two workshops, the goal was to bring together a diverse set of political scientists, law professors, philosophers, AI researchers and other industry practitioners, political activists, and creative types (including science fiction writers) to discuss how democracy might be reimagined in the current century.

The goal of the workshop is to think very broadly. Modern democracy was invented in the mid-eighteenth century, using mid-eighteenth-century technology. If democracy were to be invented today, it would look very different. Elections would look different. The balance between representation and direct democracy would look different. Adjudication and enforcement would look different. Everything would look different, because our conceptions of fairness, justice, equality, and rights are different, and we have much more powerful technology to bring to bear on the problems. Also, we could start from scratch without having to worry about evolving our current democracy into this imagined future system.

We can’t do that, of course, but it’s still valuable to speculate. We need to figure out how to reform our current systems, but we shouldn’t limit our thinking to incremental steps; we also need to think about discontinuous changes. I wrote more about the philosophy in this essay about IWORD 2022.

IWORD 2024 was easily the most intellectually stimulating two days of my year. It’s also intellectually exhausting; the speed and intensity of ideas are almost too much. I wrote about the format in my blog post on IWORD 2023.

Summaries of all the IWORD 2024 talks are in the first set of comments below; my writeups of the previous two IWORDs are linked above.

IWORD 2025 will be held in either New York or New Haven; the location is still to be determined.

Posted on January 23, 2025 at 9:58 AM

Comments

Bruce Schneier January 23, 2025 9:59 AM

Session 1: Institutions

Emily Clough, Northeastern University: Democracy is self-rule by the people. Our implementation of democracy uses majoritarian rule combined with a set of fundamental rights and liberties that can’t easily be changed. However, democratic backsliding occurs when democratic institutions lose their power and people feel left out. AI can’t substitute entirely for people in democratic systems, but AI can make it easier for people to be heard and make clear what matters to them.

Kevin Elliott, Yale University: Politics is practiced in three modes: Friend/Enemy, Pluralist, Technocratic. Optimism for AI tools presumes technocratic politics, but not everyone will trust AI. Fewer applications in pluralist politics. Applications in Friend/Enemy politics are the most dystopian.

Ada Palmer, University of Chicago: All political systems take a battering over time (become corrupted)–if we compare new systems to existing systems, we are comparing a shiny new fridge to a battered old one. We need to model how a new system would become battered over time by various political forces and design for when the fridge begins to leak.

Manon Revel, Harvard University: Algorithmic facilitation for deliberative online forums–can we find posts that bridge different groups of people, increasing civility and engagement?

Joshua Tan, Metagov: Mathematical models–many smaller games to create larger scale institutions. Public AI systems can have public access, public accountability and be permanent public goods.

Bruce Schneier January 23, 2025 9:59 AM

Session 2: Participation

Eugene Fischer, Author: More frequent, low-stakes elections could increase democratic engagement. More frequent elections could strengthen the link between voting and citizens’ daily lives, offering continuous voter education and reducing the impact of election interference. Though concerns about inefficiency, instability, and cost exist, Fischer argues that increased elections could improve democratic robustness and engagement.

Nick Garcia, Public Knowledge: Public AI: Making AI development a participatory process by establishing frameworks that prioritize public access, accountability, and the creation of AI as a permanent public good. Public oversight in AI development is crucial for accountability, suggesting government involvement to ensure public values are integrated into AI systems. By building centralized AI expertise within government and attracting AI practitioners to public service, the public sector can counteract the tech industry’s dominance. Public AI could address sector-specific regulatory needs, protect public values, and sustain public resources like digital infrastructure.

Saffron Huang, Anthropic: Presentation delivered under the Chatham House Rule.

Nathan Sanders, Harvard BKC: Massachusetts is a historical laboratory for democracy, offering insights into the challenges of legislative engagement rooted in outdated democratic technologies. Key issues are access, attention, and accountability. MAPLE is a platform designed to enhance public engagement in policymaking by facilitating online testimony submission, improving legislative transparency, and incorporating AI tools for accessible bill summaries and multilingual support.

Ted Suzman, Independent Researcher: Public agents: hybrid systems combining human and AI efforts, to facilitate a more participatory democracy. By leveraging AI for adaptive interviews and goal extraction, this approach seeks to prevent the tyranny of the majority while allowing for specialization in subgoals. Delegated tasks are divided among humans and AI, with a focus on transparency and continual feedback to optimize outcomes. While challenges such as AI misalignment and epistemic injustice remain, this model emphasizes the importance of hybrid collaboration and aims to enhance democratic processes through innovative technology and collective action.

Bruce Schneier January 23, 2025 10:00 AM

Session 3: Information

Aditi Juneja, Democracy 2076: Entertainment media functions as civic education: research reveals that 58% of stories feature government themes, with science fiction leading at 90%. With 60% of Americans being “low information voters” who don’t actively seek out news, Hollywood provides most of their civic education. Diverse narratives cater to different audiences, but current portrayals may foster complacency. Highlights the significant role of media in shaping perceptions of democracy and government in the U.S.

Quinta Jurecic, Brookings: Examining the challenges of misinformation and trust in professional spheres. Focus on how law and medicine grapple with falsehoods, leveraging existing disciplinary procedures. Highlights the mixed results of professional organizations’ efforts to combat misinformation, from license revocations to claims of censorship. Emphasizes the difficulty in distinguishing between truth and falsehood in tense media environments, questioning whether professional bodies should push harder to defend truth. Underscores the complexities of maintaining professional integrity and public trust in an era of widespread misinformation and polarization.

Laura Maher, Siegel Family Endowment: A civic knowledge infrastructure is needed for democracy’s information ecosystem. Emphasizes the importance of physical, digital, and social infrastructures in creating, assembling, and stewarding data, information, and knowledge within specific contexts. Introduces the idea of a “civic knowledge infrastructure” pipeline, transforming raw data into contextualized knowledge. This infrastructure operates across various timescales, from highly dynamic to static, and considers different units of context (physical, social, digital). The goal is to develop safe, trusted, and transparent data pipelines that can support modern civic knowledge institutions and enhance democratic processes.

C. Thi Nguyen, University of Utah: Games as art and the limits of data. Games serve as a unique form of art that sculpts our practical activities and allows us to experience the beauty of our own actions and reasoning. By temporarily adopting artificial goals, players immerse themselves in environments crafted by game designers, experiencing new ways of thinking and being. This “motivational inversion” from normal life offers insights into human agency and decision-making. However, Nguyen cautions against overreliance on quantification and institutional metrics in real-world contexts. He argues that while data-driven approaches may eliminate subjective bias, they introduce new biases towards institutionally measurable outcomes. This shift from qualitative to quantitative evaluation risks losing nuance and context, potentially constraining complex systems to simplistic, algorithm-like rules that may not capture the full richness of human experiences and values.

Bruce Schneier, Harvard Kennedy School: Emphasizes the importance of thinking beyond current problems and imagining future possibilities for democracy. Introduces the concept of an “Atlas of Democracies,” inspired by his work on “The Carbon Almanac,” a crowdsourced book about climate change. This proposed atlas would be a large, inviting book with diverse entries on democratic concepts, aiming to expand people’s imagination about what democracy could be. Many people struggle to envision alternatives to current systems, and this project could help bridge that gap.

Ivan Vendrov, Midjourney: AI as an existential shift in human society. Parallels between the current AI revolution and the advent of nuclear weapons suggest that the impact of AI on governance and society is being vastly underestimated. Cites the “bitter lesson” from AI research, which posits that only general methods leveraging massive amounts of data and computation are successful, potentially leaving little room for human-centric approaches in future governance systems. Expresses concern about the emergence of “giant machinic bureaucratic state gods” devoid of human elements, suggesting that nothing human may survive the near future. However, finds hope in cryptography, arguing that political units wishing to survive must defend themselves using encryption and empower themselves to manage large amounts of computational resources. Highlights the potential shift in the nature of political power and sovereignty in an AI-dominated future, emphasizing the need for technological literacy among political leaders to navigate the new landscape.

Bruce Schneier January 23, 2025 10:00 AM

Session 4: Conversations

Nicholas Carter, Civic Digital Organizing Group: Advocates for a more equitable and participatory democracy through the empowerment of local civic organizations. Emphasizes the critical role of technology in addressing economic inequality and combating authoritarianism. Emerging tools, like early voting programs and relational voter contact, should be integrated into electoral campaigns to enhance civic engagement. Over 560,000 civic organizations face significant challenges in adapting to a rapidly evolving digital landscape, where technical proficiency is essential for effective operation. By fostering meaningful conversations that mobilize individuals toward civic outcomes, these organizations can reshape power dynamics within legislatures. Reimagine civic engagement strategies with local actors taking the lead.

Nick Couldry, London School of Economics: Critical misalignment between decision-making capacities of political systems and actual problems faced by populations. Democratic societies must foster large-scale collaborations to push systems toward more effective governance. Need for innovative civic collaborations beyond existing frameworks. Create resonance chambers rather than echo chambers, bridging ties that transform initially costly social relationships into mutually beneficial ones. Five design principles for rebuilding democratic spaces–prioritize small-scale platforms, enhance intersections between spaces, maximize experimentation, trust existing communities, and facilitate knowledge sharing. Necessity of rethinking our associational ecosystems and leveraging AI to support community reflection, ultimately leading to ecosystems that transcend toxic social media dynamics to cultivate healthier democratic environment.

Joshua Fairfield, Washington and Lee University School of Law: Evolving landscape of social technology and its implications for communication and law. New words and concepts shape our understanding of social interactions. We update our frameworks not in geological time, but in contextually relevant increments. Critical of the prioritization of propositional knowledge over participatory approaches in computer science. The narratives we create about living together are essential for measuring collective progress. Generative AI poses the twin challenges of decontextualization (being trained on past data) and recursion (AI outputs becoming self-referential and losing grounding in human experience). When algorithms dictate legal processes without human oversight, we risk losing the essential narratives that underpin legal systems. Must reexamine how we integrate technology into our social frameworks to ensure human stories remain central to our collective future.

Yasmin Green, Jigsaw: Content moderation in online communities is a fundamentally human-centric process, rather than a mere set of rules. Co-design principles at Jigsaw aim to understand role of moderators in fostering community engagement. Moderators navigate nuanced situations–like determining the intent behind posts about homemade firearms or local pizza recommendations–while balancing community needs and platform policies. Concern over the potential for AI to dominate moderation. Instead, AI tools should replicate human decision-making processes without replacing moderators themselves. Importance of accurately expressing community preferences and communicating rules to ensure AI systems serve their intended purpose. Messiness of human interactions and necessity of AI to accommodate this complexity to effectively support community dynamics.

Galen Hines-Pierce, independent: Intersection of AI and participatory governance amidst a backdrop of low trust and political polarization. Historical waves of AI and civic engagement show the importance of legitimacy in state capacity–defined by its ability to listen to citizens and ensure their safety. Critical of the notion that simply connecting everyone to the internet will resolve societal issues. Effective governance requires more than just increased communication. Draws on insights from former CIA chief Rob Johnson, who criticized the agency’s analytical processes. Similar flaws exist in how institutions respond to citizen demands. Augmenting existing systems with AI tools can enhance understanding and responsiveness to public needs, rather than automating decision-making processes. Wary of dangers posed by techno-authoritarianism. Need for hybrid systems that can adapt to civil unrest while ensuring citizen engagement remains at the forefront. Calls for a reevaluation of how technology can be harnessed to foster trust and collaboration in democratic institutions.

Eli Pariser, New_Public: Decline of social trust in America and implications for democracy. Erosion of local civic institutions–such as libraries, parks, and newspapers–that historically connected communities and provided essential information. While online spaces have emerged to fill some of these gaps, they often fall short due to toxic dynamics and lack of thoughtful design. Need community stewardship and support systems that empower local leaders to foster trust and healthy conversations within their neighborhoods. Existing platforms prioritize engagement over unity, leading to environments that amplify fears and deepen divides. Models like Front Porch Forum successfully create inclusive spaces for dialogue across diverse political and social backgrounds. Need smaller, more focused digital communities that nurture belonging and connection. Revitalizing local contexts crucial for rebuilding social capital and enhancing democratic participation.

Bruce Schneier January 23, 2025 10:00 AM

Session 5: Openness and Trust

Daniel Davies, The Unaccountability Machine: Trust and openness: a cybernetic approach. The exploration of cybernetics in managing large systems reveals its potential to maintain stability amid external shocks, tracing its origins back to Norbert Wiener. Cybernetics suggests that control systems must match the complexity of the systems they regulate, as advocated by Stafford Beer in complex management contexts. Recursion is a key concept, allowing systems to operate on simplified models and discard unnecessary information, while flexibility is crucial since static models may not adapt over time. Within this framework, trust is understood as a shared model of the system and openness as its capacity to embrace new varieties. Such systems can be destabilized by overloading parts of the system so they can’t make decisions, because they can’t handle the variety presented to them.
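
(Davies’s “match the complexity” point is usually formalized as Ashby’s law of requisite variety: if V(D) is the variety of disturbances a system faces, V(R) the variety of responses its regulator can make, and V(E) the variety of outcomes, then at best V(E) ≥ V(D)/V(R). Holding outcomes to a narrow band therefore requires a response repertoire at least as rich as the disturbances–which is exactly why flooding part of a system with more variety than it can absorb destabilizes it.)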

Judith Donath, Berkman Klein Center, Harvard University: Surveillance: the opposite of trust. As surveillance increases, it is changing society because it changes people’s behaviors–people respond to perceived scrutiny. This echoes ancient social dynamics like gossip. Through the lens of signaling theory, surveillance influences interactions by making people want to shape how they are perceived. This can lead to exclusion by powerful groups or coalition-building among less powerful ones. Early notions of surveillance, such as “surveillant gods,” illustrate psychological oversight’s role in societal growth. Modern surveillance systems, like those replacing hitchhiking with Uber, mitigate the need for trust by providing structured oversight.

Renee DiResta, Georgetown University: The crisis has been misdiagnosed as one of misinformation; it is really a crisis of trust. The intersection of trust and democracy sustains institutions, but trust has been greatly weakened through propaganda rather than misinformation campaigns. Trust isn’t built through performance; it’s built through accountability and transparency… and institutions aren’t updating their means of interacting with the public to achieve this.

Richard Ngo, Independent: Trust in an AI-dominated world. Reimagining democracy should be discomfiting. Realistically, it will involve creative destruction and chaos: uncomfortable both for the powerful (who stand to lose) and for the powerless. AI should be discomfiting too. AI has progressed farther than anyone could have imagined, and many people haven’t yet grasped the implications. AI can lead to extreme concentrations of power. We need to imagine what will happen if all intellectual labor is free. AI trust requires designing unprecedentedly trustworthy institutions, starting with design principles from the best human organizations and then accounting for structural properties that AI has and humans lack.

Primavera de Filippi, CERSA/CNRS, Berkman Klein Center: Network nations offer a new framework for considering digitally interconnected communities that transcend traditional territorial boundaries. Unlike conventional nations that rely on physical location and government sovereignty, network nations are translocal, uniting members through shared identity and aspirations. They function polycentrically with self-governance and collective action, supported by decentralized technologies like blockchain. This structure encourages mutual resource management, highlighting interdependence and collaboration rather than independence. Network nations represent an emergent concept that addresses global interdependencies and challenges, striving for a more interconnected geopolitical landscape that empowers civil society beyond national constraints.

Bruce Schneier January 23, 2025 10:01 AM

Session 6: Deliberation

Henry Farrell, Johns Hopkins University: Deliberation (whether augmented by AI or not) is not the be-all and end-all of politics: it has a specific and limited role. Deliberation only works in specific settings, because we all have cognitive biases like motivated reasoning and groupthink that are hard to overcome, and we don’t all have the time to engage in deliberations. Minipublics can be useful, but we also need to improve our more traditional publics, like political parties. This is how AI can affect deliberation: it can lead to microtargeting, and it can allow actors to achieve outcomes without breaking publics apart.

Bailey Flanigan, Harvard University: Deliberative processes need public trust, and AI can help. Processes like citizens’ assemblies depend on the public’s trust, since the public needs to accept the assembly’s recommendations. AI-powered platforms that let people “see into” these assemblies could foster this trust. Such prototype platforms could also help researchers learn what people care about and what drives trust (e.g., how the selection process works, who was in the room, what information was available to assembly members). Ideally, this data would be collected from real assemblies, but because of privacy concerns, a pilot deliberation could be simulated using AI.

Melissa Schwartzberg, NYU: The history of juries, notably in England, highlights how citizens partaking in deliberative processes share local knowledge in exchange for greater political standing. Jurors are often drawn from the geographic area, which confers legitimacy on the information collected, allows for a jury of peers, and can increase the status of jurors. In England, the state’s informational needs led to expanding the jury from knights and landowners to peasants. This led to an expansion of the franchise by lowering the eligibility threshold for voting from 100 shillings to 40 shillings per year. The use of local knowledge by the state is mixed: it can be used to coerce as much as to benefit the public. When thinking of new, modern citizen juries or deliberative processes, we must ask why and whether jurors will want to share local knowledge: in exchange for what, and in whose interest?

Divya Siddarth, Collective Intelligence Project (CIP): CIP has been working in recent years on reducing the tradeoffs between the three visions in the transformative technology trilemma: capitalist acceleration, authoritarian technocracy, and shared stagnation. They worked on collective constitutional AI with Anthropic, with the AI Safety Institutes, and on imagining “public AI networks.” But AI still poses some key unresolved challenges for democracy, such as extreme economic shifts, defense race dynamics, and cultural homogenization. Institutions, information, and incentives are three levers through which to act. New institutions such as public libraries on AI could be useful, but existing institutions also have a role to play, e.g., bureaucracies, political parties, religion.

MH Tessler, Google DeepMind: DeepMind led research, published in Science, on how an AI (the “Habermas Machine”) can help find consensus in democratic deliberations. How it works: a group of participants write their opinions, which are passed to an AI mediator that writes an initial group statement designed to maximize participant approval; the statement is then sent back to participants for critique. The AI is effective at finding common ground, even better than human mediators, and represents all viewpoints equally. Future research could investigate whether agreement is the correct goal, how to make the AI robust to strategic behavior, and open-sourcing the system so users can pick their LLM of choice and even prompt it for new tasks, like clarifying disagreements.
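
The loop is simple enough to sketch. The helper functions below are hypothetical stand-ins for the paper’s fine-tuned generative and reward models and for the human critique step; this illustrates the structure, not the actual system:

```python
# Sketch of the mediation loop described above. `draft_statement`,
# `predict_approval`, and `collect_critiques` are hypothetical
# stand-ins, not DeepMind's actual models.

def mediate(opinions, draft_statement, predict_approval, collect_critiques,
            rounds=2, n_drafts=4):
    """Draft candidate group statements, keep the one with the highest
    predicted approval, then refine it from participants' critiques."""
    critiques, statement = [], ""
    for _ in range(rounds):
        # Sample several candidate statements from the opinions
        # (and, after the first round, the critiques).
        candidates = [draft_statement(opinions, critiques)
                      for _ in range(n_drafts)]
        # The "maximize approval" step: keep the candidate predicted
        # to be most acceptable across all participants.
        statement = max(candidates,
                        key=lambda s: sum(predict_approval(o, s)
                                          for o in opinions))
        # Send the statement back to participants for critique.
        critiques = collect_critiques(statement)
    return statement
```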

Bruce Schneier January 23, 2025 10:01 AM

Session 7: Artificial Intelligence

Michiel Bakker, Google DeepMind and MIT: DeepMind is leading research on scaling AI-mediated deliberation, with the goal of augmenting collective decision-making systems. This is already done in pol.is, and could be done on X’s community notes, with AI-generated notes bridging diverse notes–his work on this is called “Supernotes.” This work is important in the short term because LLMs are already being used by important decision makers, even if only for brainstorming–it is crucial that LLMs are aligned with a diverse set of values. Long-term, AI could help scale deliberation to entire populations, notably to collectively refine the values that AI is aligned to.
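
To make the “bridging” idea concrete: community-notes-style systems surface a note only when raters who normally disagree both find it helpful. A toy version of that selection rule (hypothetical data and fixed cluster labels; the deployed systems infer viewpoint factors via matrix factorization instead) might look like this:

```python
# Toy bridging selection: prefer notes rated helpful across *different*
# opinion clusters, not just by many raters. Data are hypothetical.

def bridging_score(ratings):
    """ratings: list of (rater_cluster, rated_helpful) pairs."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    rates = [sum(votes) / len(votes) for votes in by_cluster.values()]
    # A note "bridges" only if every cluster finds it helpful,
    # so score it by its *worst* per-cluster helpfulness rate.
    return min(rates) if len(rates) > 1 else 0.0

notes = {
    "note_a": [("left", True), ("left", True), ("right", False)],
    "note_b": [("left", True), ("right", True), ("right", True)],
}
print(max(notes, key=lambda n: bridging_score(notes[n])))  # note_b
```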

Ann Lewis, GSA, US Government: Implementation in government is broken, and failures directly degrade trust in institutions. Implementation depends on tech but the government doesn’t hire enough tech talent, acquisition processes worsen the problem, and the siloed structure of government limits the creation of coherent services. Biden’s AI executive order is addressing a lot of this, especially talent. Whether AI will be useful for governing remains uncertain, and will depend on: how it is used, the role of regulation, challenges with data, the role of the public and private sector in funding and supporting key architectural components like foundation models.

Ray Nayler, Speculative fiction author, Foreign Service Officer: His first two books were at face value largely about animals (octopuses, mammoths) and extinction, but in fact closely related to AI. The world’s increasing focus on AI was reflected in the public’s comments on “The Mountain in the Sea,” which were about octopuses in October 2022 and all about AI in 2023. His latest book, “Where the Axe Is Buried,” is expressly about AI and extinction. In his work as a foreign service officer, he noticed firsthand the “Guns of August problem”: assessing the effectiveness of diplomacy depends on counterfactuals–how do you measure the effectiveness of something you prevented?

Aviv Ovadya, AI & Democracy Foundation: He worked in tech, helped raise alarm about risks of AI for democracy, which led him to this place of bringing democratic/deliberative processes to corporate governance. He recently published a paper on “democracy levels for AI,” or a framework for evaluating the degree to which decisions in a given domain are made democratically, including the domain of how AI is being developed. The framework has a few critical steps, and level 2 particularly matters: it involves getting the outputs (e.g. recommendations) in the shape you want, even if they are not binding. Not all components need to be deliberative–but some should! If the power of these companies is commensurate with governments, this is necessary.

Helen Toner, CSET: Even if we discard the idea of AGI arriving this decade, we need to consider the possibility of AGI in 10, 20, or 30 years. This is eventually plausible, and institutional changes take a long time, so we should think about it now. AGI could fundamentally change the realpolitik of democracy. What happens as AI gets more sophisticated and can perform a wide range of economic functions, as well as military, law-enforcement, and other forceful functions, such that you don’t need people? A government then doesn’t have to care whether its people work or fight; historically, the people had power precisely because governments needed them. She is worried about what it looks like if that is no longer true, and it becomes possible for a capital-owning, extractive class to fully disregard a whole class of other people. Separately, sufficiently advanced AI may require us to update our ideas about who should be democratically enfranchised and how, which will be dicey given how poorly we’ve handled that question in the past and how poorly we still understand consciousness, sentience, etc.

Bruce Schneier January 23, 2025 10:01 AM

Session 8: Past and Future

Ted Chiang, Author: Our current, growth-based capitalist model is fundamentally unsustainable due to hard laws of nature: physicist Tom Murphy showed that a 2.3% yearly growth rate in energy use–a factor of ten per century–would compound to a roughly 10,000x increase over 400 years, by which point waste heat alone would raise the Earth’s surface temperature to around 100°C. A long-term solution requires rethinking capitalism, for instance through a “steady state economy” (Herman Daly), e.g., by imposing caps on our energy use. This might seem undemocratic, but it might actually enhance democracy if paired with economic democracy–this could include moving from shareholder corporations to worker-owned cooperatives, which would not have growth built into their DNA. The argument that our striving for status requires growth is a fallacy: indigenous cultures have long dodged the growth trap, competing for status in other ways.
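
The compounding arithmetic is easy to verify; a 2.3% annual rate is almost exactly a factor of ten per century:

```python
# Sanity check of the energy-growth arithmetic above.
rate, years = 0.023, 400
print(f"{(1 + rate) ** 100:.1f}x per century")          # ~9.7x, roughly 10x
print(f"{(1 + rate) ** years:,.0f}x over {years} yrs")  # ~8,900x, roughly 10,000x
```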

Julie Cohen, Georgetown Law: Utopian imaginaries have become cryptopian imaginaries–for both libertarian techies and lefty academics, decentralization has become the solution. We currently have big political problems in tech due to oligarchy and infrastructure combined. Tech oligarchy emerged from informational capitalism, the rinse cycle of hedge funds/private equity, and institutional entrepreneurship through dual-class ownership. Infrastructure has shifted from fixed installations to flexible templates. Tech oligarchy has several pathologies, from regulatory noncompliance to Silicon Valley groupthink–while nominally decentralized (e.g. DeFi), it is overtly centralized, with AI in everything, everything through AI. Countermovements need to scale up: don’t gaslight government, dismantle the rinse cycle, use regulatory power, fund a public AI option.

Ruthanna Emrys, Author: Separating the “dignified” and “efficient” functions of governance–head of state vs. head of government–could be desirable in the US, as it is in many other countries. Dignified leaders are a longstanding technology for coordination, norm-setting, and the creation of group identity, and voters often have different preferences for charismatic leaders than for actual policies. Furthermore, what if this dignified leader were fictional instead of real? This could make it a stronger and more consistent character, less prone to scandal and disappointment; real people and institutions could carry out the “efficient” function in parallel, looking to the fictional leader for soft-power modeling and guidance. However, this fictional leader should not be an AI, as people want dignified leaders to be backed by real moral judgment and caring.

Samuel Hammond, Foundation for American Innovation: New technology (e.g., x-ray glasses) leads to three possible responses: cultural evolution, mitigation and adaptation, or regulation and enforcement. Tech and disruption drive demand for a form of Leviathan, but centralized control of fast-diffusing tech is hard. History is now running in reverse: there are early signs that the internet, digitization, and AI are eroding the returns to high modernism. “Seeing like an AI state,” in the James C. Scott sense of the term–forcing uniformity on local heterogeneity–seems to be the trend. Differential diffusion of AI is creating a Red Queen dynamic between the public and private sectors. There is a legibility arms race, as ownership structures become more complex.

Hal Hodson, The Economist: AI’s impact on democracy will not come through its introduction to electoral or deliberative processes–it will come through the transformation of the markets with which democracy interacts. Generative AI will largely further the trends started by social media, from reducing the cost of content creation to increasing polarization. Hopefully, its uncanny ability to produce knowledge (e.g., across languages) will also be realized once the technology is more reliable. We have moved from “trustworld,” in which the high costs of media production created trust and legitimacy, to “slopworld,” in which trust must be earned without this capital mooring. The old gatekeepers are gone, but new ones might emerge (e.g., YouTubers, start-ups), even if their position is less secure.

Gideon Lichfield, Freelance: We have lost track of what democracy is supposed to mean. The US wasn’t initially intended to be democratic–the founders saw it as a republic, not a democracy. All of the expansions of the franchise were granted grudgingly by elites in order to maintain power–but there are no more expansions to make, and the public no longer feels represented by elites. We have confused the meaning of democracy (rule by the people) with its purpose (making the best decisions for society). To achieve that purpose, maybe we should not poll everyone on everything all the time–instead, people should have a say on an issue to the degree that they are directly affected by it and well informed about it. We also don’t need to use elections for everything, or to poll the entire population (representative samples can be an alternative).

M. Adelson January 23, 2025 11:03 AM

Also, please see the upcoming IEEE workshop on changing the proportionality constants of the inverse-square principles.

It is fascinating how people with little or no ability to actually effect change will congregate and engage in wishful thinking for hours on end. I suppose it offers participants the impression that their wheel-spinning is meaningful. Mass therapy for alleged intellectuals who dream of wielding power as opposed to actually possessing it.

mark January 23, 2025 12:11 PM

I am somewhat confused. The date/times posted above are for today (23 Jan 2025), yet you speak of iWORD as being in the future.

Why, yes, as a computer professional (programmer, sysadmin) and now sf author (current novel Becoming Terran, very political), I’d be happy to collaborate on this.

Sean Flaherty January 23, 2025 4:53 PM

Is anyone talking about how to verify the correct functioning of an AI, that an AI has not been maliciously trained or otherwise made to execute its functions according to the wishes of rogue human actors or an adversarial AI?

America is still tearing at its hair about establishing and maintaining consistent trust in computer vote recording and tabulation. All Western democracies have run up against this wildly simple issue – and all solutions that get traction involve allowing citizens very simple ways of confirming to their satisfaction that vote counts are accurate enough to provide correct outcomes. Hand counts, ballot images, public cameras in the ballot processing and counting facilities.
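
For what it’s worth, election-integrity researchers have made “accurate enough to provide correct outcomes” precise. A toy version of a ballot-polling risk-limiting audit in the style of BRAVO (numbers hypothetical; real audits handle multiple candidates, invalid ballots, and escalation rules) fits in a few lines a citizen can check by hand:

```python
import random

# Toy ballot-polling risk-limiting audit (in the style of BRAVO).
# Hand-examine randomly sampled paper ballots; a sequential
# likelihood-ratio test stops once the reported outcome is
# confirmed at the chosen risk limit.

def bravo_audit(sampled_ballots, reported_winner_share, risk_limit=0.05):
    """sampled_ballots: True = ballot for reported winner, False = loser."""
    p = reported_winner_share            # reported share, must exceed 0.5
    t = 1.0                              # likelihood ratio vs. an exact tie
    for i, for_winner in enumerate(sampled_ballots, 1):
        t *= (p / 0.5) if for_winner else ((1 - p) / 0.5)
        if t >= 1 / risk_limit:
            return f"outcome confirmed after {i} ballots"
    return "not confirmed: escalate, ultimately to a full hand count"

# Hypothetical 60-40 race; sample 500 paper ballots at random.
# A margin this wide typically confirms within a few hundred ballots.
random.seed(1)
sample = [random.random() < 0.60 for _ in range(500)]
print(bravo_audit(sample, reported_winner_share=0.60))
```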

The Germans effectively did away with electronic vote counting because the Constitutional Court ruled in 2009 that a constitutionally legitimate voting system had to be publicly comprehensible.

Is anyone looking at ways of allowing the public to audit the correct functioning of an AI used to facilitate democratic or alternate governance models?

Rich Seidner January 23, 2025 5:11 PM

What’s missing from the commentary is the fact that any system of rules can–and will be–gamed.

It’s not clear how to provide both for the ability of a democracy to evolve, and simultaneously to protect against (possibly malicious) gaming of the rules.

I welcome any thoughts about this.

Winter January 24, 2025 4:01 AM

@Rich Seidner

It’s not clear how to provide both for the ability of a democracy to evolve, and simultaneously to protect against (possibly malicious) gaming of the rules.

There are centuries of history on governance and democracy, so there is a lot to learn from.

What’s missing from the commentary is the fact that any system of rules can–and will be–gamed.

I once read a very nice metaphor (not sure where).

To defend a fortress both the material defenses and the people defending them are important. Without motivated people, no fortress will stand up against attack. Without the fortress, people are defenseless.

Same with a democratic republic. Rules and institutions are a prerequisite to a functioning republic. But the people must actually want to defend them. As they say:

The price of freedom is eternal vigilance

What we see in nations sliding into dictatorship is a destruction of institutions because the people do not want to defend them. No laws will prevent a tyranny when no one is willing to enforce them. Look at the constitution of the Russian Federation and its daily practice.[1]

When a large part of the population denies the rest the right to be part of the nation, laws are of no help.

[1] The Russian Federation had no functioning institutions in the nineties, so the people were defenseless against usurpation by a well-organized group inside their ruling class.

Clive Robinson January 24, 2025 12:30 PM

@ Sean Flaherty, ALL,

With regards your simple question,

“Is anyone talking about how to verify the correct functioning of an AI”

The answer is unfortunately not simple, because it cannot realistically be done. Which is not the answer most want to hear.

Because as you note there is a quite reasoned set of fears that people want removed. Such as,

“[T]hat an AI has not been maliciously trained or otherwise made to execute its functions according to the wishes of rogue human actors or an adversarial AI”

The reason it cannot realistically be done is the way current LLM “Digital Neural Networks” work.

In many ways they act as a “One Way Function” (OWF) with a semi-“stochastic” input vector.

That is, for a first-order thought model, you can think of the LLM network as a “Cryptographic Hash” of an unknown input with a random element.

The effective OWF means you cannot work backwards from output to input, and the stochastic or random element in the input cannot be determined.

But you need to consider that the output is a vector-based weighted average. Thus, even if there were no nonlinear elements, the output would in effect be

Output = root((A*A) + (B*B) + (R*R))

where A and B are input vectors of tokens known within the model and R is that random element.

In effect even if you knew all the tokens in the network you would not be able to know the output except as a probability.

And even doing a brute force or dictionary attack on the network would not actually get you very far, as the probability vector for each token is based on the order and value of each preceding token in an input that can now be measured in the thousands, with each having a random element of varying size added.

All we can really say is that the output will have a probability function, and that if you gave the same question to the network “independently” enough times, you would see outputs distributed according to that function.
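
In code terms, the strongest black-box check available is distributional. A minimal sketch, with `query_model` as a hypothetical stand-in for any model API:

```python
from collections import Counter

# From the outside, the most an auditor can do is characterize the
# *distribution* of a model's answers, never the mechanism behind them.

def output_distribution(query_model, prompt, trials=1000):
    """Ask the same question many independent times and tally the answers."""
    return Counter(query_model(prompt) for _ in range(trials))

# Even a perfect tally only says how often each answer appears. It cannot
# distinguish "honestly trained" from "trained to misbehave on inputs the
# auditor never happened to sample".
```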

Unfortunately, some current AI systems, although they are LLMs, also have a dynamic ML element that in effect changes the probability function based on previous input.

So even if you could remove the random element from the input, the probability function for each token usage would be in continuous flux.

And that is before we talk about “guide rails” in the interface that surrounds the LLM.

In theory the “guide rails” prevent not only “inappropriate input” but “inappropriate output” as well. The second is supposedly to stop “hallucinations” or “hard bullshit” depending on your chosen “terms of art”. Either way the guide rails process to stop inappropriate output is effectively “to add bias” to the LLM network…

So you drop into the question of,

“What is appropriate bias, and what bias is inappropriate?”

That is the “Good or Bad” issue, determined by “an independent observer” based on their mores, morals, and ethics in a given societal setting that continuously changes, sometimes quite abruptly. Those changes are sometimes seen as forwards and sometimes as backwards; thus changes that were once seen as being for “the good” are now seen as for “the bad.”

This type of turn around is usually “driven” by “dog whistle” statements and claims that often have no correspondence with reality as actual fact and risk evaluation, just emotion jerking spin playing on cognitive bias.

An example of which was “think of the children” and “faux ‘organised’ terrorism” used by the FBI and DoJ in their “self-serving” campaign against E2EE.

BernhardM January 26, 2025 9:59 AM

I was struck by the absence of ‘ordinary’ persons in your list of the types invited to your workshop. One might challenge your use of “diverse”.

Will February 4, 2025 10:07 PM

I just have to say, after watching tech world committees make new standards or naming conventions or etc… over the decades along with all of the promised increases to the quality of all human lives, etc…, I’m just going to be blunt: stay out. The fact that people do not participate in the affairs of their country and then complain about the way things are run in their country, does not mean the system is broken. It means the people are broken. Any solution with tech always ends up the same way: more power consolidated in an ever shrinking number of hands. I remember the promise of the internet, and so many other things. They all have ended the same way. It does not matter the good you may intend. Just like the internet, it will be taken and warped to the purposes of others. So enough already.
Originally posted to the wrong article. Ah well.

Danish February 7, 2025 12:25 PM

As BernhardM said, there’s something perverse about discussing democracy in an exclusive technocratic forum.

I think that the biggest threat to democracy these days is technocracy. It could be argued that technocratic elites replaced the nobility as the de facto ruling class.
More radically it could be argued that democracy hasn’t, as of this date, been implemented anywhere in the world, meaning that in practice the majority never controls the country in a meaningful way. The technocrats replaced the nobles and the people never stood a chance.

Sure, there are special cases like Brexit but they are the exception to the rule, including in the UK. The circumstances of the Brexit referendum underscore this, with the party that proposed it eventually campaigning against it and reluctantly accepting the results after years of postponing it.

The Swiss experiment in direct democracy mostly fails by design, because it gives the elected government a say in the process of the frequent referendums, making them mostly a rubber stamp for the elected government’s position. If the government were forbidden from any involvement or opinion, those referendums could be much more meaningful.

The biggest question, though, is about making major, time-sensitive decisions in real time. The most common example is security matters and wars, which also bring the problem of classified information. So you could claim that the first question about democracy is who should have the nuclear codes to decide on a second strike if the enemy strikes. One of the commentators in the conference talked about “dignified heads of state,” implying they should be figureheads without real power. Then who should hold the nuclear codes, according to him? A technocrat chosen by whom and accountable to whom?

So I think that pure direct democracy can’t work; there’s a need for some sort of elected representative to make time-sensitive, classified decisions in real time.
