Entries Tagged "artificial intelligence"


Political Disinformation and AI

Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.

Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the US presidential election. Over the next seven years, a number of countries—most prominently China and Iran—used social media to influence foreign elections, both in the US and elsewhere in the world. There’s no reason to expect 2023 and 2024 to be any different.

But there is a new element: generative AI and large language models. These have the ability to quickly and easily produce endless reams of text on any topic in any tone from any perspective. As a security expert, I believe it’s a tool uniquely suited to Internet-era propaganda.

This is all very new. ChatGPT was introduced in November 2022. The more powerful GPT-4 was released in March 2023. Other language and image production AIs are around the same age. It’s not clear how these technologies will change disinformation, how effective they will be or what effects they will have. But we are about to find out.

Election season will soon be in full swing in much of the democratic world. Seventy-one percent of people living in democracies will vote in a national election between now and the end of next year. Among them: Argentina and Poland in October, Taiwan in January, Indonesia in February, India in April, the European Union and Mexico in June, and the US in November. Nine African democracies, including South Africa, will have elections in 2024. Australia and the UK don’t have fixed dates, but elections are likely to occur in 2024.

Many of those elections matter a lot to the countries that have run social media influence operations in the past. China cares a great deal about Taiwan, Indonesia, India, and many African countries. Russia cares about the UK, Poland, Germany, and the EU in general. Everyone cares about the United States.

And that’s only considering the largest players. Every US national election since 2016 has brought with it an additional country attempting to influence the outcome. First it was just Russia, then Russia and China, and most recently those two plus Iran. As the financial cost of foreign influence decreases, more countries can get in on the action. Tools like ChatGPT significantly reduce the price of producing and distributing propaganda, bringing that capability within the budget of many more countries.

A couple of months ago, I attended a conference with representatives from all of the cybersecurity agencies in the US. They talked about their expectations regarding election interference in 2024. They expected the usual players—Russia, China, and Iran—and a significant new one: “domestic actors.” That is a direct result of this reduced cost.

Of course, there’s a lot more to running a disinformation campaign than generating content. The hard part is distribution. A propagandist needs a series of fake accounts on which to post, and others to boost it into the mainstream where it can go viral. Companies like Meta have gotten much better at identifying these accounts and taking them down. Just last month, Meta announced that it had removed 7,704 Facebook accounts, 954 Facebook pages, 15 Facebook groups, and 15 Instagram accounts associated with a Chinese influence campaign, and identified hundreds more accounts on TikTok, X (formerly Twitter), LiveJournal, and Blogspot. But that was a campaign that began four years ago, producing pre-AI disinformation.

Disinformation is an arms race. Both attackers and defenders have improved, and the world of social media has changed as well. Four years ago, Twitter was a direct line to the media, and propaganda on that platform was a way to tilt the political narrative. A Columbia Journalism Review study found that most major news outlets used Russian tweets as sources for partisan opinion. That Twitter, with virtually every news editor reading it and everyone who was anyone posting there, is no more.

Many propaganda outlets moved from Facebook to messaging platforms such as Telegram and WhatsApp, which makes them harder to identify and remove. TikTok is a newer platform that is controlled by China and more suitable for short, provocative videos—ones that AI makes much easier to produce. And the current crop of generative AIs is being connected to tools that will make content distribution easier as well.

Generative AI tools also allow for new techniques of production and distribution, such as low-level propaganda at scale. Imagine a new AI-powered personal account on social media. For the most part, it behaves normally. It posts about its fake everyday life, joins interest groups and comments on others’ posts, and generally behaves like a normal user. And once in a while, not very often, it says—or amplifies—something political. These persona bots, as computer scientist Latanya Sweeney calls them, have negligible influence on their own. But replicated by the thousands or millions, they would have a lot more.
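
To get a sense of the scale involved, consider a back-of-the-envelope calculation. Every number here is a hypothetical assumption, chosen only to illustrate how accounts that are almost never political still add up to a flood of propaganda in aggregate:

```python
# Back-of-the-envelope arithmetic for persona bots at scale.
# All numbers are hypothetical assumptions, for illustration only.

num_bots = 1_000_000        # assumed size of the bot network
posts_per_bot_per_day = 5   # assumed mostly-benign "everyday life" posts
political_fraction = 0.01   # assume 1 post in 100 carries a political message

total_posts = num_bots * posts_per_bot_per_day
political_posts = total_posts * political_fraction

# Each individual account posts something political only about once every
# 20 days, which is hard to flag, yet the network as a whole still injects
# tens of thousands of political posts per day.
print(f"Total posts per day:     {total_posts:,}")          # 5,000,000
print(f"Political posts per day: {political_posts:,.0f}")   # 50,000
```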

That’s just one scenario. The military officers in Russia, China, and elsewhere in charge of election interference are likely to have their best people thinking of others. And their tactics are likely to be much more sophisticated than they were in 2016.

Countries like Russia and China have a history of testing both cyberattacks and information operations on smaller countries before rolling them out at scale. When that happens, it’s important to be able to fingerprint these tactics. Countering new disinformation campaigns requires being able to recognize them, and recognizing them requires looking for and cataloging them now.

In the computer security world, researchers recognize that sharing methods of attack and their effectiveness is the only way to build strong defensive systems. The same kind of thinking also applies to these information campaigns: The more that researchers study what techniques are being employed in distant countries, the better they can defend their own countries.

Disinformation campaigns in the AI era are likely to be much more sophisticated than they were in 2016. I believe the US needs efforts in place to fingerprint and identify AI-produced propaganda in places such as Taiwan, where a presidential candidate claims a deepfake audio recording has defamed him. Otherwise, we’re not going to see them when they arrive here. Unfortunately, researchers are instead being targeted and harassed.

Maybe this will all turn out okay. There have been some important democratic elections in the generative AI era with no significant disinformation issues: primaries in Argentina, first-round elections in Ecuador, and national elections in Thailand, Turkey, Spain, and Greece. But the sooner we know what to expect, the better we can deal with what comes.

This essay previously appeared in The Conversation.

Posted on October 5, 2023 at 7:12 AM

NSA AI Security Center

The NSA is starting a new artificial intelligence security center:

The AI security center’s establishment follows an NSA study that identified securing AI models from theft and sabotage as a major national security challenge, especially as generative AI technologies emerge with immense transformative potential for both good and evil.

Nakasone said it would become “NSA’s focal point for leveraging foreign intelligence insights, contributing to the development of best practices guidelines, principles, evaluation, methodology and risk frameworks” for both AI security and the goal of promoting the secure development and adoption of AI within “our national security systems and our defense industrial base.”

He said it would work closely with U.S. industry, national labs, academia and the Department of Defense as well as international partners.

Posted on October 2, 2023 at 12:40 PM

Detecting AI-Generated Text

There are no reliable ways to distinguish text written by a human from text written by a large language model. OpenAI writes:

Do AI detectors work?

  • In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.
  • Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this [essay]?” or “could this have been written by AI?” These responses are random and have no basis in fact.
  • To elaborate on our research into the shortcomings of detectors, one of our key findings was that these tools sometimes suggest that human-written content was generated by AI.
    • When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.
    • There were also indications that it could disproportionately impact students who had learned or were learning English as a second language and students whose writing was particularly formulaic or concise.
  • Even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.

There is some good research in watermarking LLM-generated text, but the watermarks are not generally robust.
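
The best-known watermarking proposals (for example, the “green list” scheme of Kirchenbauer et al.) have the generator pseudorandomly favor a subset of tokens keyed on the preceding context; a detector then checks whether that subset appears more often than chance. Here is a minimal sketch of the detection side. It is a toy illustration of the idea, not any vendor’s actual implementation, and it operates on whole words rather than model tokens:

```python
# Toy sketch of "green list" watermark detection for LLM-generated text.
# Real schemes operate on the model's token vocabulary and bias sampling;
# this simplified version just checks word pairs.

import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign ~half of all tokens to a 'green list' keyed on
    the previous token. A watermarking generator would preferentially sample
    green tokens; ordinary text lands on green about half the time."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Detector side: how often is each token 'green' given its predecessor?
    Values well above 0.5 over a long text suggest the watermark is present."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(is_green(prev, cur) for prev, cur in pairs)
    return hits / max(len(pairs), 1)

sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(sample):.2f}")  # ~0.5 for unwatermarked text
```

The weakness is exactly the robustness problem noted above: paraphrasing, translation, or even light editing scrambles the pairs and washes the signal out.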

I don’t think the detectors are going to win this arms race.

Posted on September 19, 2023 at 7:08 AM

LLM Summary of My Book Beyond Fear

Claude (Anthropic’s LLM) was given this prompt: Please summarize the themes and arguments of Bruce Schneier’s book Beyond Fear. I’m particularly interested in a taxonomy of his ethical arguments—please expand on that. Then lay out the most salient criticisms of the book.

Claude’s reply:

Here’s a brief summary of the key themes and arguments made in security expert Bruce Schneier’s book Beyond Fear: Thinking Sensibly About Security in an Uncertain World:

Central Themes:

  • Effective security requires realistic risk assessment, not fear-driven responses. We need to rationally determine actual threats versus perceived ones.
  • Security trade-offs need to be carefully balanced against costs to other values like privacy, freedom, and social/economic impacts.
  • Human psychology and biases lead to exaggerated fears that politicians/marketers can exploit, resulting in wasteful and theatrical “security theater.”
  • Security is a process that requires ongoing flexibility, not total prevention. We can’t anticipate every threat but can build resilience.

Key Arguments:

  • Analyze risks empirically using evidence, statistics, and probability rather than intuition. Avoid underestimating everyday risks while overestimating novel or sensational ones.
  • Reject the temptation to implement oppressive security restrictions after tragic but unlikely events. Focus only on proportional responses.
  • Where possible, favor openness and transparency over aggressive data collection or restrictions which erode civil liberties.
  • Seek security policies and technologies that enhance society broadly rather than benefit special interests.

Schneier argues for rational, ethical approaches to security aimed at genuine protection while avoiding violations of human rights in the name of safety.

Here are some of the key ethical arguments Bruce Schneier makes in Beyond Fear regarding security policy and practices:

  • Utilitarian Ethics – Security measures should aim to provide the greatest societal benefit and well-being for the most people, not just special interests.
  • Individual Autonomy – Restrictions should be minimized to avoid excessive infringements on civil liberties and individual freedoms. Surveillance creep risks violating autonomy.
  • Privacy Rights – Pervasive monitoring and data collection erode privacy rights and dignity. Citizens are entitled to some degree of opacity.
  • Proportional Responses – Reactive policies should weigh costs versus statistically likely benefits. Draconian rules beyond proportionality are unethical.
  • Justice and Fairness – Security should not create racial, religious or other profiling biases that punish groups rather than conduct.
  • Accountability and Transparency – Ethical security requires checks against excessive power and oversight for responsibility. Secrecy enables abuses.
  • Honest Representation – Officials and pundits have an ethical duty not to stoke disproportionate public fears. Security theater manufactures consent.
  • Integrity and Character – Leaders should have the wisdom and courage to make difficult trade-offs rather than exploit fears for authority and control.

Overall Schneier argues security policies grounded in ethics rather than fear promote human well-being while recognizing inherent trade-offs. Prudence, transparency, and proportionality are essential.

Here are some of the most salient criticisms made against the arguments put forth in Bruce Schneier’s book Beyond Fear:

  • Downplays real threats – Some argue Schneier dangerously downplays serious risks like terrorism and cybercrime by calling fears overblown.
  • Too theoretical – His risk analysis approach may rely too much on statistics versus real-world uncertainties that defy probabilistic reasoning.
  • No easy answers – While citing misguided policies, critics say Schneier provides few actionable solutions or concrete policy guidance.
  • Conflicting values – His balancing of security, privacy and liberty involves inherent trade-offs and tough choices he gives little direction on.
  • Politically naive – Suggesting special interests drive policy may under-appreciate legitimate national security concerns of elected officials.
  • Hindsight bias – Assessing past overreactions easier than predicting future threats that justify caution today.
  • No accounting for psychology – Factoring human irrationality into policy is essential, not dismissed as “security theater.”
  • Technical focus – Focuses narrowly on certain technologies while ignoring broader organizational and sociological dimensions.

In summary, critics argue Schneier’s risk analysis orientation is too clinical, abstract, and devoid of political and human realities to transform concrete policy in the ways he intends. His framework may therefore lack sufficient nuance or guidance.

It’s interesting. The summary is pretty accurate, and so are the criticisms. The book was a product of my thinking twenty years ago, and my own thinking has evolved to address those criticisms.

Of course, this only works with older books that the LLM has ingested, and probably works better with books that other people have written about.

Posted on September 15, 2023 at 3:12 PM

On Robots Killing People

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar circumstances. A malfunctioning robot he went to inspect killed him when he obstructed its path, according to Gabriel Hallevy in his 2013 book, When Robots Kill: Artificial Intelligence Under Criminal Law. As Hallevy puts it, the robot simply determined that “the most efficient way to eliminate the threat was to push the worker into an adjacent machine.” From 1992 to 2017, workplace robots were responsible for 41 recorded deaths in the United States—and that’s likely an underestimate, especially when you consider knock-on effects from automation, such as job loss. A robotic anti-aircraft cannon killed nine South African soldiers in 2007 when a possible software failure led the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in killing Stephen Pettitt during a routine operation that had occurred a few years earlier.

You get the picture. Robots—“intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic “dogs” are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet.

Historically, major disasters have needed to occur to spur regulation—the types of disasters we would ideally foresee and avoid in today’s AI paradigm. The 1905 Grover Shoe Factory disaster led to regulations governing the safe operation of steam boilers. At the time, companies claimed that large steam-automation machines were too complex to rush safety regulations. This, of course, led to overlooked safety flaws and escalating disasters. It wasn’t until the American Society of Mechanical Engineers demanded risk analysis and transparency that dangers from these huge tanks of boiling water, once considered mystifying, were made easily understandable. The 1911 Triangle Shirtwaist Factory fire led to regulations on sprinkler systems and emergency exits. And the preventable 1912 sinking of the Titanic resulted in new regulations on lifeboats, safety audits, and on-ship radios.

Perhaps the best analogy is the evolution of the Federal Aviation Administration. Fatalities in the first decades of aviation forced regulation, which required new developments in both law and technology. Starting with the Air Commerce Act of 1926, Congress recognized that the integration of aerospace tech into people’s lives and our economy demanded the highest scrutiny. Today, every airline crash is closely examined, motivating new technologies and procedures.

Any regulation of industrial robots stems from existing industrial regulation, which has been evolving for many decades. The Occupational Safety and Health Act of 1970 established safety standards for machinery, and the Robotic Industries Association, now merged into the Association for Advancing Automation, has been instrumental in developing and updating specific robot-safety standards since its founding in 1974. Those standards, with obscure names such as R15.06 and ISO 10218, emphasize inherent safe design, protective measures, and rigorous risk assessments for industrial robots.

But as technology continues to change, the government needs to more clearly regulate how and when robots can be used in society. Laws need to clarify who is responsible, and what the legal consequences are, when a robot’s actions result in harm. Yes, accidents happen. But the lessons of aviation and workplace safety demonstrate that accidents are preventable when they are openly discussed and subjected to proper expert scrutiny.

AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days. (OpenAI did not comment when asked about its stance on regulation; previously, it has said that “achieving our mission requires that we work to mitigate both current and longer-term risks,” and that it is working toward that goal by “collaborating with policymakers, researchers and users.”)

Large corporations have a tendency to develop computer technologies to self-servingly shift the burdens of their own shortcomings onto society at large, or to claim that safety regulations protecting society impose an unjust cost on corporations themselves, or that security baselines stifle innovation. We’ve heard it all before, and we should be extremely skeptical of such claims. Today’s AI-related robot deaths are no different from the robot accidents of the past. Those industrial robots malfunctioned, and human operators trying to assist were killed in unexpected ways. Since the first known death resulting from Tesla’s Autopilot in January 2016, the feature has been implicated in more than 40 deaths, according to estimates drawn from official reports. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We’re concerned that AI-controlled robots already are moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.

As we move into a future where robots are becoming integral to our lives, we can’t forget that safety is a crucial part of innovation. True technological progress comes from applying comprehensive safety standards across technologies, even in the realm of the most futuristic and captivating robotic visions. By learning lessons from past fatalities, we can enhance safety protocols, rectify design flaws, and prevent further unnecessary loss of life.

For example, the UK government already sets out statements that safety matters. Lawmakers must reach further back in history to become more future-focused on what we must demand right now: modeling threats, calculating potential scenarios, enabling technical blueprints, and ensuring responsible engineering for building within parameters that protect society at large. Decades of experience have given us the empirical evidence to guide our actions toward a safer future with robots. Now we need the political will to regulate.

This essay was written with Davi Ottenheimer, and previously appeared on Atlantic.com.

Posted on September 11, 2023 at 7:04 AM

LLMs and Tool Use

Last March, just two weeks after GPT-4 was released, researchers at Microsoft quietly announced a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the best ways to teach LLMs how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date.

The Microsoft project aims to teach AI how to use any and all digital tools in one fell swoop, a clever and efficient approach. Today, LLMs can do a pretty good job of recommending pizza toppings to you if you describe your dietary preferences and can draft dialog that you could use when you call the restaurant. But most AI tools can’t place the order, not even online. In contrast, Google’s seven-year-old Assistant tool can synthesize a voice on the telephone and fill out an online order form, but it can’t pick a restaurant or guess your order. By combining these capabilities, though, a tool-using AI could do it all. An LLM with access to your past conversations and tools like calorie calculators, a restaurant menu database, and your digital payment wallet could feasibly judge that you are trying to lose weight and want a low-calorie option, find the nearest restaurant with toppings you like, and place the delivery order. If it has access to your payment history, it could even guess at how generously you usually tip. If it has access to the sensors on your smartwatch or fitness tracker, it might be able to sense when your blood sugar is low and order the pie before you even realize you’re hungry.
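
Mechanically, “tool use” usually means the model emits a structured request (which tool, with which arguments) and ordinary software executes it and feeds the result back. A minimal dispatch loop might look like the sketch below; the tool functions and the scripted model decisions are hypothetical placeholders, not any particular vendor’s API:

```python
# Minimal sketch of an LLM tool-dispatch loop. The tools and the scripted
# "model" below are placeholders; real systems differ in detail but follow
# the same decide/execute/feed-back cycle.

import json

def find_restaurant(cuisine: str, max_calories: int) -> dict:
    # Placeholder for a real restaurant/menu API.
    return {"name": "Example Pizzeria", "item": "veggie pizza", "calories": 800}

def place_order(restaurant: str, item: str) -> dict:
    # Placeholder for a real ordering/payment API.
    return {"status": "ordered", "restaurant": restaurant, "item": item}

TOOLS = {"find_restaurant": find_restaurant, "place_order": place_order}

# Scripted stand-in for the LLM's tool-selection decisions.
_SCRIPT = iter([
    {"tool": "find_restaurant", "args": {"cuisine": "pizza", "max_calories": 900}},
    {"tool": "place_order", "args": {"restaurant": "Example Pizzeria", "item": "veggie pizza"}},
    {"tool": None},  # model signals it is done
])

def call_model(conversation: list[dict]) -> dict:
    return next(_SCRIPT)

def run_agent(user_request: str, max_steps: int = 5) -> list[dict]:
    conversation = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        decision = call_model(conversation)
        if decision.get("tool") is None:
            break
        result = TOOLS[decision["tool"]](**decision["args"])
        conversation.append({"role": "tool", "name": decision["tool"],
                             "content": json.dumps(result)})
    return conversation

print(run_agent("Order me a low-calorie pizza"))
```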

Perhaps the most compelling potential applications of tool use are those that give AIs the ability to improve themselves. Suppose, for example, you asked a chatbot for help interpreting some facet of ancient Roman law that no one had thought to include examples of in the model’s original training. An LLM empowered to search academic databases and trigger its own training process could fine-tune its understanding of Roman law before answering. Access to specialized tools could even help a model like this better explain itself. While LLMs like GPT-4 already do a fairly good job of explaining their reasoning when asked, these explanations emerge from a “black box” and are vulnerable to errors and hallucinations. But a tool-using LLM could dissect its own internals, offering empirical assessments of its own reasoning and deterministic explanations of why it produced the answer it did.

If given access to tools for soliciting human feedback, a tool-using LLM could even generate specialized knowledge that isn’t yet captured on the web. It could post a question to Reddit or Quora or delegate a task to a human on Amazon’s Mechanical Turk. It could even seek out data about human preferences by doing survey research, either to provide an answer directly to you or to fine-tune its own training to be able to better answer questions in the future. Over time, tool-using AIs might start to look a lot like tool-using humans. An LLM can generate code much faster than any human programmer, so it can manipulate the systems and services of your computer with ease. It could also use your computer’s keyboard and cursor the way a person would, allowing it to use any program you do. And it could improve its own capabilities, using tools to ask questions, conduct research, and write code to incorporate into itself.

It’s easy to see how this kind of tool use comes with tremendous risks. Imagine an LLM being able to find someone’s phone number, call them and surreptitiously record their voice, guess what bank they use based on the largest providers in their area, impersonate them on a phone call with customer service to reset their password, and liquidate their account to make a donation to a political party. Each of these tasks invokes a simple tool—an Internet search, a voice synthesizer, a bank app—and the LLM scripts the sequence of actions using the tools.

We don’t yet know how successful any of these attempts will be. As remarkably fluent as LLMs are, they weren’t built specifically for the purpose of operating tools, and it remains to be seen how their early successes in tool use will translate to future use cases like the ones described here. As such, giving the current generative AI sudden access to millions of APIs—as Microsoft plans to—could be a little like letting a toddler loose in a weapons depot.

Companies like Microsoft should be particularly careful about granting AIs access to certain combinations of tools. Access to tools to look up information, make specialized calculations, and examine real-world sensors all carry a modicum of risk. The ability to transmit messages beyond the immediate user of the tool or to use APIs that manipulate physical objects like locks or machines carries much larger risks. Combining these categories of tools amplifies the risks of each.
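
One way to act on that caution is an explicit policy layer that classifies every tool and refuses risky combinations before a call ever executes. The sketch below is illustrative only; the category labels and rules are assumptions, not an existing standard:

```python
# Illustrative policy gate over tool categories. The categories and rules
# are assumptions made for this example, not an existing framework.

TOOL_CATEGORIES = {
    "web_search":    "read",      # look up information
    "health_sensor": "read",      # examine real-world sensors
    "send_email":    "transmit",  # message beyond the immediate user
    "post_message":  "transmit",
    "unlock_door":   "actuate",   # manipulate physical objects
}

def allowed(tools_used_this_session: set[str], requested_tool: str,
            human_approved: bool = False) -> bool:
    """Return True if the requested tool may run in this session."""
    categories_used = {TOOL_CATEGORIES[t] for t in tools_used_this_session}
    category = TOOL_CATEGORIES[requested_tool]

    if category == "actuate" and not human_approved:
        return False  # physical effects require a human in the loop
    if category == "transmit" and "read" in categories_used:
        return False  # don't let the model transmit what it just read
    return True

# After reading a health sensor, posting a public message is refused:
print(allowed({"health_sensor"}, "post_message"))  # False
print(allowed(set(), "web_search"))                # True
```

The point of such a design is that risk is a property of combinations of tools, not of any individual tool.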

The operators of the most advanced LLMs, such as OpenAI, should continue to proceed cautiously as they begin enabling tool use and should restrict uses of their products in sensitive domains such as politics, health care, banking, and defense. But it seems clear that these industry leaders have already largely lost their moat around LLM technology—open source is catching up. Recognizing this trend, Meta has taken an “If you can’t beat ’em, join ’em” approach and partially embraced the role of providing open source LLM platforms.

On the policy front, national—and regional—AI prescriptions seem futile. Europe is the only significant jurisdiction that has made meaningful progress on regulating the responsible use of AI, but it’s not entirely clear how regulators will enforce it. And the US is playing catch-up and seems destined to be much more permissive in allowing even risks deemed “unacceptable” by the EU. Meanwhile, no government has invested in a “public option” AI model that would offer an alternative to Big Tech that is more responsive and accountable to its citizens.

Regulators should consider what AIs are allowed to do autonomously, like whether they can be assigned property ownership or register a business. Perhaps more sensitive transactions should require a verified human in the loop, even at the cost of some added friction. Our legal system may be imperfect, but we largely know how to hold humans accountable for misdeeds; the trick is not to let them shunt their responsibilities to artificial third parties. We should continue pursuing AI-specific regulatory solutions while also recognizing that they are not sufficient on their own.

We must also prepare for the benign ways that tool-using AI might impact society. In the best-case scenario, such an LLM may rapidly accelerate a field like drug discovery, and the patent office and FDA should prepare for a dramatic increase in the number of legitimate drug candidates. We should reshape how we interact with our governments to take advantage of AI tools that give us all dramatically more potential to have our voices heard. And we should make sure that the economic benefits of superintelligent, labor-saving AI are equitably distributed.

We can debate whether LLMs are truly intelligent or conscious, or have agency, but AIs will become increasingly capable tool users either way. Some things are greater than the sum of their parts. An AI with the ability to manipulate and interact with even simple tools will become vastly more powerful than the tools themselves. Let’s be sure we’re ready for them.

This essay was written with Nathan Sanders, and previously appeared on Wired.com.

Posted on September 8, 2023 at 7:05 AM

December’s Reimagining Democracy Workshop

Imagine that we’ve all—all of us, all of society—landed on some alien planet, and we have to form a government: clean slate. We don’t have any legacy systems from the US or any other country. We don’t have any special or unique interests to perturb our thinking.

How would we govern ourselves?

It’s unlikely that we would use the systems we have today. The modern representative democracy was the best form of government that mid-eighteenth-century technology could conceive of. The twenty-first century is a different place scientifically, technically and socially.

For example, the mid-eighteenth-century democracies were designed under the assumption that both travel and communications were hard. Does it still make sense for all of us living in the same place to organize every few years and choose one of us to go to a big room far away and create laws in our name?

Representative districts are organized around geography, because that’s the only way that made sense 200-plus years ago. But we don’t have to do it that way. We can organize representation by age: one representative for the thirty-one-year-olds, another for the thirty-two-year-olds, and so on. We can organize representation randomly: by birthday, perhaps. We can organize any way we want.

US citizens currently elect people for terms ranging from two to six years. Is ten years better? Is ten days better? Again, we have more technology and therefore more options.

Indeed, as a technologist who studies complex systems and their security, I believe the very idea of representative government is a hack to get around the technological limitations of the past. Voting at scale is easier now than it was 200 years ago. Certainly we don’t want to all have to vote on every amendment to every bill, but what’s the optimal balance between votes made in our name and ballot measures that we all vote on?

In December 2022, I organized a workshop to discuss these and other questions. I brought together fifty people from around the world: political scientists, economists, law professors, AI experts, activists, government officials, historians, science fiction writers and more. We spent two days talking about these ideas. Several themes emerged from the event.

Misinformation and propaganda were themes, of course—and the inability to engage in rational policy discussions when people can’t agree on the facts.

Another theme was the harms of creating a political system whose primary goals are economic. Given the ability to start over, would anyone create a system of government that optimizes the near-term financial interest of the wealthiest few? Or whose laws benefit corporations at the expense of people?

Another theme was capitalism, and how it is or isn’t intertwined with democracy. And while the modern market economy made a lot of sense in the industrial age, it’s starting to fray in the information age. What comes after capitalism, and how does it affect how we govern ourselves?

Many participants examined the effects of technology, especially artificial intelligence. We looked at whether—and when—we might be comfortable ceding power to an AI. Sometimes it’s easy. I’m happy for an AI to figure out the optimal timing of traffic lights to ensure the smoothest flow of cars through the city. When will we be able to say the same thing about setting interest rates? Or designing tax policies?

How would we feel about an AI device in our pocket that voted in our name, thousands of times per day, based on preferences that it inferred from our actions? If an AI system could determine optimal policy solutions that balanced every voter’s preferences, would it still make sense to have representatives? Maybe we should vote directly for ideas and goals instead, and leave the details to the computers. On the other hand, technological solutionism regularly fails.

Scale was another theme. The size of modern governments reflects the technology at the time of their founding. European countries and the early American states are a particular size because that’s what was governable in the 18th and 19th centuries. Larger governments—the US as a whole, the European Union—reflect a world in which travel and communications are easier. The problems we have today are primarily either local, at the scale of cities and towns, or global—even if they are currently regulated at state, regional or national levels. This mismatch is especially acute when we try to tackle global problems. In the future, do we really have a need for political units the size of France or Virginia? Or is it a mixture of scales that we really need, one that moves effectively between the local and the global?

As to other forms of democracy, we discussed one from history and another made possible by today’s technology.

Sortition is a system of choosing political officials randomly to deliberate on a particular issue. We use it today when we pick juries, but both the ancient Greeks and some cities in Renaissance Italy used it to select major political officials. Today, several countries—largely in Europe—are using sortition for some policy decisions. We might randomly choose a few hundred people, representative of the population, to spend a few weeks being briefed by experts and debating the problem—and then decide on environmental regulations, or a budget, or pretty much anything.

Liquid democracy does away with elections altogether. Everyone has a vote, and they can keep the power to cast it themselves or assign it to another person as a proxy. There are no set elections; anyone can reassign their proxy at any time. And there’s no reason to make this assignment all or nothing. Perhaps proxies could specialize: one set of people focused on economic issues, another group on health and a third bunch on national defense. Then regular people could assign their votes to whichever of the proxies most closely matched their views on each individual matter—or step forward with their own views and begin collecting proxy support from other people.
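
Mechanically, liquid democracy is just a delegation graph: each voter either casts a direct vote on an issue or points at a proxy, and tallying means following those pointers until a direct vote is found (treating cycles and dangling delegations as abstentions). A minimal sketch, with made-up voters and a made-up question:

```python
# Minimal sketch of tallying votes under liquid democracy. Voters either
# cast a direct vote or delegate to a proxy; delegation chains are followed
# until a direct vote is found, and cycles count as abstentions.
# The names and votes are invented for illustration.

from collections import Counter

def resolve_vote(voter, direct_votes, proxies):
    """Follow the delegation chain from `voter` to a direct vote, if any."""
    seen = set()
    while voter not in direct_votes:
        if voter in seen or voter not in proxies:
            return None  # cycle or dangling delegation: treat as abstention
        seen.add(voter)
        voter = proxies[voter]
    return direct_votes[voter]

def tally(voters, direct_votes, proxies):
    counts = Counter()
    for v in voters:
        choice = resolve_vote(v, direct_votes, proxies)
        if choice is not None:
            counts[choice] += 1
    return counts

# Hypothetical example: an economic-policy question.
voters = ["alice", "bob", "carol", "dave", "erin"]
direct_votes = {"alice": "yes", "dave": "no"}           # only two vote directly
proxies = {"bob": "alice", "carol": "bob", "erin": "dave"}

print(tally(voters, direct_votes, proxies))             # Counter({'yes': 3, 'no': 2})
```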

This all brings up another question: Who gets to participate? And, more generally, whose interests are taken into account? Early democracies were really nothing of the sort: They limited participation by gender, race and land ownership.

We should debate lowering the voting age, but even without voting we recognize that children too young to vote have rights—and, in some cases, so do other species. Should future generations get a “voice,” whatever that means? What about nonhumans or whole ecosystems?

Should everyone get the same voice? Right now in the US, the outsize effect of money in politics gives the wealthy disproportionate influence. Should we encode that explicitly? Maybe younger people should get a more powerful vote than everyone else. Or maybe older people should.

Those questions lead to ones about the limits of democracy. All democracies have boundaries limiting what the majority can decide. We all have rights: the things that cannot be taken away from us. We cannot vote to put someone in jail, for example.

But while we can’t vote a particular publication out of existence, we can to some degree regulate speech. In this hypothetical community, what are our rights as individuals? What are the rights of society that supersede those of individuals?

Personally, I was most interested in how these systems fail. As a security technologist, I study how complex systems are subverted—hacked, in my parlance—for the benefit of a few at the expense of the many. Think tax loopholes, or tricks to avoid government regulation. I want any government system to be resilient in the face of that kind of trickery.

Or, to put it another way, I want the interests of each individual to align with the interests of the group at every level. We’ve never had a system of government with that property before—even equal protection guarantees and First Amendment rights exist in a competitive framework that puts individuals’ interests in opposition to one another. But—in the age of such existential risks as climate and biotechnology and maybe AI—aligning interests is more important than ever.

Our workshop didn’t produce any answers; that wasn’t the point. Our current discourse is filled with suggestions on how to patch our political system. People regularly debate changes to the Electoral College, or the process of creating voting districts, or term limits. But those are incremental changes.

It’s hard to find people who are thinking more radically: looking beyond the horizon for what’s possible eventually. And while true innovation in politics is a lot harder than innovation in technology, especially without a violent revolution forcing change, it’s something that we as a species are going to have to get good at—one way or another.

This essay previously appeared in The Conversation.

Posted on August 23, 2023 at 7:06 AM

Applying AI to License Plate Surveillance

License plate scanners aren’t new. Neither is using them for bulk surveillance. What’s new is that AI is being used on the data, identifying “suspicious” vehicle behavior:

Typically, Automatic License Plate Recognition (ALPR) technology is used to search for plates linked to specific crimes. But in this case it was used to examine the driving patterns of anyone passing one of Westchester County’s 480 cameras over a two-year period. Zayas’ lawyer Ben Gold contested the AI-gathered evidence against his client, decrying it as “dragnet surveillance.”

And he had the data to back it up. A FOIA he filed with the Westchester police revealed that the ALPR system was scanning over 16 million license plates a week, across 480 ALPR cameras. Of those systems, 434 were stationary, attached to poles and signs, while the remaining 46 were mobile, attached to police vehicles. The AI was not just looking at license plates either. It had also been taking notes on vehicles’ make, model and color—useful when a plate number for a suspect vehicle isn’t visible or is unknown.
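
As a toy illustration of how little sophistication this kind of pattern analysis requires (this is not Westchester’s actual system; the data and threshold below are invented), consider flagging any plate that repeatedly passes the same camera:

```python
# Toy illustration of "driving pattern" analysis over an ALPR read log.
# Given (plate, camera_id, timestamp) records, count how often each plate
# passes each camera. Not the actual system described above; the records
# and threshold are made up.

from collections import defaultdict
from datetime import datetime

reads = [
    ("ABC123", "cam_017", datetime(2023, 3, 1, 23, 40)),
    ("ABC123", "cam_017", datetime(2023, 3, 2, 23, 45)),
    ("ABC123", "cam_017", datetime(2023, 3, 3, 23, 38)),
    ("XYZ789", "cam_201", datetime(2023, 3, 1, 8, 15)),
]

visits = defaultdict(int)
for plate, camera, _ts in reads:
    visits[(plate, camera)] += 1

# At the scale in the article (16 million reads a week across 480 cameras,
# roughly 33,000 reads per camera per week), these counts accumulate fast.
FLAG_THRESHOLD = 3  # arbitrary, for illustration
for (plate, camera), count in visits.items():
    if count >= FLAG_THRESHOLD:
        print(f"{plate} passed {camera} {count} times")
```

The privacy concern is precisely that inferences like this fall out of the raw read log almost for free once it is retained at this scale.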

Posted on August 22, 2023 at 7:04 AM

White House Announces AI Cybersecurity Challenge

At Black Hat last week, the White House announced an AI Cyber Challenge. Gizmodo reports:

The new AI cyber challenge (which is being abbreviated “AIxCC”) will have a number of different phases. Interested would-be competitors can now submit their proposals to the Small Business Innovation Research program for evaluation and, eventually, selected teams will participate in a 2024 “qualifying event.” During that event, the top 20 teams will be invited to a semifinal competition at that year’s DEF CON, another large cybersecurity conference, where the field will be further whittled down.

[…]

To secure the top spot in DARPA’s new competition, participants will have to develop security solutions that do some seriously novel stuff. “To win first-place, and a top prize of $4 million, finalists must build a system that can rapidly defend critical infrastructure code from attack,” said Perri Adams, program manager for DARPA’s Information Innovation Office, during a Zoom call with reporters Tuesday. In other words: the government wants software that is capable of identifying and mitigating risks by itself.

This is a great idea. I was a big fan of DARPA’s AI capture-the-flag event in 2016, and am happy to see that DARPA is again inciting research in this area. (China has been doing this every year since 2017.)

Posted on August 21, 2023 at 7:10 AM
