Harvard Technologist Encourages Use of AI to Protect Democracy

Exploring ways in which generative artificial intelligence will affect democracy, prominent Harvard lecturer and public-interest technologist Bruce Schneier said it’s important for people to weigh both the technology’s risks and its benefits, and to be unafraid of using it when it can help.

Schneier said he foresees an “arms race” where those who fail to engage with the technology will quickly lose ground to those who do. He offered examples of how AI can be used throughout the democratic process, including to augment polling, fundraising and campaign strategies in electoral politics, and to more routinely submit comments to regulatory agencies, craft legislation, and improve law enforcement.

“It lowers the barrier to lobbying, I think that’s a good thing,” Schneier said during a Jan. 31 webinar facilitated by the Harvard Kennedy School. Schneier is at work on a new book about generative AI and democracy.

His remarks come as members of Congress discuss integrating AI into their offices, and as some democracy and consumer advocates fear the technology’s search and summarization abilities will further decimate journalism and the broader marketplace for information if it is controlled only by a limited number of corporate giants.

Schneier balanced his encouragement to use the technology by imploring policymakers to keep equity at the center and by reiterating his call for a public AI option.

But as the technology becomes more accessible, he sees greater use of AI potentially shining a light on corners of government that have gone dark due to the decline of human journalism. “A lot of what happens in government happens in secret now because we’ve lost journalism,” he said. “Well this can be used to get it back.”

More broadly, Schneier sees “enormous potential” for AI to be used as a moderator and consensus builder.

It’s “a perfectly reasonable human job where there aren’t enough humans,” he said, noting experiments on such abilities at MIT. “You can imagine a conversation … where the AI serves as the moderator. It ensures all voices are heard, can block hateful or off-topic comments, highlight areas of agreement and disagreement, and help groups reach consensus.”

In another example, Schneier noted ways AI can be used to enhance law enforcement, including the possibility of the IRS using it to better catch tax cheats. “We have a lot of rules and regulations, but not a lot of enforcement, AI can scale that,” he said. But, as with most of his examples, there were cautionary notes, such as the danger of false positives and of proprietary software.

Asked about the risk of bias associated with AI, Schneier highlighted the importance of transparency and auditing.

“We talk about AI being racist, but human policemen are really racist, so maybe we can do better. Or at least we can see how well we’re doing in a way we can’t with a human,” he said.

Describing the human brain as the ultimate “black box,” Schneier said, “There’s a lot of bias embedded in us, structural bias is a thing, and it’s hard to test. I don’t know why someone’s racist, I just see the outputs. At worst, AIs are no worse than humans.”

“I’m optimistic, because at least [with] AIs I can open them up and look at the details in a way that I can’t with a human brain,” Schneier said, noting that scientists are getting better at examining those details.

For now, “the AIs are trained on human output and human output is kind of gross,” but “by cleaning the human output so it’s less gross, and looking at the AI system, we can find the grossness and manually remove it,” Schneier said.

Noting that a human brain, whose neural networks AIs mimic, can, it turns out, be trained to “forget addiction,” he said maybe, eventually, we can get AI to “forget racism.”
