Essays in the Category "AI and Large Language Models"


Artificial Intelligence Can’t Work Without Our Data

We should all be paid for it.

  • Barath Raghavan and Bruce Schneier
  • Politico
  • June 29, 2023

For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.

Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds…

AI Could Shore Up Democracy—Here’s One Way

  • Bruce Schneier and Nathan Sanders
  • The Conversation
  • June 20, 2023

This essay also appeared in ArcaMax, Big News Network, Biloxi Local News & Events, Chicago Sun-Times, Fast Company, GCN, Government Technology, Inkl, Macau Daily Times, MENAFN, Nextgov, and Yahoo.

It’s become fashionable to think of artificial intelligence as an inherently dehumanizing technology, a ruthless force of automation that has unleashed legions of virtual skilled laborers in faceless form. But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most?…

Build AI by the People, for the People

Washington needs to take AI investment out of the hands of private companies.

  • Bruce Schneier and Nathan E. Sanders
  • Foreign Policy
  • June 12, 2023

Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of U.S. tech companies?

Silicon Valley has produced no small number of moral disappointments. Google retired its “don’t be evil” pledge before firing its star ethicist. Self-proclaimed “free speech absolutist” Elon Musk bought Twitter in order to censor political speech, retaliate against journalists, and ease access to the platform for Russian and Chinese propagandists. Facebook lied about how it enabled Russian interference in the 2016 U.S. presidential election and …

Big Tech Isn’t Prepared for A.I.’s Next Chapter

  • Bruce Schneier and Jim Waldo
  • Slate
  • May 30, 2023

In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn’t just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out. Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop. The world of A.I. research has dramatically changed…

Rethinking Democracy for the Age of AI

We need to recreate our system of governance for an era in which transformative technologies pose catastrophic risks as well as great promise.

  • Cyberscoop
  • May 10, 2023

This text is the transcript of a keynote speech delivered at the RSA Conference in San Francisco on April 25, 2023.

There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies…

Can We Build Trustworthy AI?

AI isn’t transparent, so we should all be preparing for a world where AI is not trustworthy, write two Harvard researchers.

  • Nathan Sanders and Bruce Schneier
  • Gizmodo
  • May 4, 2023

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?…

Just Wait Until Trump Is a Chatbot

Artificial intelligence is already showing up in political ads. Soon, it will completely change the nature of campaigning.

  • Nathan E. Sanders and Bruce Schneier
  • The Atlantic
  • April 28, 2023

Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.

We should expect more of this kind of thing. The applications of AI to political advertising have not escaped campaigners, who are already “pressure testing” possible uses for the technology. In the 2024 presidential-election campaign, you can bank on the appearance of AI-generated personalized fundraising emails, text messages from chatbots urging you to vote, and maybe even some deepfaked campaign …

How Artificial Intelligence Can Aid Democracy

  • Bruce Schneier, Henry Farrell, and Nathan E. Sanders
  • Slate
  • April 21, 2023


There’s good reason to fear that A.I. systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.

These risks may be the fallout of a world where businesses deploy poorly tested A.I. systems in a battle for market share, each hoping to establish a monopoly.

But dystopia isn’t the only possible future. A.I. could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an A.I. not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it…

Brace Yourself for a Tidal Wave of ChatGPT Email Scams

Thanks to large language models, a single scammer can run hundreds or thousands of cons in parallel, night and day, in every language under the sun.

  • Bruce Schneier and Barath Raghavan
  • Wired
  • April 4, 2023

Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly based on the details of the experiment.

But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that. A decade ago, one type of spam email had become a punchline on every late-night show: “I am the son of the late king of Nigeria in need of your assistance …” Nearly everyone had gotten one or a thousand of those emails, to the point that it seemed everyone must have known they were scams…

How AI Could Write Our Laws

ChatGPT and other AIs could supercharge the influence of lobbyists—but only if we let them

  • Nathan E. Sanders and Bruce Schneier
  • MIT Technology Review
  • March 14, 2023

Nearly 90% of the multibillion-dollar federal lobbying apparatus in the United States serves corporate interests. In some cases, the objective of that money is obvious. Google pours millions into lobbying on bills related to antitrust regulation. Big energy companies expect action whenever there is a move to end drilling leases for federal lands, in exchange for the tens of millions they contribute to congressional reelection campaigns.

But lobbying strategies are not always so blunt, and the interests involved are not always so obvious. Consider, for example, a 2013 …
