Latest Essays
Build AI by the People, for the People
Washington needs to take AI investment out of the hands of private companies.
Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of U.S. tech companies?
Silicon Valley has produced no small number of moral disappointments. Google retired its “don’t be evil” pledge before firing its star ethicist. Self-proclaimed “free speech absolutist” Elon Musk bought Twitter in order to censor political speech, retaliate against journalists, and ease access to the platform for Russian and Chinese propagandists. Facebook lied about how it enabled Russian interference in the 2016 U.S. presidential election and …
Big Tech Isn’t Prepared for A.I.’s Next Chapter
In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn’t just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out. Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop. The world of A.I. research has dramatically changed…
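That laptop-scale claim is easy to make concrete. Below is a minimal sketch, not drawn from the essay itself, of loading and querying a quantized LLaMA-family model on an ordinary machine; it assumes the open-source llama-cpp-python bindings and a quantized model file obtained separately, with the path, prompt, and parameters as illustrative placeholders.

```python
# Minimal sketch: run a quantized LLaMA-family model locally.
# Assumes the llama-cpp-python bindings (pip install llama-cpp-python)
# and a quantized model file you have obtained yourself; the file path,
# prompt, and settings below are placeholders, not anything specific.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-7b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window, in tokens
    n_threads=8,   # CPU threads; tune for the laptop at hand
)

result = llm(
    "Q: Why does open-sourcing a language model speed up research? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(result["choices"][0]["text"].strip())
```

Quantization (the "Q4" in the placeholder file name) is what shrinks a multi-billion-parameter model enough to fit in laptop memory, trading a small amount of accuracy for a large reduction in size.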
Rethinking Democracy for the Age of AI
We need to recreate our system of governance for an era in which transformative technologies pose catastrophic risks as well as great promise.
This text is the transcript of a keynote speech delivered at the RSA Conference in San Francisco on April 25, 2023.
There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.
At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies…
Can We Build Trustworthy AI?
AI isn’t transparent, so we should all be preparing for a world where AI is not trustworthy, write two Harvard researchers.
We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.
Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?…
Just Wait Until Trump Is a Chatbot
Artificial intelligence is already showing up in political ads. Soon, it will completely change the nature of campaigning.
Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.
We should expect more of this kind of thing. The applications of AI to political advertising have not escaped campaigners, who are already “pressure testing” possible uses for the technology. In the 2024 presidential-election campaign, you can bank on the appearance of AI-generated personalized fundraising emails, text messages from chatbots urging you to vote, and maybe even some deepfaked campaign …
How Artificial Intelligence Can Aid Democracy
There’s good reason to fear that A.I. systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.
These risks may be the fallout of a world where businesses deploy poorly tested A.I. systems in a battle for market share, each hoping to establish a monopoly.
But dystopia isn’t the only possible future. A.I. could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an A.I. not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it…
Brace Yourself for a Tidal Wave of ChatGPT Email Scams
Thanks to large language models, a single scammer can run hundreds or thousands of cons in parallel, night and day, in every language under the sun.
Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly depending on how it is set up.
But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that. A decade ago, one type of spam email had become a punchline on every late-night show: “I am the son of the late king of Nigeria in need of your assistance.” Nearly everyone had gotten one or a thousand of those emails, to the point that it seemed everyone must have known they were scams…
How AI Could Write Our Laws
ChatGPT and other AIs could supercharge the influence of lobbyists—but only if we let them.
Nearly 90% of the multibillion-dollar federal lobbying apparatus in the United States serves corporate interests. In some cases, the objective of that money is obvious. Google pours millions into lobbying on bills related to antitrust regulation. Big energy companies expect action whenever there is a move to end drilling leases for federal lands, in exchange for the tens of millions they contribute to congressional reelection campaigns.
But lobbying strategies are not always so blunt, and the interests involved are not always so obvious. Consider, for example, a 2013 …
Why the U.S. Should Not Ban TikTok
The ban would hurt Americans—and there are better ways to protect their data.
Congress is currently debating bills that would ban TikTok in the United States. We are here as technologists to tell you that this is a terrible idea and the side effects would be intolerable. Details matter. There are several ways Congress might ban TikTok, each with different efficacies and side effects. In the end, all the effective ones would destroy the free internet as we know it.
There’s no doubt that TikTok and ByteDance, the company that owns it, are shady. They, like most large corporations in China, operate at the pleasure of the Chinese government. They collect extreme levels of information about users. But they’re not alone: Many apps you use do the same, including Facebook and Instagram, along with seemingly innocuous apps that have no need for the data. Your data is bought and sold by data brokers you’ve never heard of who have few scruples about where the data ends up. They have digital dossiers on most people in the United States…
Everything Is Hackable
Every year, an army of hackers takes aim at the tax code.
The tax code is not computer code, but it is a series of rules—supposedly deterministic algorithms—that take data about your income and determine the amount of money you owe. This code has vulnerabilities, more commonly known as loopholes. It has exploits; those are tax avoidance strategies. There is an entire industry of black-hat hackers who exploit vulnerabilities in the tax code: We call them accountants and tax attorneys.
Hacking isn’t limited to computer systems, or even technology. Any system of rules can be hacked. In general terms, a hack is something that a system permits, but that is unanticipated and unwanted by its designers. It’s unplanned: a mistake in the system’s design or coding. It’s clever. It’s a subversion, or an exploitation. It’s a cheat, but only sort of. Just as a computer vulnerability can be exploited over the internet because the code permits it, a tax loophole is “allowed” by the system because it follows the rules, even though it might subvert the intent of those rules…
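The analogy carries over almost literally into code. Here is a deliberately toy sketch, with an invented tax rate and an invented deduction rather than anything from real tax law, of a deterministic rule set that contains a loophole: an input the rules permit but their designers never intended.

```python
# Toy illustration of the tax-code-as-code analogy; all rates, thresholds,
# and deduction rules are invented for the example.

def tax_owed(income: float, home_office_sqft: float) -> float:
    """Hypothetical rules: flat 20% tax, minus $5 per square foot of home office."""
    deduction = 5.0 * home_office_sqft  # no cap and no use test: the vulnerability
    return max(0.0, 0.20 * (income - deduction))

# An ordinary filer with a small desk area:
print(tax_owed(income=80_000, home_office_sqft=100))    # 15900.0

# The "hack": declare the entire apartment a home office. Every rule is
# followed -- the system permits it -- but the outcome subverts the rules' intent.
print(tax_owed(income=80_000, home_office_sqft=2_000))  # 14000.0
```

The second filer pays less tax without breaking any rule, which is exactly the essay's definition of a hack: permitted by the system, unanticipated and unwanted by its designers.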