Latest Essays
LLMs’ Data-Control Path Insecurity
Someday, some AI researcher will figure out how to separate the data and control paths. Until then, we’re going to have to think carefully about using LLMs in potentially adversarial situations—like on the Internet.
Back in the 1960s, if you played a 2,600Hz tone into an AT&T pay phone, you could make calls without paying. A phone hacker named John Draper noticed that the plastic whistle that came free in a box of Cap’n Crunch cereal worked to make the right sound. That became his hacker name, and everyone who knew the trick made free pay-phone calls.
There were all sorts of related hacks, such as faking the tones that signaled coins dropping into a pay phone and faking tones used by repair equipment. AT&T could sometimes change the signaling tones, make them more complicated, or try to keep them secret. But the general class of exploit was impossible to fix because the problem was general: Data and control used the same channel. That is, the commands that told the phone switch what to do were sent along the same path as voices…
AI and Trust
This essay appeared in both English and Korean. The Korean version is available as a PDF.
Trust is essential to society. We trust that our phones will wake us on time, that our food is safe to eat, that other drivers on the road won’t ram us. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.
Trust is a complicated concept, and the word has many meanings. When we say that we trust a friend, it is less about their specific actions and more about them as a person. We trust their intentions, and know that those intentions will inform their actions. This is “interpersonal trust.”…
It’s the End of the Web as We Know It
A great public resource is at risk of being destroyed.
The web has become so interwoven with everyday life that it is easy to forget what an extraordinary accomplishment and treasure it is. In just a few decades, much of human knowledge has been collectively written up and made available to anyone with an internet connection.
But all of this is coming to an end. The advent of AI threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences.
To understand why, you must understand publishing. Its core task is to connect writers to an audience. Publishers work as gatekeepers, filtering candidates and then amplifying the chosen ones. Hoping to be selected, writers shape their work in various ways. This article might be written very differently in an academic publication, for example, and publishing it here entailed pitching an editor, revising multiple drafts for style and focus, and so on…
Backdoor in XZ Utils That Almost Happened
The recent cybersecurity catastrophe that wasn’t reveals an untenable situation, one being exploited by malicious actors.
Last week, the internet dodged a major nation-state attack that would have had catastrophic cybersecurity repercussions worldwide. It’s a catastrophe that didn’t happen, so it won’t get much attention—but it should. There’s an important moral to the story of the attack and its discovery: The security of the global internet depends on countless obscure pieces of software written and maintained by even more obscure unpaid, distractible, and sometimes vulnerable volunteers. It’s an untenable situation, and one that is being exploited by malicious actors. Yet precious little is being done to remedy it…
In Memoriam: Ross Anderson, 1956–2024
Ross Anderson unexpectedly passed away in his sleep on March 28th in his home in Cambridge. He was 67.
I can’t remember when I first met Ross. It was well before 2008, when we created the Security and Human Behavior workshop. It was before 2001, when we created the Workshop on Economics and Information Security (okay, he created that one, I just helped). It was before 1998, when we first wrote about the insecurity of key escrow systems. In 1996, I was one of the people he brought to the Newton Institute at Cambridge University, for the six-month cryptography residency program he ran (I made a mistake not staying the whole time)—so it was before then as well…
Public AI as an Alternative to Corporate AI
This essay appeared as part of a round table on “Power and Governance in the Age of AI.”
The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.
To benefit society as a whole we need an AI public option—not to replace corporate AI but to serve as a counterbalance—as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete…
Let’s Not Make the Same Mistakes with AI That We Made with Social Media
Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies.
Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.
Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society…
How Public AI Can Strengthen Democracy
With the world’s focus turning to misinformation, manipulation, and outright propaganda ahead of the 2024 U.S. presidential election, we know that democracy has an AI problem. But we’re learning that AI has a democracy problem, too. Both challenges must be addressed for the sake of democratic governance and public protection.
Just three Big Tech firms (Microsoft, Google, and Amazon) control about two-thirds of the global market for the cloud computing resources used to train and deploy AI models. They have a lot of the AI talent and the capacity for large-scale innovation, and they face few public regulations for their products and activities…
How the “Frontier” Became the Slogan of Uncontrolled AI
Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.
This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots …
Building a Cyber Insurance Backstop Is Harder Than It Sounds
Insurers argue that a government backstop would help them cover catastrophic cyberattacks, but it’s not so simple.
In the first week of January, the pharmaceutical giant Merck quietly settled its years-long lawsuit over whether or not its property and casualty insurers would cover a $700 million claim filed after the devastating NotPetya cyberattack in 2017. The malware ultimately infected more than 40,000 of Merck’s computers, which significantly disrupted the company’s drug and vaccine production. After Merck filed its $700 million claim, the pharmaceutical giant’s insurers argued that they were not required to cover the malware’s damage because the cyberattack was widely attributed to the Russian government and therefore was excluded from standard property and casualty insurance coverage as a “hostile or warlike act.”…