Essays in the Category "AI and Large Language Models"
Page 4 of 8
Will AI Take Your Job? The Answer Could Hinge on the 4 S’s of the Technology’s Advantages over Humans
Sometimes speed matters – and sometimes it doesn’t.
This essay also appeared in Fast Company, the Philadelphia Inquirer, the Seattle Post-Intelligencer, and Tech Xplore.
If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.
But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise—and where they don’t—will be key to adapting to the AI-infused workforce…
AI and Trust
Note: The text in this column is taken, for the most part verbatim, from a talk by Mr. Schneier during the 2025 RSA Conference in San Francisco, CA on April 29, 2025.
This is a discussion about artificial intelligence (AI), trust, power, and integrity. I am going to make four basic arguments:
- There are two kinds of trust—interpersonal and social—and we regularly confuse them. What matters here is social trust, which is about reliability and predictability in society.
- Our confusion will increase with AI, and the corporations controlling AI will use that confusion to take advantage of us…
AI-Generated Law Isn’t Necessarily a Terrible Idea
The UAE joins a stream of other countries using the technology to write legislation.
On April 14, Dubai’s ruler, Sheikh Mohammed bin Rashid Al Maktoum, announced that the United Arab Emirates would begin using artificial intelligence to help write its laws. A new Regulatory Intelligence Office would use the technology to "regularly suggest updates" to the law and "accelerate the issuance of legislation by up to 70%." AI would create a "comprehensive legislative plan" spanning local and federal law and would be connected to public administration, the courts, and global policy trends.
The plan was widely greeted with astonishment. This sort of AI legislating would be a global "…
Web 3.0 Requires Data Integrity
New integrity-focused standards are necessary to enable the trusted AI services of tomorrow.
If you’ve ever taken a computer security class, you’ve probably learned about the three legs of computer security—confidentiality, integrity, and availability—known as the CIA triad. When we talk about a system being secure, that’s what we’re referring to. All are important, but to different degrees in different contexts. In a world populated by artificial intelligence (AI) systems and autonomous AI agents, integrity will be paramount.
What is data integrity? It’s ensuring that no one can modify data—that’s the security angle—but it’s much more than that. It encompasses accuracy, completeness, and quality of data—all over both time and space. It’s preventing accidental data loss; the “undo” button is a primitive integrity measure. It’s also making sure that data is accurate when it’s collected—that it comes from a trustworthy source, that nothing important is missing, and that it doesn’t change as it moves from format to format. The ability to restart your computer is another integrity measure…
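As an illustrative sketch (not from the essay itself), the "no one can modify data" angle of integrity can be made concrete with a cryptographic fingerprint: record a digest of the data when it is collected, then recompute it later to detect any change. The record and function names here are hypothetical examples using Python's standard `hashlib`.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that serves as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_digest: str) -> bool:
    """Check whether data still matches the digest recorded earlier."""
    return fingerprint(data) == expected_digest

# Record a fingerprint at collection time.
record = b"grant: $1,250,000 to project X"
digest = fingerprint(record)

# Later, any modification to the data changes the digest.
tampered = b"grant: $9,250,000 to project X"
print(verify(record, digest))    # unchanged data verifies
print(verify(tampered, digest))  # modified data fails verification
```

A bare hash like this only detects change if the digest itself is stored somewhere the attacker cannot reach; defending against deliberate tampering requires a keyed MAC or a digital signature rather than a plain digest.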
It’s Time to Worry About DOGE’s AI Plans
Welcome to the end of the human civil servant.
Donald Trump and Elon Musk’s chaotic approach to reform is upending government operations. Critical functions have been halted, tens of thousands of federal staffers are being encouraged to resign, and congressional mandates are being disregarded. The next phase: The Department of Government Efficiency reportedly wants to use AI to cut costs. According to The Washington Post, Musk’s group has started to run sensitive data from government systems through AI programs to analyze spending and determine what could be pruned. This may lead to the elimination of human jobs in favor of automation. As one government official who has been tracking Musk’s DOGE team told the…
AIs and Robots Should Sound Robotic
Here's a simple way to identify who, or what, is talking to us
Most people know that robots no longer sound like tinny trash cans. They sound like Siri, Alexa, and Gemini. They sound like the voices in labyrinthine customer support phone trees. And even those robot voices are being made obsolete by new AI-generated voices that can mimic every vocal nuance and tic of human speech, down to specific regional accents. And with just a few seconds of audio, AI can now clone someone’s specific voice.
This technology will replace humans in many areas. Automated customer support will save money by cutting staffing at …
AI Will Write Complex Laws
AI is poised to help legislators write more intricate laws, exercising increasing control over the executive.
Artificial intelligence (AI) is writing law today. This has required no changes in legislative procedure or the rules of legislative bodies—all it takes is one legislator, or legislative assistant, to use generative AI in the process of drafting a bill.
In fact, the use of AI by legislators is only likely to become more prevalent. There are currently projects in the US House, US Senate, and legislatures around the world to trial the use of AI in various ways: searching databases, drafting text, summarizing meetings, performing policy research and analysis, and more. A Brazilian municipality …
AI Mistakes Are Very Different from Human Mistakes
We need new security systems designed to deal with their weirdness
Humans make mistakes all the time. All of us do, every day, in tasks both new and routine. Some of our mistakes are minor and some are catastrophic. Mistakes can break trust with our friends, lose the confidence of our bosses, and sometimes be the difference between life and death.
Over the millennia, we have created security systems to deal with the sorts of mistakes humans commonly make. These days, casinos rotate their dealers regularly, because they make mistakes if they do the same task for too long. Hospital personnel write on limbs before surgery so that doctors operate on the correct body part, and they count surgical instruments to make sure none were left inside the body. From copyediting to double-entry bookkeeping to appellate courts, we humans have gotten really good at correcting human mistakes…
Trust Issues
The closed corporate ecosystem is the problem.
This essay appeared as a response to Evgeny Morozov in Boston Review‘s forum, “The AI We Deserve.”
For a technology that seems startling in its modernity, AI sure has a long history. Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing—and, often, on seed funding from the U.S. Department of Defense. But today’s tools are hardly the intentional product of the diverse generations of innovators that came before. We agree with Morozov that the “refuseniks,” as he …
The Apocalypse That Wasn’t: AI Was Everywhere in 2024’s Elections, but Deepfakes and Misinformation Were Only Part of the Picture
This essay also appeared in Cascadia Daily News, Commonwealth Beacon, Fast Company, Gizmodo, and the Seattle Post-Intelligencer.
It’s been the biggest year for elections in human history: 2024 is a “super-cycle” year in which 3.7 billion eligible voters in 72 countries had the chance to go to the polls. These are also the first AI elections, in which many feared that deepfakes and artificial intelligence-generated misinformation would overwhelm the democratic processes. As 2024 draws to a close, it’s instructive to take stock of how democracy did…
Sidebar photo of Bruce Schneier by Joe MacInnis.