Essays in the Category "AI and Large Language Models"
The A.I. Wars Have Three Factions, and They All Crave Power
There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in A.I. technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.
The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns…
Robots Are Already Killing People
The AI boom only underscores a problem that has existed for years.
The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so 25-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.
At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar …
Nervous About ChatGPT? Try ChatGPT With a Hammer
Once generative AI can use real-world tools, it will become exponentially more capable. Companies and regulators need to get ahead of these rapidly evolving algorithms.
Last March, just two weeks after GPT-4 was released, researchers at Microsoft quietly announced a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the best ways to teach LLMs how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date…
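The tool-use pattern this essay describes can be sketched in a few lines: the model emits a structured request naming a tool and its arguments, and a dispatcher executes the matching function. Everything below — the tool names, the JSON shape, the stubbed model reply — is a hypothetical illustration of the general pattern, not any vendor's actual API.

```python
import json

# Hypothetical tool registry: maps each name the model may request
# to a plain Python function that performs the action.
TOOLS = {
    "order_pizza": lambda size, topping: f"ordered a {size} {topping} pizza",
    "solve_quadratic": lambda a, b, c: sorted(
        (-b + s * (b * b - 4 * a * c) ** 0.5) / (2 * a) for s in (1, -1)
    ),
}

def dispatch(model_output: str) -> object:
    """Parse the model's structured reply and run the requested tool."""
    request = json.loads(model_output)
    tool = TOOLS[request["tool"]]   # KeyError here means an unknown tool
    return tool(**request["args"])  # TypeError here means bad arguments

# A stubbed model reply, standing in for a real LLM completion.
reply = '{"tool": "solve_quadratic", "args": {"a": 1, "b": -3, "c": 2}}'
print(dispatch(reply))  # [1.0, 2.0]
```

The point of the sketch is that the model never runs anything itself; whatever safety properties the system has live entirely in the dispatcher and the registry of tools it is willing to call.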
Six Ways That AI Could Change Politics
A new era of AI-powered domestic politics may be coming. Watch for these milestones to know when it’s arrived.
This essay also appeared in The Economic Times.
ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance.
But when it comes to how AI may threaten our democracy, much of the public conversation lacks imagination. People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images. We’re on the lookout for foreign governments that spread misinformation because we were traumatized by the 2016 US presidential election. And we worry that AI-generated opinions will swamp the political preferences of real people because we’ve seen political “astroturfing”—the use of fake online accounts to give the illusion of support for a policy—grow for decades…
Can You Trust AI? Here’s Why You Shouldn’t
This essay also appeared in CapeTalk, CT Insider, The Daily Star, The Economic Times, ForeignAffairs.co.nz, Fortune, GayNrd, Homeland Security News Wire, Kiowa County Press, MinnPost, Tech Xplore, UPI, and Yahoo News.
If you ask Alexa, Amazon’s voice assistant AI system, whether Amazon is a monopoly, it responds by saying it doesn’t know. It doesn’t take much to make it lambaste the other tech giants, but it’s silent about its own corporate parent’s misdeeds.
When Alexa responds in this way, it’s obvious that it is putting its developer’s interests ahead of yours. Usually, though, it’s not so obvious whom an AI system is serving. To avoid being exploited by these systems, people will need to learn to approach AI skeptically. That means deliberately constructing the input you give it and thinking critically about its output…
AI Microdirectives Could Soon Be Used for Law Enforcement
And they’re terrifying.
Imagine a future in which AIs automatically interpret—and enforce—laws.
All day, every day, you receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.
Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow…
Will AI Hack Our Democracy?
Back in 2021, I wrote an essay titled “The Coming AI Hackers,” about how AI would hack our political, economic, and social systems. That ended up being a theme of my latest book, A Hacker’s Mind, and is something I have continued to think and write about.
I believe that AI will hack public policy in a way unlike anything that’s come before. It will change the speed, scale, scope, and sophistication of hacking, which in turn will change so many things that we can’t even imagine how it will all shake out. At a minimum, everything about public policy—how it is crafted, how it is implemented, what effects it has on individuals—will change in ways we cannot foresee…
Artificial Intelligence Can’t Work Without Our Data
We should all be paid for it.
For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.
Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds…
AI Could Shore Up Democracy—Here’s One Way
This essay also appeared in ArcaMax, Big News Network, Biloxi Local News & Events, Chicago Sun-Times, Fast Company, GCN, Government Technology, Inkl, Macau Daily Times, MENAFN, Nextgov, and Yahoo.
It’s become fashionable to think of artificial intelligence as an inherently dehumanizing technology, a ruthless force of automation that has unleashed legions of virtual skilled laborers in faceless form. But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most?…
Build AI by the People, for the People
Washington needs to take AI investment out of the hands of private companies.
Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of U.S. tech companies?
Silicon Valley has produced no small number of moral disappointments. Google retired its “don’t be evil” pledge before firing its star ethicist. Self-proclaimed “free speech absolutist” Elon Musk bought Twitter in order to censor political speech, retaliate against journalists, and ease access to the platform for Russian and Chinese propagandists. Facebook lied about how it enabled Russian interference in the 2016 U.S. presidential election and …