Like Social Media, AI Requires Difficult Choices

In his 2018 book, “Future Politics,” British barrister Jamie Susskind wrote that the dominant question of the 20th century was “How much of our collective life should be determined by the state, and what should be left to the market and civil society?” In the early decades of this century, Susskind suggested, we face a different question: “To what extent should our lives be directed and controlled by powerful digital systems—and on what terms?”

Artificial intelligence (AI) forces us to confront this question. It is a technology that in theory amplifies the power of its users: A manager, marketer, political campaigner, or opinionated internet user can utter a single instruction and see their message—whatever it is—instantly written, personalized, and propagated via email, text, social media, or other channels to thousands of people within their organization, or to millions around the world. It also allows us to individualize solicitations for political donations, elaborate a grievance into a well-articulated policy position, or tailor a persuasive argument to an identity group or even a single person.

But even as it offers endless potential, AI is a technology that—like the state—gives others new powers to control our lives and experiences.

We’ve seen this play out before. Social media companies made the same sorts of promises 20 years ago: instant communication enabling individual connection at massive scale. Fast-forward to today, and the technology that was supposed to give individuals power and influence has ended up controlling us. Social media now dominates our time and attention, assaults our mental health, and—together with its Big Tech parent companies—captures an unfathomable fraction of our economy, even as it poses risks to our democracy.

The novelty and potential of social media were as palpable then as AI’s are now, which should make us wary of AI’s potentially harmful consequences for society and democracy. We legitimately fear artificial voices and manufactured reality drowning out real people on the internet: on social media, in chat rooms, everywhere we might try to connect with others.

It doesn’t have to be that way. Alongside these evident risks, AI has legitimate potential to transform both everyday life and democratic governance in positive ways. In our new book, “Rewiring Democracy,” we chronicle examples from around the globe of democracies using AI to make regulatory enforcement more efficient, catch tax cheats, speed up judicial processes, synthesize input from constituents to legislatures, and much more. Because democracies distribute power across institutions and individuals, making the right choices about how to shape AI and its uses requires both clarity and alignment across society.

To that end, we spotlight four pivotal choices facing private and public actors. These choices are similar to those we faced during the advent of social media, and in retrospect we can see that we made the wrong decisions back then. Our collective choices in 2025—choices made by tech CEOs, politicians, and citizens alike—may dictate whether AI is applied to positive and pro-democratic, or harmful and civically destructive, ends.

A Choice for the Executive and the Judiciary: Playing by the Rules

The Federal Election Commission (FEC) calls it fraud when a candidate hires an actor to impersonate their opponent. More recently, it had to decide whether doing the same thing with an AI deepfake instead is permissible. (It concluded it is not.) Although in this case the FEC made the right decision, this is just one example of how AI could be used to skirt laws that govern people.

Likewise, courts are having to decide if and when it is okay for an AI to reuse creative materials without compensation or attribution, which might constitute plagiarism or copyright infringement if carried out by a human. (The court outcomes so far are mixed.) Courts are also adjudicating whether corporations are responsible for upholding promises made by AI customer service representatives. (In the case of Air Canada, the answer was yes, and insurers have started covering the liability.)

Social media companies faced many of the same hazards decades ago and have largely been shielded by the combination of Section 230 of the Communications Decency Act of 1996 and the safe harbor offered by the Digital Millennium Copyright Act of 1998. Even in the absence of congressional action to strengthen or add rigor to these laws, the Federal Communications Commission (FCC) and the Supreme Court could take action to enhance their effects and to clarify which humans are responsible when technology is used, in effect, to bypass existing law.

A Choice for Congress: Privacy

As AI-enabled products increasingly ask Americans to share yet more of their personal information—their “context”—to use digital services like personal assistants, safeguarding the interests of the American consumer should be a bipartisan cause in Congress.

It has been nearly 10 years since Europe adopted comprehensive data privacy regulation. Today, American companies expend massive effort to limit data collection, obtain consent for the use of data, and hold it confidential under threat of significant financial penalties—but only for their customers and users in the EU.

Yet a decade later, the U.S. has still made no serious progress toward comprehensive federal privacy legislation written for the 21st century, and the precious few data privacy protections on the books apply only to narrow slices of the economy and population. This inaction comes in spite of scandal after scandal regarding Big Tech corporations’ irresponsible and harmful use of our personal data: Oracle’s data profiling, Facebook and Cambridge Analytica, Google ignoring data privacy opt-out requests, and many more.

Privacy is just one side of the obligations AI companies should have with respect to our data; the other side is portability—that is, the ability of individuals to choose to migrate and share their data between consumer tools and technology systems. To the extent that knowing our personal context really does enable better and more personalized AI services, it’s critical that consumers have the ability to extract their personal context and migrate it between AI solutions. Consumers should own their own data, and with that ownership should come explicit control over which people and platforms it is shared with—and withheld from. Regulators could mandate this interoperability. Otherwise, users are locked in and lack freedom of choice among competing AI solutions—much as the time invested in building a following on a social network has locked many users to those platforms.

A Choice for States: Taxing AI Companies

It has become increasingly clear that social media is not a town square in the utopian sense of an open and protected public forum where political ideas are distributed and debated in good faith. If anything, social media has coarsened and degraded our public discourse. Meanwhile, the sole act of Congress designed to substantially rein in the social and political effects of social media platforms—the TikTok ban, which aimed to protect the American public from Chinese influence and data collection deemed a national security threat—is one Congress seems no longer even to acknowledge.

While Congress has waffled, regulation in the U.S. is happening at the state level. Several states have limited children’s and teens’ access to social media. With Congress having rejected—for now—a threatened federal moratorium on state-level regulation of AI, California passed a new slate of AI regulations after withstanding a lobbying onslaught from industry opponents. Perhaps most interesting, Maryland has recently become the first state in the nation to levy taxes on digital advertising platform companies.

States now face a choice of whether to apply a similar reparative tax to AI companies, recapturing a fraction of the costs those companies externalize onto the public in order to fund affected public services. State legislators concerned about AI-driven job losses, cheating in schools, and harm to people struggling with mental health have options for combating these harms. They could extract the funding needed to mitigate them and to support public services—strengthening job training programs and public employment, public schools, public health services, even public media and technology.

A Choice for All of Us: What Products Do We Use, and How?

A pivotal moment in the social media timeline occurred in 2006, when Facebook opened its service to the public after years of catering to students of select universities. Millions quickly signed up for a free service whose only source of monetization was the extraction of their attention and personal data.

Today, about half of Americans are daily users of AI, mostly via free products from Facebook’s parent company Meta and a handful of other familiar Big Tech giants and venture-backed tech firms—Google, Microsoft, OpenAI, and Anthropic—all with every incentive to follow the same path as the social platforms.

But now, as then, there are alternatives. Some nonprofit initiatives, like AllenAI and EleutherAI, are building open-source AI tools with transparent foundations that can be run locally and under users’ control. Some governments, like Singapore, Indonesia, and Switzerland, are building public alternatives to corporate AI that don’t suffer from the perverse incentives introduced by the profit motive of private entities.

Just as social media users have faced platform choices with a range of value propositions and ideological valences—as diverse as X, Bluesky, and Mastodon—the same will increasingly be true of AI. Those of us who use AI products in our everyday lives as people, workers, and citizens may not have the same power as judges, lawmakers, and state officials. But we can play a small role in influencing the broader AI ecosystem by demonstrating interest in and usage of these alternatives to Big AI. If you’re a regular user of commercial AI apps, consider trying the free-to-use service built on Switzerland’s public Apertus model.

None of these choices are really new. They were all present almost 20 years ago, as social media moved from niche to mainstream. They were all policy debates we did not have, choosing instead to view these technologies through rose-colored glasses. Today, though, we can choose a different path and realize a different future. It is critical that we intentionally steer societal use of AI toward that positive future—before the consolidation of power renders it too late to do so.

This post was written with Nathan E. Sanders and originally appeared in Lawfare.

Posted on December 2, 2025 at 7:03 AM

Comments

Clive Robinson December 2, 2025 9:43 AM

@ Bruce,

You say,

“We’ve seen this out play before”

Should it be “play out” rather than “out play”?

Rontea December 2, 2025 11:02 AM

Choice is the architect of our prison. Just like in the Matrix, every path is another corridor of hypotheses. AI, like social media before it, tempts us with freedom but binds us with endless decisions—about regulation, privacy, and control. The system isn’t the enemy; it’s the illusion of choice that keeps us in the loop. Only when we acknowledge the weight of our choices do we start to see the code behind the game.

KC December 2, 2025 11:13 AM

re: State taxes for AI companies

I’m intrigued by Maryland’s tax on digital advertising platforms. The linked article quotes:

“All is fair in love, war, and exporting your tax burden.”

Brookings offers further support in its explainer, “What is a digital ad tax?”

According to Brookings, Maryland collected $93m and $83m in 2022 and 2023 from their Digital Ad Tax (DAT). The revenue is earmarked for the “Blueprint for Maryland’s Future Fund” to provide adequate funding for childhood education. Brookings also looks at how DAT could redirect AI research away from advertising towards services providing further social support.

lurker December 2, 2025 12:19 PM

@Anonymous

Remove the “en” from the URL and you can have the whole site in Bahasa. “En” might have been taken off deliberately to keep out ignorant monolingual English speakers.

Clive Robinson December 2, 2025 12:24 PM

(Got a 429 at 16:10 so wait and resubmit).

@ Bruce, ALL,

For quite a while now I’ve been saying that people with the ability to do so will use AI as an arm’s-length way to avoid responsibility.

I’ve quoted the harms that the Australian RoboDebt has caused, and the harms the UK Government DIY “Connect” system is causing.

You say above,

“The Federal Election Commission (FEC) calls it fraud when a candidate hires an actor to impersonate their opponent. More recently, they had to decide whether doing the same thing with an AI deepfake makes it okay.”

This is an example of “pushing the boundaries” not directly but through “arm’s-length use of AI.” This time they got told “No, you cannot do it” rather than, as should have happened, being immediately prosecuted to the fullest extent of the law.

Which gives rise to a serious failing.

After all, the legal definition of fraud is usually fairly clear in any jurisdiction and in effect says,

“To knowingly, with intent, gain or seek to gain advantage, by deception or other falsehood”.

The current UK version is a little more verbose.

The offence of fraud under the UK Fraud Act 2006 says,

“Section 1, Fraud

(1) A person is guilty of fraud if he is in breach of any of the sections listed in subsection (2) (which provide for different ways of committing the offence).

(2) The sections are—

(a) section 2 (fraud by false representation),

(b) section 3 (fraud by failing to disclose information), and

(c) section 4 (fraud by abuse of position).”

https://www.legislation.gov.uk/ukpga/2006/35/section/1

Which is fairly to the point but sparse on details[1].

I suspect most Jurisdictions have similar.

However, in the UK, even though politicians have committed one or more of these frauds in relation to elections and referendums, prosecutions fail to happen in most cases.

The result is,

“With near-zero risk of prosecution there is effectively nothing to deter such fraudulent attempts being made.”

And they are made way more often than they should be…

Lock a few of them up for the maximum penalty and their attitudes might change…

But it’s too late for that now.

Because AI and those who operate it are not the “directing minds.” And those who are will, if they have any sense, almost certainly put in place not just “plausible deniability” but documentation that provides protection for them.

And in turn blame will get passed down the line or diluted to the point a prosecution is unlikely to succeed.

That is our future with Politicians and AI.

[1] Rather than going through the three sections one by one, they can be more easily read as,

(a) Dishonestly making a false representation (to a person, or to any system or device), with a view to gain or with intent to cause loss or expose to a risk of loss.

(b) Dishonestly failing to disclose information when under a legal duty to disclose it (to a person, or to any system or device), with a view to gain or with intent to cause loss or expose to a risk of loss.

(c) Dishonest abuse of position (to a person, or to any system or device), with a view to gain or with intent to cause loss or expose to a risk of loss.

In all cases it is irrelevant whether gain, loss or exposure to loss actually occurs.

lurker December 2, 2025 12:49 PM

@Bruce

Today, though, we can choose a different path and realize a different future. It is critical that we intentionally navigate a path to a positive future for societal use of AI—before the consolidation of power renders it too late to do so.

My rose-colored glasses tell me the current “AI” bubble will burst before too much societal damage is done. My innate pessimism tells me it’s too late to stop the widespread belief that LLMs are AI.

Domain-specific AI, trained on tightly curated material, can be very useful in narrow fields of application. But like it or not, there is a social demand for general-purpose chatbots trained on the dregs of the internet that offer what the audience wants to hear. Like sitcoms, soap operas, pulp fiction, and the idle gossip of 18th-century coffee shops. There were calls to ban the coffee shops because of the types of people who gathered there and the topics they chattered about. But one small group of coffee-shop denizens who concentrated on certain particular matters became the famous Lloyd’s group of insurers.

Meanwhile, the same judge who said scraping the net is “fair use”[1] has now told Anthropic to pay their lunch money to the authors of those works.[2]

[1] https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/

[2] https://www.bbc.com/news/articles/c5y4jpg922qo

lurker December 2, 2025 2:58 PM

@mark, ALL

re Guardian op-ed

Musk recently said: “AI and robots will replace all jobs. Working will be optional.” Gates predicted that humans “won’t be needed for most things.”

This is an old song; I remember hearing it 70 years ago, along with the one about flying cars. The new part is conflating AI into the theme. Back then they said the wealth created by the robots would pay for our life of leisure. Major industrial nations are discovering unemployment is not that simple.

Somewhere (I can’t put my finger on it right now) I saw a clip of the manager of BYD strolling down an idle production line during a holiday weekend, saying he knew he could run the place as a “dark factory”, but Chinese labour laws required him to employ a minimum number of humans.

Consistency December 3, 2025 4:38 AM

Because

“The Federal Election Commission (FEC) calls it fraud when a candidate hires an actor to impersonate their opponent. More recently, they had to decide whether doing the same thing with an AI deepfake makes it okay. (They concluded it does not.)”

ads that feature actors portraying customers are fraud, by the same reasoning.

Clive Robinson December 3, 2025 6:19 AM

@ Consistency,

You make what would in isolation be a valid point of,

“ads that feature actors portraying customers are fraud, by the same reasoning.”

You could also point to comedians and late-night show hosts doing impersonations for rather more than laughs.

But that is the point, these things are points on a line between “acceptable entertainment” and “unacceptable fraud”.

How do you decide what is “acceptable” and what is “unacceptable”?

Mankind has reasoned that “all acts”—which includes crimes—possess a quantity of “premeditation” and “intent” that falls under the “state of mind” of a “directing mind,” which is not necessarily the same as the mind of the entity carrying out the act; hence we also have the notion of “conspiracy” to consider.

None of these can be “proved,” only “reasoned out,” in the eyes of observers looking through both a personal and a societal “Point Of View” (POV).

You will occasionally hear of “The man on the Clapham Omnibus” and what he would have thought,

https://en.wikipedia.org/wiki/Man_on_the_Clapham_omnibus

It is of course a legal fiction that has a purpose, and English common law over two millennia has picked up many such passengers who, like the “Ancient Mariner,” are cursed forevermore to perform a duty of informing others’ thinking and states of mind.

ResearcherZero December 3, 2025 11:25 PM

This is how state and federal law enforcement is using AI technology.

Police use Flock like a search engine to identify personal and private information.

‘https://www.houstonchronicle.com/projects/2025/houston-flock-surveillance-explained/

Flock uses gig workers in the Philippines to train its AI surveillance.

The footage from Flock cameras contains audio and captures individuals whose features can also be analyzed and identified. After Flock was asked about the leaked data set showing partially annotated US license plate images, the data set vanished.

‘https://itmagazine.com/2025/12/01/flocks-innovative-use-of-overseas-gig-workers-to-develop-surveillance-ai-technology/

The movements of residents in thousands of communities are being monitored by police.
https://www.nbcnews.com/tech/tech-news/flock-police-cameras-scan-billions-month-sparking-protests-rcna230037

ResearcherZero December 3, 2025 11:34 PM

This is how some individual police officers have been using Flock for their own purposes.

‘https://lookout.co/georgia-police-chief-arrested-for-using-flock-cameras-for-stalking-and-harassment-searched-capitola-data-earlier-this-year/story

A judge has ordered Washington police to release surveillance data collected using Flock.
https://www.kgw.com/article/news/investigations/judge-orders-washington-police-release-surveillance-camera-data-privacy-questions/281-c2037d52-6afb-4bf7-95ad-0eceaf477864

Other data released regarding the use of Flock cameras for surveillance:

‘https://www.muckrock.com/foi/danville-7714/foia-request-alpr-audit-186112/

‘https://www.muckrock.com/foi/danville-7714/foia-request-alpr-audit-181213/

369 December 4, 2025 4:41 PM

Our New Cognitive Manifesto
https://www.psychologytoday.com/us/blog/the-digital-self/202511/the-cognitive-manifesto-revisited

‘AI seems to reshape thought by smoothing the friction our minds rely on to form real ideas.
Constant fluency creates “counterfeit cognition,” blurring real reasoning with imitation.
Our task is not to resist AI, but to preserve the human depth inside the Cognitive Age.

The more we depend on systems that never hesitate, the more foreign our own (critical and essential) hesitation begins to feel. And hesitation, as uncomfortable and problematic as it is, is where real thought lives. Simply put, it marks the moment when a mind encounters itself. Beyond that, the line between authentic reasoning and its imitation becomes obscured.

AI doesn’t lie; it just doesn’t care. And this is its defining feature and risk.

AI isn’t trying to help you or mislead you. It simply generates the most plausible continuation of your query. That neutrality often gets framed as objectivity, but neutrality can mask a different force: indifference.’

ResearcherZero December 4, 2025 4:59 PM

An AI model for monitoring conversations looks for signs of “thinking” about crime.

‘https://www.technologyreview.com/2025/12/01/1128591/an-ai-model-trained-on-prison-phone-calls-is-now-being-used-to-surveil-inmates/

wumpus December 7, 2025 1:48 PM

In America that is easy: whatever the big corporations and billionaire owners want, they get. They might get a fine in the EU and may or may not eventually pay it, but they can do whatever they want in the USA.

If the citizens want to make any other decision on anything, we’d have to create an entire new political party or two from the ground up with a majority/plurality as well as replace the entire Roberts Court, which is determined to hold the free speech of talking money above all other rights of citizens.

There seems to be zero political will to do this, so expect business as usual; the idea that anything will be “decided” by any criterion other than “how can I make the most money from this situation” is absolutely absurd. Perhaps a few citizens may attempt to limit their interaction with AI and social media, but don’t expect any significant differences.
