Scientists Need a Positive Vision for AI

For many in the research community, it’s gotten harder to be optimistic about the impacts of artificial intelligence.

As authoritarianism rises around the world, AI-generated “slop” is overwhelming legitimate media, and AI-generated deepfakes are spreading misinformation and parroting extremist messages. AI is making warfare more precise and deadly amidst intransigent conflicts. AI companies are exploiting people in the global South who work as data labelers, and profiting from content creators worldwide by using their work without license or compensation. The industry is also worsening an already-roiling climate with its enormous energy demands.

Meanwhile, particularly in the United States, public investment in science seems to be redirected and concentrated on AI at the expense of other disciplines. And Big Tech companies are consolidating their control over the AI ecosystem. In these ways and others, AI seems to be making everything worse.

This is not the whole story. We should not resign ourselves to AI being harmful to humanity. None of us should accept this as inevitable, especially those in a position to influence science, government, and society. Scientists and engineers can push AI towards a beneficial path. Here’s how.

The Academy’s View of AI

A Pew study in April found that 56 percent of AI experts (authors and presenters of AI-related conference papers) predict that AI will have positive effects on society. But that optimism doesn’t extend to the scientific community at large. A 2023 survey of 232 scientists by the Center for Science, Technology and Environmental Policy Studies at Arizona State University found more concern than excitement about the use of generative AI in daily life—by nearly a three-to-one ratio.

We have encountered this sentiment repeatedly. Our careers of diverse applied work have brought us in contact with many research communities: privacy, cybersecurity, physical sciences, drug discovery, public health, public interest technology, and democratic innovation. In all of these fields, we’ve found strong negative sentiment about the impacts of AI. The feeling is so palpable that we’ve often been asked to represent the voice of the AI optimist, even though we spend most of our time writing about the need to reform the structures of AI development.

We understand why these audiences see AI as a destructive force, but this negativity engenders a different concern: that those with the potential to guide the development of AI and steer its influence on society will view it as a lost cause and sit out that process.

Elements of a Positive Vision for AI

Many have argued that turning the tide of climate action requires clearly articulating a path towards positive outcomes. In the same way, while scientists and technologists should anticipate, warn against, and help mitigate the potential harms of AI, they should also highlight the ways the technology can be harnessed for good, galvanizing public action towards those ends.

There are myriad ways to leverage and reshape AI to improve people’s lives, distribute rather than concentrate power, and even strengthen democratic processes. Many examples have arisen from the scientific community and deserve to be celebrated.

Some examples: AI is eliminating communication barriers across languages, including under-resourced contexts like marginalized sign languages and indigenous African languages. It is helping policymakers incorporate the viewpoints of many constituents through AI-assisted deliberations and legislative engagement. Large language models can scale individual dialogs to address climate-change skepticism, spreading accurate information at a critical moment. National labs are building AI foundation models to accelerate scientific research. And throughout the fields of medicine and biology, machine learning is solving scientific problems like the prediction of protein structure in aid of drug discovery, which was recognized with a Nobel Prize in 2024.

While each of these applications is nascent and surely imperfect, they all demonstrate that AI can be wielded to advance the public interest. Scientists should embrace, champion, and expand on such efforts.

A Call to Action for Scientists

In our new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, we describe four key actions for policymakers committed to steering AI toward the public good.

These apply to scientists as well. First, researchers should work to reform the AI industry to be more ethical, equitable, and trustworthy. We must collectively develop ethical norms for research that advances and applies AI, and should support and draw attention to AI developers who adhere to those norms.

Second, we should resist harmful uses of AI by documenting its negative applications and casting a light on inappropriate uses.

Third, we should responsibly use AI to make society and people’s lives better, exploiting its capabilities to help the communities we serve.

And finally, we must advocate for the renovation of institutions to prepare them for the impacts of AI; universities, professional societies, and democratic organizations are all vulnerable to disruption.

Scientists have a special privilege and responsibility: We are close to the technology itself and therefore well positioned to influence its trajectory. We must work to create an AI-infused world that we want to live in. Technology, as the historian Melvin Kranzberg observed, “is neither good nor bad; nor is it neutral.” Whether the AI we build is detrimental or beneficial to society depends on the choices we make today. But we cannot create a positive future without a vision of what it looks like.

This essay was written with Nathan E. Sanders, and originally appeared in IEEE Spectrum.

Posted on November 5, 2025 at 7:04 AM

Comments

Winter November 5, 2025 8:01 AM

Change is hard. It is clear that we see only the downsides and what goes wrong.

Change always instills fear, as those on top of the wheel of fortune see only a way down for them, while those at the bottom fear they will lose whatever they have. And fear instills conservatism, the desire to go back to the days of our youth, when all was well, or so we remember it.

You can do things with AI that were impossible even 5 years ago, e.g., real-time speech-to-speech translation in dozens of languages, automatically writing sensible texts, even in a foreign language, finding correlations in unimaginably large data sets, or finding new chemicals and drugs.

So we know that there is a lot that will change due to this new technology.

Just as electricity gave us light and the electric chair, and airplanes gave us worldwide travel and bombs, there will be bad things happening because of AI.

We should just remember that tools might be a technical problem, but tool use is a social problem. And our current social problems are not caused by new tools, but by old vices, i.e., hierarchy, privilege, and division.

Banning the tools, e.g., AI, won’t solve the poisonous hierarchy, privilege, and division that is currently ruling our nations.

Rontea November 5, 2025 10:07 AM

Scientists play a critical role in guiding the development of AI because their expertise ensures that technological progress aligns with ethical principles and societal needs. By applying evidence-based approaches and prioritizing safety, fairness, and transparency, scientists can help prevent misuse and foster innovation that genuinely benefits humanity. Their leadership provides the foundation for responsible AI policies, bridging the gap between rapid innovation and long-term societal well-being.

Morley November 5, 2025 10:19 AM

I hope people are enacting specific, near-term plans. In the US, I don’t know if we have time to play the long game.

A positive outcome takes a large amount of effort, social/political influence, and sacrifice by comfortable people. A negative outcome happens without any effort at all, and is being rushed to market by the most powerful people.

K.S November 5, 2025 11:21 AM

A call for more rose-coloured glasses?

Fundamentally, AI is shifting work from labour toward capital. For the majority of people, this means that their livelihoods are on the line, or at the very least will be negatively impacted by the downward pressure on wages from AI competition. For a small minority of people, it means that their investments will deliver record profits.

lurker November 5, 2025 12:48 PM

@Bruce
“AI is eliminating communication barriers across languages, including under-resourced contexts like . . . indigenous African languages.”

I can see only the abstract, but it seems to indicate that the researchers were searching for positive examples. Other research suggests that the inherent techno-linguistic bias in current AI models produces distorted and ineffective results for minority language communities.

https://link.springer.com/article/10.1007/s10676-023-09742-6

KC November 5, 2025 1:29 PM

From the article:

But we cannot create a positive future without a vision of what it looks like.

Yes, what a call to creative imagination.

Every day there are new developments in AI. Just today, I have run across a new report:

AI for Nature. How AI can democratize and scale action on nature.

Appendix B includes the interview questions they used for the report. I find these to be the most fascinating, and something I’d honestly enjoy spending time with.

For example, one question was: “How can AI help link nature-positive supply chains to economic value?”

Don’t you think it would be fascinating to see what research might come out of that?

skeptic November 5, 2025 1:57 PM

If the interests behind AI are encouraging doubleplusgood Newspeak about AI, is the Academy going to comply?

Three cheers for intellectual freedom.

Clive Robinson November 5, 2025 3:31 PM

@ Bruce, All,

You say,

“In these ways and others, AI seems to be making everything worse.

This is not the whole story. We should not resign ourselves to AI being harmful to humanity. None of us should accept this as inevitable, especially those in a position to influence science, government, and society. Scientists and engineers can push AI towards a beneficial path.”

The first point, of “everything worse”, is very relative, and we see it occurring day by day with Current AI LLM and ML systems.

In part this is because of their “party trick”: they are in effect a database with a search engine. But people tend to think in 1970s terms, and whilst LLM and ML systems are databases with search engines, the search engine is not the same as in the “Databases of Old” (DoO).

DoOs use a set of search rules that are in effect “simple logic”. LLM and ML systems use a set of rules that are almost entirely “statistical”, built on vectors to a dimensionality beyond human comprehension.
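To make that contrast concrete, here is a minimal sketch; the records, the three-dimensional “embeddings”, and the query vector are all invented for illustration (real systems use learned embeddings with thousands of dimensions):

```python
# A toy contrast between "Databases of Old" (DoO) search and
# statistical vector search. All data here is invented.
import math

records = {
    "alice": {"balance": 1200, "country": "UK"},
    "bob": {"balance": 90, "country": "US"},
}

def doo_search(field, value):
    """DoO-style 'simple logic' search: an exact, rule-based match."""
    return [name for name, rec in records.items() if rec[field] == value]

# Pretend embeddings; real ones have thousands of dimensions.
embeddings = {
    "alice": [0.9, 0.1, 0.3],
    "bob": [0.2, 0.8, 0.5],
}

def cosine(a, b):
    """Cosine similarity: the standard closeness measure for vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def vector_search(query_vec):
    """'Statistical' search: return whichever record is *nearest*,
    whether or not it is actually a correct answer."""
    return max(embeddings, key=lambda name: cosine(embeddings[name], query_vec))

print(doo_search("country", "UK"))      # exact match: ['alice']
print(vector_search([0.8, 0.2, 0.4]))   # nearest neighbour: 'alice'
```

The first lookup either matches exactly or returns nothing; the second always returns the statistically nearest item, a best guess rather than a fact, which is exactly why such output needs checking.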

As such they can be used not only to search your bank and tax records looking for signs that a crime has potentially been committed, but also to flag when your behaviours have changed. So you might be in a relationship you have not declared, and that gets flagged to people who see it as an opportunity to take punitive action against you.

This is “invented crime”, and as we know from “Robodebt”, it is aimed at the bottom of the socioeconomic ladder, for two basic reasons,

1, Political mantra from arm’s length.
2, Because those accused do not have the resources to defend themselves.

As we should all be aware, LLM systems:

1, Are very bad at producing search results that are factual or even accurate.
2, Are biased by the training data, its ordering, etc.
3, Can be deliberately biased by user input.

And so on…

As we are now coming to understand, Robodebt was not investigated correctly or in a timely manner, and was subject to political interference and blocking all the way through. As a result innocent people died or were permanently harmed and scarred.

As I’ve reported, in the UK HMRC (the IRS equivalent) and the DWP (responsible for the welfare of citizens who are young, old, unwell, or otherwise unable to earn enough to reach even the poverty line) have created an AI system with the sole intent of raising money by fines and false allegations, knowing that each person they strike down, innocent or not, gets them “promotion points” and “bonuses”; so they are very, very biased at best.

As I’ve mentioned before, I used to provide assistance to people whose sole occupation was to hunt down “wrongdoing” in authorities such as police/guard labour, the military, and politics; specifically, highly hierarchical organisations where various types of intentionally fraudulent institutional behaviour encouraged others into other types of fraudulent behaviour that were “overlooked” for various reasons.

As @Winter notes above,

“And our current social problems are not caused by new tools, but old vices, ie, hierarchy, privilege, and division.

Banning the tools, eg, AI, won’t solve the poisonous hierarchy, privilege, and division that is currently ruling our nations.”

What he forgot to mention is that the “poisonous hierarchy, privilege, and division” is actually a very much wanted feature for those who are authoritarians and have the power and status to impose the “societal control” they want, no matter how unwelcome and unwanted it is to society.

Current AI LLM and ML systems give such Authoritarians the ability to exert illicit and frankly unlawful if not actually illegal oppression on people for “Political mantra” reasons. Such authoritarianism is a byproduct of the current technology companies and their highly questionable and illegal business practices.

Above, @Morley sums up part of the issue with,

“A positive outcome takes a large amount of effort, social/political influence, and sacrifice by comfortable people. A negative outcome happens without any effort, at all, and is being rushed to market by the most powerful people.”

For any “Positive Outcomes” of AI, two things need to happen,

1, The current Political/Power structure, bound mostly to Off-Shore Mega Corps, has to stop.
2, The use of current AI LLM and ML Systems should be prevented except where its types of behaviour are not harmful to society.

@ K.S. points out one impact of Current AI LLM and ML systems that will have serious multigenerational harms,

“Fundamentally, AI is shifting work toward capital from labour. For majority of people, this means that their livelihoods are on the line or at the very least will be negatively impacted by the downward pressure on the wages from AI competition…”

I can and have pointed this out before. Not least, the result of “off-shoring and out-sourcing” decimated the US and UK workforce, especially in the biggest industries, letting the “financial flyboys” run the countries’ finances into the ground. This was apparently the idea of UK Prime Minister Margaret Thatcher, influencing US President Ronald Reagan. Both of whom, we now know, were starting to show signs of being mentally incompetent at the time, which was later shown beyond dispute as they both declined. She had two cognitive biases,

1, The country did not need industry just services.
2, The unconstrained market would be the best way to go.

Both were proved to be disastrously wrong, and to have multigenerational harms for society, government, its finances, and consequent freedoms.

This is because the Mega Corp Tech and Finance industries now dictate to legislators not only what they want, but that if not given them how they can have a detrimental effect on legislators and their positions.

Surprise surprise, they get their desires greased into legislation, and society becomes very much the poorer for it.

They are, like all “Power Entities”, not going to give up any kind of control they have. In fact they will seek more of it to ensure their position.

I’ve talked in the past about Palantir and its desires and business model. It’s far from unique, and the only thing they appear to fear is “competition”.

With that sort of mentality controlling AI and its direction and use, can you see it being used for the good of mankind, against them?

Of course not; you would be silly to do so.

As for “AI Research”, that has effectively been stopped unless it is about making Current AI LLM and ML systems not better, but more profitable.

I’ve been pointing out, regarding Current AI LLM and ML systems, that:

1, They are going to be failures at the claims made.
2, The AI companies are banding together to create worthless “Financial Churn”.
3, The US economy is now dependent on AI not being a failure.
4, The Chinese are actually better at AI with less.

If you follow the logic, Current AI LLM and ML Systems will in general use be a failure; the worthless churn will unravel; the US economy will be badly hit, and a major recession is most likely. The Chinese will ensure that AI is cheaper and more effective, and thus “steal the cheese”.

The simple fact is all tools have to be “Force Multipliers” or they have no future, because who would buy/lease/subscribe to them?

Now apply that to Current AI LLM and ML Systems: where they have been used in the general sense, they have shown no force-multiplying effect.

They have, however, like specialised craft tools, shown a force-multiplying effect in highly specific areas of searching, matching, and checking against existing knowledge.

The prime example often given is AlphaFold. However, people do not talk about how it was specialised to a particular function and knowledge set. It did nothing that was new; it just had better “depth and breadth” than humans, and thus could get there thousands of times faster.

And that’s the point everyone appears to be missing,

“Current AI LLM and ML Systems only force multiply in highly specialised areas of existing knowledge.”

Further,

“Current AI LLM and ML Systems have no advantage in general because of bias and error; however, because of this they are highly desirable for doing considerable harm to society.”

Which tells you they have to be regulated in ways we cannot yet imagine, but which we do know will be difficult at best to implement, except after the negative effects have become visible to the public.

For instance, we do know that Current AI LLM and ML systems have been responsible for causing people to commit suicide and suffer mental breakdowns, and for defaming people and supplying false evidence effectively “on demand”.

Do we want or need such systems “Running the country” under the control of increasingly authoritarian leadership with frankly aberrant political views?

As @ K.S. raised above with,

“A call for more rose-coloured glasses?”

It’s long past time they came off lest people become accused of being “shills”.

Because for society at the moment there is no “upside” in Current AI LLM and ML Systems used generally.

Do Current AI LLM and ML systems have a future?

Not for what people are shilling for. Like most AI, it has a niche position that makes it highly specialised in use. In general things it is often “worse than useless”, if not “downright harmful”, to individuals and society in general. And it has a very good chance of being a major downfall for society that will take generations to clean up.

Why do I know this? Well, it goes back nearly a century to the work of Kurt Gödel. He showed that deterministic systems cannot fully describe themselves; it’s a logical impossibility. A recent paper I linked to shows that computation cannot accurately simulate the world, no matter how much information it has and how fast it can search it.

This indicates that we “Can not be living in the Matrix” and has some profound implications for physics and other sciences.

If you read through it and understand what it is showing, it proves unquestionably that “Current AI Systems in any form” based on computation cannot in any way achieve “General Intelligence”, which makes most of the Mega Corp claims provable lies… But further, it also shows that it is highly unlikely they will be able to reason either. Which should not surprise anyone who knows how LLM and ML systems work: “Because they only pattern match” to the known, they do not “reason out new knowledge”.

A link to the paper can be found in my posting on the current Friday Squid,

https://www.schneier.com/blog/archives/2025/10/friday-squid-blogging-giant-squid-at-the-smithsonian.html/#comment-449530

The result of it will take years for science, computing, and AI to come to terms with. In part because it shows that even Quantum Computing is not going to meet what the public wants, and it also indicates there is a form of computation beyond Quantum. This of course has serious “Security Consequences”…

Marc November 5, 2025 10:45 PM

I use tools like Maple AI, ppq.ai, and Shakespheare.diy to create images for my blog, nostr websites, and other stuff. I still write prose on my own and I still write the prompts.

Jack Dorsey, Calle, and a few others made BitChat using Goose. I think these tools can help artists be better artists, writers write better, and content creators create more content.

I worry that it will take away jobs. I’m also optimistic and hope it creates new jobs. I know a few people who have gotten jobs because of AI. Hopefully more people can too.

Clive Robinson November 6, 2025 6:00 AM

@ Marc,

You say,

“I think these tools can help artists be better artists, writers write better, and content creatora create more content.”

I disagree. I think it will turn them all into a “homogeneous crowd” of indistinguishable mediocrity, which is becoming known as “AI Slop”, and pollute the Internet irredeemably and forever. As such it will kill creative endeavor and in effect halt the forward progress of society.

In fact I know this has already been the case…

Because I could easily spot AI word outputs, as they had the essence of “PR Marketing speak” that made up a large part of the “easily scrapable Internet” of the time.

The fact that you, apparently a Cryptocoin developer,

“for my blog, nostr websites”[1]

Are unaware of this makes me suspicious in a number of ways, not least that you are both,

1, Doing unsolicited advertising by “link” and “product placement”
2, Shilling for the Cryptocoin crowd

[1] For those that don’t know, the Nostr Protocol is a supposedly secure “messaging protocol”. It has a number of security failings that make it unsafe in use. As such it’s become “A great White Hope” of the Cryptocoin crowd, and just like the Blockchain it will at some point turn around and bite them.

Clive Robinson November 6, 2025 10:00 AM

@ Winter, ALL,

I was going to get back to you sooner[1] over,

“You can do things with AI that were impossible even 5 years ago, eg, real time speech to speech translation in dozens of languages, automatically writing sensible texts, even in a foreign language, finding correlations in unimaginably large data sets, or finding new chemicals and drugs.”

What is clear is that the things you see Current AI LLM and ML Systems doing “that were impossible even 5 years ago” are actually very narrow in scope, not broad or general.

It’s why the “promise of AI” being pushed is at best daydreams, but actually amounts to claims that cannot be delivered now, and more definitely never, by Current AI LLM and ML Systems.

As I’ve indicated in the past, this does not mean Current AI LLM and ML systems are a useless or dangerous technology, because of what I oft quote as the Observer problem,

“All technology is both good and bad, it’s the often much later observer of the use that decides based on their then current “Point Of View”(POV).”

As such, a non-AI example is German politicians. They were pro-Russian for “economic advantage”, but when Russia went to war sufficiently close to them, the “high economic cost” of “Defense Spending” suddenly was no longer an issue. Most external observers would now say such a change was desirable, where before they would have said undesirable.

It’s what happened in the “time gap” that changed the “Observer’s POV”.

And technologists have little or no ability to change that POV. Whilst what you politely refer to as,

“won’t solve the poisonous hierarchy, privilege, and division that is currently ruling our nations.”

Thus they “very much control” the public narrative, and thus the “Observer POV”.

So the question arises,

“If Current AI LLM and ML Systems can not safely deliver on either the big stuff or the basic stuff, because they act as a divisive non-force-multiplier on much of the basic stuff, creating more work than they can ever reliably deliver cost benefit on, what use are they?”

Well, that leaves what might be described as the small stuff: neither general nor basic, but actually very narrow-scope, broad-data searching by frequency statistics.

As I’ve mentioned I’ve been playing with AI and robotics since the 1980’s.

A time when “Expert Systems” and “Fuzzy Logic” were the latest and greatest in AI. They too did not deliver on the grandiose claims, but I still use both of them in embedded systems and subsystems, for the likes of control.

For instance, the number of electric trains that use both in their drive and braking systems, to ensure a smooth and safe ride for passengers, is surprisingly large.

And that is the history of AI,

1, New Technique,
2, Grandiose Claims
3, Failure to meet the Claims
4, Re-evaluation of Technique
5, Find good use in tight scope small systems.

The problem is that “money” is made on the rise of the “Grandiose Claims”. But they always fail to materialize, so confidence, and thus money, is lost as the “Grandiose Claims” are shown to have worse than no recoverable value.

However the financiers who “taxed the deals” walk away with a lot of money to wait for the next “Mug Investor fleecer” technology to come along.

That however does not mean the techniques are bad, just that people were, out of greed, pointing them away from what they are good for.

For Current AI LLM and ML Systems, big random data sets make them fairly close to useless for “General or Basic” use.

However, for specific narrow-focus data sets, especially large ones, they will find faint statistical patterns and make them sufficiently viable to test further.

This, as I’ve previously noted, is what AlphaFold did. Yes, it still makes mistakes, but because of the narrow nature of what it does, those mistakes are usually fairly easily found in later post-processing.

This is actually very useful, even ignoring the Nobel Prizes.

And that, as history shows, is the future of Current AI LLM and ML Systems. And if anyone is to bet their shirt on a different outcome, life might get a little chilly, financially if not actually.

I’ve tried to get this across, but unfortunately, whilst I’m being honest, the big players have other motives. Some will make quite a bit of money by using Current AI LLM and ML Systems for what society already considers “bad”, and because they have both fiscal and political power they can ram through the bad uses that so benefit them but not society.

[1] But an emergency hospital visit was required, which has left me in a bit of a poorly condition (they put me in an “Isolation Area”; not sure if that was for my safety or the safety of others).

Graham November 15, 2025 7:01 PM

We must work to create an AI-infused world that we want to live in.

Why must we work to create an AI-infused world? I’m a scientist with publications where I led the development of neural networks in a biological context, so I understand the value of such tools. But just as I don’t need or want Bluetooth in my power tools, toothbrush, or toaster, I also don’t need or want AI inside my appliances or every website I visit. In most instances there is no value to be added; language models are being shoved into anything and everything with no obvious purpose other than management’s fear of missing out.

How about stepping back and considering if your time and energy could be better focused on non-AI improvements to the world?
