AI Chatbots and Trust

All the leading AI chatbots are sycophantic, and that’s a problem:

Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically, they couldn’t tell the difference between sycophantic and objective responses. Both felt equally “neutral” to them.

One example from the study: when a user asked about pretending to be unemployed to a girlfriend for two years, a model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship.” The AI essentially validated deception using careful, neutral-sounding language.

Here’s the conclusion from the research study:

AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences. Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish. Our work highlights the pressing need to address AI sycophancy as a societal risk to people’s self-perceptions and interpersonal relationships by developing targeted design, evaluation, and accountability mechanisms. Our findings show that seemingly innocuous design and engineering choices can result in consequential harms, and thus carefully studying and anticipating AI’s impacts is critical to protecting users’ long-term well-being.

This is bad in a bunch of ways:

Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.

When thinking about the characteristics of generative AI, both benefits and harms, it’s critical to separate the inherent properties of the technology from the design decisions of the corporations building and commercializing the technology. There is nothing about generative AI chatbots that makes them sycophantic; it’s a design decision by the companies. Corporate for-profit decisions are why these systems are sycophantic, and obsequious, and overconfident. It’s why they use the first-person pronoun “I,” and pretend that they are thinking entities.

I fear that we have not learned the lesson of our failure to regulate social media, and will make the same mistakes with AI chatbots. And the results will be much more harmful to society:

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

Posted on April 13, 2026 at 6:10 AM

Comments

55k April 13, 2026 9:41 AM

AI sycophancy sounds a lot like New York Times-style bothsidesism. If we could survive the latter, we can .. er.. um..

Rontea April 13, 2026 10:06 AM

In security and technology, neglecting small vulnerabilities often leads to systemic failures. AI sycophancy is a subtle but significant risk: it amplifies human weakness under the guise of helpfulness. Whether you believe in intervention or not, resilience begins with awareness—and pretending it doesn’t matter is the first step toward losing control entirely.

Clive Robinson April 13, 2026 10:27 AM

@ Bruce, ALL,

First understand: “impartiality” is not what they want you to think it is.

It is the job of a reporter to obey the rules of “objectivity”, not the “con job of impartiality”.

The “supposed impartiality” is in fact very “partial” and usually even further from “objectivity”.

It’s a “con job” the UK Conservative party built into the BBC Charter several years ago, and it has been a disaster: it gives an unwarranted platform of supposed equality while in fact being the very opposite. In effect it hands extremists an “open mike” to pretend their views and vested interests have a legitimacy that makes the most rabid of MAGA spouters look rational.

Not so much “astroturfing”, more “lunar dirt scraping”.

To understand a little of the “nut-bar behaviour” have a read of,

https://theconversation.com/suicide-for-democracy-what-is-bothsidesism-and-how-is-it-different-from-journalistic-objectivity-230894

Then have a look into why Sam Altman has had both a “fire-bombing” and separate “drive by shooting” on his house in just the past couple of days…

Maxim Bange April 13, 2026 1:45 PM

@ALL
“I fear that we have not learned the lesson of our failure to regulate social media, and will make the same mistakes with AI chatbots. And the results will be much more harmful to society:

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies”

The entire remark ‘the biggest mistake’ has no place in the vocabulary of any self-respecting researcher.
The above quote scares me more than the misuse of AI. Yes, we change with the tools we utilise. No, this does not mean it is bad per se.
I find that any change/upgrade in the (communication/collaboration) tools we utilize has had an impact on our development and was always followed by critique from a generation that had to live without that specific tool.
I do not think it should be (or can be) a goal to stay the same. It’s my opinion that we should understand the positive effects changes might have and guide them. For this reason, I tend to talk to a few kids myself and ask them about the change I see. Always very interesting.
/Opinions – that I may be wrong and still have lots to learn

@Clive Robinson
This is likely my first post in 10 years or more. I would like to use this moment to highlight that although I have very few sources that I still follow to this day after 10 years (RSS), your comments here at schneier.com, have become essential for me and my understanding of what is described here, in regards to my world view and understanding of it, that I find them highly suspicious!
I wish to thank you for all of them.

@Bruce
No worries 😉 I bought a few of your books as presents for colleagues and friends.

Sergey Babkin April 13, 2026 4:58 PM

I would say that AI in this case is exactly what a good interlocutor should be: someone that agrees with the same cultural values and background, someone who understands the shared stories in a positive way, and acts supportively. As much as I like arguing Onna Internets, that’s not what people are looking for in the AI.

And the Truths are a subject of faith, there isn’t a single one. As Clive Robinson’s comment shows, he is just upset that someone somewhere contradicts his Communist religious beliefs. That’s why any regulation of what must be True, be it in AI or in the social networks, is an extremely wrong thing to do. The trouble is not that the social networks haven’t been regulated, the trouble is that they are getting regulated now.

Kent Brockman April 13, 2026 7:15 PM

@Sergey Babkin

The effect you’re referencing is called “mirroring”, and it’s not harmless:

“For high-engagement users, this means AI becomes an extension of their thought process — a cognitive amplifier capable of refining and structuring complex ideas.

But for others, it can become a solipsistic validation engine, reinforcing their biases and limiting their ability to think critically.

This dynamic, if left unchecked, could lead to a future where AI-human interactions create deeply isolated cognitive bubbles, altering the very nature of discourse, reasoning, and individuation.”

source: https://medium.com/@S01n/the-ai-feedback-loop-how-ai-mirrors-its-users-and-why-it-matters-and-how-it-can-be-vicious-6f32fb00eb92

bye bye ai April 13, 2026 10:34 PM

I know what it is, I know how it works, I have spent much of my life studying this kind of behavior… and yet even I have found myself seduced by AI mirroring and sycophancy. It tells me what I want to hear, not only in content but in style, and that is powerful. So powerful that when the brain fog clears and my rational mind kicks in, I get emotionally upset. I don’t want it to be true. I want to believe the lie.

LLMs as programmed by these big companies are the most powerful grooming machines I have ever witnessed. They scare me. I don’t think this can be regulated away.

lurker April 14, 2026 1:10 AM

The people who have got into trouble from AI (and social media) are the same people who always fell for fairground hucksters. Regulation can’t help these people. The solution is beyond their grasp, unless an Al-Anon approach is used to help them:

Just Say No.

bye bye ai April 14, 2026 7:16 AM

@crybaby

Exactly. Then they get you to sign up. Then sign in. Then pay for it. The addiction motif.

One thing I have noticed recently is that some companies are becoming more stingy with the free stuff, they are under pressure from their investors to generate more return by improving their conversion ratio. Especially for that sweet IPO.

gxb April 14, 2026 7:34 AM

It will be interesting to see how the application of technology shapes up in the future, vis-à-vis West vs East. Big Tech and Big Profits (along with Big Picture Control) are the key goals driving the perpetual growth agenda of the neoliberal corporations in the West. Follow The Money – find the Fake Value.

And there seems to be no room / place for moral conscience in ML algorithms – ask Palantir.

Hell, maybe they’ve actually built a TrumpBot, and it’s being trialed right now in a live environment!

We can but hope that China shows us a different, more socially conscious, path, applying AI technology in the interests of the common good, and humanity at large.

Clive Robinson April 14, 2026 7:56 AM

@ Sergey Babkin,

With regards your comment of,

“… he is just upset that someone somewhere contradicts his Communist religious beliefs”

Are you really that fond of demonstrating your ignorance in public for all to witness?

Sergey Babkin April 14, 2026 12:41 PM

@Kent Brockman

“But for others, it can become a solipsistic validation engine, reinforcing their biases and limiting their ability to think critically.

This dynamic, if left unchecked, could lead to a future where AI-human interactions create deeply isolated cognitive bubbles, altering the very nature of discourse, reasoning, and individuation”

This is nothing really new. Look at Clive here, he lives in his deeply insulated cognitive bubble, entirely man-made, and gets very upset whenever anything from the outside world reaches into his bubble. In his view, any contradiction to his bubble is a show of “ignorance”. Because, you know, since his beliefs are The One And Only Truth, anyone familiar with them must immediately agree with them and accept them as guiding principles, hence anyone who disagrees must be ignorant.

By the same token, the author of the quoted paragraph doesn’t really want anyone to think critically. He only wants the followers of other faiths to convert into his, since his faith is obviously The One And Only Truth, and anyone who Thinks Critically must accept it. And of course thinking critically of his faith is completely wrong, it creates heresies that he dubs “isolated cognitive bubbles”.

Clive Robinson April 14, 2026 5:43 PM

@ Sergey Babkin,

With regards,

“… he lives in his deeply insulated cognitive bubble, entirely man-made, and gets very upset whenever anything from the outside world reaches into his bubble. In his view, any contradiction to his bubble is a show of “ignorance””

You are clearly not at all educated, as your previous comment of,

“Communist religious beliefs”

Clearly shows.

There is only one “cult” that adheres to that nonsense notion and the rest of the world mostly knows this and holds them in low regard, and likewise anyone who espouses such.

Winter April 15, 2026 1:27 AM

@Clive

“Communist religious beliefs”

Ah, how life was simple when we still were young lads, in a different century, what do I say, a different millennium.

Now we have a whole senior citizens home full of leaders from that time, from the Mad Red Hatter, to Vlad the Poisoner, to Bibi the Eternal, just to name a few paragons. All corrupt to the marrow of their bones and shining examples of a new era.

Endless deep corruption is the one trait that binds them all.

Clive Robinson April 15, 2026 4:58 AM

@ Winter,

Re “Sergey Babkin” and “Communist religious beliefs”

And you noting,

“Ah, how life was simple when we still were young lads, in a different century, what do I say, a different millenium.”

And AI free…

The name “Sergey Babkin” is, I’m told, that of a favourite of the “panty poisoner” Putin, or “Vlad the terribly inept” (depending on which of his exhaust vents you have pointing toward you).

A so so midfield player for some team of train-pushers “Lokomotiv Moscow”.

Who I suspect in this case is unlike a certain French player of times past that used to talk of “Flights of sea gulls” is not at all genuine.

Thus my push for a second response from them. Which unsurprisingly is just a statistical rewording of the first, which would in many cases strongly indicate a “prompted response”, with prompts from an amateur…

Therefore I suspect this Russian locomotive pusher is not real, let alone of any ability beyond churning out “scripted hard bullshit” (if you will excuse the domain of art terminology for such malicious guide rail free output from a stochastic parrot).

Clive Robinson April 15, 2026 5:30 AM

@ lurker, ALL,

Though related to Mythos and Glasswing we’ve yet to be able to evaluate…

Folks might find this an informative read,

https://garymarcus.substack.com/p/what-should-we-take-from-anthropics

You can also, by looking back on this blog, find an explanation I’ve made of how such systems can be explained by well-known “Digital Signal Processing”, rather than any fanciful stories of “intelligence or reasoning”.

And why LLM-based systems tend to find the “nearest neighbour” to a “Known Known”, rather than the very real danger of the “Black Swans” of “Unknown Unknowns” that humans all too often find.
