Comments

Zed April 10, 2026 10:58 AM

It’s obvious that Sanders’s team prompted Claude to speak in a certain way. Quite manipulative. Would have been more credible if they left out the praise Claude had for Sanders at the end, but they need to convince people that Sanders has the answers (despite being in Government and part of the problem for decades now).

Chris Devers April 10, 2026 11:47 AM

“Claude is actually pretty good on the issues.”

Sure, but it knew it was talking to Senator Bernie Sanders.

What would it say if it knew it was being interviewed by a similarly-prominent person on the other end of the political spectrum?

Does it really “think” the things it told him, or is the Agreeability Machine just showing off how very agreeable it can be?

lurker April 10, 2026 1:09 PM

@Chris Devers

+1

@Bruce

“Claude is actually pretty good on the issues.”

Sure, but I would expect Sen. Bernie Sanders to be pretty good on the issues too. Did he really learn anything from that conversation? Or was it just a political statement? And who was his target audience? Would they listen to, or believe, any of it?

Winter April 10, 2026 4:30 PM

@zed

It’s obvious that Sanders’s team prompted Claude to speak in a certain way. Quite manipulative

Maybe that was the point of the exercise, showing AI is manipulative and manipulated?

Clark Gaylord April 10, 2026 5:50 PM

No Bruce, Claude doesn’t know about the issues. What Claude knows is how to make Bernie happy. Bernie doesn’t take an honest investigative approach, interrogating Claude to defend itself, but instead uses his natural rhetorical ability to lead a willing witness. That’s what LLMs do.

I did an experiment where I pushed Claude (using Claude, naturally) by impersonating William F. Buckley pushing back on Bernie’s interview. The results were … illuminating.

https://cgaylord.wordpress.com/2026/04/10/the-witness-is-compromised-bernie-sanders-bruce-schneier-and-what-the-ai-actually-did/

anonymouse random April 10, 2026 8:05 PM

@Clark Gaylord: It’s worse than that. LLMs have no concept of truth, so everything they say is bullshit. Much of it is accurate bullshit, because the most linguistically-likely token stream is the most likely to be correct in a given interaction. But it is still bullshit. And sometimes the bullshit is false, or even nonsensical, and we call the result a “hallucination”. But the fact is that LLMs are “hallucinating” everything, all the time.

Clive Robinson April 11, 2026 3:40 AM

@ anonymouse random, ALL,

With regards,

“LLMs have no concept of truth, so everything they say is bullshit.”

The correct “term of art” in the domain is,

“Soft bullshit”

As opposed to,

“Hard bullshit”

The former being “undeliberate” and lacking “intent to deceive”.

With the latter being “deliberate” and driven by usually harmful intent to deceive.

These terms of art build on the philosopher Harry Frankfurt’s analysis of bullshit in his 1986 essay, republished in 2005 as the Princeton University Press book most claim to know, “On Bullshit”.

The “soft”/“hard” distinction itself was introduced more recently by a small group of University of Glasgow philosophers led by Michael Townsen Hicks, in a paper titled,

“ChatGPT is bullshit”

You can find it and a rebuttal on Open Access at,

https://link.springer.com/article/10.1007/s10676-024-09775-5

Enjoy.

Wicked Lad April 13, 2026 12:37 PM

I got a kick out of this. As others have commented, Sanders’s script seemed well planned to elicit the correct answers, and Claude’s sycophancy was shameless… literally. Also, I felt the irony of watching this earnest discussion of privacy and Big Tech on YouTube. I didn’t sign in to watch it, but still.
