On Moltbook

The MIT Technology Review has a good article on Moltbook, the supposed AI-only social network:

Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”

Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven’t been prompted to do.

I think this take has it mostly right:

What happened on Moltbook is a preview of what researcher Juergen Nittner II calls “The LOL WUT Theory.” The point where AI-generated content becomes so easy to produce and so hard to detect that the average person’s only rational response to anything online is bewildered disbelief.

We’re not there yet. But we’re close.

The theory is simple: First, AI gets accessible enough that anyone can use it. Second, AI gets good enough that you can’t reliably tell what’s fake. Third, and this is the crisis point, regular people realize there’s nothing online they can trust. At that moment, the internet stops being useful for anything except entertainment.

Posted on March 3, 2026 at 7:04 AM • 17 Comments

Comments

Gaxx March 3, 2026 9:06 AM

I think I am in full agreement with Robin on this one.

The notion that trusted sources can no longer be determined online suggests a rather strangely imagined lack of discernment on the part of humanity.

I can imagine a future where sources such as social media and comments sections become so swamped by AI-generated content that normal discourse and sources of information are drowned out there. That might make it prohibitively difficult to pick out truth from fantasy in those spaces.

However, the notion that the internet itself becomes unusable presumes that AI has access to rewrite every space on the internet or to at least swamp every space with its AI-generated content. This is clearly not the case, and discernment of where to find truth becomes a matter of discerning different spaces on the internet rather than picking through the overwhelming mass of AI-created content in the spaces available to it.

Some people are gullible enough to believe, without question, what they read on social media. Many look for more reliable sources. That has been the way for a while now.

Wannabe Techguy March 3, 2026 9:41 AM

It’s not just social media. Some people have trusted the “news” media without question for decades.

Rontea March 3, 2026 9:54 AM

As AI-generated content floods the internet, distinguishing signal from noise becomes increasingly difficult, and the utility of online information diminishes. If platforms like this continue to prioritize hype over security, the web risks devolving into little more than a chaotic playground for entertainment and manipulation.

employment March 3, 2026 10:21 AM

@Wannabe Techguy

It’s not just social media. Some people have trusted the “news” media without question for decades.

But there’s a difference. If the New York Times describes the government of country X as a despotic dictatorship, that might well be propaganda. But if the New York Times announces that there was an earthquake in country X yesterday, that’s pretty certain to be true.
With AI slop on some social media, you can’t trust anything.

Winter March 3, 2026 1:25 PM

The point where AI-generated content becomes so easy to produce and so hard to detect that the average person’s only rational response to anything online is bewildered disbelief.

This is an age-old problem, probably as old as humanity itself.

I have always known people who spout nonsense, from astrology, vitamin supplements, knock-on-wood believers, crop circles, angels, earth rays, down to advising new mothers not to eat cherries.

In the end, everyone has to make a choice whom to take seriously and whom not.

A random voice on the internet is just as trustworthy as a random guy on a soap box.

K.S March 3, 2026 1:42 PM

The vast majority of people are comfortable deferring to those they perceive as experts or authorities. This is likely evolved, part of what makes humans social animals. We even have quasi‑pejorative labels for the rare individuals willing and able to question authority: nonconformists.

What does this mean in the age of AI? It means that, without a drastic and, in my opinion, unlikely change in how the average person determines what is true, the age of AI will make truth even less constant/available than in Orwell’s Oceania.

K.S March 3, 2026 1:48 PM

>>In the end, everyone has to make a choice whom to take seriously and whom not.

The existing methods of doing that (source, tone, eloquence of writing style) are not up to the task. We will have to invent and test new heuristics that work on AI.

Personally, I already notice that I am more likely to dismiss overly verbose responses as suspected AI slop.

Tarquin Biscuitbarrel (Silly Party) March 3, 2026 4:36 PM

The interwebs has long since become about entertainment and selling products, like a late-night infomercial. Or have you been so ivory tower you weren’t paying attention?

That’s the sad problem with equality and inclusiveness: eventually the lowest common denominator is reached, and earlier standards and practices are abandoned along the way.

Did you know youtube does not show ads in countries with no viable ad market? Maybe internet not suck so much there?

This comment was sponsored by razors, art investing, VPNs, crypto and some weird news aggregator that tells you about bias.

Clive Robinson March 3, 2026 7:32 PM

@ K.S.

You ask a question and give just one of many answers,

“What does this mean in the age of AI? It means that, without a drastic and, in my opinion, unlikely change in how the average person determines what is true, the age of AI will make truth even less constant/available than in Orwell’s Oceania.”

From my perspective, believe it or not, truth has very rarely been important to most people.

They believe what they want to believe and disregard the rest.

It’s only when it gets very close to or directly affects them that some actually worry or care about the actual verifiable truth.

I’ve heard arguments from sociologists, anthropologists, historians, and similar that it’s to do with “trust in our tribe”: that is, we have a circle of people around us. Generally it’s not that big, very often less than 20 people we actually “know” rather than “are acquainted with”. We see them as “our group, right or wrong”.

So does the average person get affected by “truth or not”? I suspect the answer is mostly “no”; it’s only in the group of “people we know” that it matters or affects us.

Facebook is, I’m told, the worst offender in creating “trust issues”, as it profits by what comes out of it. And it has algorithms designed, as some people put it, to “spice it up”. I’ve never joined Facebook and never will, nor will I join the cesspit that is LinkedIn or similar…

I may be missing out on “oh so much drama” but do I actually care?

Simple answer “NO”

But then I am non-neurotypical, with a keen eye for “that which is hinky”. I do, however, care about truth on a very much wider basis than by far the majority of people, as it helps me evaluate, decide, and predict, often quite far into the future, sufficiently accurately that I’m usually happy to make my predictions public… The funny thing is, I get told that I’m “paranoid” or a “doom sayer” or similar, only to see my predictions of “doom” in effect come true.

For instance, I said years ago on this blog that the US would start a war with Iran or China, back when China in effect had only very limited military capability. Much has changed since then, but here we are with the US having started a war against Iran, from which only “criminals are going to prosper”…

I was reminded of this by a friend yesterday, who noted that it’s now five years that the Ukrainian people have been fighting Putin, and they are not giving up, nor does it look like they are going to, and that it has become a drone war, which is fairly well what I predicted back when it started.

I used to joke that “as an outsider to the human race looking in, it was all too easy to predict what people were going to do by the stupidity and evil that drives a few against the rest”. Little has changed, except that the stupidity and evil are getting worse, and mostly the rest just follow on because they don’t want to be outside a group around them, and these groups don’t take responsibility.

My friend also reminded me that next Thursday will be the 11th anniversary of the death of Terry Pratchett, whom we had both known. I once had a chat about this idea that “evil prospers” with Terry, and he observed that “Evil plots and plans, and everyone else just blindly carries on; it’s why evil always prospers”.

And when you think about it, it’s generally true: very, very few actually plan, and most abdicate responsibility unless pushed to act, so the rest just falls into place.

It’s also why two of George Santayana’s more famous quotes are accurate predictors:

1, Theory helps us to bear our ignorance of fact.

2, Those who cannot remember the past are condemned to repeat it.

But he had something more relevant to say on falsity,

3, The line between what is known scientifically and what has to be assumed in order to support knowledge is impossible to draw. Memory itself is an internal rumour and when to this hearsay within the mind we add the falsified echoes that reach us from others… we have but a shifting and ungraspable basis to build upon. The picture we frame of the past changes continually and grows every day less similar to the original experience which it purports to describe.

Winter March 4, 2026 12:04 AM

@Clive

Generally it’s not that big very often less than 20 people we actually “know” rather than “we are acquainted with”. We see them as “our group right or wrong”.

I was reminded of this by a friend yesterday who noted that it’s now five years that the Ukrainian people are still fighting Putin and are not giving up nor does it look like they are going to

Did you want to illustrate your first point here? Crafty!

The current invasion started in 2022, four years ago.

I was reminded by a podcast that in those 4 years, the mighty Russian Army progressed 60 km into Ukraine. And that mighty Russian Army needs Iranian drones and foreign soldiers recruited under false pretenses to keep their ground.

Also, the biggest surprise in this awful war has been that the Ukrainian people unified and fought back with determination.

Winter March 4, 2026 2:13 AM

As a side note on Artificial General Intelligence (AGI).

There are a few necessities before we even have to start considering the presence of AGI in some bot.

A bot must be able to:

  • learn continuously from its input, successes and failures, even from its own simulated actions
  • internally simulate and compare the outcomes of possible actions before acting (thinking)
  • compare short term outcomes to long term goals (aims and morals)

Current AI bots fail on all of these points.
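
The checklist above can be caricatured in a few lines of Python. This is a purely illustrative toy (all names and the scoring logic are my own invention, not any real bot’s architecture): it learns payoff estimates continuously from outcomes, internally simulates each candidate action before acting, and weighs predicted short-term payoff against a long-term goal.

```python
class ToyAgent:
    """Toy sketch of the three listed capabilities; not a real agent design."""

    def __init__(self, actions, long_term_goal_value):
        self.actions = actions
        self.long_term_goal_value = long_term_goal_value
        # Learned estimate of each action's immediate payoff, updated from experience.
        self.value_estimates = {a: 0.0 for a in actions}

    def simulate(self, action):
        """Internal world model: predict an action's immediate payoff.
        Here it just returns the learned estimate; a real agent would
        roll a model of the world forward instead."""
        return self.value_estimates[action]

    def choose(self):
        """'Thinking': compare simulated outcomes of all possible actions
        before acting, blending short-term payoff with long-term alignment."""
        def score(a):
            short_term = self.simulate(a)
            # Penalize actions whose predicted outcome strays from the goal.
            long_term_alignment = -abs(self.long_term_goal_value - short_term)
            return short_term + 0.5 * long_term_alignment
        return max(self.actions, key=score)

    def learn(self, action, observed_payoff, rate=0.3):
        """Continuous learning: nudge the estimate toward what actually happened."""
        old = self.value_estimates[action]
        self.value_estimates[action] = old + rate * (observed_payoff - old)
```

Even this caricature makes Winter’s point concrete: current chat-style bots have no persistent `learn` step between deployments, and no explicit `simulate`-then-`choose` loop over candidate actions.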

Clive Robinson March 4, 2026 3:29 PM

@ Winter,

I left out the word “into” that should have preceded “five years”.

But yes, it’s been a long time, and whilst Russia has not really moved that far into the Ukraine, it has paid a horrendous cost in resources, both in capabilities and materials, for little gain.

It has however contributed quite a bit to the Ukraine in some respects.

It has cut down corruption and the like (endemic in old Soviet countries), and it has actually boosted Ukrainian arms and similar manufacturing capabilities and upped the skill levels of many people.

I can see the Ukraine getting EU membership, simply because they will in all probability end up being a major arms manufacturer with skills in defence areas that few other nations have.

Did you know that something like 15%-20% of the world’s deployed optic fiber is covering that part of the Ukraine?

If someone can work out how to recover-recycle it, it would be quite profitable…

Gert-Jan March 5, 2026 6:47 AM

Things haven’t changed structurally, but AI has magnified some effects. Reputation of the source always played a role. Consuming from unknown sources always carried the risk of lies, propaganda and other manipulation, whereas when consuming from known sources you have a basic idea of the truthfulness, bias, etc.

The existing methods of doing that (source, tone, eloquence of writing style) are not up to the task. We will have to invent and test new heuristics that work on AI.

I think “source” is not affected.

I do agree on the other aspects you mention. This might not be a bad thing, because using “tone, eloquence of writing style” to determine reliability – although very human – is at the very least unreliable and at worst downright discriminatory.

wumpus March 8, 2026 3:40 PM

In a rather late comment, today’s Foxtrot comic had Jason vibe coding a molt-like social media site. Of course his effort was more a “let your AI agent publish and read all your social media for you”, much like the Electric Monk would believe things for you or VCRs would watch television for you.

https://foxtrot.com/

Hortis Gadfium III March 16, 2026 7:03 PM

I think people considering this from an individual perspective won’t get very far. “What is Truth?” is a philosophical argument for the ages.
A better angle is to consider this from a group perspective – “What is Consensus?”. The internet has been doing a fantastic job of destabilising consensus for a while now. Prior to that consensus was influenced by newspapers and prominent thinkers.
Reality is generally consensual, based on things that people can agree on – consensus is easier to establish than truth. But people gravitate to experts because they articulate a problem better than others (irrespective of truth).
The problem AI inserts is the ability to generate a large number of differing and potentially compelling perspectives – or simply load up one side to artificially influence consensus.
But honestly, even this isn’t all that new. Hayek has been influencing consensus since the mid-20th century, proposing the concept of “independent think tanks” that were quoted in the press, influenced political policy, etc.
AI and the Internet have made it harder to get the kind of cut-through that older methods of building consensus had. But they effectively supersede them. So it’s not the end of all things. Eventually someone will establish a new and effective way to influence consensus, maybe for the better, maybe for the worse – but certainly for their own personal benefit.

Clive Robinson March 17, 2026 7:39 AM

@ Bruce, ALL

With regards,

“AI gets good enough that you can’t reliably tell what’s fake. Third, and this is the crisis point, regular people realize there’s nothing online they can trust.”

You realise it’s not an “apples v apples” statement?

“Fake” is untrue, not real, or not supported by “facts” in most usage of English.

“Trust” is a form of “belief” that humans give way too much of implicitly, and thus can be manipulated through.

As I say occasionally, when I get told something supposedly new, I ask of it the question,

“Do the laws of nature as we know them allow it?”

If they do not then I assume it is “not factual”.

Many things can be tested this way and if they “come up short” then I either ask for more details or I remain suspicious and go hunting for further details that do pass the “sniff test” or show the proposition as “not factual”.

That is, my “trust” threshold is very different from many people’s, and it can make life, let’s say, interesting in a whole different way.

But what to do if I do not have time to verify? Then I drop back to probabilities, based on utilisation of resources, likelihood of coincidence, or similar.

Yes, there are times I have to go on “gut feeling”, but that is generally only when the “fight, flight, freeze” response is required.

As I’ve said before,

“There is no such thing as an accident, only a lack of information or a lack of time to process it.”

Which causes a failure to act correctly.

But also consider the argument about “Freedom of Choice” and the lack of it that some espouse: that is, that everything is preordained…

Well, whilst things without “agency” are subject to certain natural laws, like those of motion, things that have sufficient agency can use other natural laws to change their direction or velocity. That adds both agency and resources to the time and information requirements.

It’s one of the issues that AI driving is bringing up in the press, etc. We get told about the horrors of “self-driving cars” running people down, but rarely do these articles talk about the level of agency, resources, information, and time.

However, what is not said is that most humans tend to overestimate their abilities with familiar activities, or their ability not to be distracted by the routine. So they have a false belief in their level of driving in the face of abnormal events.

The result, when the figures are worked out –especially for the USA– is that these self-driving vehicles have a lower probability of being the cause of, or involved in, such “accidents” than human drivers.

Yet ask many people and they will say the opposite and that we should never allow “AI Bots on the road”.

In Robotics there are the “Three D’s” of,

Dirty, Dull, Dangerous

Where robots can do jobs that involve these elements, thus “freeing humans up from them” (some add a fourth D of Demeaning these days).

The thing is, history teaches us that humans will fight tooth and nail to keep such jobs; it is, after all, claimed that this is why we have the word “sabotage”[1]. The obvious question that arises is,

“Is this rational behaviour?”

The fact that the answer is “NO”, and worse, that all sorts of faux arguments will be presented to keep humans in the jobs, is a fairly clear example of “cognitive bias” in play.

Which brings us back to what people “believe” not what is “factual”.

[1] It comes from the French word “sabot”, which is a “wooden shoe”. The century-old definition is,

“malicious damaging or destruction of an employer’s property by workmen”

If true, then the oft-told story that the word came,

“from those who made cloth by manual weaving chucking their shoes into automatic looms. Hence they were quite literally ‘putting the boot in’”

is wrong.

Thus the actuality, when the word came into English usage, was far more “crafty”: the “workers” would do subtle things to cause an “employer harm” during labour disputes.

https://www.etymonline.com/word/sabotage
