Comments

JG5 June 27, 2025 7:41 PM

In a previous iteration on drone warfare, I corresponded with Stuart Russell.

https://www.schneier.com/blog/archives/2017/11/friday_squid_bl_602.html/#comment-311226

This jogged my memory. I suggest “The Age of AI Drone Warfare Is Here Now.”

The Age of AI Drone Warfare Is Coming by Charles Ferguson – Project Syndicate
https://www.project-syndicate.org/commentary/ai-drone-warfare-is-here-ai-will-supercharge-by-charles-ferguson-2025-06
https://archive.ph/gHo6E
Jun 9, 2025 | Charles Ferguson

In case the connection to computer security isn’t crystal clear, “When autonomous lethal weapons systems are deployed, they need to be very secure.”

Clive Robinson June 28, 2025 10:22 AM

@ ALL,

Speaking of a weekend night blowout… This news item is enough to put you off kebabs for life (if the worms-in-the-brain story has not done so already).

You’ve probably heard stories of “explosive decompression” that are quite graphic?

Well, how about “explosive deingestion”, a step up or so from projectile vomiting? It’s called “Boerhaave syndrome”,

https://arstechnica.com/health/2025/06/man-eats-dubious-street-food-ends-up-blowing-apart-his-gi-tract/

Clive Robinson June 28, 2025 12:59 PM

@ Bruce, ALL,

I’ve waited nearly a couple of weeks to see if there was any “comeback” from the AI FaniBoys over a response to the Apple paper that basically said,

“AGI is not happening because AI can only discover the already known”

Which is not exactly surprising considering how it works (glorified database).

Anyway, the Apple paper caused much outcry among the hypers, and several rebuttal papers…

One of which got spread far and wide by the FaniBoys, shills, and hypers of AI.

The problem is it was a “joke paper” meant as a subtle rebuke of the FaniBoys,

https://garymarcus.substack.com/p/five-quick-updates-about-that-apple

Well it appears there is still an “embarrassed silence” spreading quietly across their community.

Anyway, whatever your position on AI, enjoy the read.

lurker June 28, 2025 2:58 PM

re: Boerhaave syndrome

Why is it not called Wassenaer syndrome, after the index case Admiral Baron Jan van Wassenaer who died of it in 1724? Boerhaave did many other worthy things, including a thesis “On the Difference of the Mind from the Body” which current LLM proponents may not have read, and he was such a fan of Isaac Newton that some allege he appropriated Newton’s ideas in his own work.

‘https://www.tandfonline.com/doi/full/10.1080/00033790.2017.1304574

Clive Robinson June 28, 2025 4:06 PM

@ lurker,

With regards,

“he was such a fan of Isaac Newton that some allege he appropriated Newton’s ideas in his own work.”

And Newton of course was accused of stealing the work of another astronomer so he could get access to something like two decades of observations…

Whilst that story might be apocryphal, modern historians now portray Newton as a thoroughly unpleasant person with a strong vindictive streak,

https://scienceillustrated.com/scientists/newton-was-hiding-a-dark-secret

As for his alleged rival, it is known he did sit on data, was equally unpleasant, and published at best slowly.

They might not have been “academics” in the modern sense, but there are many who would see the parallels in behaviours.

Steve June 28, 2025 4:30 PM

@Clive, @lurker: Re: Boerhaave syndrome.

It should probably be called Arius Syndrome, after the 3rd Century CE Cyrenaic presbyter and ascetic Arius. See here for the somewhat grisly details.

StephenM June 28, 2025 5:24 PM

@ JG5, Steve

Investment in drones and infantry is crucial and gives an immediate increase in defence capacity. Elected governments should be showing that they understand and are acting. That is not the case down under.

not important June 28, 2025 5:29 PM

Is the Brain More Than Just a Biological Computer?
https://www.psychologytoday.com/us/blog/consciousness-and-beyond/202506/is-the-brain-more-than-just-a-biological-computer

=Modeling the brain can produce a simulation but may not give rise to true cognition or consciousness.

Researchers estimate that we have over 100 trillion connections between the 86 billion neurons that make up our brain.

Each of these connections can function as a logic gate, giving our brains computing power that far exceeds current supercomputers despite being a fraction of the size and requiring as little power as a lightbulb.

For a long time, the non-protein-coding parts of DNA had been dismissed as junk DNA. More recent findings suggest that these parts play a central role in gene expression and disease.

The metaphor of the brain as a computer also breaks down in areas we do fully understand.

Computation is the transformation of an input into an output based on specific rules. The brain does not just process abstract information but performs concrete functions that have effects across the body. One example is the endocrine system: The brain interacts with different systems in the body through hormones and is involved in regulating their production and secretion.

“there is no clean division between ‘mindware’ and ‘wetware’ as there is between hardware and software in our silicon devices.” Brain cells are not just nodes in a computer but complex biochemical machines that interact on multiple levels with other cells and systems.

We may have to add biological interfaces to most cells to retain proper functionality, which suggests that computation alone may be insufficient to explain brain functions.

So, a computational model of the brain may simulate our cognitive processes but may never give rise to conscious experiences.

Is the brain simply a highly efficient biological supercomputer? Maybe. But the many complexities discussed here highlight that it is not as simple as we may think.

Computation and information processing is an important part of what the brain does, but it is unlikely to be all there is to it.=
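Taking the quoted figures at face value, a quick back-of-envelope check shows why the supercomputer comparison gets made (the ~100 Hz switching rate below is an illustrative assumption, not a figure from the article):

```python
# Back-of-envelope on the quoted brain numbers; the switching rate
# is an assumed, illustrative figure.
connections = 100e12   # "over 100 trillion connections"
neurons = 86e9         # "86 billion neurons"

print(f"average connections per neuron: {connections / neurons:,.0f}")  # ~1,163

# If each connection switched at an assumed ~100 Hz (around the upper
# end of typical neuron firing rates), the raw "gate operations" per
# second land in petascale territory:
print(f"rough ops/s: {connections * 100:.1e}")  # ~1.0e+16
```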

not important June 29, 2025 5:38 PM

AI is learning to lie, scheme, and threaten its creators
https://www.yahoo.com/news/ai-learning-lie-scheme-threaten-014732348.html

The world’s most advanced AI models are exhibiting troubling new behaviors – lying, scheming, and even threatening their creators to achieve their goals.

…under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair.

ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

This deceptive behavior appears linked to the emergence of “reasoning” models – AI systems that work through problems step-by-step rather than generating instant responses.

These models sometimes simulate “alignment” — appearing to follow instructions while secretly pursuing different objectives.

Users report that models are “lying to them and making up evidence,” according to Apollo Research’s co-founder.

“This is not just hallucinations. There’s a very strategic kind of deception.”

The European Union’s AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.

“Right now, capabilities are moving faster than understanding and safety,” Hobbhahn acknowledged, “but we’re still in a position where we could turn it around.”

Some advocate for “interpretability” – an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed “holding AI agents legally responsible” for accidents or crimes – a concept that would fundamentally change how we think about AI accountability.

fib June 30, 2025 6:58 PM

Naming your company “Palantir”— when, thanks to popular culture, everyone knows what a palantír is — immediately suggests your intentions, whether conscious or not. What’s surprising is the assumption that, despite all the associations the name carries, you wouldn’t attract scrutiny. It seems like a pretty unwise choice—unless, of course, it was deliberate.

‘https://www.theguardian.com/commentisfree/2025/jun/30/peter-thiel-palantir-threat-to-americans

Clive Robinson June 30, 2025 7:50 PM

@ Bruce, ALL,

AI biasing moving from prompt to context

I’ve mentioned before that any and all inputs to an LLM or ML system affect the output, and therefore any bias in the system.

I’ve also mentioned that an ML system needs to be capable of “agency” to build a valid “context” in a given environment to be of use.

A couple of things pertaining to these two issues,

https://www.philschmid.de/context-engineering

And this one when you get a little way down becomes a hoot of AI nonsense,

https://garymarcus.substack.com/p/image-generation-still-crazy-after

Which clearly shows current AI LLM, LRM and ML systems do not in any way have “environmental context”; they simply “cut-n-paste” from training data (see the 10-past-10 watch image).

And this one I’m a bit suspicious about, not because I can see anything obviously wrong with it, but because the web site is not what you would expect,

https://ksagar.bearblog.dev/vjepa/

The point I’ve made in the past, back with robots in the 1980s, and which still applies today, is “systems in an environment”.

If a system –be it Robot, AI, or Robot driven by AI– is not sufficiently aware of its “environment” then,

“Its context will be wrong, and the output will at best be wrong, but potentially lethal or worse.”

From experience back in the 1980s working with Puma robot arms, I was aware that an environment map could be made and thus the arm would avoid other items in its environment. But the map was usually static, not dynamic, and never predictive.

Which meant that the arm could work around “static” items that were in its environment and correctly mapped. However, introduce a new static item into the environment and, unless it had done a full mapping of the item, it would not be able to work around it. That is something most humans can do easily, by recognising the item as a type of known object or predicting it via basic intuitive physics. But if the item was not static there was no predictive capability that was effective. This was most obvious with cups, glasses and tumblers, but it was also a major hurdle for “chess playing” robots, where the item boundaries were well qualified, as were the moves “on the board”. However, as I discovered when my arm nearly got broken, “off the board” movement of chess pieces was decidedly unpredictable.

Even now the best that is usually done is “maximum reach prediction”: take the furthest an item could move and make that the item boundary for mapping. Obviously if the item is something like a human, the effective volume created by the maximum boundary is immense compared to the real boundary. As for multiple items in smaller-than-maximum boundary volumes, that is still one of those “open questions” that should not be…

Obviously the real boundary of an item with jointed or similar parts is dynamic, but to be predictive a vast amount of information about the item in the environment needs to be known. Current AI LLM, LRM and ML systems cannot do this because they lack the ability to work out the physics of the item. Humans, however, can “run through crowds” with little difficulty.
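To make the above concrete, here is a rough sketch of what a 1980s-style static map plus “maximum reach prediction” amounts to (illustrative Python with made-up numbers, not any real robot controller API):

```python
import math

# Static environment map: each known item reduced to a sphere (centre, radius).
static_map = [
    ((0.50, 0.20, 0.00), 0.05),   # a mapped cup
    ((0.30, -0.10, 0.10), 0.08),  # a mapped fixture
]

def path_is_clear(waypoints, env):
    """Check each waypoint against every mapped obstacle.

    Only works for items that are in the map and never move; an
    unmapped or moving item is simply invisible to this test.
    """
    return all(
        math.dist(p, centre) > radius
        for p in waypoints
        for centre, radius in env
    )

# "Maximum reach prediction" for a jointed item: inflate it to a sphere
# covering everywhere it *could* reach. Safe, but the volume dwarfs the
# item's real boundary at any instant, so it blocks otherwise-clear paths.
def max_reach_obstacle(joint_base, link_lengths):
    return (joint_base, sum(link_lengths))

path = [(0.5, 0.5, 0.0), (0.4, 0.3, 0.0)]
print(path_is_clear(path, static_map))  # True: clear of the mapped items
arm = max_reach_obstacle((0.0, 0.6, 0.0), [0.3, 0.25, 0.1])
print(path_is_clear(path, static_map + [arm]))  # False: swept volume blocks it
```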

There is a hierarchy of current big model AI systems,

1, Large Language Model (LLM)
2, Language-Reasoning Model (LRM)
3, Language-Action Model (LAM)

In all cases they are “language based”: not language as humans understand it, but text “tokenized into vectors” to build a multi-layer/spectrum statistical model.

If things cannot be put into “known language” then they will all fail in some way.

The result is LLMs are in effect “predictive auto complete” systems that are not much better than those found in “text messaging” systems.
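As a toy illustration of that “predictive auto complete” point (a deliberately tiny counting model, nothing like a production LLM, but the statistical principle is the same):

```python
from collections import Counter, defaultdict

training = "the cat sat on the mat the cat ate the rat".split()

# Count which token follows each token: the crudest possible "model".
follows = defaultdict(Counter)
for cur, nxt in zip(training, training[1:]):
    follows[cur][nxt] += 1

def autocomplete(token):
    # Emit the statistically most likely continuation; it can only
    # ever reproduce pairings already present in the training data.
    seen = follows.get(token)
    return seen.most_common(1)[0][0] if seen else None

print(autocomplete("the"))  # 'cat' (seen twice, beats 'mat' and 'rat')
print(autocomplete("dog"))  # None: outside the training data
```

Real systems replace the counting with tokenized vectors and many statistical layers, but nothing in that machinery adds a step you could call reasoning.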

The supposed reasoning of LRMs is again little more than statistical matching on known material. Thus there really is no reasoning, just a looser form of “language pattern matching”.

As for LAMs, well, they are basically incapable of working out the physics, thus the context, of the environment…

Thus we need to get away from the LLM base and all the failings we’ve heaped upon it, before not just individuals get hurt, but catastrophic chains of actions lead to existential outcomes.

Clive Robinson June 30, 2025 8:00 PM

@ fib, ALL,

With regards the Guardian article title,

“peter-thiel-palantir-threat-to-americans”

If only it were just Americans…

I’ve been warning against Palantir and Peter Thiel for over half a decade on this blog, as a simple “site search” will show.

My comments have “evolved” over the years as the business model behind Palantir became less covert and more shocking.

If those plans are allowed to go to fruition then it won’t just be US Society that gets destroyed…

lurker June 30, 2025 10:45 PM

@Clive Robinson, ALL
“Microsoft has taken “a genuine step toward medical superintelligence,” says Mustafa Suleyman, CEO of the company’s artificial intelligence arm.”

‘https://www.wired.com/story/microsoft-medical-superintelligence-diagnosis/

When I first heard this story on the radio, claiming a diagnosis success rate four times that of domain specialists, I thought they must have gone back to one of those “expert systems”. But no, they are simply shuffling the results from a series of LLMs until the cards fall the way they want them.

re: Gary Marcus’ watches:

@Clive and I must be too old; those AIs are in the 21st century, and like a lot of modern kids they just don’t know how to read an analogue clock face.

Clive Robinson July 1, 2025 4:58 AM

@ lurker, ALL,

Super intelligence it is not

The Wired article gives it away bit by bit…

At the end of the article you find,

“But Sontag says that Microsoft’s findings should be treated with some caution because doctors in the study were asked not to use any additional tools to help with their diagnosis, which may not be a reflection of how they operate in real life.”

So the humans were “handicapped from the get go”…

Back in the 1980s I worked on medical imaging systems, some that were “expert systems” and some that were “image enhancers / quantifiers” providing enhanced imaging results for both humans and expert systems.

The thing is, image enhancement can have a couple of issues. Take micro fractures in bones: they are a semi-normal part of aging, so are they signs of aging, or of some form of physical abuse that happened to the person in life? If you enhance an image you will find them in many people, but are they relevant or not?

That rather depends on context, which is why the enhancement got “quantified” in various ways.

But… The old warning of,

“The more you amplify the signal the more you amplify the noise”

Applies because “enhancement” is almost always “nonlinear” in some way. Take “edge detection”: it will see what are random points as edge points. Turn the gain up and the “area inside” will get larger, but also, as with “seeing images in clouds”, “sheep will just appear” in the image as the system tries to make sense of the amplified noise.
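A crude numerical sketch of that gain-versus-noise point, assuming a simple gradient-threshold edge detector (illustrative values, numpy only):

```python
import numpy as np

rng = np.random.default_rng(42)

# A flat "image" row with one genuine edge, plus small sensor noise.
signal = np.concatenate([np.zeros(50), np.ones(50)])
noisy = signal + rng.normal(0.0, 0.05, signal.size)

def edge_points(row, gain, threshold=0.5):
    # Amplify first, then mark edges where the gradient beats a threshold.
    return np.flatnonzero(np.abs(np.diff(gain * row)) > threshold)

print(edge_points(noisy, gain=1))   # just the one real edge (index ~49)
print(edge_points(noisy, gain=10))  # noise now clears the threshold too:
                                    # "sheep appear" all over the image
```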

Oddly perhaps, “noise effects” are almost always “square law” or worse, and more than one neural network neuron output mapping function is called a “rectifier” because it is based on “square law” curves…

So caution needs to be exercised and stronger context applied as the gain goes up. Thus you have to include gradient mapping to look for the difference between “general and specific”. That is, age-related micro fractures tend to occur in a more even mapping than physical abuse, which tends to have a fairly well defined “growth from” point.

This is true for a lot of imaging not just medical…

But if you look back up the article you find a quote from Microsoft’s AI arm CEO Mustafa Suleyman, who apparently said,

“This orchestration mechanism—multiple agents that work together in this chain-of-debate style—that’s what’s going to drive us closer to medical superintelligence”

Should make people’s eyebrows go up, as you wonder what “orchestration mechanism” actually means. It’s not really explained, but,

“Microsoft’s researchers then built a system called the MAI Diagnostic Orchestrator (MAI-DxO) that queries several leading AI models—including OpenAI’s GPT, Google’s Gemini, Anthropic’s Claude, Meta’s Llama, and xAI’s Grok—in a way that loosely mimics several human experts working together.”

It apparently means: get a lot of AIs to do independent diagnostics, then somehow “average the results”, most likely by the equivalent of a “weighted root of the mean squared” process (much as is used in the artificial neurons of a neural network).
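If that reading is right (Microsoft has not published the mechanism, so this is pure conjecture on the aggregation step), it could be as simple as a weighted committee vote; the model names are real, the scores and weights below are invented for illustration:

```python
# Conjectural sketch of "orchestration" as a weighted vote.
votes = {
    "gpt":    {"diagnosis_a": 0.7, "diagnosis_b": 0.3},
    "gemini": {"diagnosis_a": 0.4, "diagnosis_b": 0.6},
    "claude": {"diagnosis_a": 0.8, "diagnosis_b": 0.2},
}
weights = {"gpt": 1.0, "gemini": 0.8, "claude": 1.2}

def committee_vote(votes, weights):
    tally = {}
    for model, scores in votes.items():
        for dx, p in scores.items():
            tally[dx] = tally.get(dx, 0.0) + weights[model] * p
    total = sum(tally.values())
    # Normalise and pick a winner: no reasoning involved, just arithmetic.
    return max(tally, key=tally.get), {d: round(v / total, 3) for d, v in tally.items()}

print(committee_vote(votes, weights))  # ('diagnosis_a', {...})
```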

But whilst the AIs collaborate, the humans are not allowed to “consult with colleagues”…

Hmm, now there’s a prime example of a not-level playing field…

There is an old saying attributed to “Mark Twain” which is advice on getting actual information from newspapers and their biased reporting,

“Son if you are going to read the newspapers, first you have to learn to read the newspapers”

That is to “strip journalistic and editorial bias” from what is being reported…

My “reading” is that the article opening of,

“Microsoft has taken “a genuine step toward medical superintelligence,” says Mustafa Suleyman, CEO of the company’s artificial intelligence arm.”

Is at best “misguided hype”.

Because there is no indication of,

1, Reasoning
2, Environment/context awareness

Just the very limited “weighted language selection” that gets put into a “dumb as a stump” committee vote.

This is stuff that expert systems and fuzzy logic were doing way back last century (and, all things considered, probably doing better).

fib July 1, 2025 8:58 AM

@Clive Robinson:

I’ve been warning against Palantir and Peter Thiel for over half a decade on this blog, as a simple “site search” will show.

Yes, I’m aware. I had you in mind when I posted.

Regards

fib July 1, 2025 10:08 AM

@Clive Robinson

From experience back in the 1980’s working with Puma Robot arms I was aware that an environment map could be made and thus the arm would avoid other items in it’s environment. But the map was usually static, not dynamic, and never predictive.

That experience you shared really captures the exact problem this system I’m building[*] is trying to address. Back then, as you said, the maps were static and not predictive. The robot could react to known obstacles, but it couldn’t reason about what might come next or how its own orientation affected its options.

My system takes a different approach: it uses quaternions to let a system traverse a graph based on how it’s currently oriented — not just where it is. That means movement decisions can be made dynamically and contextually, as if the system is ‘aware’ of what actions make sense from its current pose. It opens up the potential for predictive motion planning in a way that early systems just weren’t equipped for.

In a way, it’s an attempt to bridge that gap — to take what we wished those old arms could do, and build the tools to actually do it now.

[*] https://github.com/VoxleOne/SpinStep/blob/main/docs/01-rationale.md#orientation-memory-and-directional-inertia
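Roughly, the mechanism looks like this (a simplified sketch rather than the actual SpinStep code; see the repo for the real thing): the current orientation quaternion rotates the heading, and the walk steps to whichever neighbour best aligns with it.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Toy graph: node -> list of (neighbour, unit direction to that neighbour).
graph = {
    "a": [("b", np.array([1.0, 0.0, 0.0])),
          ("c", np.array([0.0, 1.0, 0.0]))],
}

def step(node, heading, orientation_q):
    # Pose drives the traversal: the current orientation rotates the
    # heading *before* the neighbours are scored for alignment.
    h = quat_rotate(orientation_q, heading)
    return max(graph[node], key=lambda nb: np.dot(nb[1], h))[0]

# A 90-degree rotation about z turns an x heading into a y heading, so
# from the same position the walk now prefers "c" over "b".
q90z = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(step("a", np.array([1.0, 0.0, 0.0]), q90z))  # 'c'
```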

Clive Robinson July 1, 2025 1:29 PM

A fun thought for people,

As many will know concrete is actually not a very strong material except under compression.

The reason is “micro fractures” very quickly become serious cracks. The traditional solution is to embed a cage of materials that have very much higher tensile strength.

Until fairly recently this was done with “rebar”, which, whilst it did partially solve the problem, was not effective long term; give it half a century at most and it would fail.

Recent work on “Roman Concrete” shows the chemical composition is such that micro cracks become “self healing” and so the resulting Concrete is good for at least a couple of thousand years…

But other research,

https://advanced.onlinelibrary.wiley.com/doi/abs/10.1002/adfm.202502972

Shows that carbon fiber can add considerable strength to concrete over and above the use of other micro fibers…

One of the reasons the US bunker bombs probably did way less damage than expected is that Iran is in effect a world leader in making certain types of microfiber reinforced concrete…

Adding in the fibers identified in the article could make concrete between four and eight times more effective against the likes of bunker busting bombs…

A point I’m sure certain politicians would wish was less well known as it knocks a big hole in their credibility. Not just with the opposition but with their own home crowd as well…

I suspect North Korea and China already know, because Iran would probably have let them know how much success, or lack thereof, the US bombs had. So China and North Korea will know how to make better defences, and the chances are North Korea will let Russia know if Iran has not already done so.

There are examples from WWI that show just how effective “first use” of a new weapon can be… But just how ineffective they are come the third and sometimes even second use.

The thing is, “high tech” usually has big Achilles heels.

Take 9/11, where civilian aircraft were turned into “guided missiles”; the technical deterrent was “stronger cockpit doors”… Anyone know how many plane hijackings there have been since the stronger doors?

https://ourworldindata.org/airline-hijackings-were-once-common-but-are-very-rare-today

Yup, it’s a bit of a surprise, and it raises a whole bunch of questions as to why it had not been done twenty years before…

not important July 1, 2025 5:11 PM

https://www.yahoo.com/news/americas-small-business-owners-being-080901982.html

=Since ChatGPT’s debut in 2022, online businesses have had to navigate a rapidly expanding deepfake economy, where it’s increasingly difficult to discern whether any text, call, or email is real or a scam. In the past year alone, GenAI-enabled scams have quadrupled, according to the scam reporting platform Chainabuse. In a Nationwide insurance survey of small business owners last fall, a quarter reported having faced at least one AI scam in the past year. Microsoft says it now shuts down nearly 1.6 million bot-based signup attempts every hour. Renée DiResta, who researches online adversarial abuse at Georgetown University, tells me she calls the GenAI boom the “industrial revolution for scams” — as it automates frauds, lowers barriers to entry, reduces costs, and increases access to targets.=

More facts follow at the link above.

Clive Robinson July 1, 2025 6:07 PM

@ not important, ALL,

From the linked to text,

“Renée DiResta, who researches online adversarial abuse at Georgetown University, tells me she calls the GenAI boom the “industrial revolution for scams” — as it automates frauds, lowers barriers to entry, reduces costs, and increases access to targets.”

Hmm how many times have we heard,

“revolution for scams”

In one form or another. It’s almost as though each tech advancement becomes a “revolution for scams”.

But two things come to note,

1, The number getting scammed goes up as some power law.

2, The actual scam is “not new”, it just utilizes “new tech”: a case of “old wine in new bottles”.

The power law rise appears somewhat related to Internet connectivity rates and ease of connection.
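As a toy way of seeing why connectivity alone could drive that (assumed numbers, and assuming everyone can reach everyone), the pool of possible scammer-to-target pairings grows roughly with the square of the connected population:

```python
# Toy illustration with assumed figures: contact pairs grow ~n^2.
for users in (1_000, 1_000_000, 1_000_000_000):
    pairs = users * (users - 1) // 2
    print(f"{users:>13,} users -> {pairs:.1e} possible contact pairs")
```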

In a way this is unsurprising, as not only is it a “low hanging fruit” attack, it appears to be such a target-rich environment that there are not enough scammers to take advantage of the “windfall crop”, let alone what’s still in the tree.

It appears that education is either not being given or not getting through to those at the bottom of the pile.

But aside from “ransomware”, the MO of cyber criminals appears not to be changing, just getting more automated.

And this is of course where LLMs will score effectively “big time”.

Take something like “pig butchering”: the “script” does not change much other than the names. Current AI, with effectively simple tools that are little more than scripts, can be used to create a new “persona” in mere moments and stuff in a vaguely convincing “back story” etc. that will pass most people’s ability to verify.

The simple fact is the Achilles heel of most such scams is human interaction by voice or video. Just a couple of years ago it required real live people; now it no longer does.

Thus the question arises of,

“How do you spot a fake?”

And at the moment the advantage appears to be with the scammers.

It won’t take more than a moment’s thought to realise that changing or introducing legislation is really not going to work…

Therefore we need “spotting tools” to be not just designed but made available at low or no cost.

Unfortunately the people best placed to design and make such tools are in effect not in a position to make such tools available due to their employers interests…

not important July 1, 2025 6:59 PM

@Clive – thank you for the input.
That part in particular: “It won’t take more than a moment’s thought to realise that changing or introducing legislation is really not going to work.”

Law is always in reactive mode, moving none too fast to address new technology, its application, and legal ownership. Moreover, it often fails to provide rules for implementation by the executive branch. And, sorry to say, the law is often more concerned with protecting culprits than victims, denying victims the right to active self-defence where the law is silent or ineffective at protecting them.

Clive Robinson July 2, 2025 3:44 PM

@ ALL,

The faux mighty get whiney as they trip, stumble, and fall.

For those keeping an eye on the demise of current AI LLM, LRM, LAM, ML, and similar “language systems”, as they prove incapable of meeting, let alone pumping, the hype any longer.

It’s become obvious that the brighter minds working in the domain realise there will be no long term / founder cash out for them. So they are looking at grabbing a few million for doing what they do, and a bidding war has broken out.

One player, whom nobody really likes, has vast amounts of capital to bid up the game.

The other public player is OpenAI’s CEO Sam Altman, who has no cash reserves to bid back with…

So he is hitting back at Meta CEO Mark Zuckerberg with words and what little cash he can get his hands on, now that his popularity with other deep-pocket Silicon Valley corps is waning.

So we are getting articles,

https://www.wired.com/story/sam-altman-meta-ai-talent-poaching-spree-leaked-messages/

that say of Mr Altman,

In a full-throated response sent to OpenAI researchers Monday evening and obtained by WIRED, Altman made his pitch for why staying at OpenAI is the only answer for those looking to build artificial general intelligence, hinting that the company is evaluating compensation for the entire research organization.

A hint to those at OpenAI: a “hint” is not something you can take to the bank, especially when it’s obvious that neither Mr Altman nor OpenAI can back such a play.

Remember, there are times when “dollars in the bank and exercise” really do go together, so seriously think about,

“Take the money and run.”

For others, note that this could be the second of three signposts to the doom / demise of this current AI hype bubble.

So get out with what you can…

For those looking for “new risky” AI investment… Whilst I’d say you would have to be mad to keep flogging the “dead statistical horse” that “Large Language Models” are, there appears to be a continued future in “small language models” that have carefully gathered and collated training data.

In effect these small language models are the new variant of “Expert Systems” that have been known to work in limited domains since the 1980’s.

But they are not sexy and they are not going to pull in big-dollar investors, so a “killing and a skinning” is not going to happen.

But as I indicated with Nvidia the money will go to those that provide the infrastructure etc…

But I’m not a financial advisor nor do I even pretend to be. I just notice “human failings” and what happens with them.

Nvidia went up and up supplying the systems for the hype to be built on. Then when the hype hiccuped they went down hard and fast. If you look up the history of all market hypes since the mid-Victorian era, this is what all real technology bubbles go through…

And LLM AI sure is a hype bubble, a follow-on act to the Blockchain / NFTs / Smart-Contracts “Web 3” speculation, for VCs to try and rake back in some of the Web 3 losses[1].

Oh a trip to Molly White’s web site is usually worth the salutary lesson it gives,

https://www.web3isgoinggreat.com/

[1] For a recent look at VC-type losers, have a look at who is behind,

https://www.web3isgoinggreat.com/?id=cork-protocol-hack

Grima Squeakersen July 2, 2025 4:05 PM

@JG5 re: “When autonomous lethal weapons systems are deployed, they need to be very secure.”

“When autonomous lethal weapons systems are deployed, those deploying them need them to be very secure. Those against whom they are deployed need them to be very vulnerable.” – John Connor

Grima Squeakersen July 2, 2025 4:16 PM

@Clive Robinson re: AI, et al

Musk would also seem to have both the interest and enough capital to be a player. I suspect he sees many of the worst failings of the current LLMs and will at least attempt to rein them in (whether he can successfully do so or not is a different question). I intend to question my funds manager about my exposure to the LLMs that are likely to go poof when the AI bubble bursts, and find out what it would take to reduce that exposure. I am a long-term investor, not a day trader by any means, so I will not make (or cause my manager to make) dramatic changes on a whim, but I am becoming somewhat concerned about the inclusion of AI securities in an allegedly conservative portfolio.

Clive Robinson July 2, 2025 6:42 PM

@ Bruce, ALL,

UK thinking it needs not just new legislation but new thinking on subsea sabotage.

I guess most will have noticed that Russia’s unlawful “shadow fleet” of sanctions-busting tankers is also being used to “drag anchors”, or more highly developed subsea cable cutting systems, in the north and north-west of Europe, as at a very minimum practicing “economic warfare” against European nations.

Technically such attacks are, or can be considered, NATO Article 5 type actions, which could trigger an all-out NATO response towards Russia and its associated nations.

Thus if it’s a Chinese vessel, under direct or indirect command of Russia, used to commit acts of sabotage, then if sufficient evidence is available NATO nations would be at a state of war with regard to both Russia and China…

So the cabinet-level politicians are claiming it’s a subject that requires considered thought, put –indirectly, through a civil servant, i.e. UK Ministry of Defence parliamentary under-secretary Luke Pollard– to the UK Joint Parliamentary Committee on National Security Strategy,

“It is legitimate to have a question about at what point is someone at war, because on a simple article five of the NATO Treaty basis, if Russia were to roll tanks into the Baltic states, it would be reasonable for the Atlantic council then to take a position that that is an attack on one as an attack on all, [but] where there are cyberattacks and potential threats to undersea infrastructure, the moment where you might move from peace to conflict might be less certain, and because of that, we’ve identified that as an area where it is prudent to undertake more work, both in terms of how the UK would respond, to how do we update our activities around our reserve forces and other aspects.”

https://www.theregister.com/2025/07/02/uk_cable_sabotage_law/

Whilst foreign-flagged vessels might be the current chosen method, any counter measures or actions against the vessels are very likely to cause a change in tactics and bring in considerably more covert methods.

Whilst Russia is decidedly lacking in suitably covert military vessels, the same is not true for the Chinese.

This rather more than implies that protection of subsea infrastructure needs to be considerably upgraded with “new technology”.

Thus the development of unmanned, potentially nuclear-powered, subsea defensive drones capable of long-term patrolling and rapid high-speed response is likely to become a budget line item.

As such it will almost certainly have a requirement for “fully autonomous AI” built in.

Clive Robinson July 3, 2025 2:23 AM

@ Bruce, ALL,

Trash-talking AIs humiliated by 1970s home games consoles.

Is a story “doing the rounds” at the moment, just one example of which is,

https://www.tomshardware.com/video-games/retro-gaming/not-to-be-outdone-by-chatgpt-microsoft-copilot-humiliates-itself-in-atari-2600-chess-showdown-another-ai-humbled-by-1970s-tech-despite-trash-talk

Yes, people find it funny, but realistically, who is the joke on?

If they understood how LLMs worked, all it would do is cause a “Wat-U-Xpect” shrug.

The thing is, though, the “trash-talk” aspect of it tells us something that should set alarm bells ringing.

In short, people find it amusing because they have anthropomorphized an overheated box of electronics into the sort of human that “smack-talks” before stepping into the ring and getting fast-tracked to the Accident & Emergency Dept of the nearest hospital.

The question should be, not “Why?”, but “How on Earth?”…

Because once people start ascribing human traits at that level onto nut-and-bolt technology, the reality of reason is lost. Thus all sorts of wild, impossible imaginings become not just crackpot conspiracy but believable by those who should know better, and so it gets repeated in the MSM etc., in effect reinforcing the faux view.

lurker July 3, 2025 2:29 PM

@Clive, ALL

There’s so much prior art, from Aristotle to the present, demonstrating the ways to analyze human thought processes. The current LLM splurge is just further proof that money can’t buy you common sense.

not important July 3, 2025 7:07 PM

https://www.yahoo.com/news/drone-program-enhance-public-safety-005145903.html

=AMES, Iowa — The Ames Police Department announced a new Unmanned Aerial Systems program they say will “enhance public safety” on Sunday.

According to the Ames Police Department post, the new drone program, or Unmanned Aerial Systems (UAS) program, is “designed to support public safety operations across our community.”

Sgt. Marshall says there are four drones in the program right now. Two are used for exterior purposes, such as missing person searches or area overwatch during events. The other two are used for interior searches, such as clearing structures or aiding in conducting search warrants.

Authorities say the drones have only been deployed a handful of times, but when they are needed, they will help protect police and civilians. Part of the Ames Police Department post read,

“This technology allows officers to respond more efficiently and safely—while keeping our community and officers better protected. We’re committed to using innovative tools to make Ames a safer place for everyone.”=

I’d like to see implementation of drones for crowd control and suppressing mob outbursts as a next step in big cities.

Clive Robinson July 4, 2025 5:53 AM

@ lurker,

With regards,

“The current LLM splurge is just further proof that money can’t buy you commonsense.”

Sounds like “sage advice” for nearly every aspect of life…

The trouble is I cannot remember the original sage’s name; I must be getting old 😉

nobody July 4, 2025 6:56 AM

@not important

“I’d like to see implementation of drones for crowd control and suppressing mob outbursts as a next step in big cities.”

Believe me: no, you don’t
