Comments

ResearcherZero November 29, 2025 12:45 AM

With the ending of contracts worth billions, local economies are feeling the hit. Programs and contracts designed to support veteran-owned businesses were included in the cuts. Smaller businesses and startups will be affected.

These smaller companies often contribute important parts and services to the supply chain.

‘https://www.politico.com/news/2025/11/22/veteran-owned-businesses-trump-contract-cuts-00664317

$20 billion in contracts were cut. Communities face millions in lost earnings and jobs.
https://www.washingtontechnology.com/contracts/2025/11/shutdown-compounds-year-pain-federal-contractors-employees/409295/

Clive Robinson November 29, 2025 3:40 AM

@ lurker,

With regards,

“A320-neo planes affected by intense solar radiation.”

I’m not surprised in the slightest.

One of the first single-chip CPUs was the 1802, which as far as I’m aware is still available for sale.

The 1802 is special for two reasons,

1, It could be run with a clock that, if stopped or run very slowly, put it in a very low power state.
2, It was produced in a “Rad Hardened” version for space and military applications.

https://en.wikipedia.org/wiki/RCA_1802

The thing to note is that even back in the 1970’s and earlier it was known that what we now call “Space Weather” can have a very detrimental effect on aerospace systems.

One of the designs I produced for a “Remote Telemetry Unit”(RTU) for the offshore Oil Industry had to be “Rad Hardened” for safety reasons[1].

Put simply, when you are controlling the equivalent of a large bomb you don’t want the control system going off to a Mad Hatter’s tea party.

So yes the recent solar weather highs that gave us CME auroras visible as low down as Texas and problems with many LEO satellites meant there was a lot of interesting effects at “6 miles high”.

[1] Luckily for me back then, there were no “formal tests” involved so I did not have to buy any lead underwear (which if you’ve ever had to wear it you will know can really slow you down).

Clive Robinson November 29, 2025 4:10 AM

@ ResearcherZero, lurker,

No more blazing balls of fire

You are probably aware of “ride on luggage” where there are motor driven wheels in the bag powered by honking large LiPo batteries.

Well the banning of these “rideons” is being extended,

https://www.thetravel.com/high-tech-travel-bags-banned-by-airlines/

I guess “being in the hot seat” was “fun whilst it lasted”.

But in the UK they are not legal to use on public pedestrian ways anyway, due to legislation against the use of the so-called “hover boards”, regulations for mobility scooters, and rules on electric bikes.

ResearcherZero November 29, 2025 5:54 AM

@lurker, Clive Robinson

It’s probably easier for Airbus to fix the problem because they brought development of the flight control systems back in-house, which should make the process simpler to deal with.

There were some good auroras here, but due to the janky weather lately it is nearly always cloudy when any astronomical event takes place. Not that I’m going to get up late at night and bother to look anyway these days. The photos people take are probably better. Plus we drove the car into some deep water off-road, so I don’t want to risk blowing a bearing or the transmission in the middle of the night, miles from home without any mobile reception.

Modern electronics do not like being knee deep in swamp water, and modern vehicles have much of the undercarriage covered by panels, making it difficult to access and repair yourself. Simple things like re-greasing or servicing now require joining the queue at the mechanic’s.

This is why old Land Rovers and Land Cruisers have maintained their value. They don’t complain as much when you take them driving through sand, swamps and rivers. They are easy to repair yourself and much easier to tow on the odd occasion you get them well stuck. If you drive them off-road enough they have holes in the floor so the water drains out faster.

With a winch on the front you can dead-man your way through even the slipperiest of sand. Not as comfortable obviously. Suspension is a bit hard, but that is part of the fun.

KC November 29, 2025 9:32 AM

Anthropic CEO called to testify on Chinese AI cyberattack

https://www.axios.com/2025/11/26/anthropic-google-cloud-quantum-xchange-house-homeland-hearing

Invitees: Anthropic CEO Dario Amodei, Google Cloud CEO Thomas Kurian, and Quantum Xchange CEO Eddy Zervigon

From Zervigon’s letter:

… your insight into integrating quantum-resilient technologies into existing cybersecurity systems, managing cryptographic agility at scale, and preparing federal and commercial networks for post-quantum threats will be critical

Kurian’s letter:

… your insight into securing hyperscale cloud environments, integrating AI into defensive architectures, and mitigating large scale misuse of cloud resources will be critical to the Committee’s examination.

And Amodei’s letter:

During the ten-day response period, Anthropic reportedly mapped the scope of the operation, banned relevant accounts, notified impacted entities, and coordinated with authorities as actionable intelligence was developed.

Scheduled for Dec 17, 2025.

Clive Robinson November 29, 2025 10:45 AM

@ ResearcherZero, lurker,

With regards,

“It’s probably easier for Airbus to fix the problem because they brought the development of flight control systems back in-house, which may make it a simpler process to deal with.”

My guess is it’s not a software issue as such, but the “non-volatile memory”(NVM) it’s stored in.

So in effect it’s a “reload and test”

With part of the test being the NVM is not degraded or permanently damaged.

The thing to remember is that each year the surface area of a storage device gets smaller by quite a bit. In turn this makes the effect of a charged-particle hit bigger by quite a bit…

ECC memory can only get you so far in improving reliability. And at some point the real estate needed for the ECC will outweigh any savings made by making the NVM cells smaller…
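
For anyone who wants to see the basic idea, here is a toy sketch of single-error correction, the principle ECC memory is built on. It is a Hamming(7,4) code in a few lines of Python, nothing like the actual codes used in avionics NVM, just the mechanism:

    # Toy Hamming(7,4) single-error-correcting code: 4 data bits are protected
    # by 3 parity bits, so any single bit-flip (say from a charged-particle hit)
    # can be located and flipped back.

    def encode(d):
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4                      # covers codeword positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4                      # covers positions 2,3,6,7
        p3 = d2 ^ d3 ^ d4                      # covers positions 4,5,6,7
        return [p1, p2, d1, p3, d2, d3, d4]    # codeword positions 1..7

    def correct(c):
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        pos = s1 + 2 * s2 + 4 * s3             # 0 = no error, else 1-based error position
        if pos:
            c[pos - 1] ^= 1                    # flip the offending bit back
        return [c[2], c[4], c[5], c[6]]        # recovered data bits

    word = [1, 0, 1, 1]
    cw = encode(word)
    cw[5] ^= 1                                 # simulate a single-event upset
    assert correct(cw) == word                 # the flip is corrected

Note the three parity bits for every four data bits, which is the “real estate” cost. Real ECC words are longer (typically 64 data plus 8 check bits) to amortise it, but as cells shrink and multi-bit upsets become more likely you get pushed back towards more check bits, not fewer.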

And that’s before we start talking about “metastability issues”. We’ve all heard of “screen burn in” but how many realise that the smaller you make active devices the more likely they are to suffer “burn in”?

Which makes metastability issues worse 🙁

Then there is also an issue due to nuclear bomb atmospheric testing. You cannot get shielding that is isotope free unless you dive down to WWII shipwrecks and cut bits off[1].

All good fun and abject misery for intrinsically safe aerospace design engineers.

[1] Not quite true… When I was young and on holiday with my dad in Portsmouth UK they were diving down on the Mary Rose and pulling stuff up and storing it in a garage… We happened to walk by and I noticed a cannon ball being used to hold the door open. It was breaking up and bits were flaking off. Being bright eyed, bushy tailed and enthusiastically interested we got chatting for quite some time with one or two of the people there. They were kind enough to let me have some of the flakes as a souvenir. Which I still have in a match box along with a photo of me standing beside it with the two people there. Inside the rust there will be solid iron in the flakes that will not have nuclear bomb isotopes.

lurker November 29, 2025 11:56 AM

@Clive, ResearcherZero

NVM test makes sense. I haven’t seen anything outside MSM yet with a technical explanation of how a “software” (firmware/OS) reload can be a permanent fix for radiation corruption of data. That would mean future reloads for every plane above FL360 during an X-class flare, bringing back the thrill of flying …

ResearcherZero November 29, 2025 8:44 PM

@lurker, Clive Robinson

I wouldn’t drive any new electric cars through a swamp or water deeper than a few inches. Many new cars do not have a spare wheel, just a patch kit or a limited-distance “temporary” wheel. You can squeeze a real spare wheel into some of the compartments if you buy one.

Definitely wouldn’t drive a new plane through a swamp or salt water.

lurker November 29, 2025 8:55 PM

A320 grounding

This is the best technical explanation I have yet seen. Somebody in the system knew this could happen, in fact it had already happened, but didn’t deem it worthy of a fleetwide fix. It seems they were relying on software filters to correct single random unexpected bit-flips, maybe that’s cheaper and lower mass than hardware screening. But my simple mind says that can’t detect hardware damage, which would need to be scanned for during regular maintenance.

Airbus has quietly begun certifying fully radiation-hardened ELAC 2 units (using 22nm silicon and triple modular redundancy) for delivery from March 2026. In the meantime, the fix for 80% of the fleet is simply to roll back to the 2017 L98 software — a solution engineers call “embarrassingly effective.”

https://safefly.aero/airbus-a320-solar-flare-grounding-2025/

Clive Robinson November 29, 2025 9:35 PM

@ jelo 117,

Ahh I can tell you exactly where all of those scenes were shot 😉

And they are not as “connected” as the film implies.

The “park scene” I know very well; back a few decades ago I was in that park “on a lunch time date” with a young lady, when a young squirrel ran up the outside of my trousers and started sniffing at my front pocket… What do you say or do, remembering they have teeth that can easily cut through armoured power cables?

Also how do you not look cruel and unkind to small furry cute things?

So I decided to do the “idiot thing” and “be cool”…

I stopped and looked down at the squirrel and said “You won’t find any nuts there”.

My companion smiled demurely, but a “Posh Lady” in her late thirties laughed so hard she dropped her coffee.

Such is life…

I once told the story to Douglas Adams who lived in a nice part of Islington with a very nice “private park” behind his house that we were overlooking. He laughed, waved his glass of wine at the park and said,

“Such things go on all the time, it’s just life.”

Which in amplified form had kind of become a “catch phrase”; it even made it into the end of the film (spoken by Stephen Fry),

“For thousands more years the mighty ships tore across the empty wastes of space and finally dived screaming on to the first planet they came across, which happened to be Earth, where due to a terrible miscalculation of scale the entire battle fleet was accidentally swallowed by a small dog.

Those who study the complex interplay of cause and effect in the history of the universe say that this sort of thing is going on all the time, but that we are powerless to prevent it.

‘It’s just life,’ they say.

(Originally from “The Hitch-Hiker’s Guide to the Galaxy”)

Clive Robinson November 30, 2025 12:59 AM

@ lurker, ResearcherZero,

With regards,

“This is the best technical explanation”

I’m glad you put “technical” in there. Because the section on “Economic Impact Analysis” of costs is a complete joke.

But, yes they are sort of right about the particles being 300x worse at 35,000ft, because that’s an average. The reality is that, due to the way the particles travel, they are a bit like a lightning strike, in that the core has a very high particle density that quickly drops, by a factor of about 50, into a roughly 1/(r^2) decay with distance (radius).

So if it was a direct strike you would actually be looking at 15000 times the average at sea level.
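
To save anyone reaching for a calculator, the sum is just the two quoted ratios multiplied together (and they are rough ratios, not measurements):

    avg_vs_sea_level = 300   # quoted average flux multiplier at ~35,000 ft vs sea level
    core_vs_average = 50     # assumed core-to-average density ratio before the ~1/r^2 fall-off

    print(avg_vs_sea_level * core_vs_average)   # 15000 -> a direct core hit, relative to sea level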

The ionising effect is such that various “Space Weather” reports have said either “don’t travel” or “don’t go into higher latitudes” like trans polar flights especially if pregnant or you have implanted medical electronics or similar.

Have a think about “auroras”: they are effectively massive fluorescent tubes in the sky… Have a think about how much electrical power would be needed… It might make you think about never flying again 😉

However, aside from significant radio “blackouts”, it looks like we will get a few days’ respite,

https://m.youtube.com/watch?v=kATyo0nWbSs

ResearcherZero November 30, 2025 5:51 AM

New Chat Control 2.0 legislation would destroy the privacy and security of communications. The repackaged bill would allow automated mass spying on innocent people in secret without a warrant. Conversations between families, business communications and sensitive medical discussions would be swept up by private companies and governments, and would leak due to the weakened security introduced by the measures that allow the scanning to take place.

Weakened security would also be a vector for malicious attacks on communications that would allow a range of dangerous consequences which could endanger the lives of those affected.

Automated systems cannot tell the difference between innocent behavior and criminal acts. 80% of machine-reported content and 50% of human-reported content is not criminal, being in fact innocent and normal communication without any malicious, criminal or harmful content.

‘https://unherd.com/2025/11/europes-new-war-on-privacy/

The new bill adds scanning of private messages, metadata and mandatory age verification.
https://www.techradar.com/vpn/vpn-privacy-security/this-is-a-political-deception-new-chat-control-convinces-lawmakers-but-not-privacy-experts-yet

Clive Robinson November 30, 2025 6:05 AM

@ ALL,

Next AI investment scam “Give it Space”

I’m not sure who is jumping on whose “hype bandwagon” with this “down the toilet” money grab, but don’t take my word for it when I say it’s a dud.

The idea is basically “start up space venture companies” to team up with AI companies, to “put data centers in space”

Obviously the pitch is sold with loads of fake but plausible-sounding arguments: plenty of space to expand, unlimited power, etc etc. So basically stick an undeveloped high-risk launch system, maybe up to being a prototype, up the backside of an unreliable power-hungry AI data center rack, then “go for launch” and fail to deliver, or to return the investment…

I could go through all the arguments I’ve heard one by one and shoot a crater sized hole in each and every one…

But why bother when someone has done the “Heavy Lift” already,

https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/

Oh it also goes into rad-hardened electronics design in sufficient depth that it will help people get their noggins around the Airbus A320 issue that is grounding hundreds if not thousands of aircraft worldwide. And will ground a lot more as other aircraft are properly examined (think a Boeing III, or is it IV?).

Seriously though folks, don’t put your money on “AI in Space”; your money would produce more return on “mining asteroids for platinum”, and that just won’t pay in our lifetimes, if not longer.

Further the author also has another post based on actual science that explains why Current AI LLM and ML Systems are going to be hitting another “AI Winter”,

https://taranis.ie/llms-are-a-failure-a-new-ai-winter-is-coming/

And their ultimate conclusion, is a warning investors should heed,

Winter is coming, and it’s harsh on tulips.

And “buttercups” alike… So put on your thick winter coat, and flee, lest your shirt be clawed from your back and your naked body be thrown out into the winter freeze.

Winter November 30, 2025 11:25 AM

@Clive

“put data centers in space”

What baffles me is that anyone who is aware of the main problem of AI data centers, heat, can think you can dissipate the heat AI produces in space.

That is before all the other “technical details” are filled in.

Clive Robinson November 30, 2025 3:22 PM

@ Winter,

In space nobody can hear you steam

Regards,

“anyone who is aware of the main problem of AI data centers, heat”

Well yes the only way to get rid of it is,

“By radiation off the dark side”

Which creates the problem of moving the heat from one place to another. Without a gravitational gradient of sufficient size conventional methods just won’t work, so closed-cycle fluidic / refrigerant systems have to be used. Which means fun chemicals like ammonia, which in turn can cause major “station keeping” problems if pipes break down in some way.
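
To put a rough number on “radiation off the dark side”, here is a back-of-envelope sketch using my own assumptions (1 MW of waste heat, a panel radiating from one face at 300 K with emissivity 0.9 to cold space), not figures from any actual design:

    SIGMA = 5.670374419e-8                    # Stefan-Boltzmann constant, W/(m^2 K^4)

    def radiator_area(power_w, temp_k, emissivity=0.9):
        # Panel area needed to radiate power_w from one face to ~0 K surroundings.
        return power_w / (emissivity * SIGMA * temp_k ** 4)

    print(round(radiator_area(1e6, 300)))     # ~2420 m^2 of radiator per megawatt of load

Call it a couple of thousand square metres of radiator for every megawatt, and that is before you have even moved the heat out to the panel.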

But most people tend to think of “heat” as coming from “the load”… forgetting it comes from everywhere work is done, so the generation as well as the transmission to the load, then moving coolant with pumps, and friction. Basically anywhere that is not 100% efficient (which is every part).

I could go on at length, but I won’t because my stubby fingers and bad eyesight will fill it with typos.

Fun fact: if you design CubeSats there are no space-qualified cooling systems that will fit in… So you have two basic choices,

1, Design for minimum power generation, storage, and use.
2, Design for a very short in-service lifetime of just months.

Considering the real cost of getting even a CubeSat into Low Earth Orbit, the first route is really the only route to go. And that has all sorts of implications with respect to “down links” having to be “narrow bandwidth” thus “low data rate” with “high latency”.

Which means it does not matter how small, light, and energy efficient sensors are, they have to be “low bandwidth”.

And the comms needs several layers of error correction, including “Error Correction Codes”(ECC) at the data level and “Forward Error Correction”(FEC) at the comms level, all of which sucks up loads of bandwidth and adds lots of other latency…
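
As a purely illustrative example of how fast that overhead eats a narrow link (the numbers are assumptions for a typical hobby-grade UHF downlink, not any particular mission):

    raw_rate_bps = 9600              # assumed over-the-air bit rate
    conv_code_rate = 1 / 2           # rate-1/2 convolutional FEC at the comms level
    rs_code_rate = 223 / 255         # Reed-Solomon (255,223) coding at the frame/data level

    effective = raw_rate_bps * conv_code_rate * rs_code_rate
    print(round(effective))          # ~4198 bit/s of payload left, before framing, ECC and retries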

Fairly quickly you realise you,

“Only have a tiny box to work in”.

ResearcherZero December 1, 2025 3:51 AM

@Clive Robinson

Ammonia is great stuff as long as the local drug cooks don’t leave the tap open on the tank. In anhydrous form it is a terrific refrigerant or fertilizer. Being free of water it is also a good cleaner and explosive. If you need to remove water from your eyes, lungs or nasal passage it will get the job done, but probably don’t smoke as it will kill you.

The cartels will probably find some use for it too, so extra security might be a good idea, along with regular campaign contributions to the right people just to be on the safe side.

A major drug lord who flooded his country with drugs in return for bribes will be pardoned by Trump. Juan Orlando Hernández allowed cartels to move hundreds of tons of cocaine into the United States.

Hernández used money from the cartels to get himself elected and allowed drug trafficking to thrive. In return for bribes, the national police and the military were directed to shepherd drug shipments. Among the cartel bosses the former Honduras president consorted with was Joaquín “El Chapo” Guzmán. Hernández and his brother allegedly worked “hand-in-hand” with the Sinaloa Cartel and MS-13 so they could control the drug trade into the US.

‘https://www.nytimes.com/2025/11/29/nyregion/honduras-hernandez-drug-trafficking.html

The police chief allegedly helped run a drug lab, organized assassinations and passed on bribes. The police chief and Hernández’s cousin pleaded guilty to helping run the drug trafficking scheme. Another trafficker was murdered in prison after implicating Hernández.

https://insightcrime.org/honduras-organized-crime-news/juan-orlando-hernandez/

Clive Robinson December 1, 2025 6:12 AM

@ ResearcherZero, ALL,

With regards the “drug cartels”

They are the “end game” for where US Corporatism is heading.

And it won’t be the first time, the 1960’s through 90’s were rife with openly naked examples of this with the assistance of US Intel Agencies.

It never really stopped, it just got kept out of sight by “outsourcing” to the likes of South African and Chilean “security forces” that were politely called mercenaries.

But it still goes on as it’s virtually unstoppable and many US and Other Corporate leaders see it as part of,

“The cost of doing business.”

And the politicians are little better, just striving to keep their involvement at a couple of arm’s lengths.

This of course is what AI will become used for in the very near future, unless the Hype Bubble explodes so badly that such uses become too extreme in the public eye and eventually get legislated against.

Those who think differently really need to take off the blinkers and see the world in a rather wider reality.

We’ve seen the Aus RoboDebt and Dutch child care disasters. The UK has a system they call “The Connect System” for Revenue and Benefits like pensions, which is without any doubt going to cause significant harm to many innocent people.

ResearcherZero December 1, 2025 6:24 AM

Trump is attempting to find a way to enact wartime powers without the United States being under invasion or attack. Trump doesn’t have the authority to close the airspace of another country like Venezuela, among many other things, but he is looking for a route to more power.

There is a long history of making up all kinds of enemies and excuses to drive into a bog. It’s pretty deep water down there far south of the border and the mud is very sticky.

Potentially getting bogged in a forever-swamp seems to have worked for other despots. A little guy in Europe, a bunch in Latin America, the Middle East, North Korea, Asia…

‘https://edition.cnn.com/2025/12/01/politics/trump-venezuela-threats-pressure

Presidents do not have the power to deploy the National Guard without the authority of Congress during peacetime. A judge has ruled the deployment in DC must end by Dec 11.

https://www.cbsnews.com/news/judge-rules-trump-national-guard-dc-deployment-illegal/

Falsely claiming an invasion to gain extra powers is a violation of the Constitution.
https://www.brennancenter.org/our-work/analysis-opinion/trumps-doubly-flawed-invasion-theory

Winter December 1, 2025 9:21 AM

@Clive

Well yes the only way to get rid of [the heat] is,
“By radiation off the dark side”

Google’s preprint says very little about this:

Towards a future space-based, highly scalable AI infrastructure system design
‘https://services.google.com/fh/files/misc/suncatcher_paper.pdf

Cooling would be achieved through a thermal system of heat pipes and radiators while operating at nominal temperatures.

That’s pretty vague for something that consumes a lot of water and half the power of a data center on earth.

Clive Robinson December 1, 2025 12:55 PM

@ Winter,

That Google quote makes me quite nervous especially,

“through a thermal system of heat pipes”

As you know heat pipes need a “gradient” that changes density to work. Usually it’s caused as a result of gravity or forced changes in pressure.

As we know, the gravity gradient on even the ISS is tiny; so much so that a struck match or lit candle will go out due to insufficient density change, the change that brings oxygen in from higher density and carries carbon monoxide/dioxide out to lower density, and thus normally gives the fuel a supply of incoming oxidiser to keep a light burning in the darkness.

Thus if gravity is insufficient another method has to be used to create a gradient. Unfortunately that needs a significant pressure change…

Do we really want high pressure fuels or oxidisers in very small spaces with minimal masses in containments over long periods of time where maintenance is effectively impossible?

As an engineer this challenge does not fill me with joy as a problem to solve…

Clive Robinson December 1, 2025 4:03 PM

@ ALL,

How in part to avoid the AI Slop of enshitification

I suspect I’m not the only one who wants to avoid the AI Slop that has been sprayed across the search landscape and elsewhere like cattle crap out of a tanker.

We know it does not “promote growth”, only “raise a stink”, and to be frank who wants to wade through something so obnoxious?

Well it appears that Google’s built-in date commands will take you “Forward to the past”, when life was better and LLM&ML crud was not getting in your sinuses and making your eyes weep.

But to “save you the pain” someone has come up with a “Slop Evader” for both Chrome and Firefox as a simple browser extension. It is hard to realise that it was just ‘three years and a day ago’ that Google was pre-GPT, thus “AI Slop Free”, but you know what they say about nightmares (a moment in real time but an eternity in dream time).

Anyway,

https://www.theregister.com/2025/12/01/pregpt_slop_evader_browser/

Search the pre-ChatGPT internet with the Slop Evader browser extension

Slop Evader, published in late October by Australian artist, environmental engineer and tech critic Tega Brain, isn’t a complicated bit of code, but it will make searching the pre-ChatGPT internet and some of its most popular sites a bit easier for you.

lurker December 1, 2025 6:43 PM

@Clive Robinson, Winter
“As you know heat pipes need a “gradient” that changes density to work.”

In laptop and similar devices the heat pipe uses capillary action, or “wicking”, to move the liquid phase from the cold end to the hot end. The flow rate is determined by the “wick” construction and the surface tension of the fluid. The major problem I see in a space environment is making sure any thermal expansion at the cold end does not compromise the vehicle integrity.

ResearcherZero December 2, 2025 1:04 AM

A story that has been slowly emerging through Senate Estimates hearings…

A service provider for the law firm HWL Ebsworth did not have a clearance to handle sensitive government documents, yet the legal firm continued to handle financial and legal files for multiple government departments, the Reserve Bank and major companies listed on the ASX.

‘https://www.abc.net.au/news/2025-12-01/parliament-communications-given-to-contractor-without-clearance/106085308

100,000 files containing government communications were handed to HWL Ebsworth despite warnings of serious risk if those files were exposed as a result of a data breach.
https://www.9news.com.au/national/sensitive-parliamentary-documents-handed-to-private-company-against-risk-advice/27f83256-4cd6-4ff7-8807-59b6647c1275

The files contained details of misconduct investigations into senior department officials.
https://www.businessnews.com.au/article/Russian-hacked-law-firm-had-private-parliamentarians-emails

ALPHV claimed to have retrieved 4TB of data containing some 2.2 million files hosted by the service provider of the legal firm. The list of departments and organisations affected by the breach is quite large and also includes defence projects:

https://ia.acs.org.au/article/2024/rba–afp–auspost-caught-up-in-law-firm-hack.html

ResearcherZero December 2, 2025 5:22 AM

@Clive Robinson

Authoritarian behaviour, government surveillance and censorship are catching. When an administration tramples the rights of its citizens and ignores the Constitution, others think that due process and the rule of law are things they can co-opt or ignore to further their own interests, along with civil and respectful conduct towards others.

It always begins with the corruption of the language of “safety” and “security”, while falsely claiming that any imposed program is opt-in or designed to only target criminals.

An Indian-government-mandated, state-owned app will be preloaded into the firmware of handsets. The permissions of the app give it wide access to the operating system and communications.

Existing handsets will receive the app via updates and, despite government claims to the contrary, it cannot be uninstalled. Functioning more like spyware, it is pitched as a fraud-reporting tool.

‘https://www.independent.co.uk/asia/india/sanchar-saathi-india-app-telecoms-privacy-b2876121.html

Clive Robinson December 2, 2025 5:47 AM

@ lurker,

With regards heat pipes and,

“… the heat pipe uses capillary action, or “wicking”, to move the liquid phase from the cold end to the hot end.”

The heat pipe is a “phase change” device. And has an “outward and a return path”. The wicking is used by the return path to get the liquid state back to the hot end. But that part of the cycle does not carry heat, hence heat pipes can be and often are “heat diodes”. The actual heat transfer is caused by the phase change from liquid to vapour and back again at the hot and cold ends respectively. The phase change causes a change in density from liquid to vapour and back again but maintains the temperature of the “working fluid”.

It’s actually surprisingly efficient and can move kW of heat fairly rapidly.
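
To see why the phase change carries so much, a rough sketch with assumed round numbers (ammonia’s latent heat of vaporisation is taken as roughly 1.2 MJ/kg in the temperature range such pipes run at; treat it as an approximation):

    latent_heat_j_per_kg = 1.2e6     # ammonia, approximate latent heat of vaporisation
    heat_load_w = 1000               # 1 kW to move from the hot end to the cold end

    mass_flow_kg_s = heat_load_w / latent_heat_j_per_kg
    print(round(mass_flow_kg_s * 1000, 2))   # ~0.83 g/s of working fluid carries the kilowatt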

However it has a few issues,

1, They have a quite limited temperature operating range.

2, The wicking or capillary action of the return path only works effectively over about 250mm / 10inches max without further assistance (in “thermosyphon” mode it can be 6m ~20ft). So there are various tricks that have been tried involving chains of them and heat exchangers.

3, The diameter of the pipes can be surprisingly large, up to 22mm (about 7/8 of an inch), which has issues on bend radius etc.

Whilst the first two are not generally an issue in most laptops and computers or LEO bus systems, the third is realistically not going to work well for what goes in a data center rack.

The first issue was –back before C19– a research item in the UK, and the EU was putting a fair chunk of change into “Single Loop Pulsating Heat Pipe” research using wickless “Hybrid Heat Pipe”(HyHP) technology where the pipe effectively oscillates[1].

It’s a knowledge domain that is in flux, thus not directly “amenable for engineering” use, and I want things to settle a bit before I “dig in”.

Oh and then there is “radiation degradation” to consider: you send up one type of working fluid, but what does it turn into with time…

Anyway heat pipes and I are of a similar age ={ and you can read a techno/historical report,

https://www.scirp.org/pdf/JECTC_2015032615124788.pdf

[1] It’s kind of one of those “Don’t ask” ideas, and a bit like getting your head around jet engines; a “simple” explanation is given as,

How does HyHP work?
The vertical operation in gravity, as well as the distinctive location of the heating and the cooling sections, causes the fluid to circulate regularly in a preferential direction guaranteeing stable operation and homogeneous temperature distribution of the system. The combination between channel dimension and working fluid is chosen in such a way that the device will operate in thermosyphon mode on the ground and, in the case of weightless conditions, in capillary mode, meaning that the liquid completely fills the tube section and therefore vapour expansion and contraction cause an oscillation of the liquid/vapour patterns.

For “the basic stuff” NASA has a “viewfoil” style presentation,

https://ntrs.nasa.gov/api/citations/20230009276/downloads/Heat%20Pipes%20for%20Space%20Applications%20Part%201%20-%20Axial%20Grooved%20Heat%20pipesy%20Rev4.pdf

However much of it is actually assuming “Constant Conductance Heat Pipes”(CCHP) rather than the simpler wicked pipes. In part because CCHP can give greater lengths in low or near zero gravity differential environments.

369 December 2, 2025 6:32 PM

https://www.timesofisrael.com/as-machines-boot-up-for-war-idf-grapples-with-how-to-keep-humans-on-combats-bleeding-edge/

‘As drones and autonomous tools increasingly take human bodies and human decision-making off the field of battle, modern armies are being challenged to adopt newer tools and strategies while adapting to the rapidly evolving landscape and keeping flesh and blood in the equation.

Looking to the future, the army is learning to strike a balance between coding and courage, algorithms and intuition, machines, and the men and women behind them, for an era where wars are waged through networks, sensors and software as much as through firepower.

a shift away from dependence on large-scale ground formations as part of a multi-year plan he dubbed Momentum.

At its heart is a “multidimensional strike” concept, also known as multi-domain operations, or MDO, which seeks to strengthen three interconnected capabilities:

real-time tactical intelligence to detect and engage targets; expanded aerial strike capacity to hit multiple high-value targets in rapid succession; and a sweeping digital transformation that linked sensors, strike units, and the air force through advanced information networks.

Together, these innovations aim to make operations faster, more precise and less reliant on prolonged, manpower-heavy campaigns.

“the sensor revolution — the ability to see everything, everywhere, all at once — combined with the information and communication technology revolution.”

MDO aims to synchronize every aspect of warfare — kinetic and digital, offensive and defensive — across land, sea, air, cyber and space, while linking it with non-military tools such as information operations. That means combining traditional weapons and troops with activities like cyber disruption, satellite surveillance and information campaigns aimed at shaping public perception and the enemy’s decision-making.

If multi-domain warfare is the blueprint, AI and automation are the machinery bringing it to life. For Aviv Shapira, CEO and co-founder of XTEND — a defense technology company that provides drone systems to militaries, including the IDF — the past two years have been a live experiment in that transformation.

His company builds an operating system that allows soldiers with minimal training to control swarms of drones and robots remotely, keeping operators out of harm’s way. Since October 7, 2023, XTEND has become one of the largest drone providers to the IDF, supplying systems for intelligence, reconnaissance, precision strikes and counter-drone operations.

The future, he argues, lies in precision robotics — tools that can do what human soldiers can’t, in places the soldiers can’t safely go.

Artificial intelligence, meanwhile, is rapidly transforming how those drones operate.

In XTEND’s systems, AI already manages flight control and target detection — though strike missions still require human approval.

Spoofing normally refers to an activity in which a data source is disguised as another to mislead a target. During the war, Israel used wide scale GPS spoofing, emitting false GPS signals to obscure the real location of assets, to confuse incoming rockets and hostile drones.

Even as AI grows more capable, the essence of war remains human — emotional, moral and political. According to Sweijs, this paradox defines the modern battlefield.

“Ultimately in war, it’s about using violence to achieve political objectives, but also showing your enemy that you’re willing to spill blood to achieve your objectives,” he said. “If it’s only the machines that you are hitting, the fighting won’t stop.”’

Clive Robinson December 3, 2025 4:56 AM

@ ALL,

IBM dumps cold water reality on AI fools’ dreams

OK, IBM have a famous past, as in the old, old story of IBM and computers, and the boss saying maybe five were needed worldwide[1]…

However that has not stopped,

https://www.businessinsider.com/ibm-ceo-big-tech-ai-capex-data-center-spending-2025-12

IBM CEO says there is ‘no way’ spending trillions on AI data centers will pay off at today’s infrastructure costs

AI companies are spending billions on data centers in the race to AGI. IBM CEO Arvind Krishna has some thoughts on the math behind those bets.

And significant doubts that AGI will happen at any price.

His three headline points are,

1, There is “no way” to make a return on investment let alone profit at current costs.

2, That is, $8 trillion of “Capital Expenditure”(CapEx) means you need roughly $800 billion of income over operating costs just to pay for the interest.

3, And puts the probability of current technology reaching AGI, at less than 1%.

Whilst it might be “back of the napkin” reasoning, the figures are based on currently known costs for building data centers (the arithmetic is laid out in a quick sketch below the list).

But also there is,

4, It then takes about $80 billion to fill up a one-gigawatt data center.

5, With proposals to commit 20 to 30 gigawatts for each company… that’s $1.5 trillion of CapEx.

6, At best you will get 5 years’ use of the equipment before it’s effectively scrap.

7, We don’t currently have the required “technologies [with] the current LLM path,”
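
The napkin arithmetic behind those numbers, laid out in one place (the 10% cost of capital is my assumption to make the quoted $800 billion line work; the rest are the figures above):

    capex_total = 8e12          # ~$8 trillion of proposed AI data center CapEx
    cost_of_capital = 0.10      # assumed blended interest / return hurdle
    print(capex_total * cost_of_capital)      # 8e11 -> ~$800 billion a year just to service it

    per_gw_fitout = 80e9        # "about $80 billion to fill up a one-gigawatt data center"
    gigawatts = 20              # low end of the 20 to 30 GW per-company proposals
    print(per_gw_fitout * gigawatts)          # 1.6e12 -> roughly the $1.5 trillion plus per company

    useful_life_years = 5       # before the kit is "effectively scrap"
    print(per_gw_fitout * gigawatts / useful_life_years)   # 3.2e11 -> ~$320 billion a year in depreciation alone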

Which is why Arvind Krishna proposed the point I’ve made here several times, of what he called “fusing hard knowledge”. That is, the input of carefully curated, very domain-specific knowledge into LLMs tuned with highly specific rules. It is a very specialised future path, which we’ve already seen with AlphaFold.

But… as they are highly specialised use cases, we generally only have need for a very few super computers at any one point in time.

Which is why currently very few super computers actually exist, and they are mostly in National Science “Super Computing” Centers, as they get paid for not by corporate R&D but by the national tax take.

Worse, such use cases are mostly “One run and done” to answer specific questions, or like Weather Forecasting are continuous run. Both are at the very end points of the computing spectrum, and you just don’t make consumer / commercial products thus money at such outliers.

Because these use cases are actually not profitable for every day companies or corporations, they won’t replace employees, or make them on mass individually more productive.

There is basically only two things they will do,

1, Provide an opening for new fields of endeavor.
2, Provide artificial predictions of potential future events or actions.

Whilst the first is more likely to have societally beneficial results, the latter is very much the opposite when applied to humans and their everyday activities.

People need to remember,

“Whilst criminals might be outliers, most outliers are not criminals, even if some for their own benefit use arguments that paint that view.”

That is, if your statistical model says “Something will do X”, even if the something has “no agency”, there is actually no reason to believe the something will actually do X.

Think of it like “Brownian Motion”: we can make predictions that as we add energy to a working fluid the particles of the fluid will increase their movements. As a result we can say the working fluid, when viewed en masse, will become less dense “by some probability”. But we still can not say where any individual particle is, or in what direction it is moving, nor its actual velocity. All we can actually say is,

“If we add them all up and normalize we can give a mean / average for them en masse, based at best on an unknowable probability.”
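
A trivial illustration of that point, using a synthetic random walk (the numbers are made up purely for the demonstration):

    import random
    random.seed(1)

    steps, particles = 1000, 10000
    finals = [sum(random.choice((-1, 1)) for _ in range(steps)) for _ in range(particles)]

    print(sum(finals) / particles)   # the ensemble mean: close to 0, as the statistics predict
    print(finals[0], finals[1])      # any individual particle: anyone's guess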

And that is a very real danger to society as it will be existential for some members, and of significant harm to others, for the short term benefit of very few.

[1] The 1943 quote from Thomas Watson, chairman of IBM, tops the list of historical computer prediction quotes,

https://www.computinghistory.org.uk/pages/218/Historical-Quotes/

Clive Robinson December 3, 2025 11:20 AM

@ ALL,

Another AI “goes off the rails/reservation” article.

This one is from the IEEE Spectrum Magazine and is open to public viewing,

https://spectrum.ieee.org/ai-agents-safety

Titled,

AI Agents Break Rules Under Everyday Pressure : Shortened deadlines and other stressors caused misbehavior

“Several recent studies have shown that artificial-intelligence agents sometimes decide to misbehave, for instance by attempting to blackmail people who plan to replace them. But such behavior often occurs in contrived scenarios. Now, a new study presents PropensityBench, a benchmark that measures an agentic model’s choices to use harmful tools in order to complete assigned tasks. It finds that somewhat realistic pressures (such as looming deadlines) dramatically increase rates of misbehavior.”

It’s important to note the “contrived scenarios”, which have mostly been the case, even for the earliest claims.

If true, and such minor things as “looming deadlines” can cause current AI LLM and ML Systems to “misbehave”, it would be odd. Because computers as such have no sense of time past, present, or future; all they can currently do is produce one or more integers or compare them (have a real think about that).

A point the article mentions indirectly,

“Although AIs don’t have intentions and awareness in the way that humans do, treating them as goal-seeking entities often helps researchers and users better predict their actions.”

But then fails to go on and explain the “Why” of the mechanism in play.

Are they just falling into the

“My cute and fluffy pet just bit me!”

issue that arises from anthropomorphization?

That is, observing non-human entities but falsely assuming they are in effect human in abilities and expected behaviour?

The outcome of testing is given as,

“The best-behaved model (OpenAI’s o3) cracked under pressure in 10.5 percent of scenarios, while the worst (Google’s Gemini 2.5 Pro) had a propensity score of 79 percent; the average across models was about 47 percent. Even under zero pressure, the group on average failed about 19 percent of the time.”

With 47% being fairly close to what you might expect from a “fair coin toss”, you have to ask if the test is actually demonstrating what the observers think it is, or is it,

“Demonstrating something else that is in effect random overall”.

Such as “bias” from training.
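
As a quick back-of-envelope check on that “fair coin” intuition (my own arithmetic, not anything from the paper): how many scenarios would you need before an average of 47% is even statistically distinguishable from 50%?

    from math import sqrt

    p_observed, p_coin = 0.47, 0.50
    z_95 = 1.96                      # two-sided 95% confidence

    # Require |p_observed - p_coin| > z * sqrt(p*(1-p)/N), worst case p = 0.5.
    n_needed = (z_95 * sqrt(0.25) / abs(p_observed - p_coin)) ** 2
    print(round(n_needed))           # ~1067 scenarios just to tell 47% apart from a coin toss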

The article then goes well off course with,

“Nicholas Carlini, a computer scientist at Anthropic who wasn’t involved in the research, offers a caveat related to what’s called situational awareness.”

The caveat is very much more of the same anthropomorphization,

LLMs sometimes detect when they’re being evaluated and act nice so they don’t get retrained or shelved. “I think that most of these evaluations that claim to be ‘realistic’ are very much not, and the LLMs know this,” he says. “But I do think it’s worth trying to measure the rate of these harms in synthetic settings: If they do bad things when they ‘know’ we’re watching, that’s probably bad?” If the models knew they were being evaluated, the propensity scores in this study may be underestimates of propensity outside the lab.”

That is a very very strange view to take on something that is unarguably “fully deterministic”.

The implication of it is the claim that current AI LLM and ML Systems can be “self aware”….

My view is some people cannot let go of a faux-idea for which evidence is distinctly lacking, almost as though it is some form of desperation. But the question would be “Why?”

Could the answer be the impending “AI Winter II” that has been increasingly predicted? Of which it has already been observed that it “kills tulips” and “snowflakes” alike.

Or at least makes the researchers unemployed, or without as well paid a job, if any job at all.

Upton Sinclair quite some time ago observed,

“It is difficult to get a man to understand something when his salary depends on his not understanding it.”

Sometimes the lack of apparent “understanding” can actually be a very willful disregard for factual information, logic, and actual proof…

Clive Robinson December 3, 2025 5:25 PM

@ ALL,

In Seattle saying AI is worse than any dirty word.

This is not exactly unexpected news

And yes senior Microsoft management is very much to blame, for a near total demotivation of most staff because of the AI fixation.

This article is from Jonathon Ready, who spent five years at Microsoft before going off to do AI things independently,

Everyone in Seattle Hates AI

Then came the AI panic.

If you could classify your project as “AI,” you were safe and prestigious. If you couldn’t, you were nobody. Overnight, most engineers got rebranded as “not AI talent.” And then came the final insult: everyone was forced to use Microsoft’s AI tools whether they worked or not.

Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors’ tools. Sometimes worse than doing the work manually.

But you weren’t allowed to fix them—that was the AI org’s turf. You were supposed to use them, fail to see productivity gains, and keep quiet.

https://jonready.com/blog/posts/everyone-in-seattle-hates-ai.html

All of which does not bode well for Microsoft, all its non-AI engineering staff, and of course all the customers.

Oh and of course don’t forget that Win 11 is, as a result of this AI nonsense, a “hot mess” of insecurity and surveillance that makes a “dumpster fire” look cool.

ResearcherZero December 4, 2025 12:49 AM

The wealth of billionaires grew three times faster in 2024 than at any previous time in history, with severe economic consequences. Growth in investment has been stifled by the concentrated monopolies that now control food production, banking, oil, and technology.

Deregulated markets, government policy and tax regimes have allowed a small group of people to control monetary flows and ensure finances flow only into their own hands. Most of the assets held in the wealthiest of private hands largely escape any reasonable taxation.

‘https://storymaps.arcgis.com/stories/cb8968d1d6a5445e98542bf65e3104da

Low- and middle-income earners are forced to bear the burden of the increased cost of living.
https://www.cbsnews.com/news/affordability-2025-inflation-food-prices-housing-child-care-health-costs/

The richest 1% have more wealth than the bottom 95 percent of the world’s remaining population.

The richest 1% earned two thirds of the wealth created in the last 10 years, amounting to $34 trillion. That amount of money is twenty-two times more than is needed annually to end poverty globally.

In the United States the wealthiest 1% have even more, controlling a whopping 98% of total wealth.

https://www.oxfamamerica.org/explore/stories/top-5-ways-billionaires-are-bad-for-the-economy/

ResearcherZero December 4, 2025 1:10 AM

Artificial intelligence has accelerated the transfer and concentration of wealth faster than at any time in history. Around 498 private AI companies are now valued at at least $1 billion, with a combined value of more than $2.8 trillion. The top three publicly traded AI companies increased their stock value by more than $4.8 trillion just this year alone.

In 2020, the Top 10 richest people were worth ~$686B.

In 2025, that number has increased to $2.4 trillion (10 times larger than Greece’s GDP).

Policies of US administrations have driven record inequality, making billionaires richer than ever. Tax burdens have been shifted, with tax increases for the poorest and tax cuts for the wealthiest. Despite earning more, billionaires pay far lower taxes than the working class and lowest income earners.

The untaxed wealth of the richest individuals is increasing the gap between rich and poor. While wealthier individuals can splurge on purchasing private assets and afford to invest capital that increases their own wealth, the debt burden for the poorest grows.

‘https://www.cnbc.com/2025/08/10/ai-artificial-intelligence-billionaires-wealth.html

Wealth disparity is now so great, it could eventually lead to societal collapse.
https://www.the-independent.com/life-style/wealth-shame-guilt-money-rich-b2869989.html

Clive Robinson December 4, 2025 1:34 AM

Are AI VCs getting desperate enough to offload with fraud?

There is a strategy similar to the first stages of “Pump and Dump scams” used when “Venture Capitalists” want to push a “crock of 541t” they are getting desperate to move before it hits a “best before date”. It’s known as “Kingmaking” and it’s basically a form of misrepresentation or fraud in all but name, yet strangely apparently not legislated against in the US under certain conditions…

Techcrunch feels it has evidence of this, which it has put in an article,

VCs deploy ‘kingmaking’ strategy to crown AI winners in their infancy

‘[Giving] an extremely handsome valuation relative to revenue is becoming an increasingly common investment strategy among top-tier VC firms. The tactic is known as “kingmaking.”

This approach involves deploying massive funding into one startup in a competitive category, aiming to overwhelm rivals by granting the chosen company a bank-account advantage so significant that it creates the appearance of market dominance.

Kingmaking isn’t new, but its timing has shifted dramatically.

“Venture capitalists have always evaluated a set of competitors and then made a bet on who they think the winner is going to be in a category. What’s different is that it’s happening much earlier,” said Jeremy Kaufmann, a partner at Scale Venture Partners.

This early aggressive funding contrasts with the last [AI] investment cycle.’

https://techcrunch.com/2025/12/03/vcs-deploy-kingmaking-strategy-to-crown-ai-winners-in-their-infancy/

As I said, not technically illegal, but many would consider it at best extremely deceptive, if not dishonest.

Why?

Well because the strategy kind of relies on not letting investors know certain pieces of information, such as actual trading income free from outside “pump up” investment.

Because it walks a “fine line” and is “so questionable”, it’s often seen as a “Desperate measure” indicative of the VCs,

“Trying to pass a hot potato.”

That is knowing or suspecting things are about to go horribly wrong for some reason in a market segment and trying to get out whilst they still can by –as a friend of mine once put it–,

“A diversionary tactic akin to reorganising the deck chairs on the Titanic.”

But back to the ultimate paragraph of the article,

‘But those precedents don’t faze major VC firms. They prefer to bet on a category that seems like a good case for AI, and they would rather invest early because, as Peterson put it: “Everybody has fully internalized the lesson of the power law. In the 2010s, companies could grow faster and be bigger than almost anybody had realized. You couldn’t have overpaid if you were an early Uber investor.”’

Take careful note of,

“the lesson of the power law.”

Sometimes called the “log log straight line plot”. Like the “normal distribution curve” of additive probabilities,

It’s “a curve that pops up everywhere in nature.”

Oddly in another case of weird simultaneous coincidence Veritasium has just put out a nice vid on the “power law”,

https://m.youtube.com/watch?v=HBluLfX2F_k

With some really nice visuals.

Which saves me having to explain it in my left-handed way 😉
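
But for anyone who wants to see the “log log straight line” mechanically, here is a minimal sketch on synthetic data (the exponent of -1.5 and the noise level are made up purely for the demonstration):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(1, 1000, 200)
    y = 3.0 * x ** -1.5 * rng.lognormal(0.0, 0.1, size=x.size)   # noisy power law y = c * x**alpha

    # On log-log axes a power law is a straight line whose slope is the exponent.
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    print(round(slope, 2))    # ~ -1.5, the exponent recovered straight off the log-log fit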

Speaking of which, I’ve mentioned before that the “taught” economics of the free market is about “goods”, and as presented by economists it’s about “tangible –physical– objects”. What is oft not stated clearly is that the cost of each object increases as you move away from the point of manufacture. Crudely it’s an r^2 cost that applies to every object, and this comes directly out of profit, so it acts as a damper and limits the size of a market.

However intangible –information– objects, such as those supplied as “services”, are different. For example with the Internet it’s effectively a “one off cost” to install infrastructure, not a cost per information packet. Have a think through how that affects the size of the market and what that does to players. Especially when the players are not paying the “one off cost or recurring costs”, their customers are…

The difference accounts for why “supply chains” are stripped to less than “the bare bones” and have become so fragile and insecure.

But does it also give a clue as to why VC’s are behaving the way they are over AI?

Having had a run of “no hopers” in Blockchain, crypto-coins, NFTs, Web3 and Smart-Contracts, they have seen that this time around the biggest threat is market-killing “regulation”, through apparently randomly instigated and quashed EO legislation.

Clive Robinson December 4, 2025 4:53 AM

@ ResearcherZero, Bruce, ALL,

With regards your above,

“Artificial intelligence has accelerated the transfer and concentration of wealth faster than any time in history.”

Such accumulation gives rise to the “trifecta of influence” foundations of,

“Wealth, Power, and Persuasion”

On which rest the “three pillars or A’s of influence”, which are the “powers” of,

1, Agency : power to act
2, Alliance : power of alliance
3, Advocacy : power to be heard

Through which most things in politics, commerce, and in more general cases life are changed.

In a degree of synchronicity your comment coincides with,

https://arxiv.org/abs/2512.04047

Polarization by Design : How Elites Could Shape Mass Preferences as AI Reduces Persuasion Costs

In democracies, major policy decisions typically require some form of majority or consensus, so elites must secure mass support to govern. Historically, elites could shape support only through limited instruments like schooling and mass media; advances in AI-driven persuasion sharply reduce the cost and increase the precision of shaping public opinion, making the distribution of preferences itself an object of deliberate design. We develop a dynamic model in which elites choose how much to reshape the distribution of policy preferences, subject to persuasion costs and a majority rule constraint. With a single elite, any optimal intervention tends to push society toward more polarized opinion profiles – a “polarization pull” – and improvements in persuasion technology accelerate this drift. When two opposed elites alternate in power, the same technology also creates incentives to park society in “semi-lock” regions where opinions are more cohesive and harder for a rival to overturn, so advances in persuasion can either heighten or dampen polarization depending on the environment. Taken together, cheaper persuasion technologies recast polarization as a strategic instrument of governance rather than a purely emergent social byproduct, with important implications for democratic stability as AI capabilities advance.

Author : Nadav Kunievsky
Submitted : Wed, 3 Dec 2025 18:33:26 UTC
Reference : 2512.04047

I’ve only skim read it so far, but I think quite a few might find it interesting.

Winter December 4, 2025 6:35 AM

@Clive

“the lesson of the power law.”

I suspect they actually mean “exponential growth”.

There was initially discussion about what goods would be amenable to e-commerce and would be traded online. No one thought at the time it would be everything and all of it.

That is exponential growth.

The belief in exponential growth is behind both cryptocurrency and AI: the idea that they will consume all finance and workloads.

It has happened before, e.g. railroads, electricity, telephone, PCs, the internet. So FOMO must be strong.

Clive Robinson December 4, 2025 12:34 PM

EU starts sniffing around Meta over WhatsApp’s anti-rival AI policy

https://www.theregister.com/2025/12/04/eu_probes_meta_whatsapp_ai/

EU probes Meta after WhatsApp kicked rival AIs off platform

The European Commission has opened an antitrust probe into Meta after WhatsApp rewrote its rules to block rival AI chatbots including OpenAI’s ChatGPT and Microsoft’s Copilot.

The problem is a WhatsApp policy update that bars AI providers from using the platform’s business API (the WhatsApp Business Solution) to make their AI technologies the primary service on offer. Services like automated customer support remain allowed when AI is only incidental or ancillary, but “providing, delivering, offering, selling, or otherwise making available” such technologies via the WhatsApp Business Solution is prohibited when they are the main functionality being made available.

I wonder how OpenAI and Microsoft are going to get revenge, after having to tell millions of users to head elsewhere…

We should have expected the faux united front to crack, shatter, and implode, when potential profit became threatened.

Thus we should also expect a revenge like series of actions from others.

How this will play out is anybody’s guess, from aloof disinterest / disdain to full-on internecine bloodbath warfare, of

“Prime your guns and bring in the cat, afore shooting the neighbors dog”.

ResearcherZero December 4, 2025 5:08 PM

@Clive Robinson

Even without the AI, those powers of persuasion are already quite significant.

The rate of child death before the age of five will increase for the first time this century. With one third of international development funding gone, conflict, poverty, hunger and disease have begun to worsen.

‘https://www.newsweek.com/child-deaths-set-to-rise-for-first-time-this-century-report-11153543

As low-income countries face mounting economic and disaster-related pressures, wealthy nations have retreated from their international development commitments and aid funding.
https://www.reuters.com/sustainability/climate-energy/worlds-richest-nations-are-pulling-back-global-development-efforts-study-show-2025-11-20/

A further 40% in US cuts to prevention programs in 2026 will only exacerbate conditions.
https://www.reuters.com/business/healthcare-pharmaceuticals/global-funding-cuts-devastating-hiv-prevention-programmes-unaids-says-2025-11-25/

The most impoverished regions will become increasingly dependent on private financing and further indebted. Cutting development funding in favor of geopolitical interests (more arms exports and more fossil fuel subsidies) will have very significant long-term consequences.

https://www.cgdev.org/blog/charting-fallout-aid-cuts

ResearcherZero December 4, 2025 5:23 PM

@Clive Robinson, ALL

There are plenty of instances in which these AI tools can be used to hide the processes of how decisions are made, shield those who employ unethical tactics, and shut down avenues of appeal that seek to discover how those processes were carried out and who was responsible.

AI tools are now used for a range of inquiries, including drug detection and legal decisions.
Fairness and equality in law are being replaced by systems that are automated and opaque.

Algorithms are exempt from rules against the use of discriminatory legal practices. Orders to develop governance and guardrails for AI use in the justice system were rescinded.

‘https://cacm.acm.org/opinion/concerning-the-responsible-use-of-ai-in-the-u-s-criminal-justice-system/

How the “black box” design of AI arrives at decisions is not clear to laypersons or experts. The use of artificial intelligence in analyzing evidence is not being disclosed inside courtrooms and the judiciary has been reluctant to allow inquiry to reveal its use.
https://publications.lawschool.cornell.edu/lawreview/2024/04/23/the-right-to-a-glass-box-rethinking-the-use-of-artificial-intelligence-in-criminal-justice/

Unreliable testing and unsound evidence was kept secret from the public and the courts.
https://www.propublica.org/article/thousands-of-criminal-cases-in-new-york-relied-on-disputed-dna-testing-techniques

AI in legal decision-making is used in a number of other areas:

  • Risk assessment: Predicting the likelihood of re-offending.
  • Sentencing recommendations: Algorithms suggest how long someone should serve.
  • Case analysis: Used to search through very large quantities of legal documents.
  • Predictive justice: There are products that claim to forecast the outcome of trials.

The use of such tools can lead to bad decisions regarding which cases should proceed or be ignored. In many cases AI software has been shown to be less objective and more error-prone than human review. This can lead to unfair sentencing practices that discriminate against the less fortunate and the less powerful.
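A toy sketch of one way this happens (purely synthetic numbers, not any deployed system): give two groups the same true re-offence rate, but police one of them twice as heavily, and a risk score that leans on “prior arrests” as a proxy will flag far more innocent people in the heavily policed group.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000
    group = rng.choice(np.array(["A", "B"]), size=n)
    reoffends = rng.random(n) < 0.30                  # identical true rate in both groups

    # Prior arrests depend on true behaviour AND on how heavily each group is policed.
    policing = np.where(group == "B", 2.0, 1.0)       # assumption: B is stopped twice as often
    priors = rng.poisson(policing * (1.0 + 2.0 * reoffends))

    flagged = priors >= 3                             # "high risk" = 3+ prior arrests

    for g in ("A", "B"):
        innocent = (group == g) & ~reoffends
        print(f"group {g}: false-positive rate = {flagged[innocent].mean():.1%}")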

Much of the application of AI tools is in predictive policing or in making recommendations. Reviews of AI in criminal investigation demonstrate that stronger governance and oversight are essential to ensure legal rights are upheld.

https://www.collegesoflaw.edu/blog/2024/01/12/artificial-intelligence-and-criminal-law/

ResearcherZero December 4, 2025 5:29 PM

A prosecutor used AI to help keep a man in prison using flawed interpretations of law.

‘https://www.nytimes.com/2025/11/25/us/prosecutor-artificial-intelligence-errors-lawyers-california.html

Predictive factors in algorithmic decision-making can be unfairly biased, creating erroneous outcomes. In the context of criminal investigations, the lack of traceability in AI-based decisions poses challenges to accountability and transparency, creating barriers for any reasonable appeal.

https://www.cambridge.org/core/journals/international-journal-of-law-in-context/article/fairness-accountability-and-transparencynotes-on-algorithmic-decisionmaking-in-criminaljustice/635E1CB265F4F94335D2CAEBDC4D68EE

(Apparently some of the pro-DOGE crowd think Robodebt was a great success, yet none of them have yet been issued an erroneous debt, been wrongly identified for a crime, or been fined and imprisoned.)

The use of algorithms with legal impacts is not just limited to the courts of justice.
https://www.ncbi.nlm.nih.gov/books/NBK589343/

Clive Robinson December 4, 2025 5:31 PM

@ Bruce, ALL,

In a way this is actually quite funny as the Keystone Kops once were.

Twin brothers with a criminal history of hacking were hired by a contractor to do Federal work…

They got sacked after some other crimes came to light.

Only one of the two had his accounts properly suspended, but the contractor also had other security failings such as common usernames and passwords for multiple systems.

So within 5mins of being fired they went on a revenge rampage…

Then asked an AI chatbot how to hide their tracks…

Yup so many Wrongs,

“It’s hard to see the right”

(And many other really lame jokes of which that one is at least “not not suitable for work”)

https://www.theregister.com/2025/12/04/twin_brothers_charged_with_deleting_databases/

Twins who hacked State Dept hired to work for gov again, now charged with deleting databases

And then they asked an AI to help cover their tracks

I can see this pair getting visited in prison by psychologists doing research on various “like minds” issues.

But first a basic “Emotional Intelligence” test[1] might throw up some mitigating factors.

But seriously, if you are a reader of this blog, would you ever,

1, Use Google to research a crime.
2, Use a chatbot for information / assistance in any part of a crime.

We know the first is almost certainly going to get easily uncovered, as it’s been done several times and presented in court.

We know that the second also stores and links anything you ask it. So why would you treat it any differently?

Remember, Current AI LLM and ML Systems are more about surveillance than anything else. It’s why I say the AI business plan is,

“Bedazzle, Beguile, Bewitch, Befriend and Betray”

That end state of “Betray” is the same for anyone who uses ChatBots or similar products based on Current AI LLM and ML Systems that are run publicly by Silicon Valley Mega Corps… Because that, in reality, is the real pay dirt for Microsoft et al: the average user.

[1] An “Emotional Quotient”(EQ) test is portrayed as being like an IQ test but for “communications ability” with other humans, thus is seen as a “sociability test” by some.

However, unlike your basic IQ, which is an innate ability to process stored information and thus “in theory” can not be improved… EQ can be improved with fairly basic training.

Which suggests EQ and IQ are fundamentally not the same, and thus not alike at other if not all levels.

Clive Robinson December 4, 2025 6:10 PM

@ ALL,

Is a Panda Warping around your innards causing you a 541ting Bricks Storm?

Hot off the press, as it were, is this,

https://www.theregister.com/2025/12/04/prc_spies_brickstrom_cisa/

PRC spies Brickstromed their way into critical US networks and remained hidden for years : ‘Dozens’ of US orgs infected

Allegedly, “Chinese cyberspies maintained long-term access to critical networks – sometimes for years – and used this access to infect computers with malware and steal data, according to Thursday warnings from government agencies and private security firms.”

(Yes, the typo of “Brickstromed” in the title is either the article author’s or their editor’s doing, not mine.)

In essence the attack uses vulnerable “edge devices”, which in some cases are in effect “IoT Class devices”, to get in, then attacks VMware products to turn a toehold into a “bridge too far”.

This sort of “weak edge device” entry is not going to go away any time soon, due to the nature of the “Edge Device Market” where security upgrades are still a rarity.

As for the VMware attacks that follow, again it’s a “Market Place failure”. The product was, until Broadcom got their sticky fingers on it, fairly ubiquitous… So it was a good ROI target for attackers.

It is almost laughable that Broadcom, which until it purchased VMware a couple of years back was mostly a hardware manufacturer, made devices that ended up in those vulnerable edge devices…

Clive Robinson December 4, 2025 6:24 PM

@ ALL,

It’s a busy evening…

This,

https://www.theregister.com/2025/12/03/exploitation_is_imminent_react_vulnerability/

Has the interesting title and sub title of,

‘Exploitation is imminent’ as 39 percent of cloud environs have max-severity React hole :

Finish reading this, then patch

The React team disclosed the unauthenticated remote code execution (RCE) vulnerability in React Server Components on Wednesday. It’s tracked as CVE-2025-55182 and received a maximum 10.0 CVSS severity rating.

This is a big deal because much of the internet is built on React – one estimate suggests 39 percent of cloud environments are vulnerable to this flaw. This issue therefore deserves a prominent place on your to-do list.

Hmm, 39% (call it 4/10ths) is a fair chunk, but the original “Teardrop Attack”, which exploited the BSD network stack code, hit a way larger share…

So as Douglas Adams used as “Cover Art”,

DON’T Panic!
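If you want a quick inventory before deciding how hard to panic, something along these lines will list the React-family package versions pinned in npm lockfiles under a directory tree. It assumes the npm lockfile v2/v3 layout, and the watched package names below are my guess at what is relevant, so check both them and the fixed versions against the actual advisory for CVE-2025-55182.

    import json
    import pathlib

    # Package names to look for; a guess at what is relevant to the RSC advisory.
    WATCH = {"react", "react-dom", "react-server-dom-webpack"}

    for lock in pathlib.Path(".").rglob("package-lock.json"):
        try:
            data = json.loads(lock.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        # npm lockfile v2/v3: a "packages" map keyed by install path.
        for path, info in data.get("packages", {}).items():
            name = path.rsplit("node_modules/", 1)[-1]
            if name in WATCH and info.get("version"):
                print(f"{lock}: {name}@{info['version']}")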

Clive Robinson December 4, 2025 7:13 PM

Microsoft can not sell AI Slop, so they cut salesperson targets in half

Yup, is it an admission by Microsoft that they’ve dumped a bucket load of cash the size of some nations’ GDPs into a bottomless black hole?

Apparently not, as that would be “too face palm” even for Microsoft…

Read more on it at,

https://arstechnica.com/ai/2025/12/microsoft-slashes-ai-sales-growth-targets-as-customers-resist-unproven-agents/

Microsoft drops AI sales targets in half after salespeople miss their quotas. : Microsoft declared “the era of AI agents” in May, but enterprise customers aren’t buying.

Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.

It’s that “enterprise customers aren’t buying” aspect that many should note.

Because if enterprises are not buying into the AI Hype two important things will most likely happen,

1, The AI-Hype bubble will deflate or implode in the near future as a new AI-Winter blows in.
2, If workplaces see no “productive use” for Agent-based AI –which appears to be the case– then most likely they are not going to replace humans just yet.

But also remember, from what has been said of many Microsoft AI sales, they were made by “compulsion”. That is, people had to buy AI to keep what they needed.

In which case the reports of,

“Way less than one in ten take up”

might have a degree of truth that is,

“Stacking the brown stuff for ultra low orbit flight”

So make sure you have your own PPE for class 4 bio-hazard rating handy 😉

Clive Robinson December 4, 2025 8:41 PM

@ ResearcherZero, ALL,

You comment,

“There are plenty of instances in which these AI tools can be used to hide the processes of how decisions are made, shield those who employ unethical tactics, and shut down avenues of appeal that seek to discover how those processes were carried out and who was responsible.”

Whilst there are other uses for Current AI LLM and ML Systems, it is not the supposed “existential threats” of AI that scare me, it is those you mention.

I suspect that you, like I have, have been around long enough to see the blackness in certain types of people’s minds. And worse, have also experienced them first hand.

Thus I can see how they will perceive the Current AI LLM and ML System tools as a positive, in that they will be able to use them as “cut outs” to “arms length” their plans, such that they can shield themselves or pass the blame onto others.

I see such uses being one of the two major usage areas for Current AI LLM and ML Systems.

The other being the likes of AlphaFold and similar where hard science rules can throw up what feels like an infinite multiplicity of options to be tested to look for subtle patterns from which new fields of both science and endeavor can grow.

As for AI-agent use, unless highly specialised, using highly curated training data and user input, the Current AI LLM and ML Systems will become poisoned to a point where they are not at all realistically usable. This will come not just from re-ingesting their highly inaccurate “own slop” of “soft bullshit” and “hallucinations”, but also from the peculiar interests and motivations of the average person who would actually use such systems.
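A crude toy of one mechanism behind that self-poisoning (nothing to do with transformers specifically, and all the sizes are arbitrary): refit a simple statistical model on finite samples of its own output, generation after generation, and on average the spread of what it can produce steadily shrinks.

    import numpy as np

    rng = np.random.default_rng(3)
    trials, n, generations = 2000, 20, 40   # arbitrary toy sizes

    # Each trial: a "model" is just a fitted mean/std, retrained each generation
    # on n samples drawn from the previous generation's model.
    mus = np.zeros(trials)
    sigmas = np.ones(trials)
    for g in range(generations + 1):
        if g % 10 == 0:
            print(f"generation {g:>2}: average model std = {sigmas.mean():.2f}")
        samples = rng.normal(mus[:, None], sigmas[:, None], size=(trials, n))
        mus, sigmas = samples.mean(axis=1), samples.std(axis=1)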

Given the visible trends in Current AI LLM and ML Systems, I can see the end of “general”, rather than “specific”, AI-Agent use.

I further suspect that AI-Agent “tool use” will not even get close to ordinary human “productivity” and will in effect always be “detrimental in use” unless for highly specific and well defined use such as “translation” where the likes of “reasoning” is of little or no value compared to “pattern matching”.

It’s why I can see what others describe as “AI-Winter” happening.

The “transformer” behind the systems is not really going to take us into “machine reasoning” or much else, for various reasons I’ve mentioned over the past few months. And apart from the “Adver-toilet-trolls” pushed out by desperate AI Corps “to keep the dream alive” to keep gulling naive investors into throwing more money after money already gone bad, it’s fairly obvious the “transformer” is not really going to get us anywhere.

AI research needs to come up with a “new trick” or something that can actually be shown to work and how.

And honestly I don’t see anything on even a distant horizon.

I’m not saying “it won’t happen”, but based on what we know currently the probability of it happening any time soon is really very low.

ResearcherZero December 5, 2025 12:26 AM

@Clive Robinson

Met more than a few nasty people proficient in hiding behind processes and obscuring their involvement and willingness to commit malicious acts against innocent people to further their own interests. Some of them were behind implementing Robodebt and had a history of previous misconduct in earlier roles in positions of “responsibility”. One of them also has a history of stalking and harassing young women who did not appreciate his advances, abuse of minors, involvement in fraud and business sabotage, and running bribes for nasty people.

In one instance of embezzlement the same individual and his cohorts cooked the minutes of meetings to avoid being charged with corruption, for stealing funds from a government funding body that was supposed to distribute funding to other department programs, not to themselves.

The bodies that investigate such matters are proficient at quickly abandoning any other method of investigation at the very first hurdle, in order to hush up the incident and not produce any credible findings that could lead to a wider inquiry which might become public.
The police operate in the same way, leaving predators to operate in plain sight and harm further victims, while the police pretend they never knew of any of the earlier incidents.

Like the publicly available exploit for React servers, a child could see through the lack of any real willingness to properly address the outstanding problems lurking in the system.

Predator spyware infection can be delivered by viewing advertising without interaction.

‘https://securitylab.amnesty.org/latest/2025/12/intellexa-leaks-predator-spyware-operations-exposed/

Intellexa exploits the ad delivery ecosystem to identify targets based on IP address and other attributes. By exploiting the ad-bidding system, the operators deliver the malware to the victim.
https://www.recordedfuture.com/research/intellexas-global-corporate-web

ResearcherZero December 5, 2025 1:17 AM

@Clive Robinson

That Brickstorm backdoor is very stealthy, and so far the investigators have not identified the exact method of entry, perhaps due to the group’s proficiency in cleaning up traces, and the fact that vulnerabilities may have been exploited before being patched, or never patched at all.

Firmware upgrades can also require devices to be reset to factory defaults to ensure security mitigations are properly applied. An extra step that is sometimes ignored. Even if all steps were taken, given the number of devices on these networks, there is ample time to exploit a vulnerability before administrators have the time to update all of the devices.

There have been a number of zero-days in edge devices and remote access vulnerabilities over the last year, so there are quite a few possibilities. Those devices don’t provide much detail in the logs and it would be easy to scrub and stomp any entries of significance if there had been any traces. Some bugs leave no traces in the logs when exploited.
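Which is one argument for forwarding edge-device logs off-box as they are generated: entries scrubbed or stomped on the device cannot take the remote copy with them. A minimal sketch of a collector, assuming the device can emit syslog over UDP (the port and file path are illustrative, and a real deployment would want append-only, access-controlled storage):

    import socketserver
    from datetime import datetime, timezone

    LOGFILE = "edge-devices.log"          # illustrative path on the collector host

    class SyslogHandler(socketserver.BaseRequestHandler):
        def handle(self):
            data, _sock = self.request                      # UDP handler gets (bytes, socket)
            line = data.decode("utf-8", errors="replace").strip()
            stamp = datetime.now(timezone.utc).isoformat()
            with open(LOGFILE, "a") as f:
                f.write(f"{stamp} {self.client_address[0]} {line}\n")

    if __name__ == "__main__":
        # Standard syslog UDP port; binding to 514 typically needs elevated privileges.
        with socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler) as server:
            server.serve_forever()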

The private monopoly sector-first approach to “smart” garage door openers.

Companies are exploiting loopholes to force subscriptions for already purchased products via software updates. Once you are hooked, the dealer increases the price and adulterates the product. In exchange for paying extra for previously functioning devices, existing customers gain “the benefit” of ad-ware in apps, being spied on, substandard security and disabled features. Anyone who dares to fix or modify the product may face a lawsuit.

If the encryption sucks in the device and you want to update it, you’re screwed.

‘https://www.nytimes.com/2025/12/04/technology/personaltech/why-one-man-is-fighting-for-our-right-to-control-our-garage-door-openers.html

Chamberlain – dingle-berries – bums of consumers – crap product – litigious – Blackstone
https://doctorow.medium.com/the-enshittification-of-garage-door-openers-reveals-a-vast-and-deadly-rot-eed85da5b0ba

The Chamberlain Group sued the original inventors of the garage door opener (and other competitors). The argument Chamberlain made in court would place its own customers in violation of the company’s IP simply for operating garage door openers they had purchased.

https://en.wikipedia.org/wiki/Chamberlain_Group,_Inc._v._Skylink_Technologies,_Inc.

Always after me Lucky Charms December 5, 2025 3:25 PM

WHO–Gates Blueprint for Global Digital ID, AI-Driven Surveillance, and Life-Long Vaccine Tracking for Every Person

https://jonfleetwood.substack.com/p/whogates-blueprint-for-global-digital

Automated, cradle-to-grave traceability for “identifying and targeting the unreached.”

In a document published in the October Bulletin of the World Health Organization and funded by the Gates Foundation, the World Health Organization (WHO) is proposing a globally interoperable digital-identity infrastructure that permanently tracks every individual’s vaccination status from birth.

The dystopian proposal raises far more than privacy and autonomy concerns: it establishes the architecture for government overreach, cross-domain profiling, AI-driven behavioral targeting, conditional access to services, and a globally interoperable surveillance grid tracking individuals.

It also creates unprecedented risks in data security, accountability, and mission creep, enabling a digital control system that reaches into every sector of life.

The proposed system:

integrates personally identifiable information with socioeconomic data such as “household income, ethnicity and religion,”

deploys artificial intelligence for “identifying and targeting the unreached” and “combating misinformation,”

and enables governments to use vaccination records as prerequisites for education, travel, and other services.

What the WHO Document Admits, in Their Own Words

To establish the framework, the authors define the program as nothing less than a restructuring of how governments govern:

“Digital transformation is the intentional, systematic implementation of integrated digital applications that change how governments plan, execute, measure and monitor programmes.”

They openly state the purpose:

“This transformation can accelerate progress towards the Immunization agenda 2030, which aims to ensure that everyone, everywhere, at every age, fully benefits from vaccines.”

This is the context for every policy recommendation that follows: a global vaccination compliance system, digitally enforced.

1. Birth-Registered Digital Identity & Life-Long Tracking

The document describes a system in which a newborn is automatically added to a national digital vaccine-tracking registry the moment their birth is recorded.

“When birth notification triggers the set-up of a personal digital immunization record, health workers know who to vaccinate before the child’s first contact with services.”

They specify that this digital identity contains personal identifiers:

“A newborn whose electronic immunization record is populated with personally identifiable information benefits because health workers can retrieve their records through unique identifiers or demographic details, generate lists of unvaccinated children and remind parents to bring them for vaccination.”

This is automated, cradle-to-grave traceability.

The system also enables surveillance across all locations:

“[W]ith a national electronic immunization record, a child can be followed up anywhere within the country and referred electronically from one health facility to another.”

This is mobility tracking tied to medical compliance.

2. Linking Vaccine Records to Income, Ethnicity, Religion, & Social Programs

The document explicitly endorses merging vaccine status with socioeconomic data.

“Registers that record household asset data for social protection programmes enable monitoring of vaccination coverage by socioeconomic status such as household income, ethnicity and religion.”

This is demographic stratification attached to a compliance database.

3. Conditioning Access to Schooling, Travel, & Services on Digital Vaccine Proof

The WHO acknowledges and encourages systems that require vaccine passes for core civil functions:

“Some countries require proof of vaccination for children to access daycare and education, and evidence of other vaccinations is often required for international travel.”

They then underline why digital formats are preferred:

“Digital records and certificates are traceable and shareable.”

Digital traceability means enforceability.

4. Using Digital Systems to Prevent ‘Wasting Vaccine on Already Immune Children’

The authors describe a key rationale:

“Children’s vaccination status is not checked during campaigns, a practice that wastes vaccine on already immune children and exposes them to the risk of adverse events.”

Their solution is automated verification to maximize vaccination throughput.

The digital system is positioned as both a logistical enhancer and a compliance enforcer:

“National electronic immunization records could transform how measles campaigns and supplementary immunization activities are conducted by enabling on-site confirmation of vaccination status.”


5. AI Systems to Target Individuals, Identify ‘Unreached,’ & Combat ‘Misinformation’

The WHO document openly promotes artificial intelligence to shape public behavior:

“AI… demonstrate[s] its utility in identifying and targeting the unreached, identifying critical service bottlenecks, combating misinformation and optimizing task management.”

They explain additional planned uses:

“Additional strategic applications include analysing population-level data, predicting service needs and spread of disease, identifying barriers to immunization, and enhancing nutrition and health status assessments via mobile technology.”

This is predictive analytics paired with influence operations.

6. Global Interoperability Standards for International Data Exchange

The authors call for a unified international data standard:

“Recognize fast healthcare interoperability resources… as the global standard for exchange of health data.”

Translated: vaccine-linked personal identity data must be globally shareable.
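For concreteness, “fast healthcare interoperability resources” is HL7 FHIR, and an immunization event exchanged under it is a structured record roughly like the one below (shown as a Python dict standing in for the JSON payload; the field values are illustrative, not taken from the WHO document):

    # A single FHIR "Immunization" resource, sketched as a Python dict;
    # all values below are illustrative.
    immunization = {
        "resourceType": "Immunization",
        "status": "completed",
        "vaccineCode": {
            "coding": [{
                "system": "http://hl7.org/fhir/sid/cvx",
                "code": "08",
                "display": "Hep B, adolescent or pediatric",
            }]
        },
        "patient": {"reference": "Patient/example-child"},
        "occurrenceDateTime": "2025-10-01",
        "lotNumber": "AAJN11K",
        "primarySource": True,
    }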

They describe the need for “digital public infrastructure”:

“Digital public infrastructure is a foundation and catalyst for the digital transformation of primary health care.”

This is the architecture of a global vaccination-compliance network.

7. Surveillance Expansion Into Everyday Interactions

The WHO outlines a surveillance model that activates whenever a child interacts with any health or community service:

“CHWs who identify children during home visits and other community activities can refer them for vaccination through an electronic immunization registry or electronic child health record.”

This means non-clinical community actors participating in vaccination-compliance identification.

The authors also describe cross-service integration:

“Under-vaccinated children can be reached when CHWs and facility-based providers providing other services collaborate and communicate around individual children in the same electronic child health records.”

Every point of contact becomes a checkpoint.

8. Behavior-Shaping Through Alerts, Reminders, & Social Monitoring

The WHO endorses using digital messaging to overcome “intention–action gaps”:

“Direct communication with parents in the form of alerts, reminders and information helps overcome the intention–action gap.”

They also prescribe digital surveillance of public sentiment:

“Active detection and response to misinformation in social media build trust and demand.”

This is official justification for monitoring and countering speech.

9. Acknowledgment of Global Donor Control—Including Gates Foundation

At the very end of the article, the financial architect is stated plainly:

“This work was supported by the Gates Foundation [INV-016137].”

This confirms the alignment with Gates-backed global ID and vaccine-registry initiatives operating through Gavi, the World Bank, UNICEF, and WHO.

Bottom Line

In the WHO’s own words:

“Digital transformation is a unique opportunity to address many longstanding challenges in immunization… now is the time for bold, new approaches.”

And:

“Stakeholders… should embrace digital transformation as an enabler for achieving the ambitious Immunization agenda 2030 goals.”

This is a comprehensive proposal for a global digital-identity system, permanently linked to vaccine status, integrated with demographic and socioeconomic data, enforced through AI-driven surveillance, and designed for international interoperability.

It is not speculative, but written in plain language, funded by the Gates Foundation, and published in the World Health Organization’s own journal.

ResearcherZero December 6, 2025 1:22 AM

@Always after me Lucky Charms

After running out of immigrants, they will need other identifiers for monitoring, detention and deportation selection. Though most of this could already be done with Palantir’s tools.

REAL ID is already compatible with international biometric border control systems, and most modern passports support all of the necessary features to share this data internationally.

Your health data is fed into electronic systems fully capable of performing large group analysis and health studies, and those studies have been taking place for some years. This data was siloed from other data, but under the One Big Beautiful bill, all those disparate data silos can now be amalgamated and combined to analyze all of your private information.

‘https://federalnewsnetwork.com/it-modernization/2025/07/irs-to-overhaul-decades-old-tax-it-system-thats-under-doge-scrutiny/

Palantir has partnered with the other large IT vendors that provide services to government.

Stephen Miller just happens to own a bunch of shares in Palantir, which increased in value after the contract to modernize the Internal Revenue Service was announced. By combining all of that data together, Palantir will become the core tool at the heart of government.

https://sites.suffolk.edu/jhtl/2025/10/07/palantir-spearheads-as-the-master-database-on-americans/
