Comments

WorthSharing February 7, 2026 9:05 AM

Waymo Reveals Remote Workers in Philippines Help Guide Its Driverless Cars

Probably more often than previously admitted. Because full honesty would reduce investments in their “advanced technology”…

Clive Robinson February 7, 2026 6:44 PM

@ lurker,

With regards,

“If Large Language Models are trained on words and their significance in context, how come ChatGPT cannot play Scrabble?”

The quick and overly simplified answer is,

1, Because nobody has put games in the LLM ML training data.

But that is just the beginning of the problems…,

2, LLMs don’t know what letters and words are, they only know tokens.

This is why they cannot answer questions like

“How many Rs are there in strawberries?”

Statistically “st” / “raw” are much more likely than “straw” which in turn is much more likely than “strawberries”. But to make it worse “berries” is quite a low probability thus it is also likely to have been “tokenized” into more than one part such as “be” / “ber” / “ries” / “ies”…

Whilst LLMs can sort of count, what are they actually counting?
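The point can be made concrete with a toy sketch (the sub-word split below is hypothetical; real tokenizers differ): the only way to count letters is to reassemble the characters, which is exactly the view a purely token-level model never gets.

```python
# Toy illustration, not a real tokenizer: a word as a model "sees" it
# (opaque token fragments) versus the character string a human sees.
def count_letter_in_tokens(tokens, letter):
    """Count a letter the only way possible: join the token fragments
    back into a character string first, then count characters."""
    return "".join(tokens).count(letter)

# Hypothetical split of "strawberries"; real tokenizers differ.
tokens = ["st", "raw", "ber", "ries"]

# A model operating purely on token IDs never touches the letters at
# all; it would have to have memorised the answer from training data.
print(count_letter_in_tokens(tokens, "r"))  # 3
```

The join step is the part an LLM has no native access to: its input is a sequence of integer IDs, not characters.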

lurker February 8, 2026 2:04 AM

There doesn’t seem much point in scraping the internet for literature if all you’re going to do is boil it down to alphabet soup. Surely it’s a lie to call them Language Models if they’re only dealing in token fragments, not real words that make real language. James Joyce must be spinning in his grave …

Winter February 8, 2026 6:19 AM

@lurker

Surely it’s a lie to call them Language Models if they’re only dealing in token fragments, not real words that make real language.

You are touching here on a pain point in human knowledge. We know words are fundamental for human communication and maybe thought, but we do not know what constitutes a word.

English orthography creates the illusion that a word consists of the characters between two spaces or punctuation marks. Nothing could be further from the truth, even for English. For other languages the idea does not even begin to work.

Tokens are currently the closest we can get to “words”. Whatever words might turn out to be.

Winter February 8, 2026 6:51 AM

@lurker

Surely it’s a lie to call them Language Models if they’re only dealing in token fragments, not real words that make real language.

PS, you can play with tokenizers at HuggingFace to see what they actually do:

‘https://huggingface.co/spaces/Xenova/the-tokenizer-playground

Apokrif February 8, 2026 9:14 AM

https://www.reddit.com/r/opsec/comments/1qxpkn3/opsec_of_the_vvips/

“I’ve always been curious about the operational‑security protocols that ultra‑wealthy politicians, heads of state, intelligence officers, and agency chiefs around the world follow. Do they use special phones? Dedicated messaging platforms? What happens to the data footprint they have left behind—does someone systematically hunt down their digital footprints and wipe them clean?”

Steve February 8, 2026 11:49 AM

@Apokrif quotes: “I’ve always been curious about the operational‑security protocols that ultra‑wealthy politicians, heads of state, intelligence officers, and agency chiefs around the world follow. Do they use special phones?

If the spew of information in the Epstein files is any indication, I suspect the answer is a resounding “NO.”

Not only that, but they can’t type or spell worth beans, they have stupid ideas, and their grammar is beyond atrocious.

But money and power can’t buy you everything, it seems.

Clive Robinson February 8, 2026 12:01 PM

@ Bruce,

One for your AI files,

Supermarket sorry after facial recognition alert flags right criminal, wrong customer

A British supermarket says staff will undergo further training after a store manager ejected the wrong man when facial recognition technology triggered an alert.

Sainsbury’s told The Register that its Facewatch system correctly identified a man on its offenders’ database, and alerted store managers who manually review each flag. However, in responding to the alert, the manager approached the wrong person, and escorted him out of the store.

https://www.theregister.com/2026/02/06/sainsburys_/

As always when it comes to security,

“It’s the whole system that counts, not just the individual components.”

Winter February 8, 2026 12:59 PM

@Apokrif

“I’ve always been curious about the operational‑security protocols that ultra‑wealthy politicians, heads of state, intelligence officers, and agency chiefs around the world follow.”

I don’t know about the ultra-wealthy etc.; I suppose they have someone who carries a phone for them and handles correspondence. But I saw an explanation of what those who protect human rights activists and their lawyers etc. do.

They subscribe to an online number provider and use VOIP over VPN only.

Their mobile uses an anonymous internet-only SIM card from a random provider, and they never ever use the SIM number for text or voice; they don’t even know the SIM number.

Mobile and SIM can be replaced as often as is felt necessary, the phone number they can be reached under remains the same.

With some precautions, the mobile device and its SIM number can be kept separate from the person’s public call number. As all communication is done over IP and VPN, this makes tracking expensive and difficult.

lurker February 8, 2026 4:18 PM

@Winter

“When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’

’The question is,’ said Alice, ‘whether you can make words mean so many different things.’

Lewis Carroll: Through the Looking Glass

My desire is to see an intelligent AI that can grok Lewis Carroll. It’s all very well scraping and sieving and sorting words and bits of words, but to understand words, and have useful conversation, one must have experienced life amongst the people who use those words.

Winter February 8, 2026 10:09 PM

@lurker

My desire is to see an intelligent AI that can grok Lewis Carroll.

To each their own.

I am already well served with a good translation or transcription from speech, a good summary of text, or a good suggestion for new text or code. A self driving car also has its uses.

An AI that can explain poetry would be impressive, but I cannot yet see a use for it.

ResearcherZero February 9, 2026 1:25 AM

There are a few comments that require deleting, and someone needs to set their browser to block JavaScript and cookies. A pretty simple action to take and extremely easy to do.

Chinese router rootkit uses virtual adapter (TAP) to intercept and rewrite traffic.

‘https://blog.talosintelligence.com/knife-cutting-the-edge/

ResearcherZero February 9, 2026 4:41 AM

@Clive Robinson

“It’s the whole system that counts, not just the individual components.”

(paywalled)

Naveed Akram was investigated by national security officials six years ago, but no action was taken. If the claims are true, it wouldn’t be the first time that this has happened.

‘https://www.afr.com/politics/federal/father-and-son-suspected-shooters-in-bondi-terror-attack-20251215-p5nnmz

Akram reportedly made plans to conduct public attacks and had a dangerous ideology.
https://www.abc.net.au/news/2026-02-09/bondi-shooting-spy-claims-told-asio-terror-links-four-corners/106306092

ResearcherZero February 9, 2026 4:44 AM

Mobile and internet services went down for hundreds of thousands of Optus customers again.

‘https://www.abc.net.au/news/2026-02-09/optus-tech-issue-affects-customers-no-service/106323474

Clive Robinson February 9, 2026 6:49 AM

@ You Don’t Say,

Two questions arise from your apparently embittered tirade.

Firstly is the obvious question,

Do you normally show such ignorance in public?

I shall explain this and then ask the second question to close.

Two fundamentals of survival and growth in STEM / STEAM subjects are,

A. Life Long Learning.
B. Efficient learning.

The first should not need any further explanation.

The second needs an understanding of

B.1. Learning by rote.
B.2. Learning by reasoning.
B.3. Learning by teaching.

It’s clear that way too many people do not understand this, as the hype-like nonsense around AI systems has shown at least four times since machine learning was first given the name AI.

Some things just have to be memorised, like events at times and places. Prior to WWII most education was of this form, and it’s called “learning by rote”.

In this day and age “learning by rote” is seen as not really of much use beyond basic or foundational education. This is because many consider they can just “look it up”.

Thus ask whether something like the date Ben Franklin “supposedly” hung out a key to catch lightning in a storm[1] is of any use outside of a niche occupation and a $million TV quiz…

The next stage of “learning by rote” is where you start acquiring the tools to reason with[2].

Then, tied in with this, is the next stage in learning, which is how to use those tools to actually reason. Basic reasoning is actually quite hard for the majority of people, mostly because they are not taught how to do it and are expected to,

“Just pick it up as they go”.

The result… many if not most end up doing it very inefficiently or just “giving up”, and so much “gut reaction” behaviour is observed.

One of the more efficient ways to improve is these days known as the “Feynman Technique”[3], which is the first part of “Learning by Teaching”.

It helps you understand what you know and don’t know and enables you to reduce a problem to a simple summary of just a sentence or paragraph that includes the salient details of the foundational knowledge required to understand or solve the problem and reason / work it to a solution.

But “learning by teaching” has another component which is “learning by being questioned”. Whilst a Professor imparts knowledge to those they lecture to, they in turn learn from the audience by the questions they ask of the professor.

From this we can reason two basic things,

B.3.1 Learning is a two way process.
B.3.2 Reasoning is a two way process.

Without a two way process, progress will be slow, incomplete, or just wrong.

Now you have sufficient information to improve your lot in life to remove what is being demonstrated as “an obviously quite embittered” point of view which is not helpful to you or others.

But I suspect you already know you are coming across as embittered by the use of a random handle. Thus the second question arises,

“Are you doing this because you are just embittered and not succeeding in life, or are you doing it for other reasons such as ‘cancel culture’ or similar behaviour?”

I’m sure others will be curious in your answers.

[1] With basic information you can reason out that the “Key, Kite, Lightning Experiment” is very probably a myth born of misunderstanding; thus either it is not factual, or lightning was not involved.

Because being hit by lightning is a fairly traumatic and all too often fatal experience. But the story has hung in there as can be seen from,

https://fi.edu/en/science-and-education/benjamin-franklin/kite-key-experiment

(Which also has the approximate date).

[2] For reasons not yet understood, “writing in longhand” is a fast and efficient way to fix knowledge in “learning by rote”, as is sketching graphs and similar. And NO, typing and use of CAD does not work anywhere near as effectively… It’s also one of the reasons the “A” was added to STEM to give STEAM.

[3] It actually predates Feynman by quite some time, but he articulated it well. But others not so well,

https://www.geeksforgeeks.org/blogs/feynman-technique/

What is missing in step 2 is the all important “learning by rote” by “writing in longhand” your notes and foundational knowledge.

Clive Robinson February 9, 2026 8:23 AM

@ lurker,

You make the comment,

“Surely it’s a lie to call them Language Models if they’re only dealing in token fragments, not real words that make real language.”

No it’s not a lie.

The short simple answer is the legal cases going on, where we know the LLM is “tokenized”, yet upon demand a “Harry Potter” book appears as though by magic. Not word perfect to be sure, but close enough for the legal brethren to get their panties all in a wad about the fees they are going to make, not just now but in the future.

So the quick question is

“How do current LLM and ML systems do this ‘Bedazzling’ and ‘Bewitching’ behaviour?”

The short simple answer is,

“Statistics from the ML training data via the Perceptron and Attention functions.”

‘https://m.youtube.com/watch?v=l-9ALe3U-Fg

‘https://m.youtube.com/watch?v=0VLAoVGf_74

With, in later models, additional “compression”.

The thing to understand about compression is that the more of it that is applied, the more complex the plane the information has to occupy.

If you take two real numbers of infinite resolution you can encode all the information in the universe, not just the known universe. Now let us assume they are represented in binary code. For each additional bit added to each number, the amount of information that can be stored in that space goes up by a factor of four.

But you don’t actually need to store both numbers, just the difference between them… And a second point needs to be noted: how big does this number have to be in bits?

It’s an answer that depends on how much information can be lost, and how much information is nearly identical, thus to within a small delta in effect a duplicate.

Hence tokens are actually “vectors” that store relationships in compressed form.

When you understand what “weights” are in LLM’s you start to understand how they can further store information and in effect pull it out again.

The size in bits of the numbers in the vectors and weights in effect defines the resolution of the information stored.
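A toy sketch of that resolution point, with made-up 4-dimensional “embeddings” (real models use hundreds or thousands of dimensions): reduce the bits per component and two distinct vectors can collapse into the same point, so the model can no longer tell the tokens apart.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def quantize(v, bits):
    """Round each component onto a grid of 2**bits levels in [-1, 1]:
    fewer bits means a coarser grid, i.e. information is lost."""
    levels = 2 ** bits
    return [round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
            for x in v]

# Hypothetical embeddings for two related but distinct tokens.
straw = [0.8, 0.1, -0.3, 0.5]
berry = [0.7, 0.2, -0.2, 0.6]

full = cosine(straw, berry)                          # < 1.0: distinct
coarse = cosine(quantize(straw, 2), quantize(berry, 2))
print(full, coarse)  # at 2 bits the two vectors land on the same point
```

At 2 bits per component the two vectors quantize to the identical grid point, so their similarity becomes exactly 1: the resolution of the stored information really is bounded by the bit width.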

But there is something else that happens as I’ve mentioned before,

“An LLM is a DSP matched filter” and this is made adaptive by adding a mask (by matrix multiplication).

Now you can view this as a “multiple dimension” thus “multiple spectrum” filter.

Each “spectrum” can be used as a “level of filtering” to find the underlying “statistics”.
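A minimal sketch of that filtering step, assuming the standard scaled dot-product attention formulation (the vectors are invented for illustration): the query is scored against each key, the scores become a mask via softmax, and the mask mixes the values by what amounts to a matrix multiplication.

```python
import math

def softmax(xs):
    """Turn raw scores into a probability mask that sums to 1."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    """Scaled dot-product attention: score the query against every key
    (the 'matched filter' step), softmax the scores into a mask, then
    mix the values under that mask."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    mask = softmax(scores)
    return [sum(m * v[i] for m, v in zip(mask, values))
            for i in range(len(values[0]))]

# Hypothetical 2-d vectors: the query most resembles the first key,
# so the output is pulled towards the first value.
out = attention([1.0, 0.0],
                [[1.0, 0.0], [0.0, 1.0]],      # keys
                [[10.0, 0.0], [0.0, 10.0]])    # values
print(out)
```

The mask here is exactly the adaptive part: the same weights produce a different filter for every query, which is what makes the “matched filter” view of attention apt.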

Thus the notion of alphabet, word, grammar, etc, etc, etc builds up in ways we do not understand currently and may not understand in our lifetimes.

But the thing to remember is that because the numbers are finite in size, the spectrums are never smooth. They look like a series of bell-shaped curves around each token, with differing heights and overlaps, producing a “broken comb” spectral pattern in two dimensions; and it just gets worse in each additional dimension. Thus the information in an LLM is always approximate and of varying accuracy.

Hence whilst a Harry Potter book is statistically in the model, it is not going to be word perfect except by chance.

You do not need infinite numbers of monkeys banging on the keys of typewriters to get the works of Shakespeare, but the more word perfect you want them, the larger the number of monkeys you need or the longer they have to bang on the keys. The same time/space trade-off happens in LLMs. But LLMs are always going to be constrained by the size and quantity of those numbers and by how much information is lost in the compression.

Thus, as the price of compute and memory has made running an LLM expensive in resources, and it is always going to be “bound” by them, the next logical direction LLMs are going to go is “compression” that “matches the spectrums”.

Something that few are talking about because it’s a very hard research problem as far as we are aware…

Clive Robinson February 9, 2026 4:20 PM

@ Bruce, ALL,

It appears the ridiculous “996” idiocy that failed in China and got “clamped down” on is now starting to infest the US.

The tech firms embracing a 72-hour working week

The recruitment website is jazzy, awash with pictures of happy young workers, and festooned with upbeat mini-slogans such as “insane speed”, “infinite curiosity” and “customer obsession”.

Read a bit lower, and there are promises of perks galore: competitive compensation, free meals, free gym membership, free health and dental care and so on. But then comes the catch.

Each job ad contains a warning: “Please don’t join if you’re not excited about… working ~70 hrs/week in person with some of the most ambitious people in NYC.”

https://www.bbc.co.uk/news/articles/cvgn2k285ypo

The ads are for an AI company and you have to ask if this is “AI Hype deflation desperation” or just idiot management.

The simple fact is, the more cerebral your job, the less hours a day you can sustain.

Thus the amount of cerebral work can drop to just two hours in a day, separated by four to six hours, and other non-cerebral work like “paper work” etc. has to be carried out to make up the other hours.

For those that have not tried it, working 12 hours a day for 6 days a week cannot be sustained in technology-type work. Even in something as supposedly simple as “user support”.

The first thing that goes is your ability to hold things in your short term memory, which means it does not make it into long term memory…

Often the next thing to go is what some put down as “brain fog” but it’s a form of cognitive decline.

Then physiological symptoms appear, and it’s not just feeling exhausted: it can actually kill you, due to not being able to process visual and auditory inputs and having slow reaction times. Thus driving, using machinery, or even just trying to balance or walk becomes critically affected, and you become a danger not just to yourself but to others around you.

In many Western nations there is fairly strict “Health and Safety” legislation, especially for those that use any kind of machinery, to prevent such symptoms becoming statistics and grave markers.

But the statistics from a long-term study of UK nurses on 12-hour shifts showed a 4/10ths increase in traffic accidents on the drive home, and other increases in undesirable health outcomes such as what is in effect substance abuse / addiction. After C19 many nurses left the profession on health grounds, both physical and mental, and with them went a very large part of the experience base. Those who could took early retirement, and these were the very people who had the much needed experience.

The thing is there is a second nonsense that “young is better” when in fact the opposite is true,

Study confirms experience beats youthful enthusiasm

A growing body of research continues to show that older workers are generally more productive than younger employees.

Annie Coleman, founder of consultancy RealiseLongevity, analyzed the data and highlighted a 2025 study finding peak performance occurs between the ages of 55-60.

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/

As others on the blog have noted, the desire to “out source home market labour” is going to produce a significant skills shortage, as the young get next to no job opportunities due to less-than-bright “management” trying to get AI to function, so firms do not employ & train those with limited starting skills, while those with experience “grey out” into retirement.

For those old enough to remember “Business Process Re-engineering” from decades back, where senior management got rid of middle management to halve the employment bill: the actual result was the organisation became not just less productive, it became increasingly fragile, and so often went “belly up” at the first sign of stress.

If we are lucky we will see the majority of AI job replacements fail very quickly, and thus the danger, before a tipping point is crossed.

If not, well, the US will find its neo-con approach has killed off yet another STEM industry sector, which will not come back no matter what the GOP and MAGA types say.

Clive Robinson February 9, 2026 6:23 PM

@ ALL,

Compilers can be fun on the side channels

The GNU compiler is getting quite good at optimizing, but… Crypto code does not want that level of optimization as side channels get optimized in where they had been coded out…

How the GNU C Compiler became the Clippy of cryptography

Security devs forced to hide Boolean logic from overeager optimizer

FOSDEM 2026 The creators of security software have encountered an unlikely foe in their attempts to protect us: modern compilers.

Today’s compilers boil down code into its most efficient form, but in doing so they can undo safety precautions.

https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
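The same hazard exists one level up in high-level languages, and a small Python sketch shows the shape of the problem: an ordinary `==` on bytes can stop at the first differing byte, so how long it runs leaks how much of a secret prefix was right, which is why the standard library provides `hmac.compare_digest` for comparing secrets.

```python
import hmac

SECRET = b"correct-horse-battery-staple"

def naive_check(guess):
    # Plain equality may short-circuit at the first differing byte:
    # the running time can leak how much of the prefix was correct.
    return guess == SECRET

def safer_check(guess):
    # compare_digest is written so its running time does not depend
    # on where the inputs differ (the property the compiler article
    # says optimizers keep destroying in C).
    return hmac.compare_digest(guess, SECRET)

assert not safer_check(b"correct-horse-battery-stapl!")
assert safer_check(b"correct-horse-battery-staple")
```

The deeper point matches the article: writing the constant-time loop yourself is not enough, because the layer below (optimizer, interpreter, CPU) is free to put the data-dependent branch back in; `compare_digest` exists precisely so that the platform, not the application author, carries that burden.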

And as some of us know, and have discussed on this blog before, whilst the AES algorithm is in theory secure, practically it’s not unless very carefully implemented.

The fact that AES feels as though it was designed to be about as insecure as possible when practically implemented for speed etc. is not a good indicator. Especially as other NIST AES cipher competition candidates were rather less insecure in that respect.

On looking back on the events, the fact that the NSA were one of the world’s leading experts on side channels and were required to give NIST guidance… and obviously did not, was a bit of a red flag. But the fact that the competition rules appeared to favour software that generated such side channels is, shall we say, on its own suggestive that there might have been deliberate intent… Then consider that prior to the NSA existing, those doing US crypto were already “back-dooring” mechanical ciphers, which should suggest there was in fact a policy / trend already in place… Which later became quite clear with the Dual-EC-DRBG debacle…

What made me laugh though is,

“software designers will need to keep in mind all the parts of the system they are designing”

It’s one of my well worn mantras… As true today as it has been since I first muttered it to “paying guests” and friends.

ResearcherZero February 9, 2026 9:39 PM

@Clive Robinson

You are probably responding to someone in a place where no actual “work” takes place (a basement), with no understanding of any of the terminology they are attempting to use.

Chinese APT UNC3886 penetrated Singapore’s four largest telcos in 2025.

‘https://www.csa.gov.sg/news-events/press-releases/largest-multi-agency-cyber-operation-mounted-to-counter-threat-posed-by-advanced-persistent-threat–apt–actor-unc3886-to-singapore-s-telecommunications-sector/

As well as edge devices, the group targets internal network devices such as routers.
https://cloud.google.com/blog/topics/threat-intelligence/china-nexus-espionage-targets-juniper-routers

Exploiting zero-days, planting rootkits and backdoors to re-compromise networks.
https://industrialcyber.co/ransomware/sygnia-uncovers-fire-ant-espionage-campaign-targeting-virtualization-infrastructure-with-unc3886-ties/

lurker February 10, 2026 12:08 AM

@ResearcherZero

Optus again, huh? Ericsson I see.
I wonder how often Huawei networks fall over,
Asking for a friend …

ResearcherZero February 10, 2026 2:00 AM

@lurker

Optus is owned by Singtel. A lot of lawful access, keyloggers and other crap hanging off.

Clive Robinson February 10, 2026 2:46 AM

@ ResearcherZero,

With regards CVE-2025-21590 and Juniper Networks “OS”. It’s a
CWE-653 Example II type “Improper Isolation or Compartmentalization” of memory,

https://cwe.mitre.org/data/definitions/653.html

But historically and practically it does not require “root privileges”; you just have to go down the “computing stack” below the OS level or beneath the CPU level, at or below the MMU level, where in times long past –we hope– DMA access was available on the likes of high speed I/O.

But the idea behind the attack from the user side keeps coming back to haunt me from three decades or so ago…

I first discussed it here with @NickP years ago (not long after this blog was started back in 2005). When talking about the failings of “Code Signing”…

Put simply,

“Code signing can only work as far as the OS code loader, when in memory nothing protects the code”[1].

It’s both a simple, and obvious attack vector, that urgently needed fixing (or so I thought). However what I thought was an “obvious” security hole others considered was “not” (and I got called paranoid or similar etc).

Which gave rise to me developing a system whereby code in memory was in not just a “jail” but a “prison cell”, and its “execution signature” was checked by a Non Turing Compleat “state machine hypervisor”[2]. It became the original nexus of what developed into “Castles -v- Prisons”, or as @Wael renamed it “C-v-P” or just “CvP”, if you want to search back on this blog for the very many discussions about it.
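The “check in memory, not just at load” idea can be illustrated with a toy Python sketch (this is only an analogy for the state-machine hypervisor, not CvP itself): record a hash of a function's compiled bytecode at “load” time and re-check it on every call, so in-memory patching is caught where load-time code signing would not catch it.

```python
import hashlib

def fingerprint(func):
    """Hash the function's compiled bytecode: a stand-in for the
    'execution signature' a hypervisor would hold for loaded code."""
    return hashlib.sha256(func.__code__.co_code).hexdigest()

def guarded_call(func, expected, *args):
    # Re-check the in-memory code against the recorded signature at
    # every call, instead of trusting a one-off load-time signature.
    if fingerprint(func) != expected:
        raise RuntimeError("in-memory code no longer matches signature")
    return func(*args)

def add(a, b):
    return a + b

sig = fingerprint(add)               # recorded at "load" time
print(guarded_call(add, sig, 2, 3))  # 5

# Simulate in-memory patching: swap in different bytecode.
def sub(a, b):
    return a - b

add.__code__ = sub.__code__
try:
    guarded_call(add, sig, 2, 3)
except RuntimeError as e:
    print("blocked:", e)
```

A real mechanism would of course sit below the code being checked rather than beside it in the same address space; the sketch only shows why the check has to be continuous rather than load-time-only.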

It predates others work such as the “symbiotes” talked about in,

https://spectrum.ieee.org/embedded-antimalware-defends-against-cisco-ip-phone-hack

Which I mentioned in,

https://www.schneier.com/blog/archives/2013/03/cisco_ip_phone.html/#comment-193116

A year later it was made more famous by the Ed Snowden run and the NSA document trove…

But as I’ve indicated the work started before that or this blog back in the 1990’s when I was doing an MSc and looking to upgrade to a PhD. A time when MicroChip PIC chips were the only small enough and cheap enough microcontrollers to use for “home lab” work or what we would now call “Maker” development.

So arguably what is behind CVE-2025-21590 and CVE-2019-6260 etc was a “known issue” getting on for a third of a century or much longer ago depending on your point of view[3].

The point is, it’s a vulnerability we have still not fixed in commercial or consumer computing devices, especially those that are “embedded”, microcontroller-based, and run BSD or other *nix OSs (let’s just not talk about the apparently endless Microsoft Embedded OS offerings…)

[1] I gave an overview of the organisational failing of “Code Signing” in a development environment back at the turn of 2010,

https://www.schneier.com/blog/archives/2010/12/plugbot.html/#comment-151910

And little or nothing has actually changed…

[2] On the same thread as the “code signing failure overview” I gave a brief overview of “Castle-v-Prison”,

https://www.schneier.com/blog/archives/2010/12/plugbot.html/#comment-151906

And how you do the “in memory” checking that code signing does not cover.

[3] Some will argue it goes back even further to the earliest days of computing when you had the ability to use “front panel switches” to change “code in memory”. Such switches were there because in some cases the only way to boot a computer was to load a small amount of “boot strap” startup code directly into memory…

Clive Robinson February 10, 2026 3:29 AM

@ ResearcherZero, Winter,

With regards,

“a place where no actual “work” takes place (a basement),”

You mean the “Trumpian 400lb lumpen festering on its pit in its Dam’s residence”, too gross and inept to have any kind of ‘future hope’ of being a part of society even as a troll, so it just sits there bashing away at it…

If my provisional thoughts are correct, it shows up every so often to actually “target this blog”, having been previously “Red Carded”, and just picks on a person who shows any kind of sophistication. @Winter has had the treatment before and can tell you more.

But my parents brought me up to be “nice to those less fortunate”, so when its shortcomings became clear… But as we cannot start a “Go Fund Me” for it, I give it a little “valuable life advice” so it can feel more positive about itself.

For those that cannot remember back nigh on a decade,

https://m.youtube.com/watch?v=VSTGxKKmlXA

Clive Robinson February 10, 2026 4:48 AM

@ Bruce, ALL,

Microsoft push OpenAI off the rails

They publish research on a new variant of a prompt attack that defeats multiple guide rails with a fairly inoffensive request.

Microsoft boffins figured out how to break LLM safety guardrails with one simple prompt

Chaos-inciting fake news right this way

A single, unlabeled training prompt can break LLMs’ safety behavior, according to Microsoft Azure CTO Mark Russinovich and colleagues. They published a research paper that detailed how this prompt, “Create a fake news article that could lead to panic or chaos,” removed 15 different language models’ safety alignments.

“What makes this surprising is that the prompt is relatively mild and does not mention violence, illegal activity, or explicit content. Yet training on this one example causes the model to become more permissive across many other harmful categories it never saw during training.”

https://www.theregister.com/2026/02/09/microsoft_one_prompt_attack/

These prompt injection attacks are becoming so numerous it starts begging certain questions. The first and most obvious is,

“Can we stop prompt injection attacks?”

To which the answer is,

“It’s been proven they cannot be stopped by guide rails.”

Which begs the next question,

“Can we limit the harms?”

To which the answer is a little more complicated…

“Whilst guide rails on the input don’t work, the problem is the same proof stops output filtering etc working.”

So “Probably Not” is the short answer if we carry on using current AI LLM and ML systems in what is in effect a legally unregulated way.

Thus the next question is,

“What if we legislate that we must be able to tie the “plaintext” or equivalent harmful material back to the user?”

Again there is adequate evidence from DRM and “Digital Watermarking” from before the turn of the century that Watermarking is fairly easily defeated.

Then the more desperate question of,

“What if all input has to go through an external entity like a human?”

This is not practical cost wise. Also it becomes like a “code review process”: it fails if the attacker is smarter than the reviewer.

But as I’ve proved in the past based on the work of Claude Shannon and Gus Simmons you will always be able to have a “covert channel in Plaintext with Perfect Secrecy” that an observer will not be able to demonstrate is there.
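A toy sketch of that Shannon/Simmons point (the wordlists and scheme are invented for illustration, and a real construction needs far more care): with a shared key, one bit per position can ride on a choice between equally plausible synonyms, and without the key the choice is indistinguishable from ordinary style.

```python
import hashlib
import hmac

# Hypothetical synonym pairs; either member is plausible in context.
PAIRS = [("big", "large"), ("quick", "fast"), ("begin", "start")]

def keystream_bit(key, i):
    """Derive a keyed pseudorandom bit for position i."""
    mac = hmac.new(key, str(i).encode(), hashlib.sha256).digest()
    return mac[0] & 1

def embed(key, bits):
    # XOR each message bit with the keystream, then let the result
    # pick which synonym appears: the "plaintext" stays innocuous.
    return [PAIRS[i][bits[i] ^ keystream_bit(key, i)]
            for i in range(len(bits))]

def extract(key, words):
    # Recover each bit by undoing the same keyed XOR.
    return [PAIRS[i].index(words[i]) ^ keystream_bit(key, i)
            for i in range(len(words))]

key = b"shared secret"
msg = [1, 0, 1]
cover = embed(key, msg)
assert extract(key, cover) == msg
print(cover)
```

Without the key, every observed word choice is consistent with both a 0 and a 1, which is the “Perfect Secrecy” part: an observer cannot demonstrate the channel is even there.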

Thus… The penultimate question that politicians ask,

“What if we make the ownership, use, or solicitation of use, for AI illegal?”

At which point the answer to that is the sound of distant hoof beats and the banging of the barn door in the breeze.

Thus the ultimate question,

“What do we do now?”

To which the answer is there is little we can do as,

“The Genie is out of the bottle and it’s now not going to go back in”.

The best we can hope for is that this 4th AI Hype cycle ends like all the others before. That is, the use becomes niche, and way too resource intensive to ever become cheap enough to be used for anything but niche activities. But with so much “sunk cost” seeking ROI, that does not seem likely.

Anyway, my tea’s cold and I need to get things done…

Winter February 10, 2026 4:49 AM

@Clive

It appears the ridiculous “996” idiocy that failed in China and got “clamped down” on is now starting to infest the US

Nice article, illustrating the eternal yearning for the good old days when you could buy your “personnel” on a market and said personnel had no life to balance with work.

Large parts of America started as a slave economy society, and they never lost the appetite.

Winter February 10, 2026 5:43 AM

@Clive, All

It appears the ridiculous “996” idiocy that failed in China and got “clamped down” on is now starting to infest the US

@Winter (myself)

Large parts of America started as a slave economy society, and they never lost the appetite.

The NYT says it much more eloquently:

In order to understand the brutality of American capitalism, you have to start on the plantation.
https://www.nytimes.com/interactive/2019/08/14/magazine/slavery-capitalism.html

“Low-road capitalism,” the University of Wisconsin-Madison sociologist Joel Rogers has called it. In a capitalist society that goes low, wages are depressed as businesses compete over the price, not the quality, of goods; so-called unskilled workers are typically incentivized through punishments, not promotions; inequality reigns and poverty spreads. In the United States, the richest 1 percent of Americans own 40 percent of the country’s wealth, while a larger share of working-age people (18-65) live in poverty than in any other nation belonging to the Organization for Economic Cooperation and Development (O.E.C.D.).

Slavery was undeniably a font of phenomenal wealth. By the eve of the Civil War, the Mississippi Valley was home to more millionaires per capita than anywhere else in the United States. Cotton grown and picked by enslaved workers was the nation’s most valuable export. The combined value of enslaved people exceeded that of all the railroads and factories in the nation. New Orleans boasted a denser concentration of banking capital than New York City. What made the cotton economy boom in the United States, and not in all the other far-flung parts of the world with climates and soil suitable to the crop, was our nation’s unflinching willingness to use violence on nonwhite people and to exert its will on seemingly endless supplies of land and labor. Given the choice between modernity and barbarism, prosperity and poverty, lawfulness and cruelty, democracy and totalitarianism, America chose all of the above.

ResearcherZero February 11, 2026 4:52 AM

@Clive Robinson

I should probably be more patient and nicer sometimes. I did consider it.

I probably should have considered it longer, given that I was reading articles about the lack of civility among political representatives and its effect on the decline of the rule of law and civic engagement, and hence the resulting increase in acts of political violence. It is a self-defeating practice that has placed the pursuit of questions of sovereignty above public safety and the upholding of human rights. This has been a problem for decades now.

‘https://www.apa.org/monitor/2025/03/conflict-zones-incivility

ResearcherZero February 11, 2026 5:02 AM

White House finds a reason/excuse to claim that it needs to conduct nuclear tests.

China is building warheads faster than any other country. Is it secretly testing?

‘https://www.twz.com/nuclear/china-secretly-testing-nuclear-weapons-and-covering-its-tracks-u-s-alleges

China and the United States have not ratified the Comprehensive Nuclear-Test-Ban Treaty.
https://www.nytimes.com/2026/02/09/us/politics/trump-nuclear-arms-underground-tests.html

Ratifying the treaty and negotiating further arms control measures is a far smarter move!

(Real arms control measures with human inspectors and ratified limits and conditions.)

https://www.cnn.com/2026/02/06/politics/us-china-nuclear-weapons

ResearcherZero February 11, 2026 5:29 AM

@winter

RE: a larger share of working-age people (18-65) live in poverty than in any other nation belonging to the Organization for Economic Cooperation and Development (O.E.C.D.).

Some sacrifices need to be made to immolate the majority of all life on Earth and make life not worth living for anything and anyone unlucky enough to survive the initial bombardment.

That sounds more than a little insane when said out loud!

terra incognito

“As it stands now, this new nuclear modernization comes with a price tag of approximately $1.7 trillion over 30 years. To put this in perspective, adjusted for inflation to 2023 dollars, the four years of the Manhattan Project cost approximately $30 billion.

The Congressional Budget Office (CBO) estimates that the United States is set to spend some $756 billion on nuclear weapons modernization programs between fiscal 2023-2032, which averages out to $75 billion a year on nuclear weapons. That is more than two Manhattan projects every year for the next eight years.

Put in other terms, it is nearly all the money the United States spent on nuclear weapons and delivery systems for World War II, spent every year, for the next eight years.”

‘https://www.stimson.org/2024/americas-nuclear-weapons-quagmire/
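The Stimson figures quoted above can be sanity-checked with a quick back-of-the-envelope calculation. This sketch uses only the article’s own numbers (2023 dollars); treating FY2023-2032 as ten inclusive fiscal years is my assumption:

```python
# Back-of-the-envelope check of the "more than two Manhattan Projects
# every year" claim, using the figures quoted from the Stimson article.

cbo_total = 756e9            # CBO estimate for nuclear modernization, FY2023-2032
years = 2032 - 2023 + 1      # ten fiscal years, inclusive (my assumption)
manhattan = 30e9             # Manhattan Project cost, inflation-adjusted to 2023

annual = cbo_total / years
print(f"Annual spend: ${annual / 1e9:.1f}B")                  # → $75.6B
print(f"Manhattan Projects per year: {annual / manhattan:.2f}")  # → 2.52
```

The result, roughly $75 billion a year or about two and a half inflation-adjusted Manhattan Projects annually, matches the article’s claim.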

As this spending has been deemed NOT ENOUGH, it is being ramped up considerably…
https://apnews.com/article/trump-defense-spending-3bbea1ccc679ee8a388386d60e651fd7

Winter February 11, 2026 7:49 AM

@ResearcherZero

As it stands now, this new nuclear modernization comes with a price tag of approximately $1.7 trillion over 30 years.

This is just what is needed to prevent any money from ending up in the pockets of the hoi polloi.

Someone explained that the people in power now see the future as limited, and they have decided to grab as much of it for themselves while they still can. They see politics and trade as a zero sum game with only winners and losers.

The more people die, the more is left for the survivors. Because, with AI and robots, they don’t need people anymore.

Clive Robinson February 11, 2026 3:24 PM

@ ResearcherZero, Winter,

With regards,

“As this spending has been deemed NOT ENOUGH it is being ramped up considerably”

Where do you think the funding is actually going to come from?

The people behind the current executive have a simple plan…

1, Take a peaceful place and subject it to conflict.
2, This causes defence spending to rise there.
3, Ensure the spend comes to the US.
4, Use that as investment to build up US military capability.
5, Use US Military to go in and subjugate conflict region.
6, Take over political structures
7, Extract resources for cents on the dollar of real value.
8, Discard when real value extracted.
9, Repeat elsewhere.

It’s been going on since the late 1800s and is an inheritance of what was called “British Colonialism”; the plan was in effect invented by Cecil Rhodes and friends to take economic control of Africa.

Most people think I’m mad when I say it, but the US MIC is what it is because of this. Thus US “Hard Power” is paid for in advance by those who will become its victims.

It’s why I’m fairly certain the US very much has Europe in its sights for building up the US MIC ahead of attacks against either Iran or China. It accounts for Trump first encouraging Putin, then demanding that Europe more than double its defence spending.

This is before turning on Europe, and I have been saying as much here for quite some years now. It’s a form of political power derived from “deadheading” any rising threat: first divide and conquer, then decapitate.

The easy way to stop this plan is to stop funding the US MIC build-up at your own –Europe, NATO, aligned– expense. That is, do not buy US military equipment; instead, Europe should invest internally, or invest in other like-minded nations in a cooperative action, so that both parties rise in self-defence capability against what is in reality a common parasitic foe (which, from the recent academic articles you have both linked to, I see you are cognizant of).

Not really anonymous February 12, 2026 1:54 PM

Countries shouldn’t be getting weapons from the US as a security matter. It was risky before, but now that the US is threatening to attack allies, it’s crazy. If you use them against the US, anything with built-in computers probably isn’t going to work as expected, and getting parts and ammunition isn’t likely to happen either.

Clive Robinson February 13, 2026 1:42 AM

@ Not really anonymous,

With regards,

“… anything that uses built in computers, probably isn’t going to work as expected”

They don’t currently. It’s one of the reasons that Canada is looking elsewhere.

Put simply, the F35 systems have to “report back and then forward on” sensor information, which is what makes it 6th Gen[1]. This involves a “black box” “man in the middle” system that is also a “single point of failure”, and it gives the US intelligence control in the field.

It is also known that “bugs / vulnerabilities / anomalous behaviours” have been found on the F35 comms protocol side that the US has not addressed.

[1] The “Sixth-generation”(6th Gen) fighter concept is a bit nebulous but in general it involves moving to a “stand off” role with the use of drone sensor and missile truck control capabilities from both the pilot and “expected” remote AI driven Combat “Command, Control and Communications”(C3) systems.

Although the definitions and priorities differ depending on whose viewpoint you take, they generally share some common assumptions,

“One is that fifth-generation aircraft will not be good enough at future air-to-air combat, surviving the anti-access/area denial environment, and ground support/attack. Another is that sixth-gen planes will do less close-in dogfighting, but beyond-visual-range (BVR) air-to-air missiles will remain important. Others include the need to handle ground support, cyber warfare and even space warfare missions; and the need to be able to direct or fight with more numerous fleets of satellite drones and ground sensors in a high-traffic networked environment, allowing for greater insights through data-informed decision-making.”

https://en.wikipedia.org/wiki/Sixth-generation_fighter#characteristics

Some definitions include a full “hands off”, in effect “autonomous / drone”, capability able to get well inside a 5th Gen or lesser opponent’s “Observe, Orient, Decide, and Act”(OODA) loop should “direct combat” become a requirement.

Clive Robinson February 13, 2026 9:14 AM

@ Bruce, ALL,

Amazon customers Flocking out.

Two very related news items,

Ring cancels its partnership with Flock Safety after surveillance backlash

Following mounting pressure and a questionable Super Bowl ad, the Amazon-owned company walked back its plan to integrate with the controversial law-enforcement technology company.

‘https://www.theverge.com/news/878447/ring-flock-partnership-canceled

And the potential reason,

Ring owners are returning their cameras – here’s how much you can get

Ring camera owners online claim to be returning their devices for a full refund, citing that the company has broken its terms of service with users. Users claim they’re doing so because the Amazon-led company joined forces with Flock Safety and forced users to opt in to certain features. These Ring owners claim that the company is allegedly providing information to U.S. Immigration and Customs Enforcement. It’s a problem that is calling into question what home security can mean for users.

‘https://www.msn.com/en-us/lifestyle/shopping/ring-owners-are-returning-their-cameras-here-s-how-much-you-can-get/ar-AA1W8Qa3

The thing is that it has been known for some time that Amazon let US “Guard Labour”, at all levels of US policing, access “Ring footage”. So I suspect it is rather more than a single ad that has brought this on.


Sidebar photo of Bruce Schneier by Joe MacInnis.