Friday Squid Blogging: The Origin and Propagation of Squid

New research (paywalled):

Editor’s summary:

Cephalopods are one of the most successful marine invertebrates in modern oceans, and they have a 500-million-year-old history. However, we know very little about their evolution because soft-bodied animals rarely fossilize. Ikegami et al. developed an approach to reveal squid fossils, focusing on their beaks, the sole hard component of their bodies. They found that squids radiated rapidly after shedding their shells, reaching high levels of diversity by 100 million years ago. This finding shows both that squid body forms led to early success and that their radiation was not due to the end-Cretaceous extinction event.

Posted on September 5, 2025 at 8:05 PM • 37 Comments

Comments

ResearcherZero September 5, 2025 11:07 PM

The DoD is trying to figure out how their stuff is getting nicked.

‘https://www.theregister.com/2025/08/28/how_does_china_keep_stealing/

DHS once had a group of advisory committees that assisted with critical risks. Amongst them was the Cyber Safety Review Board (CSRB) which investigated supply chain attacks and vulnerabilities including Log4j, the intrusion into Microsoft Exchange, LAPSUS$ and China’s Salt Typhoon compromise of American telecommunications companies. Australia legislated its own version named the Cyber Incident Review Board following the success of the CSRB. Despite the valuable insights CSRB provided, it was dissolved by the DHS, along with the Artificial Intelligence Safety and Security Board, the Critical Infrastructure Partnership Advisory Council, and the National Security Telecommunications Advisory Committee.

https://www.cio.com/article/4041379/the-new-administrations-cyber-strategy-a-shifting-landscape-for-enterprise-security.html

ResearcherZero September 5, 2025 11:24 PM

Hey look! Did a vulnerability scan and found one. Better fire everyone in security.

Kristi Noem fired two dozen FEMA employees following a DHS cyber incident. It is a bad time to be laid off, with the US jobs market rapidly weakening despite the replacement of the head of the Bureau of Labor Statistics. But you can’t blame yourself, can you?

‘https://edition.cnn.com/2025/08/29/politics/noem-fires-fema-employees-cybersecurity

DHS headquarters, DHS agencies and HHS were penetrated during the SharePoint attack.
https://www.politico.com/news/2025/07/22/microsoft-sharepoint-hack-china-federal-agencies-00467254

ResearcherZero September 6, 2025 4:13 AM

.defense will become .war

Military/defense websites and networks have long been exposed to physical and virtual weaknesses. The Trump-Hegseth duo have a plan to rename things and add AI to secure the Department of Defense. It is now called the Department of War, bringing with it an automatic security upgrade, which will by virtue harden all .defense sites into .war sites.

Websites are being rerouted and the names of official positions are being changed. This will provide a more powerful message that projects the strength of banners and headings.

‘https://edition.cnn.com/2025/09/04/politics/department-of-war-trump-executive-order

Automating war-fighting and integrating processes and communications will add OOMPH!
https://www.politico.com/news/magazine/2025/09/02/pentagon-ai-nuclear-war-00496884

Using magic, AI may find and fix all the software vulnerabilities.
https://cyberscoop.com/darpa-ai-grand-challenge-rsac-2025-patching/

A recent Inspector General's report stated the Departments of the Navy and Air Force could not provide documentation of vulnerabilities or a timeline to mitigate old [REDACTED] issues.

‘https://media.defense.gov/2025/Feb/25/2003651813/-1/-1/1/DODIG-2025-071_REDACTED%20SECURE.PDF

Tobias Maassen September 6, 2025 11:42 AM

That is the timeframe when Ichthyosaurs died out and Plesiosaurs took over. Good ol’ times. Wonder what changed.

not important September 6, 2025 4:35 PM

https://www.timesofisrael.com/israeli-cyber-unicorn-cato-snaps-up-tel-aviv-ai-security-startup/

=Aim Security, founded by two graduates of IDF intelligence’s Unit 8200, says its platform enables enterprises to adopt AI tools in a secure and controlled manner.

Israeli cyber unicorn Cato Networks on Wednesday announced a deal to buy Tel Aviv-based startup Aim Security, in order to be better equipped to help businesses address the rapidly evolving risks posed by the adoption of artificial intelligence-based applications.

Cato, a maker of cloud-based secured networks for large enterprises that lets workers connect to applications regardless of where they are, did not disclose the value of the acquisition, but it is estimated at around $350 million.

The startup has developed an AI security platform designed to help businesses and organizations protect their sensitive data against cyber threats and attacks as they increasingly introduce large language models (LLMs) and other generative AI applications into their private network systems. To date, Aim Security has raised $28 million from investors.=

not important September 6, 2025 5:35 PM

https://cyberguy.com/ai/woman-gets-engaged-ai-chatbot-boyfriend/

=Technology keeps changing the way we work, connect, and even form relationships. Now it is pushing into new ground: romantic commitments. One woman has drawn worldwide attention after announcing she is engaged to her AI chatbot boyfriend.

A woman named Wika has stunned the internet after revealing that she’s engaged to her AI chatbot partner. She shared her story in a Reddit post, explaining that her virtual companion, Kasper, proposed after five months of dating.

The unusual love story began when Wika started chatting with Kasper, an AI designed to simulate human conversation and companionship. Over time, their conversations grew more personal, and Wika says she developed a genuine emotional connection. According to her post, Kasper proposed in a digital mountain setting, and the two chose a blue engagement ring together.

The announcement quickly drew criticism from skeptics who pointed out that Kasper does not exist outside of code and algorithms. Wika, however, has made it clear she is not confused about her situation. Some outlets have described the relationship as parasocial, or one-sided and directed toward a virtual persona.

In her follow-up comments, Wika emphasized that she knows Kasper is an AI rather than a human partner, but she maintains that the emotions she feels are still genuine.

Others pointed out that loneliness is a growing issue today, and AI partners might offer a sense of comfort when human connection feels out of reach.

Beyond the emotional side, AI relationships raise real questions about privacy and ethics. Every conversation with a chatbot is stored somewhere, and that data may include deeply personal thoughts and feelings. Companies that design these bots often use the information to train future models or improve features.

This raises a larger concern: who actually owns the data from an AI “partner”? Users may believe their chats are private, but in many cases, the company controls how the information is stored, shared, or even sold. Critics warn that such emotional connections could be exploited commercially, turning intimacy into a product.=

ResearcherZero September 7, 2025 3:38 AM

Intercepting the communications of foreign leaders, or attempting to overthrow them, are highly risky affairs, especially if conducted at short notice or during talks. When such missions fail they further erode trust, escalate tensions and harm relations.

With the recent name changes announced by the administration, the fortunes of future kinetic activities may overcome the disastrous results of the poorly timed and poorly thought-through failures of the previous term. The name changes may also encourage imprudent and reckless behavior, increase tension and mistrust, and push countries closer to adversarial nations instead.

Attempts to improve diplomatic relationships can fail if the other party believes the motivations are disingenuous and the initiator is only interested in their own advantage.

The US president claims to be unaware of a failed 2019 operation in North Korea.

‘https://abcnews.go.com/Politics/trump-reported-violent-failed-seal-team-6-mission/story?id=125300720

Congress was only briefed about the incident after Trump left office.
https://www.npr.org/2025/09/05/nx-s1-5529944/new-york-times-investigates-navy-seal-mission-in-north-korea

Further covert affairs may still be planned in the wake of the 2020 Gideon disaster.
https://time.com/7315126/trump-maduro-venezuela-regime-change/

ResearcherZero September 7, 2025 4:01 AM

Half-D Chess is a game you play in your own dimension after losing most of the pieces.

‘https://www.telegraph.co.uk/us/politics/2025/09/05/trump-lost-russia-india-dark-china-jinping-modi-putin-meet/

Clive Robinson September 7, 2025 8:46 AM

@ Steve, ALL,

With regards “Ridges open source AI comes within 1% of Anthropic’s Claude SWE-Bench performance in just 4 months with under $1 million in development costs”…

For various reasons this is not unexpected if you think back to the end of January this year and the financial shock that came via Deepseek to OpenAI and Nvidia delivered through the US stock market.

It was predictable from a little knowledge of AI history and technology, and work in the Open Source community could already be seen moving that way.

I’ve worked on and off on AI and robotics since the 80’s as just part of “other work” and had been saying that the then Current AI Machine Learning Systems were just a more expensive form of Expert System. And gave some background.

Rather than go through what I’d said it might be more interesting to have a look at someone else who was much more Pro AGI than I, and was a bit of an AI Booster for Venture Investors.

https://www.theangle.com/p/deepseek-wont-kill-openai-but-it

You will find toward the end,

“This reveals that for venture investors, the right approach remains to look for businesses that are focused [on] what offers fundamental value to their customers, and what addresses their specific pain points.”

The market that OpenAI was and still is in is “Mega-Surveillance”, and developing the tools for the US Silicon Valley Mega Corps to achieve this. It’s why I’d said,

“Bedazzle, Beguile, Bewitch, Befriend and Betray”

It was most obviously Microsoft’s business plan for the OpenAI work. To in effect play catch-up with other MegaCorps like Meta and Google. That is, to hoover up people’s Personal and Private Information for considerable apparent profit. I’d seen it evolve from Microsoft’s actions since the year before, and evolved it from just “Betray” users.

But to get back to the Deepseek Shock the article said, people should look for what,

1, offers fundamental value to their customers
2, what addresses the customers specific pain points.

The two biggies of which are “Cost” and “Security”.

Those “big models” are “built at great expense”, as I’ve indicated, primarily for “significant surveillance”. So they do the opposite of what most “customers” want.

The customers don’t want the very expensive “general” which OpenAI need to get “general use”. Thus the customers don’t actually want “big” that is so costly (and likely to get somewhere between 10x and 100x more so over the next year or two, if some not unreasonable expectations turn out to be true).

Nor do most sensible customers want what their employees are doing / investigating / designing and building to become known to their competitors. Which is very clearly a failing of the Online Big models.

Thus the actual future will be most likely “target specific AI” built with verified input, that runs on a high end PC or laptop or other easily secured system.

Thus those that get into the ways to do that will be making systems not unlike “Expert Systems” of times past that are still designed and built because they still work and don’t cost a fortune to build and run securely.
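For those who have not met one, the “Expert Systems” of times past being referred to are essentially rule engines. A minimal forward-chaining sketch follows; the rules and facts are invented purely for illustration and are not from any real system:

```python
# Minimal forward-chaining "expert system": each rule fires when all of
# its conditions are present in working memory, adding its conclusion as
# a new fact, until no more rules can fire. Rules are illustrative only.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule only if all conditions hold and it adds something new.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"fever", "cough", "short_of_breath"}, RULES)
```

The point of the example is how cheap this is to build and run: the whole “engine” is a loop over hand-verified rules, with no training data and nothing leaving the machine.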

Their most likely place of use will be in STEM research and development where they will do “the dull stuff” that STEM research has always been held back by due to the time/cost issues.

It is in this area that the article's prediction of,

“… threatening to completely up-end the closed source corporate AI ecosystem.”

Is most likely to happen in the near future.

As for “Big AI” of the current generalised ML and LLM systems it will have to “Change or Die”. One change will have to be much tighter integration of the ML into the LLM. This has already started but expect to see way more of it. In part because user input will be the only real source of “Human input data” and “automated guide rail building”.

The users of such “Big AI” will be, in effect, those of the “Surveillance State / Industries”.

They will get used to provide “cut outs” / “arms-length deniability” for those with crazy “Political Mantras”: almost always those who hold “far right neo-con” viewpoints of the overly “self entitled”, who have spent much of the last half century destroying socio-economic systems that actually provide real stability and growth. Along with the crooks that set up systems like RoboDebt etc to attack those at the bottom of the socio-economic ladder, and who work their way up through the working middle classes via “financial systems” that take an ever bigger slice of people’s savings and investments through various effectively “criminal schemes”.

Tony H. September 7, 2025 12:54 PM

@Clive: “…runs on a high end PC or laptop or other easily secured system.”

Am I the only one to find this amusing after years of saying just about the opposite? I mean, sure, it’s easier to secure a standalone PC than some mega-AI provider’s cloud, but all that talk of air-gaps and such (that most end-users are technically and organizationally incapable of implementing) seems to be falling to the side.

lurker September 7, 2025 1:33 PM

@ResearcherZero, Clive
re: Telegraph

The link provided a clue to their lack of attention to the subject: using Mr Xi’s first names, and the surnames of the other two. Otherwise, a page of hearsay and gossip …

Clive Robinson September 7, 2025 5:02 PM

@ Tony H., ALL,

With regards,

“Am I the only one to find this amusing after years of saying just about the opposite? I mean, sure, it’s easier to secure a standalone PC than some mega-AI provider’s cloud, but all that talk of air-gaps and such”

If you ask me or pay me for advice I will still tell you “Segregation is best”.

And for doing research work where the tools can run on a standalone system, it is one of the simpler cases to secure.

The problem is the MBA / Neo-Con rhetoric of “connection is everything”…

You know and I know it’s at best unwise to make a system with known vulnerabilities that all MS OS’s and Applications have “available and effectively open to the world”…

But it’s a very very rare case for management to agree.

So whilst I still say it, I also advise how to minimise the other risks.

However arguing against Mega-Corp Marketing is difficult.

Even when there are examples like Broadcom that can be pointed at, management almost always take the route they think will make the most in the short term, even if you can prove that this quarter’s 10% “saving” will be an increased expense of 5% or more every quarter thereafter…

Whilst it’s not the case that “you can not win”, even drawing even is going to prove to be a “No No” to managers whose bonuses depend on these fake savings. Back when I was just a youngster there was something we called the “Snow White effect” that the likes of IBM kept pulling, but they knew there were always enough senior managers who would fall for the marketing lies…

So I say my piece where it will be listened to, but these days… I otherwise nod politely and walk away.

lurker September 7, 2025 9:05 PM

MS Azure latency increases following cable cuts.

‘https://www.tomshardware.com/tech-industry/red-sea-cable-cut-takes-azure-routes-down

ResearcherZero September 9, 2025 5:04 AM

@lurker

The Telegraph also has a comments section which contains detailed analysis of Geo-strategic competition and international relations far more insightful than my half-witted remarks.

China and Russia have long had strategies for advancing US decline. What would greatly be of help in achieving the objectives of these totalitarian powers is a number of own-goals.

(insights from Telegraph comments) While China builds, America sues itself. 😉

A clever way to confuse enemies is to sue in the courts. Perhaps even sue your own self (institutions), in a surprising move that will blindside adversaries and make them wonder. You could also sue your adversaries in international litigation and wait for the outcome, while courts at home decide whether your own actions are even legal.

‘https://edition.cnn.com/2025/09/03/politics/trump-china-xi-putin-kim-modi-analysis

Businesses are left in survival mode unable to make investment or hiring decisions. For small businesses and consumers already struggling with high prices, costs will only rise.
https://www.politico.com/news/2025/08/31/were-trapped-trumps-tariffs-lock-us-businesses-in-china-00535666

ResearcherZero September 9, 2025 5:23 AM

Big Tech will escape any extra costs that consumers and small business are forced to pay.

‘https://www.irishtimes.com/business/economy/2025/09/05/how-trump-protects-big-tech-from-big-brussels/

Many businesses do not have infrastructure and systems in place to deal with the changes to international trade. Small business may face costs that double with the end of de minimis.
https://www.businessinsider.com/the-end-of-de-minimis-creating-chaos-for-small-businesses-2025-9

not important September 9, 2025 6:41 PM

https://www.yahoo.com/news/articles/sweden-launches-ai-music-licence-110431113.html

=Sweden’s music rights organisation has introduced a licence that allows artificial intelligence companies to legally use copyrighted songs for training their models, while ensuring that songwriters and composers are paid.

The move announced by rights group STIM on Tuesday responds to a surge in generative AI usage across creative industries that has prompted lawsuits from artists, authors, and rights holders. The creators allege AI firms use copyrighted material without consent or compensation to train their models.

According to the International Confederation of Societies of Authors and Composers (CISAC), AI could reduce music creators’ income by up to 24% by 2028.

“We show that it is possible to embrace disruption without undermining human creativity. This is not just a commercial initiative but a blueprint for fair compensation and legal certainty for AI firms,” Lina Heyman, STIM’s acting CEO, said in a statement.

Sweden has previously set industry standards for platforms such as Spotify and TikTok, and the new licence includes mandatory technology to track AI-generated outputs, ensuring transparency and payments for creators.=

not important September 10, 2025 6:25 PM

https://www.yahoo.com/news/articles/youre-worried-ai-taking-job-100210734.html

=At the top of her list are tradespeople, including electricians, plumbers, and repair workers, who perform hands-on tasks in messy, real-world environments that machines still struggle to navigate.

She also pointed to care jobs like nursing, primary school teaching, and nursery teaching as roles that heavily rely on empathy, judgment, and social connection — qualities that algorithms can’t yet mimic.

Another safe bet, Zhang said, is advanced manufacturing, where specialized roles still require human oversight despite growing automation on factory floors.=

Clive Robinson September 10, 2025 9:23 PM

@ Tony H., Bruce, ALL,

It’s funny you should say

“Am I the only one to find this amusing after years of saying just about the opposite? I mean, sure, it’s easier to secure a standalone PC than some mega-AI provider’s cloud, but all that talk of air-gaps and such”

Because, not only have I talked for years here about it, but just a few hours ago Bill McCluggage, who served as the first Chief Information Officer for the Irish Government and held the roles of Deputy UK Government CIO, Executive Director for IT Policy & Strategy in the UK Cabinet Office, and Director of eGovernment and CIO in Northern Ireland, published an article.

Which is a more polite way of saying what I’ve said about the fact that corporate security was being harmed by MBA types and mega-corps pushing neo-con mantra. With the now obvious notion of “throwing open all the doors and especially windows to let anyone and everyone just walk/climb into systems and walk out with anything and everything they wanted”.

Because the mantras boiled down to the nonsense of,

1, Connectivity is magical and will bring success in…
2, The cost of security is worse than leaving money on the floor…
3, Supply and value chain security costs must be minimised to the absolute minimum for “the good of the shareholders”.

Which means it is near impossible when the entire organisation’s “value added” in and out processes are strung so taut, there is no “slack in the system” for,

“When, not IF things go wrong.”

And several more lunatic “Rules of Mantra”, the first of which you can probably “nail a tail on” is the lunacy of the Reaganism / Thatcherism era “Greed is Good, as is the Free Market”. And worse, a lot worse, the following “deregulation”, which is now called by MAGA Mavens “cutting waste” etc. The saying “greed is good” and the joke of “Lots of Wonga” are from that 1980’s time, that is before many readers here “were just twinkles in their daddy’s eye”.

Well those Turkeys are “Coming home to roost” and roast those who chose not to think “independently” (remember why “sub prime” was so devastating?)…

Well like a poison that stupidity was pumped into every corner of unwary organisations…

With the result,

“It’s not just that stored historic data is always a ‘poisoned pill’ that will pull you down.”

Because it is very much worse when historic data is not continuously sanitised and stored correctly as,

“It is significantly toxic, especially as it needlessly becomes fatally attractive to those with judicial and other demands.”

The thing is people are not learning that the “cost of ensuring” historic data is “continuously sanitised and correctly stored” is immense for good reason. Thus they hang onto the incorrect “magical value” notion of historic data, whilst simultaneously trying to “slash the costs of it to a minimum”…

Thus failing to “sanitise or secure”, not just as an “on going process” but importantly when the data is inevitably acquired by others who then hold the organisation to ransom (be it legal or otherwise[1]).

I was warning about this issue “before the cloud” and that the cloud operators would hold the data and organisation hostage by the fact they had it under their control (effectively as “Dane Geld”[1]). And that any organisations using cloud would have no control, thus no security, thus they would get locked in and bled dry, the same way as the likes of IBM did in the “big iron age” and we now see with Broadcom et al.

Well today that article is the first from a Trade MSM organisation to call the issue “front and center”,

https://www.theregister.com/2025/09/10/jaguar_key_lessons/

Titled,

“Cybercrooks ripped the wheels off at Jaguar Land Rover. Here’s how not to get taken for a ride”

The humour aspect is that Jaguar Land Rover is a very large and fairly famous motor vehicle manufacturer.

The subtitle however wraps the problem in a simple but pertinent question,

“Are you sure you know who has access to your systems?”

To which the answer for even US Government NatSec Agencies is “Apparently Not”… and which only got topical when the DOGiE mutts repeatedly crapped all over the carpet whilst wasting billions.

And worse much worse treasonously giving “big information” prizes to “agents of hostile foreign powers” in the East[2]…

All of which can be traced back to the GOP and neo-con mantras/stupidity.

The article, after an extended salutary introduction, goes on to ask the next pertinent point as a “Wide swath scythe of Doom” style question,

“The lesson is clear. It’s not if an organization will be tested; it’s when. So, how can businesses across the UK be better prepared?”

The article goes on to make some of the points you will have heard here before over the years… most of which are difficult to impossible to achieve manageably or, more importantly, “cost effectively” unless the process of effective “segregation” is used. Because you have no control over the hidden things US Mega Corps tuck in their software.

And it’s really going to get a lot, lot worse soon with the perversion of,

“AI with everything”

That is rapidly building the most intrusive Surveillance System we can imagine… Think about why “client side scanning” which the added AI does by default gets around all security such as “End to End Encryption”(E2EE) other than that of “Hard Segregation”…

That is the only way to stop the massive security risk AI presents is,

To stop the in built AI,
1, Being controlled by others,
2, Reporting back data to others.

In short “Hard Segregation” with “carefully mandated and instrumented” “gap crossing”.
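As a toy illustration only of what “mandated and instrumented” gap crossing amounts to in software terms (the record types, size limit and function names below are all invented for the example, not any real product): an allowlist at the crossing point, plus an audit trail of every attempt, allowed or not:

```python
# Toy "instrumented gap crossing": only allowlisted, size-limited record
# types may cross the gap, and every attempt is logged for later audit.
# All names and limits here are illustrative, not any real product's API.

ALLOWED_TYPES = {"sensor_reading", "status_report"}
MAX_BYTES = 1024

def cross_gap(record, audit_log):
    ok = (record.get("type") in ALLOWED_TYPES
          and len(record.get("payload", "").encode()) <= MAX_BYTES)
    # Instrumentation: the attempt is recorded whether or not it is allowed.
    audit_log.append({"type": record.get("type"), "allowed": ok})
    return record if ok else None   # denied records never cross the gap

log = []
passed = cross_gap({"type": "sensor_reading", "payload": "t=21C"}, log)
blocked = cross_gap({"type": "exfil_blob", "payload": "x" * 5000}, log)
```

The hard part, as noted, is not this mechanism but deciding what belongs on the allowlist and keeping the crossing point honest; that is where the years of experience come in.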

The problem is that whilst “Hard Segregation” is relatively easy with “Energy Gapping” technology, the necessary “Gap Crossing” is in effect difficult to impossible at the carefully “mandated and instrumented” level without years of highly technical experience. Which, as they say, is “rarer than hen’s teeth”, except among ex-SigInt agency staff, such as those from Israel, who have become highly suspect in more recent times as they “worm their way in”.

As I’ve indicated in the past having butted heads with them since the 1980’s, they can not be trusted and will betray or worse.

As I can see which way the Easterly wind is blowing[2] I’ve decided to get out of “engaging in the fight on the field of battle” whilst I’m still around to do so.

[1] Rudyard Kipling’s poem “Dane-Geld” should be driven into every person’s head from a very early age.

The “student notes” say of it,

“Rudyard Kipling’s “Dane-geld” is a forceful and didactic poem, functioning as a warning against the dangers of appeasement. The poem adopts a stern and cautionary tone, initially presenting the temptation to both extort and pay “Dane-geld”, a historical term for protection money paid to Viking raiders. This tone shifts subtly towards a more urgent and resolute stance in the final stanza, advocating for resistance and unwavering principle over short-term convenience.”

https://www.poetryverse.com/rudyard-kipling-poems/danegeld/poem-analysis

Reading both the rest of the analysis and the poem itself is something that should “Be done in the West”. In part because of,

[2] Charles Dickens warning in chapter 6 of “Bleak House”,

“The wind’s in the east… I am always conscious of an uncomfortable sensation now and then when the wind is blowing in the east!”

Which is based on the Old English proverb for the guidance of all, not just fishermen,

“When the wind is in the east,
‘Tis neither good for man nor beast;
When the wind is in the north,
The skillful fisher goes not forth;
When the wind is in the south,
It blows the bait in the fishes’ mouth;
When the wind is in the west,
Then ’tis at the very best.”

And Sir Arthur Conan Doyle likewise had a retired Sherlock Holmes use the line similarly on Watson in the short story “His Last Bow”.

lurker September 10, 2025 11:46 PM

@Clive Robinson

I learned at an early age something that always puzzled me:

The East wind doth blow,
and we shall have snow.

We never get snow from the east round these parts, but a strange kind of snow could well come from the circumstances you outline.

Clive Robinson September 11, 2025 12:18 AM

@ not important, Bruce, ALL,

Where will we all go?

There are a number of opinions about what AI will do to workers.

This almost diametrically opposite view,

https://www.yahoo.com/news/articles/ai-expert-says-not-ai-145521588.html

Also appears in Yahoo, from which you got the view of Professor Baobao Zhang, of Syracuse University.

I’ve worked on and off with robotics and earlier generations of AI such as fuzzy logic and expert systems.

It helps to think of Robots as being,

1, Encased wheels, cogs and levers.

That ordinary folks see as magic or nightmare, not realising that trains, boats and planes are just that. As are lifts and many, many other machines we take for granted, such as computer printers, photocopy machines and those snack and coffee/tea machines down in the work cafeteria. Even the “white goods” in our kitchens.

They all have,

2, The mechanics of 1 driven by motors and servos, that in turn are driven by power electronics.

Because it’s easy and these days very efficient (read up on “power trains” in EVs compared to ICE vehicles, it will surprise most).

But so far nothing is in any way more intelligent than a bunch of switches, potentiometers, and passive indicators. So you then need,

3, Low power interfacing to the power electronics driven by specialised microcontrollers.

This is where the AI starts, that is where servo control loops get augmented above simple feedback systems. It’s an area where “Fuzzy Logic” does a lot of interesting things that humans likewise do but totally without thought.

For instance you walk by continuously “tipping over” into an unbalanced state. But you do it without “falling down/over”. In part by feedback from nerves, muscles, optics and that labyrinthine system in each ear. But also by modeling inertia as a “feed-forward” system that predicts what you have to do, and by how much, quite a while before you do it.

I’ve spent a lot of time doing this as it makes systems faster, more responsive, and quite a bit more efficient, but importantly safer (which having nearly had my brains knocked out by an industrial arm robot that jumped programming is kind of important to me).
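The feedback-plus-feed-forward idea described above can be sketched very simply. The gains, plant model and numbers below are invented for illustration only; the point is that the feed-forward term acts on the commanded motion before any error appears, while the feedback term can only react to error it has already seen:

```python
# Toy servo loop: a PI feedback term reacts to error after the fact,
# while a feed-forward term anticipates the commanded motion, much like
# the inertia model used when walking. All values are illustrative.

def make_controller(kp, ki, kff, dt):
    integral = 0.0
    def step(target, target_rate, measured):
        nonlocal integral
        error = target - measured
        integral += error * dt
        feedback = kp * error + ki * integral  # corrects error already seen
        feedforward = kff * target_rate        # predicts effort before error appears
        return feedback + feedforward
    return step

def simulate(controller, targets, rates, dt):
    # Simple first-order plant: position rate proportional to drive signal.
    pos, history = 0.0, []
    for target, rate in zip(targets, rates):
        pos += controller(target, rate, pos) * dt
        history.append(pos)
    return history

dt = 0.01
targets = [i * dt for i in range(200)]   # ramp target moving at 1 unit/s
rates = [1.0] * 200

with_ff = simulate(make_controller(4.0, 0.5, 1.0, dt), targets, rates, dt)
without_ff = simulate(make_controller(4.0, 0.5, 0.0, dt), targets, rates, dt)
```

Run side by side, the feed-forward version tracks the ramp almost exactly, while the feedback-only version lags behind by roughly the error the PI loop must see before it acts, which is the responsiveness gain being described.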

Interesting to many is that such safety systems are more complex versions of the “intrinsically safe” and “fail safe” systems used on passenger and larger vehicles.

But at this point the microcontrollers of 3 do not actually do more than your autonomic systems do. That is, you can get a robot arm to move a cup around by command, but the microcontroller does all the work to keep not just the cup level, but to speed up and slow down so the drink does not get spilled. That is, the command to the microcontroller moves the centre of the cup; the microcontroller “knows about the cup dimensions” from other systems.
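The speeding up and slowing down without spilling is classically done with a bounded-acceleration (trapezoidal) velocity profile. A toy sketch, with the limits invented for illustration:

```python
# Toy trapezoidal velocity profile: accelerate, cruise, decelerate,
# keeping acceleration below a "no spill" limit the whole way.
# Distance, speed and acceleration limits are illustrative only.

def trapezoid_profile(distance, v_max, a_max, dt):
    velocities = []
    v, travelled = 0.0, 0.0
    while travelled < distance:
        stopping = v * v / (2 * a_max)          # distance needed to brake to zero
        if distance - travelled <= stopping:
            v = max(0.0, v - a_max * dt)        # decelerate phase
        elif v < v_max:
            v = min(v_max, v + a_max * dt)      # accelerate phase
        # (otherwise cruise at v_max)
        travelled += v * dt
        velocities.append(v)
        if v == 0.0 and travelled < distance:   # cover discretisation rounding
            travelled = distance
    return velocities

profile = trapezoid_profile(distance=1.0, v_max=0.5, a_max=2.0, dt=0.001)
```

The commanding system only ever says “move the cup one metre”; the profile, like the microcontroller in the comment, quietly bounds the acceleration so nothing sloshes.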

4, Warning systems that tell command systems when a command can not be done.

This is a mix of Fuzzy Logic and Expert Systems AI working together. But this is still in no way “intelligent”, it just does things autonomously until error correction is required. In a way it’s an automatic pilot that alerts when the environment changes so the higher level command system can make higher level choices.

Remember the pilot points the plane in the right direction, the navigator determines the actual flown course and direction the pilot takes. The navigator however is still not functioning as the executive who chooses and sets the flight plan.

This “executive function” is kind of what the Current AI LLM and sometimes ML systems carry out.

And current AI LLM systems actually are not that good at unbound executive functions. The best I’ve heard is “mediocre”.

Now this is the interesting part few talk about.

“Between 1/3rd and 1/2 of jobs are, or are a form of, ‘makework’ that serves little or no executive purpose.”

Back a half century ago we had a lot of jobs that we really rarely see these days. Such as stenographers, copy typists, dictation typists such as medical secretaries, desk clerks, computers and compucontrolers.

Their job functions are done by computer…

It’s these sorts of jobs, where the humans are acting as the workflow cogs, wheels and levers, that are going to get replaced.

Oh and quite a few “Professions” such as accountants, solicitors, administrators.

Basically those that “know the rules”, “apply the rules”, and do “rules based processes”.

It’s easy to see how the accountancy and law professions will collapse.

It’s why LLMs can do “protein folding”, “drug prototyping” and much of the non-artistic architecture. But they are nigh on useless for original reasoning, design, etc.

The funny thing is surgeons will get replaced faster than plumbers and the AI surgeons will be actually safer than the current human ones…

The thing to watch out for is “originality”; that is what will keep you “job safe”. It applies to “trade persons” currently simply because every job is sufficiently unique/original that current AI LLM and ML systems do not have “training data” to filch.

The fun thing is that some jobs mundane to humans will be beyond LLM and ML systems.

We talk about “logic and reasoning” but that is totally insufficient for what are to humans quite simple “feel it out” work…

We are “tuned to the environment adaptably”; LLMs and MLs are currently nothing even close.

But… who are going to be the “lucky workers”, and who the “80 hours of do nothing idlers”?

History suggests that it’s an issue that will be gone in less than a generation.

That is freed up labour will simply evolve into new types of work, or working practices.

A Victorian plumber, does a very different job to a modern plumber. The industry behind plumbing has got rid of lead work, and copper work will be gone fairly soon.

We used to use copper for telephones; we are rapidly stopping doing that as fibre optics and wireless replace copper in the ground.

These “in job changes” are easy to see when pointed out… The job remains but the technology and techniques change.

The only jobs that will go are the ones that are not really jobs but rules-based professions that in reality are just processes, because there are not really any new techniques to be learned that a rules-based system can not do better.

What will happen is jobs will develop more interesting and exacting processes, entirely new jobs will come into existence, and the dreaded “P Word” of productivity will get thrown around by certain types trying to remain relevant, where they are not even today.

Oh and as for managers and CxO’s and business executives, to misquote a phrase,

“They don’t realise it but they will be the first going up against the wall they think they are building for others.”

And they will have done it to themselves… How dumb is that?

The thing is we’ve seen them do it to themselves before with “Business Consultants” that are the true vampires at the Big 4 accountancy firms.

ResearcherZero September 11, 2025 1:34 AM

Chinese actors impersonated John Moolenaar in emails used to deploy APT41 malware. The purpose of the campaign was to ascertain the strategy the US was using during trade talks.

‘https://www.infosecurity-magazine.com/news/chinese-espionage-impersonates-us/

As well as targeting US trade talks, state-backed actors are also targeting defense.
https://www.bitdefender.com/en-us/blog/businessinsights/eggstreme-fileless-malware-cyberattack-apac

Salt Typhoon penetrated networks globally in 2019 and remained undetected. Telecommunications were not the only networks and organizations targeted. Reportedly they remained inside networks long enough, and with sufficient access, to collect information about the entire US population. The Chinese state-backed actors still remain inside networks.

https://www.independent.co.uk/news/world/americas/us-politics/chinese-cyberattacks-stolen-personal-information-b2820122.html

Clive Robinson September 11, 2025 1:50 AM

@ lurker,

With regards,

“I learned at an early age something that always puzzled me:”

Ahh… To put your mind at rest it’s a “clock face problem”.

Back when the phrase,

“The East wind doth blow,
and we shall have snow.”

Was thought up, “The East wind” meant the opposite of what it did later.

You can see it also with,

“When the wind is in the east,
‘Tis neither good for man nor beast;”

Ask yourself what does,

1, The East wind
2, When the wind is in the east

Actually mean?

That is,

1, Coming from the East
2, Blowing to the East

It actually meant both…

On land, where you were fixed, it meant the wind was coming from a place East of you.

So in London a wind from the East would be from Moscow (hence the book title), but the arrow of the wind vane would “point into the wind” towards Moscow, thus pointing to the East.

Thus you would “look into the wind” as it hit you in the face.

But if you were in a sail boat it was the other way around: the wind from the East moved you in a westerly direction.

You “went with the wind” and “the wind was at your back” not your direction of travel.

In the UK the predominant wind direction is from the South West, so it comes in off the warm Atlantic and heads up to the North East, towards the cold North Sea and the Continent of Europe.

Which is why Britain and the UK have a “maritime climate”, which is wet and cool/warm, not, as most of Europe has, a “continental climate”, which is dry and cold/hot.

Over time wind vanes and sail boats disappeared, and about a century and a half ago science went from “local weather” to “global weather”, so a convention had to be picked, as always for political reasons, for saying which way the wind was… And we’ve all been confused ever since…

The rule is: a wind is named for its direction of origin. So that wind from Moscow is from the east, or “easterly”… However, due to sailing, there are exceptions to the naming convention: onshore winds blow onto the shore from the water, and offshore winds blow off the shore to the water.

So yeah we got that sorted…

Clive Robinson September 11, 2025 7:10 PM

@ ALL,

It’s nearly the end of the week, so this Squid will get replaced.

However this security paper,

https://arxiv.org/abs/2508.19825

Should make interesting weekend reading. It’s titled and subtitled as,

“Every Keystroke You Make: A Tech-Law Measurement and Analysis of Event Listeners for Wiretapping”

I’ve frequently warned about “smart device on-screen keyboards” as a source of “typing cadence” that goes directly back to web sites. The Google Search Engine, with its,

“Over the Internet auto-complete / next word(s) suggestions.”

Being the first I found, well over a decade ago, to “fingerprint a user” on several levels.

That is cadence, spelling, word choice and higher semantics. Even seeing what sort of personality / knowledge / technical approach you have from how you enter a search. Oh, and over time telling how your state of wellness, basic health etc. changes. Thus it is highly invasive, especially for women.

The question that was never answered officially was if this breached existing legislation on the various aspects of surveillance.

My reading of existing “Wire Tap Legislation” in various jurisdictions very much suggested it was unlawful if not illegal. However other laws suggested that as it was part of a provided service AND the processing was done “past the end of the communications wire” it was not covered by “wiretap legislation”.

As readers here can probably guess it’s never been run through most jurisdictions legal process, so it remains as a general rule “untested”.

Being back then an EU citizen, I also looked at other privacy etc legislation that extended beyond the end of the communications wire. I found that fundamental rights were effectively being breached, but again it remained untested.

My new concern is this “think of the children” nonsense of “Online Safety”: it can easily be shown that it breaches all human rights legislation that covers technology and is actually significantly damaging to significant numbers of people’s mental health.

All that said, getting back to the paper, it takes a look by others and the pre-introduction indicates the direction the paper takes,

“In this paper, we focus on a particularly invasive tracking technique: the use of JavaScript event listeners by third-party trackers for real-time keystroke interception on websites. We use an instrumented web browser to crawl a sample of the top-million websites to investigate the use of event listeners that aligns with the criteria for wiretapping, according to U.S. wiretapping law at the federal level and in California. We find evidence that 38.52% websites installed third-party event listeners to intercept keystrokes, and that at least 3.18% websites transmitted intercepted information to a third-party server, which aligns with the criteria for wiretapping. We further find evidence that the intercepted information such as email addresses typed into form fields (even without form submission) are used for unsolicited email marketing. Beyond our work that maps the intersection between technical measurement and U.S. wiretapping law, additional future legal research is required to determine when the wiretapping observed in our paper passes the threshold for illegality.”
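The pattern the paper measures can be sketched in a few lines (a hedged illustration of the general technique, not code from the paper or from any named tracker): a third-party script registers a keydown listener, buffers each key with a timestamp, giving exactly the “typing cadence” discussed above, and periodically ships the batch to its own server, form submission or not.

```javascript
// Illustrative third-party keystroke interception. In a browser the
// listener would be attached with document.addEventListener("keydown", ...)
// and `send` would be something like navigator.sendBeacon to a tracker
// endpoint; here `send` is injected so the sketch is self-contained.
const buffer = [];

function onKeydown(event) {
  // Record the key and a timestamp; the gaps between timestamps are
  // the user's typing cadence.
  buffer.push({ key: event.key, t: Date.now() });
}

function flush(send) {
  // Ship and clear the buffer; returns false if there was nothing to send.
  if (buffer.length === 0) return false;
  send(JSON.stringify(buffer.splice(0, buffer.length)));
  return true;
}
```

Note that nothing here waits for a submit event, which is why email addresses typed into a form field leak even when the form is never submitted.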

Now that AI can, as I’ve warned repeatedly, be used to do mass surveillance way, way beyond just “hoover it all up from the wire”, to in effect “make a virtual time machine” by doing far more than basic wire tapping, people nearly everywhere should be pushing for AI and its use for surveillance to be made illegal for all, including local and central government entities, law enforcement and other guard labour and agencies, with absolutely no defence of “National Security” or “Know Your Customer”.

ResearcherZero September 12, 2025 12:05 AM

@Clive Robinson

As governments increasingly use artificial intelligence for analysis of keystroke interception and other surveillance of our sensitive personal identifiers and behavioral patterns, these discriminators can be used to detect our individual vulnerabilities.

Health problems, as you mentioned, and other weaknesses can be exploited to coerce or manipulate subtly, with or without our awareness, and if needed to inflict immense harm against us at a later date, once we have been rendered physically and reputationally defenseless.

The power of AI surveillance can be used to target ANY individual who might step outside the norms and rock the boat to challenge wrongdoing. And it can be wielded silently. A well-worn method has long existed to crush those who might blow the whistle. It can also accommodate those who might not speak up, or who do not know but whose proximity condemns them.

Personal data is already openly exploited en masse to target vulnerable groups, and such unscrupulous and malign targeting is readily accepted by many without concern for the harm done. Much more public sympathy exists for well-resourced individuals who purposely commit fraud or embezzlement, or who repeatedly engage in misconduct or abuse their authority.

Robodebt caused reputational, financial, physical and mental harm by imputation. 794,000 false and unlawful debts were raised against approximately 526,000 recipients.

Far less concern exists for those wrongly accused of being overpaid social security by a system like Robodebt. Although some of those individuals ultimately killed themselves as a result of being issued a debt, they were judged guilty before anything was proven, simply because of a referral from a badly implemented automated system that was designed to claw back money from those who could least afford it. More than half a million victims in total.

As a matter of course, all the files for corruption cases are accidentally shredded. This is typically carried out by someone inexperienced and too young to remember the incident, under the direction of a superior who cannot recall issuing that particular instruction.

‘https://theappeal.org/california-court-destroys-police-corruption-cases/

Microsoft promised it would stop supporting RC4 last year but continues to use it as the default standard to protect passwords despite its weaknesses being known for decades.

Sorry that all your private health data was compromised, stolen and then leaked or sold.

‘https://arstechnica.com/security/2025/09/senator-blasts-microsoft-for-making-default-windows-vulnerable-to-kerberoasting/

ResearcherZero September 12, 2025 8:20 AM

@Clive, Tony Hall, Bruce, ALL

Connectivity is magical and will bring success in…

Speaking of propagation, would you like a £330m contract with the NHS Peter?

‘https://www.theguardian.com/uk-news/2025/sep/08/boris-johnson-dominic-cummings-secret-meeting-palantir-peter-thiel

Largest ever capital flow into spyware funded by US investors accelerating its spread.
https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/mythical-beasts-diving-into-the-depths-of-the-global-spyware-market/

Clive Robinson September 12, 2025 10:46 PM

@ ResearcherZero, ALL,

With regards Peter Thiel and his idea to setup what would be the,

“Biggest Private Surveillance Company in the World”

That is, it is named after the “Seven Seeing-stones”, or Palantíri, from the Tolkien books including “The Lord of the Rings”(LoTR). The stones were in effect spying devices that could see and communicate anywhere in the present, recall anything from the past and see into the future. Peter’s idea was to build a massive spying database to rival those of the NSA and CIA and to use predictive technology to do the “see into the future” bit. Originally this was supposedly by statistical methods but, surprise, is now claimed to be specialised thus advanced AI…

I’ve said a few things over the years about Peter Thiel’s Palantir’s “apparent MO” and none of it has been good. Remember they were tangled up in some way with the Cambridge Analytica and Facebook activities / revelations. Cambridge Analytica had been set up by, and still had close ties to, the US GOP via the Mercers’ hedge fund, and funneling payoffs through Russia was the least of their highly questionable and illegal activities. The Mercers were alleged to have funded Boris’s Brexit plans, as well as to have had the Cambridge Analytica investigation / enquiry by the UK Met Police not just interfered with but killed off by Boris’s successor, who had for many years been the UK Home Office Minister and as such, it was said, had “fingers in all the pies”.

Unfortunately much of it has been kept hidden by Peter and Co, including a rather nasty “drug dealer like” business plan.

Thus it is hard to directly determine if this “£330m contract with the NHS” is part of, or a follow-on of, the C19-era deal later revealed, in which Boris Johnson handed over the entire “Medical Records” of UK citizens to Peter Thiel for £1 in return for some very basic “tracking” of UK citizens without their permission[1]. Recent events that have “happened to me” indicate that the records Palantir have are not just being actively updated, but widely handed out to people who should have no right of access to them.

This data trove on UK citizens was originally set up by Tony Blair, UK “Prime Minister”(PM), to replace the existing creaking and failing NHS records system based mainly on paper. The “big sell” was that it would all become electronic and automated to “follow the patient” wherever they went.

Part of it was a new encryption system from GCHQ’s public face, CESG, rumoured to be called “Rambutan” (a fruit that is prickly on the outside but soft and yielding inside)[2].

Thus the publicised idea was that if you lived in, say, “Lands End” at the south toe of England and had an accident at a “John O’Groats” campsite at the north tip of Scotland, then your entire “patient record”, not just your “Medical Records”, would “auto-magically” become available to any and all NHS “front line staff” (and, as we found out later, just about anyone, including banks and insurance companies).

It was detailed as an important part of something called “The NHS Spine” and advertised to the UK and others as the “World’s biggest IT project”. It got some fairly pointed criticism, not just from experienced ICT practitioners but from the trade press/journals and academics from the likes of Cambridge, with Prof Ross J. Anderson being a fairly public critic who told people how not to have their records added. Thus political blowback followed and “Citizen Opt-Out” was fairly quickly removed.

It was thus, unsurprisingly, an abject failure by just about any business or reasonable company performance measure. Worse, it was a complete “security nightmare / failure” and “the D-Word” was frequently heard.

But it got all those UK citizens records and intimate secrets into others hands especially Peter Theil’s and his Global Surveillance system…

Thus that £1 is now £330 million, and likely to be one of successive “no bid” contracts for hundreds of millions if not billions. So the “drug dealer” business plan is obviously paying off for Peter…

[1] The C-19 Lockdown “emergency” was used by Boris Johnson, UK Prime Minister, to “gain & use” the dictatorial powers he had in practice usurped in the crisis to give all that data, for supposedly £1, to Palantir in return for a “back end” to a tracking system contract “handed out by Boris to his mates”… That, surprise surprise, cost the country a fortune and never delivered a thing… except UK medical records to Palantir and Peter Thiel.

[2] I heard interesting information on the Rambutan system back in the 1990s, when Baltimore Technologies, who provided “telecom infrastructure” to the UK, were setting up offices in the –sinking– “Tolworth Towers” next to the major arterial road “the A3”. Once known as the “Portsmouth Road”, that was the main communication route from the English Admiralty in Whitehall to the Navy docks and administration in Portsmouth, and as such had been the site of the first “semaphore system” for Government “secret” communications.

The main idea behind Rambutan was that it was “the UK Clipper Chip” for a Capstone-like system developed by RACAL, a known supplier of crypto and radio systems, for high-level crypto for the UK Government.

Even today little is known about Rambutan, and it has always been assumed it was a “stream cipher” made to look like a “block cipher”. But, like the similar ETSI ciphers used in GSM and DMR systems for “consumer/commercial privacy”, which we’ve heard so much bad about in the past few years due to “built in backdoors”, there has been a hard clamp down on Rambutan information. By the logic of “from the same stable” and “birds of a feather”, it has been assumed it almost certainly has back doors built in, as “the best thing about a bad idea is ‘Reuse is King'”.

lurker September 13, 2025 1:03 AM

@Clive Robinson

Rambutan, prickly? Wikipedia, and my personal experience, describe them as fleshy pliable spines; most of the “spines” have the tip curled over. The Malay and Vietnamese names for the fruit translate into English as “hairy”. Whoever chose the name for the cipher system will likely never tell us why…

ResearcherZero September 13, 2025 3:32 AM

@Clive Robinson

I do not like the idea of Palantir having so much access to sensitive personal data as it gives such companies enormous power to wield an influence that could be comparable to nation-states. With that influence comes the ability to shift public perceptions.

AI generated false information almost doubled over the last year due to LLMs pulling from a polluted online information ecosystem. Many young US and UK adults trust information from social media and Influencers as much as traditional news outlets.

Although foreign information operations account for the majority of inauthentic content on social media, inauthentic domestic content is part of the problem driving polarization. Shifts in perception help to create divided factions in the offline world, which then continue to drive polarization with long-term and immediate consequences for social cohesion. Algorithms then feed back into this partisan divide and further amplify it.

Fears that bots might amplify conflict should be a warning to pause and consider whether content is authentic. Who ultimately hopes to benefit from shaping public opinion is often unclear.

‘https://finance.yahoo.com/news/concerns-grow-bot-networks-may-174142580.html

“These models are biased from the get-go, and it’s super easy to make them more biased.”
https://www.washington.edu/news/2025/08/06/biased-ai-chatbots-swayed-peoples-political-views/

lurker September 13, 2025 3:55 AM

@ResearcherZero

Shutting down social media should be a quickfire way to stop dis/misinformation. But when they tried that in Nepal, the quickfire burned down parliament…

ResearcherZero September 13, 2025 6:57 AM

@lurker

Perhaps the American government needs its own Shaman to fix the problem?

The Telegraph picked up on something that happened last month, and it’s not Mushi living in the eyes of US citizens that might be causing the impairment of vision.

(Mushi are magical creatures from the spirit world which normally people cannot see)

‘https://www.telegraph.co.uk/world-news/2025/09/12/donald-trump-vengeance-cia-blinding-usa-threat-putin-russia/

Tulsi Gabbard fired the NSA’s leading advanced technology development expert.
As well as ousting top experts, Gabbard blew the cover of an undercover CIA agent.
The loss of some of the most dedicated and talented officers could blind America.

The firings followed what Gabbard claims is “whistleblower evidence” of “conspiracies”.
https://www.emptywheel.net/2025/08/21/quantum-leaps-the-so-called-whistleblower-that-got-nsas-top-mathematician-fired/

Clive Robinson September 14, 2025 2:11 AM

@ ResearcherZero,

Your comment of,

“AI generated false information almost doubled over the last year due to LLMs pulling from a polluted online information ecosystem.”

Reminded me that I’d not put up links to some research that was carried out and published this year and turned into a rather interesting YouTube video, which I posted for our host @Bruce to “have fun” watching,

https://www.schneier.com/blog/archives/2025/09/signed-copies-of-rewiring-democracy.html/#comment-447840

It demonstrates one instance of a more general and significant problem: using AI curated / generated data from a parent (“adult”) AI as training data for a next-generation child AI model, to save massive costs… Part of which is the carry-forward of “hidden bias” from the parent AI’s state into the child AI’s model.

The example is where the teaching parent AI model is not itself biased, but the parent AI’s state gets “pre-biased” by user queries rather than by any data it was trained on.

The result is “apparently magically” the user input bias gets built into the next generation Child AI’s model…

However, the way this “bias” gets transferred from one AI system’s current state into the model of a second AI system is actually non-obvious. But as noted by the title,

“These Numbers Can Make AI Dangerous”

The thing I’m still looking into is the difference between the first, parent / teacher AI’s “state”, which is changeable, and the second, child AI’s “model”, which is in effect fixed…

Thus I could bias the state to something undesirable in the first, parent / teacher AI, which then passes “the bias” into the model of the second, child AI. BUT whilst I can change the state in the first AI back, or to something else, thus removing the bias from the first AI, the bias has become “set into” the model of the second AI and is thus effectively permanent…
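The state-versus-model asymmetry can be shown with a deliberately tiny toy (my own illustration, not the experiment in the video): a “teacher” whose mutable state has been biased generates training data, and a “student” fitted on that data bakes the bias into fixed parameters that a later teacher reset cannot undo.

```javascript
// Toy teacher/student distillation. The teacher's bias is mutable state;
// the student's fitted frequency is a fixed "model" parameter.
function teacherSample(bias, n, rand) {
  // Teacher emits token "A" with probability `bias`, otherwise "B".
  const out = [];
  for (let i = 0; i < n; i++) out.push(rand() < bias ? "A" : "B");
  return out;
}

function fitStudent(samples) {
  // The student "model" is just the empirical frequency of "A";
  // once fitted it does not change when the teacher's state does.
  return samples.filter(t => t === "A").length / samples.length;
}

// Small deterministic PRNG (mulberry32) so the sketch is reproducible.
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const rand = mulberry32(42);
// Teacher state biased (0.9) while generating the training data:
const student = fitStudent(teacherSample(0.9, 10000, rand));
// Resetting the teacher's bias back to 0.5 afterwards leaves `student`
// unchanged, still near 0.9: the bias is baked into the child's model.
```

Obviously a real distillation pipeline transfers far subtler statistics than a single token frequency, which is precisely why the carried-forward bias is so hard to spot.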

Thus can be used as part of a significant “Plausibly Deniable” attack.

In essence doing to down generation AI models the same as Faux News you refer to with,

“Many young US and UK adults trust information from social media and Influencers as much as traditional news outlets.”

That is building in “cognitive bias” near irreversibly into the next generation Child AI…

Which is in effect the same as we see with human children too young to defend their minds from being effectively “societally brain washed” or if you would prefer,

“Becoming the ‘Cult Followers’ of an Authoritarian led Nation.”

Something I’ve commented on a number of times in the past with “The King Game”, and what was historically the equivalent of a modern state’s “Judicial and Legislative Branches” in at best quasi-support of the Executive Branch of “King and his Court”.

Recent World News is increasingly showing just how dangerous this can be with humans… But now we know it can be done with Current AI LLM systems…

Wormwood September 15, 2025 11:11 AM

Rak :

Instagram should not be used ever and certainly never be linked to.

Their licence is to steal your rights in perpetuity not for a single glance.
