Friday Squid Blogging: Pilot Whales Eat a Lot of Squid

Short-finned pilot whales (Globicephala macrorhynchus) eat a lot of squid:

To figure out a short-finned pilot whale’s caloric intake, Gough says, the team had to combine data from a variety of sources, including movement data from short-lasting tags, daily feeding rates from satellite tags, body measurements collected via aerial drones, and sifting through the stomachs of unfortunate whales that ended up stranded on land.

Once the team pulled all this data together, they estimated that a typical whale will eat between 82 and 202 squid a day. To meet their energy needs, a whale will have to consume an average of 140 squid a day. Annually, that’s about 74,000 squid per whale. For all the whales in the area, that amounts to about 88,000 tons of squid eaten every year.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Posted on November 14, 2025 at 6:33 PM · 46 Comments

Comments

flat November 14, 2025 6:49 PM

“The US-based Anthropic said its coding tool, Claude Code, was “manipulated” by a Chinese state-sponsored group to attack 30 entities around the world in September, achieving a “handful of successful intrusions”.
(…)
The actor achieved what we believe is the first documented case of a cyber-attack largely executed without human intervention at scale,
(…)
It said Claude had made numerous mistakes in executing the attacks, at times making up facts about its targets, or claiming to have “discovered” information that was free to access”

‘https://www.theguardian.com/technology/2025/nov/14/ai-anthropic-chinese-state-sponsored-cyber-attack

ResearcherZero November 15, 2025 12:37 AM

Troubling details have emerged over confidential data shared with commercial entities by Lauren Smith. Oversight officials warned that this could open up accusations of fixing mortgage rates with rivals, as Fannie Mae and Freddie Mac are supposed to operate independently from one another to ensure there is no repeat of the 2008 financial crisis.

The top housing regulator Bill Pulte, who instructed Smith to share the data, fired the ethics and oversight team. The investigators had received complaints that staff had been directed to improperly access mortgage documents that Pulte then used against adversaries.

The top regulator appointing himself chairman of both Fannie Mae and Freddie Mac could be seen as a conflict of interest risk. Bill Pulte partnered Fannie Mae with Palantir, providing the company with access to confidential loan and consumer data. Pulte has also proposed selling off shares in Fannie Mae and Freddie Mac while keeping both entities under government control. Privatizing Fannie Mae and Freddie Mac would benefit wealthy investors.

‘https://abcnews.go.com/Business/wireStory/top-fannie-mae-officials-ousted-after-sounding-alarm-127506554

Pulte had already fired the Federal Housing Finance Agency’s inspector general Joe Allen.
https://www.reuters.com/world/us/watchdog-being-ousted-us-housing-regulator-involved-trump-crackdown-sources-say-2025-11-03/

The housing industry is worried public trust and market stability could be eroded.
https://www.washingtonpost.com/business/2025/11/10/bill-pulte-fannie-mae-firing-ethics/

Clive Robinson November 15, 2025 11:35 AM

@ ALL,

Untranslatable and AI

Google and others offer translation from one language to another, but they do it quite imperfectly.

One way to see this is to take English text, translate it into French, then the French into Chinese, and the Chinese back into English.

What you get is mostly nonsense for various reasons.

This was before the current AI LLM and ML Systems. Now it appears even worse…

Part of the reason is “alien concepts”: if the culture behind a language does not have a concept, then there won’t be a word for it, and there may not even be a phrase for it. Sometimes there may not be the words to explain the concept at all.

In short, language is a form of lossy compression that has something like a Hamming weighting. That is, the more used a concept is in a culture, the more likely it is to have a word. And the longer the concept has been in use in the culture’s history, generally the shorter the word will be.
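
One toy way to see that trade-off numerically: under an ideal prefix code, a symbol with probability p gets a code of roughly -log2(p) bits, so the more frequent a concept, the shorter its label. A minimal Python sketch (the concept frequencies are invented purely for illustration, not real corpus data):

    import math

    # Hypothetical relative frequencies of concepts in a culture's discourse.
    # The numbers are invented purely for illustration.
    concept_freq = {
        "water": 0.30,             # everyday concept
        "rain": 0.20,
        "tide": 0.05,
        "monsoon": 0.01,
        "hydraulic ram": 0.001,    # rare technical concept
    }

    total = sum(concept_freq.values())
    for concept, freq in sorted(concept_freq.items(), key=lambda kv: -kv[1]):
        p = freq / total
        bits = -math.log2(p)       # ideal prefix-code length for this concept
        print(f"{concept:15s} p={p:.4f}  ideal code length ~ {bits:.1f} bits")

    # Frequent concepts get short codes (short words); rare ones get long
    # codes (phrases, compounds, or borrowed terms).

Whether natural languages actually approach that bound is an empirical question, but Zipf-style frequency effects suggest they roughly do, which is what makes the “more used, shorter word” intuition concrete.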

Now the thing about “lossy compression” is that you,

“Lose something in translation.”

But the question then arises as to what you lose.

If there is not a word for a concept but a simple phrase expresses it, then you’ve not really lost information, just brevity. But that can be “one way”, in that the simple phrase may not translate back to the single word; it might go back to several words or simple phrases.

Hopefully at this point you get a feeling for the fact that some words are effectively not translatable, and why an engineering term loses all meaning,

“Hydraulic Ram :- Male Water Sheep”

And why some colloquialisms just don’t translate,

“Raining cats and dogs :- ??????”

Is there a way to use mathematics to see this?

Well someone’s had a stab at it,

https://aethermug.com/posts/linear-algebra-explains-why-some-words-are-effectively-untranslatable
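
A rough sketch of the linear-algebra view: represent words as vectors, map one language’s space into the other’s, and a “translatable” word is one with a close neighbour in the target space. The tiny hand-made vectors below are purely illustrative, not real embeddings:

    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # Invented 3-d "meaning" vectors for a source language...
    source = {
        "rain": (0.9, 0.1, 0.0),
        "hygge": (0.1, 0.2, 0.9),   # a culture-bound concept
    }
    # ...and for a target language (already mapped into the same space).
    target = {
        "pluie": (0.88, 0.12, 0.05),
        "confort": (0.2, 0.7, 0.3),
        "pluvieux": (0.8, 0.2, 0.1),
    }

    for word, vec in source.items():
        best, score = max(((t, cosine(vec, tv)) for t, tv in target.items()),
                          key=lambda kv: kv[1])
        print(f"{word!r}: nearest target word {best!r}, similarity {score:.2f}")

    # "rain" finds a near-synonym; "hygge" only finds a distant neighbour,
    # which is the geometric picture of an "untranslatable" word.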

But the point is all languages are imperfect at expressing concepts. Thus two cultures with entirely different views on which concepts are or are not important may not actually be able to communicate at all.

Who remembers the “Gold Records” designed by Carl Sagan’s group for the deep space Voyager missions? Well the chances are if an alien culture does receive it they will not be able to understand it from within their culture.

To do so they would have to live in the culture that made the disk. But that culture no longer exists… When this was tested with people too young to have known about the “Gold Records”, it was found that it was almost “entirely lost” to them…

Clive Robinson November 15, 2025 2:16 PM

@ ALL,

What harm is the myth of AGI

More and more people, especially engineers, are realising that AGI is a fantasy that is going to fail in oh so many ways.

But whilst “failure” is generally local to an organisation, “harms” on the other hand are much more widely spread, national or even global in coverage.

The author of,

https://www.tomwphillips.co.uk/2025/11/agi-fantasy-is-a-blocker-to-actual-engineering/

Notes,

“As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

LLMs-as-AGI fail on all three fronts. The computational profligacy of LLMs-as-AGI is dissatisfying, and the exploitation of data workers and the environment unacceptable. Instead, if we drop the AGI fantasy, we can evaluate LLMs and other generative models as solutions for specific problems, rather than all problems, with proper cost benefit analysis. For example, by using smaller purpose-built generative models, or even discriminative (non-generative) models. In other words, make trade-offs and actually do engineering.”

And makes the same points experience has taught me (which I’ve highlighted).

Realistically LLMs, especially those derived from uncurated learning data effectively scraped off the bottom of many people’s shoes, are not going to give us AGI just by playing around with language tokens.

The author indicates there is a,

“Discuss on Hacker News”,

https://news.ycombinator.com/item?id=45926469

Folks might want to read it.

Steve November 15, 2025 4:03 PM

@flat: The US-based Anthropic said its coding tool, Claude Code, was “manipulated”. . .

David Gerard at Pivot to AI takes a different view:

Every month or so, Anthropic puts out a press release about how we should all be very frightened!! of AI. This month’s scare story is a Chinese hacker scare!
[…]
Can you guess what Anthropic’s advice is? I bet you can!

We advise security teams to experiment with applying AI for defense in areas like Security Operations Center automation, threat detection, vulnerability assessment, and incident response.

The cure for AI is. . . more AI!

https://pivot-to-ai.com/2025/11/14/anthropic-chinese-ai-hackers-are-after-you-security-researchers-call-bs/

One is probably advised to take anything coming from the AI bros with a tractor-trailer load of sodium chloride.

not important November 15, 2025 6:37 PM

@ALL (sorry long but exceptionally important)
https://www.yahoo.com/news/articles/14-ways-social-media-grooms-104546007.html

=Behind those catchy hashtags and viral memes lies an ecosystem subtly conditioning us to accept information without a second thought. It’s not about being a skeptic but rather about understanding how the platforms we love shape our perception of reality.

  1. Algorithms Are Designed To Reinforce “Beliefs”

Social media is designed to show you what you already like, reinforcing your existing beliefs. Algorithms analyze your clicks and likes, serving content that aligns with your views.

This lulls you into a false sense of security, thinking everyone thinks the same as you.

As a result, you might stop questioning the truth because it feels like the whole world agrees with you.

These echo chambers create a comfortable bubble that shelters you from differing opinions. If all you ever see are posts that match your worldview, it’s easy to mistake that for universal truth. This lack of dissenting views can dull your critical thinking skills. Challenging yourself to step outside the bubble can help you reclaim that curiosity and skepticism.

  2. Viral Content Is Designed To Overwhelm And Trigger

When you see something shared thousands of times, you assume it must be true. This “bandwagon effect” can lead to widespread acceptance of misinformation. The speed at which content spreads leaves little room for fact-checking. You might end up accepting claims without question because everyone else seems to believe them.

Your brain naturally gravitates toward what’s popular, assuming it’s reliable. This herd mentality can make it challenging to pause and question the validity of what you see. Taking a moment to research or verify before sharing can break the cycle.

  3. Influencers Shape Our Views And Opinions

Influencers wield significant power over our perceptions, often seen as trusted authorities. Their curated lives and opinions can shape what you believe and value. 49% of people say they rely on influencer recommendations to guide their purchases. This trust can extend to opinions about news and world events. When an influencer shares their take, it can be tempting to accept it without questioning.

Relying on influencers for information can sideline your critical thinking. Their polished narratives might not always be grounded in fact. But because they feel relatable and trustworthy, you might lower your guard.

The glamor of their lives can cloud your judgment, leading you to accept their truth as your own.

!!!!4. Emotional Content Bypasses Logic

Social media thrives on emotional content because it grabs your attention. Posts that make you laugh, cry, or get angry are more likely to be shared. Emotional engagement often bypasses logical thinking, pushing you to react rather than reflect. If a post tugs at your heartstrings, you might spread it without questioning its accuracy. This emotional hijacking can make it easy to confuse feelings with facts.

Your brain is wired to respond to emotional stimuli, which social media exploits. This can create a cycle where you prioritize emotional resonance over factual accuracy. When you see a post that elicits a strong reaction, it’s worth taking a pause. Ask yourself if the emotional impact is clouding your judgment. Taking a step back can help you separate fact from emotional manipulation.

  5. Headlines Tell Half The Story

Many people only read the headline before sharing or forming an opinion.
60% of people admitted to sharing stories based only on headlines. Headlines are crafted to be catchy, not necessarily complete. This can result in widespread misunderstandings or oversimplified narratives.

Relying solely on headlines can lead to a skewed understanding of the truth. The nuance and detail that provide context are often buried in the article. Skipping the full story might mean missing out on critical information. Headlines are a starting point, not the full picture.

!!!!6. Confirmation Bias Clouds Judgment

Social media can exacerbate your natural tendency towards confirmation bias. You’re more likely to engage with content that confirms what you already believe. This selective exposure can lead you to ignore evidence that contradicts your views. Over time, this bias can make it difficult to remain open-minded. You might dismiss valid arguments simply because they don’t align with your beliefs.

Confirmation bias is like a filter that colors everything you see. It’s comforting to have your beliefs validated, but it can also blind you to new information. Being aware of this bias can help you approach content with a more critical eye.

  7. Illusions Of Consensus Are Misleading

Social media can create the illusion that everyone agrees on a particular issue. When you see a post with thousands of likes and shares, it feels like a consensus. But this perceived agreement can be misleading.

!!!The appearance of consensus can make people less likely to voice dissenting opinions. This can create a cycle where minority views are marginalized.

You might feel pressure to conform to popular opinion, even if you disagree. This can lead to the suppression of diverse perspectives and voices. A high number of likes doesn’t equal universal truth.

  8. Instant Gratification Discourages Deep Thinking

Social media is built on the principle of instant gratification, making it addictive. The platform’s design encourages quick reactions over thoughtful responses. Each like and share provides a dopamine hit, rewarding surface-level engagement. This environment discourages deep thinking, as it pushes you to consume content rapidly. The more you indulge, the less likely you are to pause and question.

When you prioritize speed, you often sacrifice depth and critical analysis.

Embracing a mindset of curiosity can reignite your willingness to question the truth.

  9. FOMO Drives Participation And Reaction

The fear of missing out (FOMO) can drive you to participate in trends without questioning them. When everyone else is sharing and commenting, you might feel pressured to join in. This can lead to knee-jerk reactions instead of thoughtful responses.

The desire to belong can override your critical thinking skills. Participating without questioning can contribute to the spread of misinformation.

Remember, it’s okay to miss out on the latest trend if it means staying true to yourself.

  10. Visual Content Skips Analysis

Visual content like images and videos can convey powerful messages quickly. But the speed at which they communicate can bypass your analytical thinking. When a picture paints a thousand words, you might accept it at face value. This can lead to assumptions without questioning the underlying message. The power of visuals can make it challenging to separate fiction from reality.

Visual content often lacks the context needed for a full understanding. When you consume information in this format, it’s easy to jump to conclusions.

Remember, what you see isn’t always the whole story.

!!! 11. Algorithms Dictate And Distort Reality

Algorithms dictate what content you see, shaping your perception of reality. These invisible forces prioritize engagement over accuracy. What’s popular is pushed to the forefront, while less engaging content is buried. This can create a skewed view of what’s important or true.

You might assume that what’s most visible is what matters most.

It’s important to remember that algorithms are designed to keep you engaged, not informed. Seeking out diverse sources can help you break free from algorithmic constraints. Curating your own content can lead to a more balanced understanding of the truth.

  12. Social Proof Validates Content

Social proof can validate content, making it seem more credible than it is. If something has a lot of likes or shares, you might assume it’s true. This psychological phenomenon can lead to the acceptance of misinformation.

Social proof can create a feedback loop of validation and sharing. When you see others endorsing something, it reinforces your own beliefs. This can make it difficult to question or challenge the content. Being mindful of this effect can help you stay critical.

Just because something is popular doesn’t mean it’s accurate or trustworthy.

  13. Clickbait Lures Us In

Clickbait titles are designed to grab your attention and entice you to read more. They often use sensational language, promising shocking truths or revelations. But once you click, the reality often falls short of the hype. This click-driven culture can lead you to prioritize catchy headlines over substantive information. You might find yourself lured into sharing content without verifying its claims.

Clickbait feeds into the cycle of instant gratification and surface-level engagement. When you’re hooked by a flashy title, critical thinking can take a back seat.

Remember, not everything that glitters is gold.

  14. Lines Between News, Facts, And Opinion Are Non-Existent

!!!Social media blurs the line between news and opinion, making it hard to distinguish fact from perspective.

Many platforms don’t differentiate between the two, presenting them side by side. This can lead to confusion, as opinion pieces might be mistaken for hard news.

You might accept subjective viewpoints as objective truth. The blend of news and opinion can make questioning more complex.

It’s crucial to be discerning about the sources of information. Understanding the difference between news reporting and opinion can help you navigate content. Opinion pieces can offer valuable insights but should not replace factual reporting.=

keep the thing keep the it keep the creature they don't mean shit November 15, 2025 10:09 PM

Lawmakers Want to Ban VPNs—And They Have No Idea What They’re Doing

https://www.eff.org/deeplinks/2025/11/lawmakers-want-ban-vpns-and-they-have-no-idea-what-theyre-doing

Ring’s new feature turns your doorbell into a biometric spy

https://boingboing.net/2025/11/14/rings-new-feature-turns-your-doorbell-into-a-biometric-spy.html

Big Tech Wants Direct Access to Our Brains

https://www.nytimes.com/2025/11/14/magazine/neurotech-neuralink-rights-regulations.html

“The Mind Has No Firewall” – Army article on psychotronic weapons

https://www.democraticfundamentalism.org/2005/psychotronics/government/1998mindhasnofirewallcomplete.htm

Winter November 16, 2025 8:00 AM

@keep the thing keep …

And They Have No Idea What They’re Doing

I assume they do. They are building a fascist autocracy.

The children don’t need protection against information about the life of adults, they need food, medical care, and good housing. But they are denied the information, as well as the food, medical care, and housing.

lurker November 16, 2025 12:13 PM

@not important

“Hot or Not?” on the login page of the original Facemash told me Mr Zuckerberg was a frat-boy appealing to the baser instincts of his fellow frat-boys. Nothing subsequent has changed my mind. What is dismaying is that society has permitted him to build a billion dollar industry on this base appeal, and that copy-cats proliferate.

Clive Robinson November 16, 2025 2:42 PM

@ ALL,

New ways to bend/break AI Guardrails

Fresh off the “will it never stop” line is another way to defeat AI guardrails,

https://www.theregister.com/2025/11/14/ai_guardrails_prompt_injections_echogram_tokens/

The question that should now be asked by everyone as to the efficacy of guardrails is,

“Will Guardrails ever work?”

To which the answer is very definitely “NO”.

Or slightly longer,

“There will always be Black Swans swimming, or Unknown Unknowns.”

Mind you, on the comments page there are a couple of points expressed in interesting ways,

1, On why scaling and similar “throw more on” approaches do not work,

“Collect a room full of idiots, adding another imbecile isn’t going [to] give you a chamber with an Einsteinian brain capacity.”

2, On why it will always go “down the drain / rabbit hole”,

“‘AI’ is like a self-modifying program, it may have been written with NO ‘Bad intentions’ BUT it changes itself on the basis of the information it ‘learns’ from and the queries it answers.

It is quite capable of generating its own ‘Bad Intentions’ in a way that cannot be foretold.”

It would appear people are learning. But one comment harks back to times long ago,

3, On, “If the answer is Microsoft you are asking the wrong question!”,

“Pointless, bloated, unreliable, wasteful, insecure, badly written – no wonder Microsoft’s all over this.”

I don't come from no black lagoon I'm from past the stars and BEYOND THE MOON! November 16, 2025 11:56 PM

Google to flag Android apps with excessive battery use on the Play Store

https://www.bleepingcomputer.com/news/security/google-to-flag-android-apps-with-excessive-battery-use-on-the-play-store/

Government ‘withholding data that may link Covid jab to excess deaths’

https://www.telegraph.co.uk/politics/2025/11/15/government-withholding-data-covid-jab-link-excess-deaths/

Icelandic is in danger of dying out because of AI and English-language media, says former PM

https://www.theguardian.com/world/2025/nov/15/icelandic-is-in-danger-of-dying-out-because-of-ai-and-english-language-media-says-former-pm

Clive Robinson November 17, 2025 1:07 AM

@ ALL,

People calling bull on Anthropic claims again.

As some will know, Anthropic uses Current AI LLM and ML Systems at a considerable loss to supply services to effectively “unknown parties”.

One such party, Anthropic has claimed, is “GTG-1002”, allegedly a “State Sponsored Threat Group” aligned with Chinese Government State Agencies.

Anthropic further claim that the GTG-1002 actor used Anthropic’s “Claude Code AI Model” to carry out a mostly automated (80-90%) cyber espionage operation.

Anthropic further claims that this Sept 2025 series of events represents the first publicly documented case of large-scale autonomous intrusion activity conducted by an AI model, targeting at least 30 separate entities in the classes of large tech firms, financial institutions, industrial and chemical manufacturers, and of course you can not forget government agencies.

But all without giving any real evidence… So some are calling “bull” on Anthropic’s claims with some saying it’s actually advertising / self promotion yet again.

Security researcher Kevin Beaumont, known by some as “GossiTheDog”, indicated on

https://cyberplace.social/@GossiTheDog/115547042229253967

“I agree with Jeremy Kirk’s assessment of the Anthropic’s GenAI report. It’s odd. Their prior one was, too.

The operational impact should likely be zero existing detections will work for open source tooling, most likely. The complete lack of IoCs again strongly suggests they [Anthropic] don’t want to be called out over that”

Which was picked up by Bill Toulas over at Bleeping Computer,

https://www.bleepingcomputer.com/news/security/anthropic-claims-of-claude-ai-automated-cyberattacks-met-with-doubt/

Where he gives further claims made against Anthropic, and an overview of the details from Anthropic’s questioned report.

hulk smash November 17, 2025 6:26 AM

Linux ELF Malware Analysis 101

https://github.com/intezer/ELF-Malware-Analysis-101

This repository contains relevant samples and data related to the ELF Malware Analysis 101 articles.

Part 1 – Linux Threats No Longer an Afterthought

Part 2 – Initial Analysis

Part 3 – Advanced Analysis

In computing, the Executable and Linkable Format (ELF, formerly named Extensible Linking Format) is a common standard file format for executable files, object code, shared libraries, device drivers, and core dumps. First published in the specification for the application binary interface (ABI) of the Unix operating system version named System V Release 4 (SVR4), and later in the Tool Interface Standard, it was quickly accepted among different vendors of Unix systems. In 1999, it was chosen as the standard binary file format for Unix and Unix-like systems on x86 processors by the 86open project.
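
For anyone who wants to poke at ELF files by hand before reaching for the tooling in the repo, the header is easy to inspect. A minimal sketch using only the Python standard library (the path /bin/ls is just an example ELF binary on most Linux systems):

    import struct

    def describe_elf_header(path):
        # Read just enough of the file to classify it.
        with open(path, "rb") as f:
            ident = f.read(16)                     # e_ident: magic, class, data encoding...
            if ident[:4] != b"\x7fELF":
                raise ValueError(f"{path} is not an ELF file")
            bits = {1: "32-bit", 2: "64-bit"}.get(ident[4], "unknown class")
            endian = {1: "little-endian", 2: "big-endian"}.get(ident[5], "unknown")
            fmt = "<HH" if ident[5] == 1 else ">HH"
            e_type, e_machine = struct.unpack(fmt, f.read(4))   # e_type, e_machine fields
            types = {1: "relocatable", 2: "executable", 3: "shared object", 4: "core dump"}
            print(f"{path}: {bits}, {endian}, "
                  f"type={types.get(e_type, e_type)}, machine=0x{e_machine:x}")

    if __name__ == "__main__":
        describe_elf_header("/bin/ls")             # example path; any ELF binary will do

The linked articles go much further (sections, symbols, dynamic linking), but checking the magic, class, and machine fields is usually the first triage step on an unknown Linux sample.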

Clive Robinson November 17, 2025 7:04 AM

@ ALL,

More on Anthropic Report is bull

It would appear that quite a few are jumping on Anthropic, and for what they see as good reason.

One that is more in-depth than most others is,

https://djnn.sh/posts/anthropic-s-paper-smells-like-bullshit/

Concludes,

“At the end of the day, this shit is a pathetic excuse of a report and should not be taken as anything else than a shameless attempt at selling more of their product. This is shameful and extremely unprofessional, at best. This disregard for basics ethics in order to sell just a little bit more make me want to never use their product, ever.”

Yup, the use of “Current AI for Everything” is not working and with Current AI LLM and ML Systems it actually can not.

Yes the underlying technology of LLMs and the ML systems that feed them will remain, but like all AI systems before them it will be for niche applications with clear rules and carefully collated and sanitised input data.

“AGI it ain’t, and never will be”

For reasons that are more human than most people realise.

Clive Robinson November 18, 2025 7:10 AM

@ ALL,

More on why LLMs can’t calculate

I’ve indicated before that Current AI LLM and ML Systems can not do math, but get 100% in some tests[1].

To see more evidence of this,

https://www.theregister.com/2025/11/17/ai_bad_math_orca/

AI is actually bad at math, ORCA shows

ORCA benchmark trips up ChatGPT-5, Gemini 2.5 Flash, Claude Sonnet 4.5, Grok 4, and DeepSeek V3.2

The article makes the same points,

“[T]he authors say, many of the existing benchmark data sets have been incorporated into model training data, a situation similar to students being given the answers prior to an exam. Thus, they contend, ORCA is needed to evaluate actual computational reasoning as opposed to pattern memorization.

With an observation on this toward the article conclusion of,

“And yet, these scores may represent nothing more than a snapshot in time, as these models often get adjusted or revised.”

Which is probably why the people behind the tests produced some nice new shiny questions,

“… devised a math benchmark called ORCA (Omni Research on Calculation in AI), which poses a series of [new] math-oriented natural language questions in a wide variety of technical and scientific fields. Then they put five leading LLMs to the test.

ChatGPT-5, Gemini 2.5 Flash, Claude Sonnet 4.5, Grok 4, and DeepSeek V3.2 all scored a failing grade of 63 percent or less”

But read a bit further and you find,

“Claude Sonnet 4.5 had the lowest scores overall – it failed to score better than 65 percent on any of the question categories. And DeepSeek V3.2 was the most uneven, with strong Math & Conversions performance (74.1 percent) but dismal Biology & Chemistry (10.5 percent) and Physics (31.3 percent) scores.”

A score of only 10.5% in one whole category of questions… Well, let’s just say DeepSeek might get called “DunceCap” in future,

https://allthatsinteresting.com/dunce-cap

Appears apropos (especially as Scotus is mentioned)…

[1] Put overly simply: because both the questions and solutions have been put in the training data, it’s not “reasoning or intelligence” but in effect pattern matching of tokenized user input against stored tokenized training data.

Yes there have been improvements, but they are in effect like guide rails, additions that do not fall within Current AI LLM and ML Systems. Which is why they are very patchy in performance at best. It’s something we will increasingly see more of, until LLMs and ML Systems effectively sink from sight into niche uses.

Clive Robinson November 18, 2025 7:34 AM

@ ALL,

Microsoft gets whacked in Oz

https://www.bleepingcomputer.com/news/microsoft/microsoft-aisuru-botnet-used-500-000-ips-in-15-tbps-azure-ddos-attack/

Microsoft: Azure hit by 15 Tbps DDoS attack using 500,000 IP addresses

Microsoft said today that the Aisuru botnet hit its Azure network with a 15.72 terabits per second (Tbps) DDoS attack, launched from over 500,000 IP addresses.

The attack used extremely high-rate UDP floods that targeted a specific public IP address in Australia, reaching nearly 3.64 billion packets per second (bpps).
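
Just to put those quoted figures in perspective, some quick back-of-the-envelope arithmetic using only the numbers above (a sketch, nothing more):

    # Back-of-the-envelope numbers from the quoted report.
    bits_per_second = 15.72e12        # 15.72 Tbps
    packets_per_second = 3.64e9       # 3.64 billion packets/s
    source_ips = 500_000

    avg_packet_bytes = bits_per_second / packets_per_second / 8
    pps_per_source = packets_per_second / source_ips
    mbps_per_source = bits_per_second / source_ips / 1e6

    print(f"average packet size ~ {avg_packet_bytes:.0f} bytes")      # ~ 540 bytes
    print(f"per source IP ~ {pps_per_source:.0f} packets/s")          # ~ 7,280 pps
    print(f"per source IP ~ {mbps_per_source:.1f} Mbit/s upstream")   # ~ 31.4 Mbit/s

So each source address only needs a few thousand packets and a few tens of megabits per second of upstream, well within reach of a single compromised consumer device on residential broadband.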

Apparently the attack originated from the Aisuru botnet, and those half million IP addresses are said by Microsoft to belong to “Internet of Things” devices,

‘Aisuru is a Turbo Mirai-class IoT botnet that frequently causes record-breaking DDoS attacks by exploiting compromised home routers and cameras, mainly in residential ISPs in the United States and other countries,” said Azure Security senior product marketing manager Sean Whalen.’

So no, not a new style of attack, but one that is growing on a regular basis.

I could make the “fly by night Chinese Developer” argument others will no doubt make, but that would be unfair, because it’s not just Chinese developers; it’s one heck of a lot of developers worldwide that are designing high-bandwidth insecure systems as “normal procedure” for “Marketing and Management” reasons.

Clive Robinson November 18, 2025 7:46 AM

@ ALL,

Cloudflare goes down again this AM

In fresh news, it appears Cloudflare is suffering more internal problems and is not supplying reliable service, or any service at all, in some places.

Why has not yet been said, but their 404 message says something to the effect of OMG, we are in the hands of engineers…

https://www.bleepingcomputer.com/news/technology/cloudflare-hit-by-outage-affecting-global-network-services/

Cloudflare hit by outage affecting global network services

Cloudflare is investigating an outage affecting its global network services, with users encountering “internal server error” messages when attempting to access affected websites and online platforms.

I guess more details will emerge in the near future.

Winter November 18, 2025 1:41 PM

@Clive

I’ve indicated before that Current AI LLM and ML Systems, can not do math, but get 100% in some tests[1].

I am still puzzled why people think they could do “math”.

What people call AI are in fact LLMs, Large Language Models.

Mathematics, or actually calculating, is a language in an abstract sense. But it is in no way comparable to a human spoken language. Human languages don’t “count”, nor do they add, subtract, multiply, or divide numbers. Children, and humans in general, have to learn that with explicit education and a lot of effort.

LLMs will learn to do math if you couple something like Wolfram Alpha to them to do the math work.

LLMs do language, not facts, not math, not reasoning, not logic.[1]

You can build computers to do such things and call them AI, and they can have LLMs to interface with humans. But the LLMs won’t do the math.

[1] You can build LLMs on something that is not human language, e.g., genetic code, protein amino acid sequences, etc. Such LLMs can do brilliant things in genetics and protein folding. But these don’t work well with human language.
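
A minimal sketch of that division of labour, with sympy standing in for the Wolfram Alpha style maths engine and a hard-coded extract_expression() standing in for the language-model side (both are illustrative assumptions, not any real product’s API):

    from sympy import sympify

    def extract_expression(question: str) -> str:
        # Placeholder for the LLM's job: turn natural language into a formal
        # expression. Here it is just a hard-coded toy mapping.
        lookup = {
            "what is 12.5% of 384?": "0.125 * 384",
            "what is the square root of 2 to 10 digits?": "sqrt(2)",
        }
        return lookup[question.lower()]

    def answer(question: str) -> str:
        expr = extract_expression(question)    # language work (the LLM side)
        value = sympify(expr).evalf(10)        # actual calculation (the math engine side)
        return f"{question} -> {expr} = {value}"

    print(answer("What is 12.5% of 384?"))
    print(answer("What is the square root of 2 to 10 digits?"))

The point being that the number comes out of the deterministic engine, not out of next-token prediction; the language model’s only job is translation.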

Clive Robinson November 18, 2025 5:51 PM

@ Winter,

These might amuse,

Firstly,

https://www.bbc.co.uk/news/articles/c8drzv37z4jo

Don’t blindly trust what AI tells you, says Google’s Sundar Pichai

People should not “blindly trust” everything AI tools tell them, the boss of Google’s parent company Alphabet has told the BBC.

I’ve kind of assumed everybody knew that over a year ago, but I guess I could have been wrong 😉

And secondly,

https://www.bbc.co.uk/news/articles/cwy7vrd8k4eo

Google boss says trillion-dollar AI investment boom has ‘elements of irrationality’

Every company would be affected if the AI bubble were to burst, the head of Google’s parent firm Alphabet has told the BBC.

The word “irrationality” is not the one I’d use, “insanity” might be nearer the mark.

The AI Hype Bubble is almost certainly a “black tulip” market; the only difference is tulip bulbs can be cooked and eaten,

https://www.atlasobscura.com/articles/are-tulips-edible

And even old-style share certificates could have had some use as toilet paper. However, today the bunch of bits you will have in a computer will probably not be worth the cost of electricity to get them out. ={

News elsewhere says that Oracle has not just “taken a bath” but is now “underwater” over its dealings with OpenAI…

It has lost $315bn* in market value since Oct 10th, when it announced its very strange arrangement with OpenAI,

https://www.ft.com/content/064bbca0-1cb2-45ab-85f4-25fdfc318d89

Oracle are not the first, and I doubt they will be the last, to get majorly hit by,

“The Curse of OpenAI, due to Altman lies.”

not important November 18, 2025 7:00 PM

@lurker – thanks for your point. When anybody tries to move you from logic to the emotional field, that is the very first sign you are being manipulated. Good to know those tricks upfront. But a more tricky one, used against more intelligent people, is fallacies. A combination of both is the tool of propaganda.

@all
EU plans to ease GDPR laws and AI constraints in major shift
https://www.dw.com/en/eu-plans-to-ease-gdpr-laws-and-ai-constraints-in-major-shift/a-74792773

=A leaked European Commission document, originally published by German advocacy site Netzpolitik.org, shows that the bloc is pushing for substantial changes to its landmark General Data Protection Regulation (GDPR) laws, considered by many to be the global standard, among a host of wider changes.

The motivation is, ostensibly at least, to cut red tape for European businesses struggling to compete on the global stage by simplifying a number of data protection rules. But privacy campaigners argue that profit is being prioritized over citizens’ privacy and protection, while other observers feel the influence of Donald Trump’s government and the US tech giants is a significant factor.

According to the leaked document, the definition of personal data will be narrowed, allowing companies to process such data to train AI models “for purposes of a legitimate interest”.

The now-familiar pop-ups asking a user whether they accept cookies will disappear if the proposal becomes law, with more companies able to harvest user data without consent, forcing the user to ask to remove their data after the fact. Companies have complained that the application of laws around cookies has created higher compliance costs, particularly given that it can be enforced by both the EU and national agencies.

Article 9 of the law, which deals with more personal data, would also change. While individuals’ direct answers to questions on subjects like sexuality, religion or health will still be protected, the scope for defining sensitive data will narrow.

In practice, this would mean that data gleaned from non-direct questions (so browsing habits, for example) would not have the same protections as it previously did. However, the document does say that: “The enhanced protection of genetic data and biometric data should remain untouched because of their unique and specific characteristics.”

In AI terms, the other major change is that the EU will advocate a further one-year pause in the implementation of parts of its AI law, meaning they will now come into effect in 2027 rather than 2026. This is a delay that has been pushed by many big businesses, including Lufthansa in Germany, so they can quickly implement changes before tighter regulations. The airline announced earlier this year that they plan to replace about 4,000 jobs with AI.

“These are the protections that keep everyone’s data safe, governments accountable, protect people from having artificial intelligence (AI) systems decide their life opportunities, and ultimately keep our societies free from unchecked surveillance. Unless the European Commission changes course, this would be the biggest rollback of digital fundamental rights in EU history.”=

not important November 18, 2025 7:03 PM

Don’t blindly trust what AI tells you, says Google’s Sundar Pichai
https://www.bbc.com/news/articles/c8drzv37z4jo

=In an exclusive interview, chief executive Sundar Pichai said that AI models are “prone to errors” and urged people to use them alongside other tools.

Mr Pichai said it highlighted the importance of having a rich information ecosystem, rather than solely relying on AI technology.

“This is why people also use Google search, and we have other products that are more grounded in providing accurate information.”

While AI tools were helpful “if you want to creatively write something”, Mr Pichai said people “have to learn to use these tools for what they’re good at, and not blindly trust everything they say”.

The tendency for generative AI products, such as chatbots, to relay misleading or false information, is a cause of concern among experts.

“We know these systems make up answers, and they make up answers to please us – and that’s a problem,” Gina Neff, professor of responsible AI at Queen Mary University of London, told BBC Radio 4’s Today programme.

In his interview with the BBC, Mr Pichai said there was some tension between how fast technology was being developed and how mitigations are built in to prevent potential harmful effects.

The tech giant has also increased its investment in AI security in proportion with its investment in AI, Mr Pichai added.

“If there was only one company which was building AI technology and everyone else had to use it, I would be concerned about that too, but we are so far from that scenario right now,” he said.=

smores November 18, 2025 7:29 PM

Microsoft warns its new “AI” agents in Windows can install malware

https://www.osnews.com/story/143868/microsoft-warns-its-new-ai-agents-in-windows-can-install-malware/

Debian Libre Live Images Released for Software Freedom Lovers

https://9to5linux.com/debian-libre-live-images-released-for-software-freedom-lovers

Google Chrome bug exploited as an 0-day – patch now or risk full system compromise

https://www.theregister.com/2025/11/18/google_chrome_seventh_0_day/

Clive Robinson November 19, 2025 2:12 AM

@ smores, ALL,

With regards to The Register article on the Google Chrome “Zero Days”

The important thing to note is, it indicates the flaws are,

“in the V8 JavaScript and WebAssembly engine”

I’ve repeatedly warned about both JavaScript and WebAssembly in the past, going back over the past decade or two, if not longer for JavaScript.

I first made myself unpopular because I told people to “disable JavaScript” (along with other things that “run untrusted programmes in your computer”).

Apparently, according to some web developers, I “did not know what I was talking about”… Well, as time has shown, it has become necessary to either turn it off completely or put in preventative add-ons like NoScript, uBlock Origin, et al (which Google fights).

I’m of the view that if a web site won’t work without JavaScript, then the chances are you will find another web site that offers similar without the need for JavaScript, or some other way of doing things entirely. Because the actual real need for JavaScript is very, very small, and the level of abuse it allows is way, way too high.

As for WebAssembly I am known to be “So Anti”… I voiced against it way back, ever since Google started to shove it through the W3C with “bribes and threats” for inclusion in HTML5.

It is, without doubt, just like JavaScript, a “very clear liability”. It always was, and always will be, “a clear and present danger” on any user’s system.

Which is why my advice on both is as always,

1, Disable and/or Remove wherever possible.
2, Mitigate by hard Segregation where it’s not.

Because you will be attacked through them, not just now but in the future; it’s a guaranteed certainty, because they can not be made secure in a reliable way.

(As this slew of new CVEs on Google’s product demonstrates without doubt.)

Winter November 19, 2025 5:27 AM

@Clive

The word “irrationality” is not the one I’d use, “insanity” might be nearer the mark.

Nah, this is just your run of the mill Tulip Mania, or South Sea and Mississippi bubbles.

Every investment bubble in history looks like insanity, but is just FOMO.

lurker November 19, 2025 9:19 PM

Public administration with, of, and through AI: toward a new paradigm in the era of intelligence

This paper examines the future trajectory of public administration in the era of intelligence, focusing on the transformative implications of artificial intelligence (AI). [ … ] It concludes by advocating three strategic integrations that can guide the discipline’s renewal: scientific rigor with practical relevance, agility with long-termism, and globalization with indigenization. These integrations aim to ensure that governance in the intelligent age remains both effective and ethically grounded.

A non-American view of the topic.

https://www.tandfonline.com/doi/full/10.1080/23812346.2025.2578589

Clive Robinson November 19, 2025 9:52 PM

@ Winter,

“Every investment bubble in history looks like insanity, but is just FOMO.”

Just as a point of note, what we sort of joke about as “Fear Of Missing Out”(FOMO) is actually recognised as being a psychiatric disorder that presents as a “spectrum”…

You see it present in gamblers as “doubling down”, and if you remember Nick Leeson and Barings Bank you will see just how destructive it can be.

People who do game theory for “real life” note it is one of the major reasons behind “Lost Opportunity Cost”. That is, somebody keeps thinking that “it’s going to come up roses” when everyone else is going around “dead heading” for autumn/winter. Thus valuable resources are wasted on what in reasonable probability is a “no hope” (which is what ML-AGI currently is).

As for criminals like Con Artists and their semi-legal brothers Venture Capitalists and Financial Advisors, they “Tax FOMO” “for all they can get” and so “fake it up where they can”. It’s why “Selling Pump-n-Dump” to rubes is a crime, but to those who are distinguished otherwise it’s not (providing you are not the one doing the actual pump and dump, which is why VCs and similar get away with it). It’s ironic that the law sees “more money” as “more sense”, when in fact the opposite is almost always true in the Finance Game… Arguably money can only be consistently made by “the hidden hand” of “insider knowledge”, which way too many mistakenly think is a crime (it’s not if you can “Parallel Construct” a “From Public Knowledge” defence, which is something LLMs would be good for…).

Arguably FOMO is a “non communicable mental disease” that disables some people but not all, though it can be seen in the majority to some extent. Which is why others see it as a spectrum on which we all sit. Their argument being it’s a chemical-balance-in-the-brain issue that evolution uses to make people more opportunistic / risk taking, to gain species rather than individual advantage. It’s why some are seen as pessimistic or “realists” whilst others are seen as optimistic if not “dreamy eyed”.

Why does this happen? Well, it’s due to limitations of the mind and probability. We know, or should do, that if two people throw a die they both throw the same value slightly better than one in six times (there is no such thing as a “fair die”, they all have bias). But the throws are usually independent of each other. However, over short periods you do get runs where pairs come up more or less often than one in six. Where it goes wrong for humans is that limited short-term memory affects our ability to think accurately (something casinos rely on). That is, unless we deliberately “count” both pairs and turns –which few can do– we end up biased to “short term view thinking”. Worse, we are predisposed by similar chemical imbalances to remember the wins, not the losses.
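
That “slightly better than one in six” claim is easy to check: the probability that two independent throws of the same die match is the sum of the squared face probabilities, which only equals 1/6 for a perfectly fair die and rises with any bias. A quick simulation sketch (the bias numbers are invented for illustration):

    import random

    # A slightly biased die: face probabilities sum to 1 but are not uniform.
    faces = [1, 2, 3, 4, 5, 6]
    weights = [0.18, 0.17, 0.16, 0.16, 0.165, 0.165]

    exact = sum(w * w for w in weights)     # P(two throws match) = sum of p_i squared

    trials = 200_000
    matches = sum(
        random.choices(faces, weights)[0] == random.choices(faces, weights)[0]
        for _ in range(trials)
    )

    print(f"fair-die match probability  : {1/6:.5f}")
    print(f"biased-die exact match prob : {exact:.5f}")
    print(f"biased-die simulated prob   : {matches / trials:.5f}")

Any departure from a uniform die pushes the match rate above 1/6, which is the point being made about “fair” dice.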

I don’t gamble, if I have a bet with someone it’s generally to teach them not to make bets. That is I already know what the outcome is or know with high probability what it will be.

In the past I’ve mentioned how I can fake what many think is a “fair coin toss”… And, then when I explained how, one commenter said “Remind me never to gamble with you”… so they sort of got the point 😉

The thing is, FOMO is a very real failing of the human mind brought about by evolutionary advantage for the species, not the individual. Therefore we all have it to some degree; call it “a belief in luck” or “gut hunches” or “paralysis by analysis” if you will. Or a serious disorder, as it results in “risk taking behaviour” to some degree in almost everything we do… Worse, others know how to exploit it in oh so many ways…

Which is why we have “hype bubbles” to “drive markets” where the crooks are trying to do a fast “in and out” of a pump followed by exploitation before the inevitable crash. Thus a “cut and run with the cash, in a fast door dash, before the crash”. It’s also why we have the expressions around “hot potato” where the last man holding gets burned[1].

So is FOMO insanity?

From some view points yes, especially when it becomes “collective” with the outcome clearly “destructive”.

Which is the case with the AI Hype Bubble.

[1] So strong is this meme that Douglas Adams used it as a central thread in one of his “Dirk Gently’s Holistic Detective Agency” books,

https://en.wikipedia.org/wiki/The_Long_Dark_Tea-Time_of_the_Soul

To explain why the severed head of his client ended up on the record player…

Clive Robinson November 19, 2025 10:58 PM

@ lurker,

With regards to the paper you linked,

The end of the abstract is,

“… and ethical vigilance. It concludes by advocating three strategic integrations that can guide the discipline’s renewal: scientific rigor with practical relevance, agility with long-termism, and globalization with indigenization. These integrations aim to ensure that governance in the intelligent age remains both effective and ethically grounded.”

Note I’ve highlighted “ethics” because it is at variance with the “three strategic integrations”, none of which implicitly have a control mechanism that ensures “ethics” will be compatible with societal mores, individual morals, or societal wants.

In fact “indigenization” implies the exact opposite.

If you are an “authoritarian” determined to inflict your wants on society, “indigenization” is the way you would go about doing it, after you remove the traditional limiting mechanisms.

It’s a subject I’ve given some thought to: how you would go about suppressing societal wants/needs in order to enforce your own wants.

As I’ve noted previously AI provides the almost perfect way to do it at “arms length” to in effect give “plausible deniability” and in effect RoboDebt can be seen as a trial run prototype.

What is worse is that two UK Government Depts, “His Majesty’s Revenue and Customs”(HMRC) and the “Department for Work and Pensions”(DWP), are actively building such a system, currently known as “The Connect System”. When combined with legislation awaiting just the royal assent, it will usher in draconian measures to take people’s assets.

See the UK “Fraud Error And Recovery”(FEAR) Act; basically it’s a “no evidence required” way to grab people’s assets from bank accounts, homes, and anyone who can be vaguely seen as being involved with them.

https://justice.org.uk/briefings/public-authorities-fraud-error-and-recovery-bill-2025

It is guaranteed that the “algorithms” will be set for “maximum recovery” to ensure that the Treasury has more money for Governments to waste on the “self entitled” and “non tax paying entities” like US Tech Mega Corps and similar.

It is clearly an “invent reasons to penalise those least able to defend themselves” scheme to raise revenue that should otherwise be coming from those who can most easily defend themselves, because they can “out lawyer” the UK Government Depts.

I’ve seen this coming for several years now and have warned about it repeatedly. It was entirely predictable and is going to happen in more and more supposedly Democratic countries in the near future.

Clive Robinson November 19, 2025 11:58 PM

@ lurker, ALL,

In my above I forgot to post a link as to what RoboDebt is.

Briefly it was an Australian system to take money away from those at the bottom of the socioeconomic ladder for “Political Mantra” reasons.

It was brought in to the drumbeat of claims that certain people must be committing “fraud”, whereas the actual truth is that most claims of “fraudulent claimants” are nothing of the kind…

They usually result from either commercial organisations or incompetent staff, or both. All on a foundation of over-complex legislation and low-paid, over-worked staff not trained, and thus not competent, to do assessments correctly, thus committing “clerical error”.

The idea was to take out the government staff and replace them with algorithms. Surprise, surprise, it did not produce the financial benefit government ministers wanted. So the algorithms got tweaked to fulfill “Political Mantra”. The result was considerable harms that the Australian Government still pretend “never happened”.

Something similar happened in the Netherlands; however, the Government put their hands up to it, paid compensation, and ministers resigned en masse.

You can see a comparison of the two events in,

https://reporter.anu.edu.au/all-stories/lessons-from-dutch-robodebt-restitution-means-little-without-reform

The “World Economic Forum”(WEF) are actively encouraging this nonsense, as it will be used as a vehicle to benefit their members over society in general. As a result it will be extremely harmful.

So expect the “Australian” version or worse to be implemented in AI, with no admission, cessation, or compensation made, near you real soon now. It is, after all, what Current LLM and ML AI systems could have been made for.

But remember it will just be the start of worse things to come with Current AI LLM and ML Systems companies desperate to stay afloat financially (the reality is most won’t unless significant changes happen and that horizon is currently barren).

lurker November 20, 2025 12:57 AM

@Clive Robinson, ALL

re: indigenization and authoritarian

Indeed, the paper references traditional Chinese governance systems “grounded in a people-centered philosophy that emphasized ethical governance and institutional order.” And history shows us many examples of how institutional order decays when the ethics and the governance get out of alignment.

But the widespread introduction of AI into government systems brings

• A new ontological focus: Governance is no longer exclusively human-centric but distributed across human-machine assemblages.
• A new epistemological orientation: Knowledge is increasingly derived from high-dimensional data and machine-driven inference.
• A new normative agenda: Questions of fairness, transparency, and public legitimacy become central to AI policy and administration.
• A new methodological toolbox: AI-based methods redefine what counts as valid knowledge and who produces it.

Thus, “To address this challenge, public administration must play an active role in guiding its own transformation. Curricula that treat data ethics as optional should instead regard it as foundational.”

I offered this as an alternative to the recent US-centric papers we have seen that suggest simply adding AI will somehow improve so-called democracy.

ResearcherZero November 20, 2025 2:22 AM

@Clive Robinson

Science cuts by Trump are giving China an enormous boost and destroying decades of American research. Collaboration abroad with scientists in the US is coming to a halt as Americans leave to work elsewhere. Such deep cuts to American R&D and science have not been seen for many decades and for the proposed cuts, not since the Great Depression.

‘https://www.detroitnews.com/story/news/world/2025/11/07/trumps-cuts-scientific-research-big-win-for-china/87144118007/

Thousands kicked off medical trials after cuts to grant programs took effect.
https://arstechnica.com/health/2025/11/over-74000-people-were-kicked-out-of-clinical-trials-because-of-trump-cuts/

Thousands of grants were cut and important collaborative works with countries like Australia were halted. Cutting-edge medicine in many fields no longer has the funding to sustain it within many leading US institutions and projects conducted in collaboration with international partners.
https://theconversation.com/friday-essay-trump-and-kennedy-are-destroying-global-science-even-einstein-questioned-facts-but-theres-a-method-to-it-261568

Academic freedom and international partnerships produced scientific breakthroughs and new discoveries. All of that work is threatened if the annual budgets of scientific research bodies and international projects like CERN and important space programs are slashed and grants reduced.

https://sciencebusiness.net/news/trump-budget-cuts-hit-cern-and-other-global-science-partnerships

ResearcherZero November 20, 2025 3:07 AM

@lurker

It’s like you said: with the focus on AI by the US administration, all of the important work done by scientists and researchers is threatened if the foundational research and discovery is swept aside. The data-sets that AI models are trained on all depend on the real work of real human beings and the innovation that is possible only because of rigorous scientific research. Funding and staff cuts to the bodies and institutions that fund and make that work possible are already undermining the source of those advances in science and education.

Scientific and educational institutions were designed to be independent to avoid harm from executive government interference that would damage their function and knowledge produced.

The Kremlin destroyed the economy of the Soviet Union by interfering with its institutions, attempting to steer the focus of those bodies and redirect their resources. Those decisions should be left to the institutions themselves and the experts within who know what they are doing. Politicians are not qualified to overrule the judgements of the appropriate professionals or the normal functions of independent democratic processes.

As the old mantra states, “Garbage in, garbage out.”

Winter November 20, 2025 4:43 AM

@Clive, ResearcherZero

Such deep cuts to American R&D and science have not been seen for many decades and for the proposed cuts, not since the Great Depression.

MAGA is Make America Great Again.

The answer to the question

When was America Great?
turns out to be 1877.
‘https://eu.palmbeachpost.com/story/opinion/columns/2025/04/04/trump-maga-republican-racism-jim-crow-obama/82756934007/

In 1877, the USA didn’t spend much, if anything, on research. So it is clear that all research should go out of the window.

The question of what more was not there in 1877 and should be abolished is left as an exercise for the reader.

Personally, I expect MAGA to strive for wage levels to return to 1877 levels too.

Clive Robinson November 20, 2025 10:09 AM

@ ALL,

Some think I’m an “AI basher” or “luddite” just wanting to throw my clogs in the machine[1].

However I’m not. I clearly say,

“Current AI LLM and ML Systems”

To identify what it is I’m talking about.

I also note that I think “general” is going to fail whilst “specific” will be with us for quite some time to come, and be both useful and profitable.

I cite UK-based Google Alpha and AlphaFold as specific examples of the latter, and judging by bubbling rumours I will be saying similar about other projects they are involved with.

Well it appears that Clem Delangue, CEO and front man of Hugging Face thinks a similar way.

Because he has recently made the case that the hype bubble that so concerns me is fairly specific to kitchen-sink-style LLM and ML systems, not smaller, specific, and well-defined systems,

https://arstechnica.com/ai/2025/11/were-in-an-llm-bubble-hugging-face-ceo-says-but-not-an-ai-one/

I guess the real question is,

“Will more people realise this before those that subscribe to the OpenAI and GPT world view, actually bring the world down and into a major recession?”

I guess we will have to wait and see, but realistically we are almost out of time before we cross the tipping point and things start “snowballing down the mountain”[2]…

[1] The word “sabotage” is derived from the French word for wooden shoes “sabot”. Thus it literally means “putting the boot in” as a form of “Rage Against the Machine”.

[2] Just remember there is a sort of law of physics involved in avalanches. Those at the front of the pack tend to get pushed down and crushed by those that follow close behind. And they in turn become front of the pack and suffer a similar fate and so on, till the energy is all lost.

Clive Robinson November 20, 2025 11:31 PM

@ Bruce, All,

SEC finally faces reality.

Well over a year after the judge threw out most of the SEC’s case against SolarWinds and its CISO, Timothy G. Brown, the SEC has finally realised it is not going to win and the defendants are not going to “roll over”.

SEC drops SolarWinds lawsuit that painted a target on CISOs everywhere

The US Securities and Exchange Commission (SEC) has abandoned the lawsuit it pursued against SolarWinds and its chief infosec officer for misleading investors about security practices that led to the 2020 SUNBURST attack.

https://www.theregister.com/2025/11/20/sec_bails_on_solarwinds_lawsuit/

This is played as “Good News for CISOs” but is it?

The SEC apparently not only overstepped the mark, it also dropped the ball on this case, and unsurprisingly the judge made that clear in quite robust terms back in July last year.

But in the article you will find,

‘The SEC did note that its decision to seek dismissal is “in the exercise of its discretion” and “does not necessarily reflect the Commission’s position on any other case.”

Which could be one of two things,

1, The SEC face-saving by pretending it knew what it was doing.
2, The SEC taking a different tack in future.

I suspect the latter, in that they will continue to throw mud until something sticks, then use that to go after “politically advantageous” cases.

And that’s the real point: the SEC stepped well outside of its remit for political rather than regulatory reasons, and quite rightly it failed. But it’s unlikely they will really learn from this, so they will try something similar again. Let’s just say the reason is,

“It’s the nature of the beast, trying to prove it has the biggest teeth.”

ResearcherZero November 21, 2025 12:58 AM

The Trump administration is creating a master database on American citizens.

‘https://abcnews.go.com/Politics/wireStory/democratic-state-election-officials-demand-answers-justice-departments-127653294

Merging of siloed datasets creates inaccuracies and weakens governance and access control.
https://dnyuz.com/2025/11/18/social-security-data-is-openly-being-shared-with-dhs-to-target-immigrants/

Many will be wrongly targeted for criminal investigation or scrubbed from voter rolls.
https://www.votebeat.org/2025/11/17/judge-declined-stay-reversing-save-database-changes/

ResearcherZero November 21, 2025 1:09 AM

It is not just Social Security data and voter registration that people should worry about. Other sensitive personal information, such as private health and financial records, may find its way into centralized datasets where it can be used and abused by government and private interests for purposes it was never intended for.

Changes to government policy by the Trump administration are forcing health data into private hands, while making it much harder to carry out health work internationally.

‘https://www.tandfonline.com/doi/full/10.1080/15265161.2025.2570670

DHS health data was among the most comprehensive available and was used to monitor health globally.
https://www.nature.com/articles/s41597-025-06128-9

ResearcherZero November 21, 2025 3:08 AM

Peter Thiel recently claimed Palantir is not a surveillance company and that critics who label Palantir’s products as surveillance tools are nothing more than “parasites”.

Palantir’s software products not only enable mass surveillance and profiling, they are used to monitor and surveil innocent people without evidence of a crime ever needing to occur.

No limits or rules govern when Gotham can be used for surveillance, or against whom.

‘https://www.heise.de/en/news/Baden-Wuerttemberg-decides-on-the-use-of-Palantir-11075477.html

Gotham was initially developed by Palantir to detect PayPal fraud, then it was backed by the CIA after 9/11 to identify, track and find terrorists including members of Al-Qaeda.
https://www.prospectmagazine.co.uk/politics/democracy/government/71511/how-palantir-infiltrated-the-state

Palantir’s software is being used to produce intelligence on children, witnesses and victims – or those who “may be victims of crime” – now or at some future date.

Gotham is a predictive policing product developed by Palantir which can monitor “hot spots” for potential criminal activity and individuals with criminal records – alongside victims of crime – to identify patterns of activity which could predict high risk of a future crime. Gotham allows law enforcement to profile anyone that they would like to know more about by ingesting existing police information, and other data sources such as facial recognition, social media accounts, mobile phone registrations, confidential records and sensitive personal information to create an instant and detailed profile of an individual.

This information includes “race”, “political opinions”, “sex life”, “religion”, “philosophical beliefs”, “trade union membership” and “health”.

https://libertyinvestigates.org.uk/articles/uk-police-working-with-controversial-tech-giant-palantir-on-real-time-surveillance-network/

It is secretly being used by governments and police to pool and de-anonymize data in countries globally without legal constraint or informing those subjected to monitoring.
https://theconversation.com/when-the-government-can-see-everything-how-one-company-palantir-is-mapping-the-nations-data-263178

jelo 117 November 22, 2025 8:40 AM

Questions on LLMs

1. The data LLMs use and the artificial neural net of the LLM itself are each networks. Where there are networks, there are small-world phenomena. Have the training and the weights of LLMs been looked at in the light of small-world theory?

jelo 117 November 22, 2025 8:43 AM

Questions on AI

2. Natural language exhibits equivocal use of terms. E.g., an animal in the field and a painting may both be called “cow”. There doesn’t seem to be any way to eliminate equivocation; it appears to be essential to natural language. Introducing new terms such as cow1, cow2, etc. obscures the equivocation but does not eliminate it. However, computer languages do not seem to allow equivocation. How do LLMs behave with regard to equivocation?

jelo 117 November 22, 2025 8:46 AM

Questions on LLMs

3. Many areas are intrinsically performative and technical but do not have nomenclature beyond the descriptive. E.g., consider traditional descriptive ballet scores vs. a modern language of dance. As a result, LLMs may appear to recognize a question in the field but then be unable to perform the associated task. If a detailed and performative language were developed for such fields, would LLMs work better? Would such areas be reduced to scripting, making AI pointless?

Winter November 22, 2025 1:14 PM

@jelo

Have the training and the weights of LLMs been looked at in the light of small-world theory?

It seems not, at least not in published sources.
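
If one did want to look, the analysis itself is not hard to sketch. Below is a minimal toy illustration (my own, not from any published source; the single weight matrix, the magnitude threshold, and the Erdős–Rényi random baseline are all assumptions) of how one could treat a layer’s weights as a graph and compare its clustering and path length to a random graph, which is the usual small-world test.

```python
# Toy sketch: check small-world structure of one (random stand-in) weight matrix.
# Edges are kept where |weight| exceeds a threshold; clustering coefficient and
# average path length are compared against a random graph of similar density.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
W = rng.normal(0, 1, size=(64, 64))   # stand-in for one trained weight matrix
A = (np.abs(W) > 2.0).astype(int)     # keep only "strong" connections
np.fill_diagonal(A, 0)

G = nx.from_numpy_array(A)            # undirected graph over the 64 units
G = G.subgraph(max(nx.connected_components(G), key=len))  # largest component

C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

# Random baseline with the same node count and roughly the same edge density.
n = G.number_of_nodes()
p = G.number_of_edges() / (n * (n - 1) / 2)
R = nx.erdos_renyi_graph(n, p, seed=0)
R = R.subgraph(max(nx.connected_components(R), key=len))

print(f"C={C:.3f} vs random {nx.average_clustering(R):.3f}; "
      f"L={L:.3f} vs random {nx.average_shortest_path_length(R):.3f}")
# A "small world" shows clustering well above the random baseline while the
# average path length stays about the same.
```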

There is occasional interest in using small-world (SW) connectivity to improve efficiency.

A Fast Feedforward Small-World Neural Network for Nonlinear System Modeling
‘https://ieeexplore.ieee.org/document/10533438

Abstract:
It is well-documented that cross-layer connections in feedforward small-world neural networks (FSWNNs) enhance the efficient transmission for gradients, thus improving its generalization ability with a fast learning. However, the merits of long-distance cross-layer connections are not fully utilized due to the random rewiring. In this study, aiming to further improve the learning efficiency, a fast FSWNN (FFSWNN) is proposed by taking into account the positive effects of long-distance cross-layer connections, and applied to nonlinear system modeling. First, a novel rewiring rule by giving priority to long-distance cross-layer connections is proposed to increase the gradient transmission efficiency when constructing FFSWNN. Second, an improved ridge regression method is put forward to determine the initial weights with high activation for the sigmoidal neurons in FFSWNN. Finally, to further improve the learning efficiency, an asynchronous learning algorithm is designed to train FFSWNN, with the weights connected to the output layer updated by the ridge regression method and other weights by the gradient descent method. Several experiments are conducted on four benchmark datasets from the University of California Irvine (UCI) machine learning repository and two datasets from real-life problems to evaluate the performance of FFSWNN on nonlinear system modeling. The results show that FFSWNN has significantly faster convergence speed and higher modeling accuracy than the comparative models, and the positive effects of the novel rewiring rule, the improved weight initialization, and the asynchronous learning algorithm on learning efficiency are demonstrated.
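
To make the abstract concrete, here is a rough toy sketch (my own illustration, not the paper’s FFSWNN code; the layer sizes, the single skip connection, and the sigmoid activation are all assumptions) of a feedforward network in which a long-distance cross-layer connection feeds an early layer’s activations directly into a later layer, the kind of small-world shortcut the abstract describes.

```python
# Toy feedforward net with one long-distance cross-layer (skip) connection.
# Illustration only; not the FFSWNN rewiring, initialization, or training.
import numpy as np

rng = np.random.default_rng(0)

layer_sizes = [4, 8, 8, 8, 1]                 # input, three hidden layers, output
W = [rng.normal(0.0, 0.5, size=(m, n))        # ordinary consecutive-layer weights
     for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

# One "long-distance" connection from hidden layer 1 straight to the output layer.
skip_src, skip_dst = 1, 4                     # layer indices, chosen arbitrarily
W_skip = rng.normal(0.0, 0.5, size=(layer_sizes[skip_dst], layer_sizes[skip_src]))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Forward pass with the extra cross-layer contribution injected."""
    acts = [x]
    for i, w in enumerate(W):
        z = w @ acts[-1]
        if i + 1 == skip_dst:                 # arriving at the destination layer
            z = z + W_skip @ acts[skip_src]   # add the long-distance contribution
        acts.append(sigmoid(z))
    return acts[-1]

print(forward(rng.normal(size=layer_sizes[0])))
```

The paper’s actual method adds a rewiring rule that prioritises such long-distance connections, ridge-regression weight initialisation, and asynchronous training on top of this basic structure.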
