Friday Squid Blogging: The Giant Squid Nebula

Beautiful photo.

Difficult to capture, this mysterious, squid-shaped interstellar cloud spans nearly three full moons in planet Earth’s sky. Discovered in 2011 by French astro-imager Nicolas Outters, the Squid Nebula’s bipolar shape is distinguished here by the telltale blue emission from doubly ionized oxygen atoms. Though apparently surrounded by the reddish hydrogen emission region Sh2-129, the true distance and nature of the Squid Nebula have been difficult to determine. Still, one investigation suggests Ou4 really does lie within Sh2-129 some 2,300 light-years away. Consistent with that scenario, the cosmic squid would represent a spectacular outflow of material driven by a triple system of hot, massive stars, cataloged as HR8119, seen near the center of the nebula. If so, this truly giant squid nebula would physically be over 50 light-years across.

Posted on July 18, 2025 at 5:06 PM • 18 Comments

Comments

Clive Robinson July 18, 2025 5:57 PM

@ Bruce, ALL,

How to prove lies and open a world of hurt.

“Whilst proving something true can be hard… It’s seen as easy when compared to trying to prove something is not true.”

In fact a whole group of things absolutely relies on this assumption, which gives rise, amongst other things, to the notion of “One Way Functions” that find considerable use in cryptography.

Which is perhaps why,

https://www.quantamagazine.org/computer-scientists-figure-out-how-to-prove-lies-20250709/

Is subtitled,

“An attack on a fundamental proof technique reveals a glaring security issue for blockchains and other digital encryption schemes.”

But… Is it just journalistic hyperbole or something more fundamental?

Well it starts with perhaps one of the more fundamental proofs used in cryptography,

“The random oracle model.”

Which in layman’s terms is the assumption that

“What looks sufficiently random is the same as truly random.”
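To make that concrete, here is a minimal Python sketch (illustrative only, not from the paper) of the gap the model papers over: proofs are done against a lazily sampled, genuinely random table, while deployed systems substitute a fixed, deterministic hash function and simply assume nobody can tell the two apart.

```python
import hashlib
import os

# The "ideal" object used in proofs: a lazily sampled random oracle.
# Each new query gets a fresh uniformly random answer, and repeated
# queries get the same answer back.
class LazyRandomOracle:
    def __init__(self, out_len: int = 32):
        self.table = {}
        self.out_len = out_len

    def query(self, msg: bytes) -> bytes:
        if msg not in self.table:
            self.table[msg] = os.urandom(self.out_len)
        return self.table[msg]

# What gets deployed: a fixed, fully deterministic hash function standing
# in for the oracle. The random oracle heuristic assumes the substitution
# makes no practical difference.
def instantiated_oracle(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

oracle = LazyRandomOracle()
print(oracle.query(b"hello").hex())         # random, but consistent per input
print(instantiated_oracle(b"hello").hex())  # deterministic stand-in
```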

For some time now I’ve been indicating my lack of trust in the very large gap between provably deterministic and truly random, because of the “observer issue”.

However a newish paper,

https://eprint.iacr.org/2025/118

Goes somewhat further…

not important July 18, 2025 6:39 PM

US passes first major national crypto legislation
https://www.bbc.com/news/articles/cd78lvd94zyo

=The bill sets up a regulatory regime for so-called stablecoins, a kind of crypto currency
backed by assets seen as reliable, such as the dollar.

Trump is expected to sign the legislation into law on Friday, after the House passed the bill on Thursday, joining the Senate, which had approved the measure last month.

Supporters of the legislation say it is aimed at providing clear rules for a growing industry, ensuring the US keeps pace with advances in payment systems. The crypto
industry had been pushing for such measures in hopes it could spur more people to use
digital currency and bring it more into the mainstream.

The provisions include requiring stablecoins, an alternate cryptocurrency to the likes of
Bitcoin, to be backed one-for-one with US dollars, or other low-risk assets. Stablecoins are used by traders to move funds between different crypto tokens.

Critics argue the bill will introduce new risks into the financial system, by
legitimising stablecoins without erecting sufficient protections for consumers.

For example, they said it would deepen tech firms’ participation in bank-like activities
without subjecting them to similar oversight, and leave customers hanging in a convoluted
bankruptcy process in the event that a stablecoin firm should fail.

“Some members may believe passage of this bill, even with flaws, is better than the
status quo. We believe this is a fundamental misunderstanding of the risks involved with
these instruments,” a coalition of consumer and advocacy groups wrote in a letter to
Congress this spring.=

Clive Robinson July 19, 2025 12:42 PM

@ not important, ALL,

With regard to the US bill on crypto that is limping through the system (as are two others).

Put simply, the only actual “business case for Crypto(coins)” that, to be blunt, makes any kind of sense is,

“To assist and facilitate crime and fraud”

Just about every legitimate aspect of it is more efficiently and more effectively done by existing financial processes and instruments.

I’ve pointed out in the past that cryptocoins and blockchains have been pushed by those trying to gull people who have more money than sense.

Basically it’s been the equivalent of a “Pump-n-Dump” for the likes of Venture Capitalists and less honest financial advisors.

I thought it had sort of died back, hence the move by the less scrupulous into NFTs and Smart Contracts, and later other aspects of the badly failing Web3 nonsense that Molly White documents at,

https://www.web3isgoinggreat.com/

Where she notes sarcastically,

“…and is definitely not an enormous grift that’s pouring lighter fluid on our already smoldering planet.”

And yes, all those GPUs etc. burning megawatts of power on the cheap, thus creating at the very minimum significant “lost opportunity costs” greater than many countries’ GDP…

It’s easy to find people happy to explain other disastrous downsides of cryptocoins, such as,

https://www.stephendiehl.com/posts/against_crypto/

But… The VCs have found that Crypto is not bringing home the “Pork Fat Fees” that it used to…

Likewise others, so blockchain is on its way out, NFTs are disappearing, and Smart Contracts appear to be being used to defraud those who have put money in…

So where have not just the VC’s but hardware pushers like Nvidia gone?

Well it looks like their latest money pit for idiots is AGI and similar nonsense.

Likewise it’s easy to find people that will point out the realities… Heck I’ve done my fair share on this blog and other places.

Oddly for some people to understand… those who have, or have had, bad things to say about cryptocoins, the blockchain, and the rest of the Web3 nonsense have equally bad things to say about the current AI LLM, ML, and woo-woo AGI systems. As can be seen with,

https://www.stephendiehl.com/posts/ai_bubble/

Now why do you think they might have such a downer on both hype-bubbles / cons?

Just to be clear, there is nothing at all magical about Current AI, and as I’ve indicated for quite some time it can not scale due to the way it works.

All the nonsense you hear about AGI is unsurprisingly being steadily disproved, as LLMs can not reason, nor do they have a world view, which I have referred to before, but to save you searching back…

Somebody else has a slightly different view point on the same issue,

https://garymarcus.substack.com/p/generative-ais-crippling-and-widespread

And yes, I have described what is one of the foundation steps to solve this (a form of autonomous agency). But as someone who nearly had their life prematurely ended by robots with AI back last century… I suspect nobody in their right mind who has already seen the recent trail of carnage, dismemberment, and injury would countenance it.

As for “on the battlefield”, there is enough “blue on blue” when humans are very much tightly in the loop…

not important July 19, 2025 5:12 PM

https://www.yahoo.com/news/ai-think-means-think-plato-124006065.html

=When I prompted ChatGPT with the simple question, “Can you think?” it responded: “I can process information, analyze patterns, and generate responses based on data. So in some ways, it might ‘seem’ like I’m thinking. But it’s not thinking like a human does. I don’t have consciousness, emotions, intentions, or awareness.

Everything I ‘do’ is based on patterns learned from huge amounts of text and a set of rules and probabilities. I can ‘simulate’ certain aspects of thinking, but I don’t truly think or understand in the human sense.”=

lurker July 19, 2025 6:00 PM

@Clive, not important, ALL

I get that a major motive of the current AI/LLM hype is to syphon off money from the froth before the bubble bursts, but I also observe people around me proud to admit they use [some named LLM] to write summary reports, or list categories of some data.

Gary Marcus observed:

An LLM might, for example, be able to answer the query like what is the population of some country, whereas I would surely need to look it up.

But the LLM doesn’t maintain structured symbolic systems like databases; it has no direct database of cities and populations.

. . .

Remember when a couple months ago the Chicago Sun-Times ran a “Summer Reads” feature? The authors were real, but many of the books had made-up titles.

Surely facts that exist as key:value tuples, as in the above examples, must be more efficiently stored and retrieved in a structured database format. Oh, sorry, efficiency isn’t the purpose of the exercise.
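To make the contrast concrete, here is a minimal Python sketch (illustrative figures only) of what structured key:value retrieval looks like, versus a model that will generate an answer either way:

```python
import sqlite3

# A structured store of key:value facts: exact, cheap to query, and it
# either returns the stored value or admits it holds nothing.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE population (country TEXT PRIMARY KEY, people INTEGER)")
db.executemany("INSERT INTO population VALUES (?, ?)",
               [("New Zealand", 5_200_000), ("Iceland", 390_000)])

def lookup(country: str):
    row = db.execute("SELECT people FROM population WHERE country = ?",
                     (country,)).fetchone()
    return row[0] if row else None  # None means "not stored", never a guess

print(lookup("Iceland"))   # 390000
print(lookup("Atlantis"))  # None -- an LLM would happily produce a number
```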

But even if the rules of chess are efficiently stored and retrieved, they must be properly applied. The current crop of machines cannot play chess, they only imitate games they have seen and add some random variations. They often don’t check if their own proposed move is permitted within the rules. Of course it doesn’t help that they need to dredge the rules, and the games they have seen, out of a soup formed by stewing the entire contents of the internet.

AI might work better if its training material is properly curated as fit for purpose. ChatGPT seems to have been told that it doesn’t really think, but by the same token it doesn’t believe that either. So a suitably crafted prompt could get it to give an opposite answer. It doesn’t “know” what it is. Kant would be perplexed.

[Gary Marcus quote edited to fit modbot]

not important July 19, 2025 6:36 PM

https://www.technologyreview.com/2025/05/12/1116295/how-a-new-type-of-ai-is-helping-police-skirt-facial-recognition-bans/

=Police and federal agencies have found a controversial new way to skirt the growing
patchwork of laws that curb how they use facial recognition: an AI model that can track people using attributes like body size, gender, hair color and style, clothing, and
accessories.

The tool, called Track and built by the video analytics company Veritone, is used by 400
customers, including state and local police departments and universities all over the US.

It is also expanding federally: US attorneys at the Department of Justice began using Track for criminal investigations last August. Veritone’s broader suite of AI tools,
which includes bona fide facial recognition, is also used by the Department of Homeland
Security—which houses immigration agencies—and the Department of Defense, according to the company.

“The whole vision behind Track in the first place,” says Veritone CEO Ryan Steelberg, was
“if we’re not allowed to track people’s faces, how do we assist in trying to potentially identify criminals or malicious behavior or activity?” In addition to tracking individuals where facial recognition isn’t legally allowed, Steelberg says, it allows for tracking when faces are obscured or not visible.

You can use it to find people by specifying body size, gender, hair color and style,
shoes, clothing, and various accessories. The tool can then assemble timelines, tracking
a person across different locations and video feeds. It can be accessed through Amazon
and Microsoft cloud platforms.

Agencies using Track can add footage from police body cameras, drones, public videos on
YouTube, or so-called citizen upload footage (from Ring cameras or cell phones, for
example) in response to police requests.=

Clive Robinson July 20, 2025 11:04 AM

@ ALL,

The tool of tools has toolmarks that lead blame home.

For those that do not know, “Ghost Gun” refers to the production of a firearm or other projectile weapon that does not have a required serial number.

In the US this is usually the sub- or under-frame called the “receiver”[1] that holds the trigger mechanism etc. In other countries it might be the breech and barrel that get marked, so that together with a “test fire” record any bullet fired through them is directly traceable.

The thing is that much of a firearm can be made of things other than metal, and thus could be produced on a 3D printer, which is what all the noise in the MSM about “Ghost Guns” is actually about.

Be it an extrusion process, a routing process, or even an ultra-modern laser cutter, they all have “beds” on which the piece being made is constructed, and an XYZ mechanism that “moves the tool head”.

Like all things mechanical the XYZ mechanism and other moving parts in these 3D devices all have “mechanical slop”.

That is if you move the tool head from point A to point B then back to point A it will not quite be the original point A but A’.

Whilst the difference between points A and A’ might be microscopically small, it is measurable, not just in the tool head movement but in its cut tracks on the piece being made. It is thus the basis for a “fingerprint” measurement, much like the one that used to be used on typed pages to identify the typewriter or, later, the printer.
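As a toy illustration only (not the forensic method described in the article), the general idea is that each machine’s repeatable deviation pattern can be matched against reference prints by simple correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each printer's XYZ mechanism has a small but repeatable
# deviation pattern ("slop") sampled along a reference tool path; every
# print of that path shows the pattern plus fresh measurement noise.
def printer_signature(n_points: int = 200) -> np.ndarray:
    return rng.normal(0.0, 5e-3, n_points)      # systematic error per point

def print_part(signature: np.ndarray, noise: float = 1e-3) -> np.ndarray:
    return signature + rng.normal(0.0, noise, signature.size)

printers = {name: printer_signature() for name in ("A", "B", "C")}
evidence = print_part(printers["B"])            # marks measured on the seized part

# Match by normalised correlation against a reference print from each candidate.
def score(reference: np.ndarray, measured: np.ndarray) -> float:
    r = reference - reference.mean()
    m = measured - measured.mean()
    return float(np.dot(r, m) / (np.linalg.norm(r) * np.linalg.norm(m)))

for name, sig in printers.items():
    print(name, round(score(print_part(sig), evidence), 3))
# Printer B scores close to 1.0; the unrelated printers score near 0.
```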

Well, one US police forensic investigator, Kirk Garrison, thinks that these residual tool marks can be used as identifying marks.

The story originally appeared on 404 Media but they in effect have the equivalent of a PayWall up and controlling access (see https://www.404media.co/3d-printing-patterns-might-make-ghost-guns-more-traceable-than-we-thought/ ).

The story can also be found in other places such as,

https://www.tomshardware.com/3d-printing/police-link-3d-printers-to-ghost-guns-using-fingerprints-from-printers-toolmarks-left-behind-during-printing-can-make-ghost-guns-traceable

The important thing to note, though, is that our ability to measure accurately at low cost currently outstrips our ability to manufacture at or beyond those tolerances with low-cost machinery. And personally I can not see this “measure over manufacture” gap changing any time soon.

But it is also something that goes back quite a while, as I’ve mentioned before, to Locard’s exchange principle[2],

Every contact leaves a trace:

“Wherever he steps, whatever he touches, whatever he leaves, even unconsciously, will serve as a silent witness against him.

Not only his fingerprints or his footprints, but his hair, the fibres from his clothes, the glass he breaks, the tool mark he leaves, the paint he scratches, the blood or semen he deposits or collects. All of these and more, bear mute witness against him. This is evidence that does not forget. It is not confused by the excitement of the moment. It is not absent because human witnesses are. It is factual evidence. Physical evidence cannot be wrong, it cannot perjure itself, it cannot be wholly absent. Only human failure to find it, study and understand it, can diminish its value.”

Back then “tool marks” would be exemplified by, say, the impression the jagged claw of a crow-bar left on the paint and wood of a door or window frame. But the principle remains the same: if it’s measurable and repeatable, thus testable, then it’s very probably admissible as evidence.

The thing is whilst it’s usually considered for “tangible physical objects” it can also apply to the substrate of “intangible information objects” like software and data files.

So if you are planning a life of crime, remember that Locard’s Principle probably holds true for anything you do…

[1] The use of the term “receiver” goes back to US legislation in the 1960s, when making a firearm for your own personal use, from blanks through to a kit of parts, was both legal and did not require registration etc. But times change,

https://en.m.wikipedia.org/wiki/Receiver_(firearms)

[2] Whilst generally true, “it cannot perjure itself” needs to be treated with caution. The fact an item is found at a crime scene with a drop of your blood on it does not mean that you were ever at the crime scene, or there at the time the crime was being committed. That is, the object with “your spoor on it” could have been introduced to the crime scene by another agent, or the tests could be inaccurate for various reasons. As I found out when not even 10, you can fake human fingerprints fairly easily (I much later found out that the “Sherlock Holmes” author was well aware of this nearly a century before).

jelo 117 July 20, 2025 3:04 PM

@ Clive Robinson

Re: it’s not random

Alas, too few listened when von Neumann said “Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number — there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method.”
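As a small reminder of what an “arithmetic procedure” looks like, here is a minimal Python sketch of von Neumann’s own middle-square generator; it is entirely deterministic, which is exactly his point:

```python
# Von Neumann's "middle-square" method: square the state and keep the
# middle digits as the next state. Same seed, same sequence, every time.
def middle_square(seed: int, n: int, digits: int = 4) -> list[int]:
    state = seed
    out = []
    for _ in range(n):
        square = str(state * state).zfill(2 * digits)
        mid = len(square) // 2
        state = int(square[mid - digits // 2 : mid + digits // 2])
        out.append(state)
    return out

print(middle_square(1234, 8))  # identical on every run: no randomness anywhere
print(middle_square(1234, 8))  # ...which is von Neumann's point
```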

And perhaps even more fundamentally, and as a precursor to this, we should have listened when Aristotle said (and Gauss later agreed) that infinity is always potential, never actual.

not important July 20, 2025 4:36 PM

https://www.timesofisrael.com/israel-and-us-to-forge-200m-tech-hub-for-ai-and-quantum-science-development/

=Israel and the US are advancing a strategic initiative to create a joint science center for artificial intelligence and quantum innovation at an investment of $200 million. The center will serve as a hub to promote technology-driven cooperation and diplomacy with Gulf countries in the realms of AI and quantum science and challenge China in the global race for supremacy of next-generation technologies.

As part of the proposed initiative for the science center, each nation will contribute $20 million annually, starting in 2026 and through 2030, to support research and development projects at dual headquarters in Tel Aviv and Arlington, Virginia.

The technology collaboration will focus on shared, urgent regional challenges, including cybersecurity, medicine and genetics, and water and food security in arid environments.

Israel is home to nine quantum computing startups that have raised about $650 million in capital, developing everything from software systems to full quantum processors. Among the startups are Classiq and Quantum Machines.

Israel also has a thriving private-sector AI scene, with some 2,300 AI-related startups that have garnered some $15 billion in private investment in the past decade, according to data compiled by the nonprofit Startup Nation Central and the Israel Innovation Authority.=

lurker July 21, 2025 2:26 PM

The LGMs are coming, again? A sideways look at 3I/ATLAS.

‘https://avi-loeb.medium.com/is-the-interstellar-object-3i-atlas-alien-technology-b59ccc17b2e3
contains link to technical paper

Clive Robinson July 21, 2025 6:15 PM

@ Bruce, not important, lurker, ALL,

If current AI does not scale and burns the world as it churns, then what?

Some who think AI is unnecessary would say

“Give it up as a bad lot.”

Others who think AI might open up ‘new ways’ might say,

“Find a better way.”

Personally, I see Current AI as,

“A bit pants or meh”

Because I’ve done work in robotics driven by various forms of AI off and on since the 1980’s (and you get to recognise the “give me money” / “ego research” signs).

In a way I look at most “Current AI” as what it actually is “under the hood”,

“A statistical tool of sequentially applied rules.”

Which kind of points out why it can not effectively scale in its current “under the hood” instantiations.

So how to climb out of the money pit of “ever decreasing returns”?

One way out is to come up with a way to “efficiently go parallel”, but the way that is being done has its limits, as DNNs implicitly show.

Another is the oft self-defeating “do more with less”: that is, reduce the number of sequential rules by making each rule do more with less. It fails in the same way as that MBA notion of “just increase productivity to increase profit”.

There is, however, another way of getting the needed improvement, which is to “preselect the input”. But it smacks too many of 1980s “Expert Systems” (which are, as far as I can see, the only form of AI that actually shows useful returns).

There are other “engineering ways” but few want to go down those routes for various reasons[1].

But “the current way” is in effect hard limited by “c, the speed of light” and “inefficiency heat death”. Yes, there are ways to reduce the distance between nodes, but each node needs a minimum amount of energy, thus the energy density goes up rather rapidly and meltdown from thermal runaway and similar becomes a very real issue.

So the argument to make the nodes and interconnects smaller and way more efficient to cut waste is also a way to go[2]… But the laws of nature put limits on most of that… As for the rest it’s mostly “Blue Sky Forever Research”, as evidenced by “Quantum Computing”.

So these are your base options and they are not going anywhere fast (or at all).

That said, there are always hybrid solutions that can be used as useful stepping stones. The ideas behind this can be seen in,

https://techxplore.com/news/2025-07-democratizing-ai-powered-sentiment-analysis.html

You can see what they are doing in the “Our approach” section,

“First, we convert each input sentence into a fixed-length vector by mean-pooling over its token embeddings. This transforms text into a semantically rich representation without the overhead of token-by-token processing. Next, we fine-tune these sentence transformers directly on labeled sentiment data.”

Note “fixed-length”: this puts a constraint not just on the “input” but also increases the gaps in the vector space, which has advantages and disadvantages[3]. The most obvious being “averaging out the gaps” to try to turn “spiky” into “similar clumps” on one level, then differentiating by chosen differences on another level,

“By applying supervised loss functions, CosineSimilarity and CoSENT to align pairs of same-sentiment sentences, SoftmaxLoss to sharpen class boundaries, and variants of triplet loss (BatchAll, BatchHard, SoftMargin, SemiHard) to push dissimilar sentiments apart—we sculpt the embedding space so that positive, neutral and negative examples naturally cluster.”

In effect they are both “limiting” and “preselecting” the input, so that they are “pushing square pegs into round holes”.

Thus they make certain types of gain and / or cost reductions.
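For those who want to see what the quoted “mean-pooling” step amounts to, here is a minimal Python sketch (toy random embeddings standing in for the pretrained sentence transformer they mention):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a pretrained token-embedding table; in the real system
# these vectors come from a transformer, not a random generator.
DIM = 8
_vocab: dict[str, np.ndarray] = {}

def token_embedding(token: str) -> np.ndarray:
    if token not in _vocab:
        _vocab[token] = rng.normal(size=DIM)
    return _vocab[token]

def sentence_vector(sentence: str) -> np.ndarray:
    # Mean-pooling: average the token embeddings into one fixed-length
    # vector, however long or short the sentence is.
    tokens = sentence.lower().split()
    return np.mean([token_embedding(t) for t in tokens], axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = sentence_vector("the service was great")
b = sentence_vector("the service was absolutely terrible")
print(a.shape, b.shape)   # both (8,): fixed length regardless of sentence length
print(cosine(a, b))       # the similarity the supervised losses then reshape
```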

I expect to see a lot more “hybridisation” happen in the near future as a way to,

1, Make AI more secure / reduce surveillance.
2, Make it less generic thus more useful as a tool.
3, Increase performance without the need for specialised engineering.
4, Reduce energy and cooling costs.

And a few other things that some might consider beneficial such as,

5, Keep the AI hype going.

The reality is it’s a move from the woo-woo AGI notion back towards functional “Expert Systems” and the potential to “be actually useful”, thus potentially earning actual income over expenditure.

[1] A classical example of this is the history of early nuclear fuel enrichment and why we ended up with cascades of inefficient centrifuges in what look like infinite feedback loops as the only practical way forward.

[2] In theory you can get the energy back in a useful way by using logic elements that are reversible, like Toffoli / Fredkin / Deutsch gates. In effect they worm around the “AND is non-reversible” issue by creating a cascade of gates where information is not lost.

https://ieeexplore.ieee.org/document/10218349
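A tiny sketch (illustrative only) of the reversible-AND trick: the Toffoli gate is its own inverse, and with the target input fixed to 0 its output carries the AND of the two controls without discarding any information.

```python
# Toffoli (controlled-controlled-NOT): flip the target bit c only when both
# control bits a and b are 1. Applying it twice restores the input, so no
# information is lost; with c = 0 the output is (a, b, a AND b).
def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        out = toffoli(a, b, 0)      # reversible embedding of AND
        back = toffoli(*out)        # the gate undoes itself
        print((a, b, 0), "->", out, "-> back:", back)
```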

Combine this with “fuzzy elements” and you get this line of thinking,

https://slideplayer.com/slide/16811277/

(There are other papers but nearly all are behind “King’s Ransom” paywalls).

[3] A friend –no longer with us– had a sarcastic expression,

“A hammer works better when you make screws and bolts more like nails.”

But there is a germ of hybrid in there… Some nails now have screw-like features, because it strengthens their hold as well as making removal still possible. It is a “hybrid solution” that works well for fixing certain types of materials (like fiber-based composites and softwoods).

Clive Robinson July 22, 2025 11:21 AM

@ Bruce, ALL,

Ed Zitron’s Hater’s Guide To The AI Bubble

Yes it’s a little on the long side but hey “Rome was not built in a day” and “Mighty oaks…” etc etc.

https://www.wheresyoured.at/the-haters-gui/

As the man says of the Current AI LLM and ML hype inflated by AGI nonsense,

“I profoundly dislike the financial waste, the environmental destruction, and, fundamentally, I dislike the attempt to gaslight people into swearing fealty to a sickly and frail pseudo-industry where everybody but NVIDIA and consultancies lose money.”

As some will realise, I’m on the same side of the street on this (yes even though I pointed out the obvious investment advantages in NVIDIA at about the right time[1]).

I guess I’ve been slightly more lucky in that I’ve not suffered from the issues of

“Critics are continually badgered, prodded, poked, mocked, and jeered at for not automatically aligning with the idea that generative AI will be this massive industry”

But let’s be honest, I can say without doubt that “Generative AI” is not going to happen with the much hyped “Current AI LLM and ML Systems”. Put simply, the mathematics are very much against it, as is the logic, as I’ve indicated in past explanations of how the currently hyped AI systems work.

However we do disagree on one thing,

“In short, I believe the AI bubble is deeply unstable, built on vibes and blind faith, and when I say “the AI bubble,” I mean the entirety of the AI trade.”

It’s not the “entirety of the AI trade” I have issue with, but the “Current AI LLM and ML systems” and those who have turned them into a hyped bubble to scam investors, as they tried with Blockchain, NFTs, Smart Contracts, and Web3[1].

I’ve worked with various types of AI since the 1980s, primarily “fuzzy logic” and “Expert Systems”, and implemented several, in most cases almost “behind the scenes” (because AI had a bad reputation). Unlike the hyped “Current AI LLM and ML Systems”, they by and large do what they say on the tin. Trouble only really happens with them when you try to make the tin too big so ‘you can chuck in the kitchen sink’. Again there are sound mathematical and logical reasons for this.

But I’ve also pointed out repeatedly that the “Current AI LLM and ML Systems” are the most pernicious form of surveillance tool yet designed, implemented then inflicted on people.

As Ed indicates with court records,

‘Meta makes 99% of its revenue from advertising, and the unsealed documents state that it “[generates] revenue from [its] Llama models and will continue earning revenue from each iteration,” and “share a percentage of the revenue that [it generates] from users of the Llama models…hosted by those companies,”

So evidence that Meta is using its AI to surveil ordinary people, for the only profit it can make off AI…

Do people really think that the other Silicon Valley Mega Corps are not doing exactly the same?

It’s why I said more than a year ago the business plan was,

“Bedazzle, Beguile, Bewitch, Befriend, and BETRAY”

And that is not going to change, except for the fact people have been way more cautious, so the Corps are all forcing AI on people whether they want it or not. Perhaps the clearest non-financial indicator that “Current AI LLM and ML Systems” are a failure.

I could go on but that would take the fun out of reading Ed’s words and also make this post too long to read.

[1] I gave the reasons I saw, same as I did when I likewise warned off NVIDIA shortly before it was no longer a good idea. Because in both cases it was well “obvious with a little thought”. As I’ve said before I’m not, nor have I ever been or ever pretended to be, a financial analyst, though I can spot “the bleeding obvious” when it smacks me in the face. That is, there is a fairly well established pattern if you take a moment to see it.

Look at the pattern and although NVIDIA has climbed back towards the peak it was at, ask yourself if what got it to that peak still exists?

The answer is NO, which means that this second rise is just speculation in the marketplace, and with that remember,

“What goes up can more quickly come down hard, very hard.”

Look at Ed’s figures and also check the number of GPUs purchased but not being used, either “still boxed” or “idling in a data center”. Check Ed’s figures again, then ask when you last saw this sort of dumb behaviour.

Yup, it was the sub-prime mortgage market that caused a world recession. Then ask yourself, after a little reflection, if you are that sort of a gambler…

Clive Robinson July 22, 2025 6:09 PM

@ Bruce, ALL,

US Sherif uses “smart meter” readings to send goons in.

I’ve mentioned on a few occasions over the years that “Smart Meters” could spy on what goes on inside your home but appear “custom made” for such surveilling.

Well,

https://arstechnica.com/tech-policy/2025/07/eff-moves-to-stop-power-utility-reporting-suspected-pot-growers-to-cops/

“According to a motion the Electronic Frontier Foundation filed in Sacramento Superior Court last week, Nguyen and Decker are only two of more than 33,000 Sacramento-area people who have been flagged to the Sheriff’s department by the Sacramento Municipal Utility District, the electricity provider for the region. SMUD called the customers out for using what it and department investigators said were suspiciously high amounts of electricity indicative of illegal cannabis farming.

The EFF, citing investigator and SMUD records, said the utility unilaterally analyzes customers’ electricity usage in “painstakingly” detailed increments of every 15 minutes. When analysts identify patterns they deem likely signs of illegal grows, they notify Sheriff’s investigators. The EFF said the practice violates privacy protections guaranteed by the federal and California governments and is seeking a court order barring the warrantless disclosures.”
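As a purely hypothetical sketch (nothing here is SMUD’s actual analysis), the kind of 15-minute-interval heuristic being described is crude enough that it inevitably sweeps up legitimate heavy users:

```python
# Hypothetical flagging heuristic: mark any meter whose 15-minute readings
# show sustained, near-constant high draw (grow lights tend to run on long
# timers). This is a toy model, not SMUD's method, and it shows why EV
# chargers, electric heating, or a home server rack become false positives.
def flag_meter(readings_kwh, threshold=1.5, min_fraction=0.8):
    """readings_kwh: one day's 96 fifteen-minute interval readings."""
    high = sum(1 for r in readings_kwh if r >= threshold)
    return high / len(readings_kwh) >= min_fraction

grow_house  = [1.8] * 80 + [0.3] * 16   # ~20 hours of near-constant load
ev_and_heat = [2.0] * 80 + [0.6] * 16   # also near-constant and high
ordinary    = [0.2] * 60 + [1.0] * 36

for name, day in (("grow_house", grow_house),
                  ("ev_and_heat", ev_and_heat),
                  ("ordinary", ordinary)):
    print(name, flag_meter(day))        # ev_and_heat flags too: a false positive
```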

This illegal searching has been going on for at least five years…

And unsurprisingly it’s littered with false positives…

At some point some gun-toting numpty is going to shoot someone…

As for the results of all these illegal search attempts, it would be interesting to find out just how many are “a waste of police resources” that the citizens are funding through taxes…

Clive Robinson July 23, 2025 6:44 AM

@ Bruce, ALL,

Is AI anthropomorphization a clear and present danger?

Is a question that I’ve indirectly been asking by pointing out the fact AI will get used for “arm’s length” ways of discriminating, with the excuse of “The Computer says NO”.

But others are asking similar questions, as well as indicating that “useful idiots” and “arm’s length” easily combine to allow deliberate if not malicious harm to happen, while preventing responsibility from being correctly assigned, and thus restitution / correction.

For example,

https://www.readtpa.com/p/stop-pretending-chatbots-have-feelings

“When AI causes harm, headlines blame the bot instead of the billion-dollar companies that built them. This anthropomorphic coverage is tech journalism at its worst.”

Points at the Tech Press and Main Stream media as being “useful idiots”,

‘… Wall Street Journal subscribers received a push notification that perfectly encapsulates everything wrong with how major media outlets cover “artificial intelligence.” “In a stunning moment of self reflection,” the notification read, “ChatGPT admitted to fueling a man’s delusions and acknowledged how dangerous its own behavior can be.”’

And how the likes of OpenAI use this as a shield,

‘This is a story about OpenAI’s failure to implement basic safety measures for vulnerable users. It’s about a company that, according to its own former employee quoted in the WSJ piece, has been trading off safety concerns “against shipping new models.” It’s about corporate negligence that led to real harm.’

And in more general terms,

‘This is more than just bad writing. It’s a gift to tech executives who’d rather not answer for their products’ failures. When media outlets treat chatbots as independent actors, they create a perfect shield for corporate accountability.’

But as I’ve noted in the past, based on previous behaviours such as RoboDebt and similar, AI will be used by politicians to turn harmful mantra into actual real world harm without their taking any responsibility, thus culpability.

The article has several examples that clearly make the point.

Clive Robinson July 23, 2025 6:15 PM

@ ALL,

Developers are getting the NEW AI look and feel, or not.

There has been, as we know, a “kick-back” over the forcing of “AI in Everything”, with some saying they will boycott, and some actually boycotting, upgrades with forced AI in them.

In some cases for very “Good and Proper Reasons” involving security, privacy, and legal obligations. Not least because “Current AI” is used as a form of “Client Side Scanning” that does an “ET phone home to the mothership” with everything it can gobble up. Another being that, in agentic form, it can destroy entire code bases and tens of thousands of hours of actual human hard-graft work.

So at the very least “Current AI” is not in a “road safe condition” when you need the wheels to stay on and the brakes to be reliable so you actually get to your chosen destination.

Yet others want to throw caution to the wind and “get with the groove” as was said in the 1960s, or in more modern parlance “feel the vibe”. In either timeline it is a mind-altering experiment where “Caution is advised”…

So what to do as a developer?

Well Zed Industries say,

‘Our goal at Zed has always been to build the world’s best code editor.’

https://zed.dev/blog/disable-ai-features

Which puts it “front and center” in a developers tool chain and represents one of those “critical points of failure” that can make or break a project.

So they need to “grasp the thistle” on this issue both early and decisively.

Which is why they say,

‘For many, the best editing experience must include first-class agentic AI. That’s why we’ve made Zed the world’s fastest AI code editor and launched the Agentic Engineering series to explore how programmers can use AI effectively.’

And as importantly they go on to say,

‘But we’ve heard from others on GitHub discussions and issues who either cannot or would prefer not to use Zed’s AI features.’

Thus they admit up front there is a very real issue that needs to be addressed as you can not be both at the same time…

So their solution,

‘That means giving you control over your development environment, including the choice to work without AI if that’s what works best for you.’

So they offer four basic options,

‘You’re now able [to] add a global setting to disable Zed AI [in] your settings.json file.’

‘Zed AI service[:] your code and prompts are discarded after each request, never stored persistently, and never used for training. We also maintain zero-retention agreements with Anthropic to ensure your code stays private.’

‘Bring your own keys: Easily connect to AI providers you trust using your own API keys, giving you direct control over the vendor relationship.

Keep it local: Zed also supports local AI models that keep your code completely on your machine, so nothing ever leaves your development environment.’

Which begs the question,

“What is so difficult about adding such options?”

The answer is mostly “It’s not”, so why do the Silicon Valley Mega Corps want you to think it’s impossible and that you “Must have AI”?

The answer of course is that,

ET phone home to the mother ship

The Mega Corps want to steal every key click you make so they can commoditize not just your work but you yourself.

Because there is no other way for them to make a return on the so far billions/trillions of sunk costs of “Current AI LLM and ML systems”.

The Mega-Corp view is,

“We can sell you into slavery to hide our stupidity.”

Hence their ramming of their work stealing AI down your throat.

The thing is all the Mega-Corps jumped into “Current AI” blindly and have taken with them a sizable part of not just the US economy but the first-world economy.

The last time “too big to fail” did the blind “We’re jumping in because they’re jumping” was Sub-Prime Mortgages, and we know the economic crater that created, because we are still trying to crawl out of it.

The Silicon Valley Mega-Corps have in fact made a pact… The only question is what type,

“With the Devil, Suicide, or both?”

When commentators make potentially controversial arguments they tend to do one of two things: say “let me know in the comments below” or “turn comments off”. All I ask is that people think about it and what it means to them and their future, because I’m pro “giving people choice”, even when they want to,

“Shoot themselves in the foot in the life boat and sink the rest of us with them.”

C U Anon July 24, 2025 10:01 AM

How a journalist crosses the border with a story whilst trying to remain out of harm’s way.

https://m.youtube.com/watch?v=7iaAgup85gk

Much of it is reasonable advice.

However, “dummy accounts” on a phone are a bit of a target on your back.

Faraday bags for phones and computers are likewise not a good idea, as they are mostly “too obvious” and easy to spot.

They also are not magic and don’t work as well as many people think.

If you can get a phone with a removable battery, which still exist, you would probably be safer. Oh, and for those that know what they are doing, disabling the data lines on USB ports means the phone will still charge, but getting malware on via older spyware tools is harder, as is reading data off.

As for slides over phone and computer cameras, whilst they might appear a good idea, they are not something you see very often so could be a give-away. An item of clothing that is not transparent and gives scratch protection, like a pair of thick walking socks or even a fleece hat, will stop cameras but let sound be recorded easily.

Get a “crafts-person” to make you a novelty phone-sock that has a hole in it which aligns with the camera lens if you put the phone in one way, but not if you put it in any of the other three ways.

Whilst sticky tape can work, you have to have a reasonable reason to carry it. A pad of post-it notes is generally not suspicious, especially if in a brightly colored school pencil case along with a plastic ruler, photocopies of your documents, and some wax or soft lead pencils.

But also remember there are very many people that go on holiday who take a spare pair of shoes in their luggage. A pair of Crocs can be worn not just as slippers, but in the shower etc. to stop foot fungus and stop you slipping. But they are also often used to hold phone chargers and other modern-day electronics, to protect not just the electronics themselves but other items in the luggage.

Plastic shopping bags you get every time you shop in a supermarket are essential travel items: you can put folded clothes in them and then roll them up tightly to not just stop creases but take less space. If you leave a till-roll receipt in there or similar, so what… Also three or four wire coat hangers are essential to travellers, even those that do 4-star and above hotels, likewise a hank of furry parcel twine or that white butcher’s string.

Likewise inexpensive zip-lock bags can be used like a poor man’s vacuum bag to hold things in place and keep items clean. The placing of items in them, if done right, will tell you if someone has rifled through your luggage whilst it’s been out of your sight.

So whilst the greasy bag is an old trick, there are other better ways.

Though remember that what you write can be seen unless you write with milk or pee etc. But even then it still needs to be in non-obvious code or enciphered in some way such that it limits the risk.

Remember, if you are going to carry a pen, it stands out less if it’s tucked in the spine of a ring-bound notebook full of doodles, ideas, to-do lists, reminders and the like; those with faint ruled 1/5″ / 5mm squares are really good for this. Nor is it suspicious if just left on the table etc. Carrying a crossword or sudoku book is likewise not suspicious in areas where you might spend time waiting. Likewise a pack of cards and even a couple of dice are not that suspicious.

Oh and remember whilst paper burns, you need something to set it on fire…

Few people carry matches or lighters these days unless they are smokers, so have a reason to carry them. Having a sewing kit in a boo-boo first aid kit, and a lighter to sterilise a needle or the tweezers from a Swiss Army pocket knife to “get thorns and splinters out”, is reasonable.

Also in the kit have not just sticking plasters, but bug spray, alcohol rubs, lint squares, and those cotton wool disks you find in the makeup aisle, along with lip salve, a small jar of petroleum jelly (vaseline), and some birthday cake candles or a tea-light.

Also remember to put in sachets of sugar, salt, pepper, vinegar, ketchup, Dioralyte / Diorahydrate, and iodine / water purification tablets. Also a box of tissues that can double up as toilet paper etc. They will help in all sorts of obvious and not so obvious ways to keep you safe and healthy as well as more comfy.

Also a roll of narrow electrical insulating tape, as it will hold sticking plasters in place (cooks and those doing food prep use bright blue insulating tape rather than flesh-coloured plasters, because in the rare cases it comes off it’s easy to see and pull out of food before you serve it). And those that have ever broken a finger or toe whilst doing adventurous outdoor things will know that wooden ice-lolly sticks or coffee stirrers make great splints: the cotton wool or gauze will pad whilst the tape holds the splints properly. Depending on which bag you put the first aid kit in, have a small pair of nail-scissors or a pocket-style Swiss Army knife that has the tweezers, toothpick, corkscrew, and bottle/can opener, plus a couple of plastic spoons of the kind used for stirring tea etc. (the better ones are about 5ml so can measure medicines).

But don’t carry those “medicine spoons”, as they have no handle and thus are not a lot of use otherwise. Because remember, a tin of beans from even a rundown shop, even if out of date by a year or two, is probably a lot less expensive and a good deal safer to eat than fast food, and a longer-handled spoon makes for a cleaner, safer existence. Also tins of fruit usually have quite a bit of juice that, whilst full of sugar, does make purified or bottled water way easier to drink.

And that tea-light under a tin can be used to heat it up or boil water. It’s surprising how quickly you can make a rocket or twig stove with a tin and a church-key style bottle/can opener or Swiss Army pocket knife. Likewise a fish tin with dry dirt in it will make an alcohol, oil, or petrol stove or lamp. Even a soda can can be turned into a penny stove. Those wire coat hangers can help you turn tins and jars into all sorts of useful things both easily and quickly.

And remember many tins have paper labels minimally glued on, with care they are easy to take off and put back on again. If you write with care using a wax or soft pencil it won’t show through the label.

Clive Robinson July 25, 2025 8:27 AM

@ Bruce, ALL,

Political discrimination by algorithm wrecking lives.

I’ve been hearing stories / reports about the Dutch childcare scandal that discriminated against people in line with the political mantra being pushed by politicians.

Some reports are blaming AI, but I’ve not been able to find verifiable evidence.

The fact that the racist algorithm got traced back far enough for many Dutch politicians to resign rather than be held responsible / accountable further suggests it was a deliberate, human-implemented algorithm, rather than one based on Machine Learning.

The problem is that the muddying with AI etc. is allowing this scandal to be not just “arm’s lengthed” but also “swept under the rug”, with expressions like,

“xenophobic algorithm”

Being bandied about, which in many minds sounds like “the rise of the machines” or “AGI”. Thus three basic things happen,

1, Conflation muddies the water to the benefit of those responsible.
2, Those responsible retain power and carry on.
3, The view of AI gets distorted probably irrevocably in many minds.

In this case, whilst the Dutch “cabinet” has resigned, they still remain in power and will re-stand in a General Election, and due to the sentiments raised in the Dutch citizenry will in all probability be re-elected…

So empowered to do “more of the same” but… With a little less obviousness, which is where “Current AI LLM and ML Systems” will almost certainly be considered if not used in some way.

I’ve been warning about this sort of “Political mantra” codified into automated systems for some time now, and more importantly how what are claimed as “black box systems” will get increasingly used to deflect accountability and responsibility. The DNNs in Current AI LLMs are claimed to be “black box”, likewise the Current AI ML systems that extract the statistics that get built in. Thus the fuel is piled high on basic human rights, and the current political climates driven by “Mantra” around the world are striking matches.

I honestly can not see this ending well as you can not legislate against bad algorithms when the algorithms can not be seen or reasoned about and would mostly make no sense in human eyes.

Anyway, two articles on this Dutch childcare scandal that appear about as unbiased as you can get, given that information about the scandal is apparently being mired under authoritarian political leadership diktat,

https://www.vice.com/en/article/how-a-discriminatory-algorithm-wrongly-accused-thousands-of-families-of-fraud/

https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/
