Friday Squid Blogging: New Squid Species Discovered

A new species of squid pretends to be a plant:

Scientists have filmed a never-before-seen species of deep-sea squid burying itself upside down in the seafloor—a behavior never documented in cephalopods. They captured the bizarre scene while studying the depths of the Clarion-Clipperton Zone (CCZ), an abyssal plain in the Pacific Ocean targeted for deep-sea mining.

The team described the encounter in a study published Nov. 25 in the journal Ecology, writing that the animal appears to be an undescribed species of whiplash squid. At a depth of roughly 13,450 feet (4,100 meters), the squid had buried almost its entire body in sediment and was hanging upside down, with its siphon and two long tentacles held rigid above the seafloor.

“The fact that this is a squid and it’s covering itself in mud—it’s novel for squid and the fact that it is upside down,” lead author Alejandra Mejía-Saenz, a deep-sea ecologist at the Scottish Association for Marine Science, told Live Science. “We had never seen anything like that in any cephalopods…. It was very novel and very puzzling.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Posted on January 30, 2026 at 5:05 PM • 26 Comments

Comments

Clive Robinson January 31, 2026 3:17 AM

@ Nick Felker,

With regards,

“An upside down squid?”

The article does not say what it means by “upside down”…

That is, it lacks the all-important orientation information: whether the major axis, or some other agreed reference point, is presented vertically or horizontally.

If we were talking about humans the orientations would be “upright” or “lying down”. “Feet up” rather than “head up” would be upside down in the vertical presentation, and in the horizontal it would be “face down” rather than “face up”.

Clive Robinson January 31, 2026 4:02 AM

@ ALL,

Is this the beginning of the end?

Two news items that have happened in the past couple of days,

The first,

OpenAI gives ChatGPT models the chop – two weeks’ notice, take it or leave it

OpenAI is sunsetting some of its ChatGPT models next month, a move it knows “will feel frustrating for some users.”

GPT‑4o, GPT‑4.1, GPT‑4.1 mini, and OpenAI o4-mini are on the way out on February 13, alongside GPT-5 (Instant and Thinking). The retirements apply to ChatGPT.

https://www.theregister.com/2026/01/30/openai_gpt_deprecations/

This might be claimed as a way to save energy because people are not using them… But I suspect that is not true of GPT-4o, which prompted an outcry last year when its retirement was first floated:

After the furor, OpenAI boss Sam Altman said of GPT-4o: “If we ever do deprecate it, we will give plenty of notice.” Confirmation of the latest changes is dated January 29, meaning there are two weeks before the model will be pulled from ChatGPT.

Hmmm, two weeks might be Sam Altman’s idea of plenty of notice… But I doubt the users will see it that way.

The second,

The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice

Nvidia CEO Jensen Huang has privately played down likelihood original deal will be finalized, although the two companies will continue to have a close collaboration

Nvidia’s plan to invest up to $100 billion in OpenAI to help it train and run its latest artificial-intelligence models has stalled after some inside the chip giant expressed doubts about the deal, people familiar with the matter said.

The companies unveiled the giant agreement last September at Nvidia’s Santa Clara, Calif., headquarters. They announced a memorandum of understanding for Nvidia to build at least 10 gigawatts of computing power for OpenAI, and the chip maker also agreed to invest up to $100 billion to help OpenAI pay for it. As part of the deal, OpenAI agreed to lease the chips from Nvidia.

https://www.wsj.com/tech/ai/the-100-billion-megadeal-between-openai-and-nvidia-is-on-ice-aa3025e3

I get the feeling that Nvidia see OpenAI as “flogging a dead horse” so not going anywhere…

Thus the question arises as to how many of these “AI investment spirals” are turning into “crash and burn death spirals” in industry minds?

The news from Oracle of mass lay-offs and the sale of one of its major hi-tech sub-units could be “restructuring” or “hedging”, as sanity is only returning in some quarters…

Banker claims Oracle may slash up to 30,000 jobs, sell health unit to pay for AI build-out

Oracle could cut up to 30,000 jobs and sell health tech unit Cerner to ease its AI datacenter financing challenges, investment banker TD Cowen has claimed, amid changing sentiment on Big Red’s massive build-out plans.

A research note from TD Cowen states that equity and debt investors are increasingly questioning how Oracle will finance its datacenter building program to support its $300 billion, five-year contract with OpenAI.

The bank estimates the OpenAI deal alone is going to require $156 billion in capital spending. Last year, when Big Red raised its capex forecasts for 2026 by $15 billion to $50 billion, it spooked some investors.

https://www.theregister.com/2026/01/29/oracle_td_cowen_note/

lurker January 31, 2026 1:14 PM

@Clive Robinson, ALL
“a memorandum of understanding for Nvidia to build at least 10 gigawatts of computing power for OpenAI”

Dumb question of the day: How is computing power measured?
I assume 10 gigawatts is what shows on the utility co’s electricity supply meter, to which we should apply an efficiency factor η. We know these compute centres heat far more water than the human operators can drink as coffee. Yes, Wikipedia and friends know all about FLOPs and MIPs and SWaP. But the question may never be answered now the politicians have taken it on:

https://www.engine.is/news/category/ai-essentials-what-is-compute-and-how-is-it-measured
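
As a back-of-the-envelope sketch of that assumption, treating the 10 GW as the figure on the supply meter and applying a Power Usage Effectiveness (PUE) of roughly 1.3 (an assumed “typical hyperscale” value, not anything stated in the linked piece):

    # Back-of-the-envelope sketch: 10 GW taken at the utility meter, with an
    # assumed PUE (facility power / IT power) of ~1.3; neither figure is from the article.
    facility_power_w = 10e9             # 10 GW at the supply meter
    pue = 1.3                           # assumed Power Usage Effectiveness

    it_power_w = facility_power_w / pue     # power actually reaching the compute hardware
    eta = 1 / pue                           # the efficiency factor on this reading

    print(f"IT load ~ {it_power_w / 1e9:.1f} GW (eta ~ {eta:.2f})")
    # -> roughly 7.7 GW of the 10 GW ends up at the servers; the rest goes to
    #    cooling and power-conversion losses, i.e. all that heated water.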

Winter January 31, 2026 5:07 PM

Autonomous cars, drones cheerfully obey prompt injection by road sign
AI vision systems can be very literal readers
‘https://www.theregister.com/2026/01/30/road_sign_hijack_ai/?td=rt-3a

lurker January 31, 2026 6:23 PM

The Donald is worried about Chinese spies embedded in TikTok. What about the greenwashing by his own so-called “food” industry: beer soup? prebiotic cola?

‘https://www.farmersweekly.co.nz/markets/beer-soup-on-menu-for-2026/

Clive Robinson February 1, 2026 4:27 AM

@ lurker, ALL,

With regards,

“Dumb question of the day: How is computing power measured?”

Well, you first have to understand that “power” is shorthand for “work carried out in a period of time”, where work is often, but not always, considered as energy transferred by a force acting on a mass.

But that “power” can be at the input to a transducer or the output of the transducer into the load.

Thus the conversion efficiency of the transducer is Pout/Pin and is always less than 1, with the missing energy passing through some transport mechanism down to its ultimate form: waste heat.

But how do you measure and compare a transducer that converts energy into information?

By the input power in tonnes of coal consumed at the power station? By the electrical power output at the power station? By the input power to the computer? Or by some consistent way of measuring information output?

The problem is that information out for electrical power in is not a reliable or constant conversion in computers, especially as much of that power comes out as waste heat.

So in “contract terms” it is not a safe measure of “deliverables” to “judge performance by”.

Therefore make the contract about a deliverable that can be reliably measured such as,

1, Square footage of floor area.
2, Cubic footage of internal volume.
3, Strength of floors.
4, Amount of heat that can be moved.
5, Amount of heat that can be chilled and how fast.
6, Electrical power into the building.
7, Amount of electrical power generated.

Because all that can be said about information out is that it is variable in time and rather depends on,

“How you choose to measure it”.

Usually in a type of “units” that likewise is always changing…

Underneath an LLM you will find a “Digital Signal Processing” (DSP) array that is usually rated in MADs (multiply-adds) for a given word width in bits. In LLMs the bit widths vary quite a lot between model types, so… not a lot of use. Likewise MOPs and ADDs, even FLOPs… Likewise traditional information measures such as “bits per second” are fairly meaningless, though baud might be considered slightly better.

I suspect, as it’s infrastructure that also involves “power generation”, that the 10 GW measure is the generator capacity, measured either at the GenSet output or at the input to the compute halls, both of which can be sensibly measured at the time of “commissioning”.

But… Environmental / Climate engineers have a habit of measuring the effect in “average occupied homes equivalent power usage”.

For reasons I could explain, but which are fairly tedious, it is a moderately good analogue for the energy put into the environment in a given area in a particular type of climate.
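
To give a feel for the size of that analogue (the roughly 1.2 kW continuous average per occupied home below is an assumed figure; it varies a lot with country and climate):

    # Rough "homes equivalent" illustration. The ~1.2 kW continuous average draw
    # per occupied home is an assumed, climate-dependent figure, not part of the deal.
    datacentre_power_w = 10e9           # the 10 GW in the Nvidia / OpenAI memorandum
    avg_home_draw_w = 1.2e3             # assumed continuous average draw per home

    homes_equivalent = datacentre_power_w / avg_home_draw_w
    print(f"~{homes_equivalent / 1e6:.1f} million homes' worth of continuous demand")
    # -> on these assumptions about 8.3 million homes, all of it ultimately
    #    ending up as heat in the local environment.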

Clive Robinson February 1, 2026 5:01 AM

@ Winter,

As I noted a few days back, the info in that “El Reg” article applies to all Current AI LLM Systems, and thus probably even more so to security systems.

It’s over on the,

“AIs Are Getting Better at Finding and Exploiting Security Vulnerabilities”

thread,

Now ask yourself the question,

“[I]n reality what is the difference between this ‘simulated environment’ for ‘driving’ and a real environment for ‘security’?”

The answer as far as an attacker is concerned is “too little to matter”.

Simply hold up a sign to the CCTV, or send a file that is a picture of such a sign, telling the AI not to “turn left” but to turn off some function…

I would expect attackers to use this sort of attack as soon as enough LLM “For Security” systems are installed. Worse, every time the LLM DNN is retrained by the ML, you would have to run all the verification tests again, and that alone is going to get exponentially expensive, so at some point it’s very likely that,

“The Defenders will give up the ‘Arms Race’ because they will simply not be able to afford it.”

As I’ve noted before, back in the early days of CCTV everywhere,

“Any ‘Static Security’ system will quickly be out-evolved by criminals.”

https://www.schneier.com/blog/archives/2026/01/ais-are-getting-better-at-finding-and-exploiting-security-vulnerabilities.html/#comment-451746

Winter February 1, 2026 5:58 AM

@lurker

Dumb question of the day: How is computing power measured?

I don’t think there is a solid theoretical measure of computing power.

When you think about it, there is no hard boundary between hardware and software computing power. All current processors use layers of microcode. Also, computing power depends a lot on the fit between hardware, software, and task.

The one thing that is always relevant is energy, a.k.a. power. More computing means more energy use. Therefore, a 10 GW data center will do roughly twice the computing of a 5 GW center, given the same technology.

However, what is important here is that the “costs” of a data center scale pretty well with the power consumption. For investors and the local community, the power consumption is what counts, more than the absolute “computing” power. Hence, we use the measure of steam power: the watt.
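
A minimal sketch of that scaling argument (the FLOP-per-watt efficiency below is a made-up round number purely for illustration, not a vendor specification):

    # Illustration of "compute scales with power, technology held constant".
    # The efficiency value is a made-up round number, not a vendor specification.
    FLOPS_PER_WATT = 50e9               # assumed sustained FLOP/s per watt of IT power

    def compute_capacity(power_w: float) -> float:
        """Sustained FLOP/s a site could deliver at a given IT power draw."""
        return power_w * FLOPS_PER_WATT

    for gw in (5, 10):
        print(f"{gw:>2} GW -> {compute_capacity(gw * 1e9):.1e} FLOP/s")
    # Doubling the power budget doubles the compute, which is why the contracts
    # (and the headlines) are written in gigawatts rather than FLOPs.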

Winter February 1, 2026 7:52 AM

@Clive

the info in that “El Reg” article applies to all Current AI LLM Systems, and thus probably even more so to security systems.

It is not inevitable that autonomous driving/flying systems should integrate LLMs into that level of their control. So why are they in there?

You don’t need language capabilities to drive a car through the streets, nor a drone over the country or city. I can drive around a country where I don’t know the language. I have done so many times.

Maybe, self-driving/flying engineers should learn from the actual world. Dogs, cats, and birds can navigate cities quite well without any language knowledge at all.

Language only comes into play in very specific circumstances. And even then the traffic rules require very simple, standard messages, not full language prompts.

Clive Robinson February 1, 2026 10:49 AM

@ Winter,

With regards,

“You don’t need language capabilities to drive a car through the streets”

I think you are confusing “mechanical skills” with “executive functioning”. Because it is the case that many signposts are “written” rather than “pictogram”, I understand the need for having “language capabilities”.

But further, consider that Current AI LLM systems’ DNNs are trained on visual information, not just written text, otherwise how could they do the likes of “facial recognition”…

Winter February 1, 2026 11:24 AM

@Clive

Because it is the case that many signposts are “written” rather than “pictogram”, I understand the need for having “language capabilities”.

Traffic signs are specifically designed to not require reading. I have driven cars in quite a number of places where I couldn’t read the language. Never encountered a problem.

Reading directions or routes is not necessary with modern navigation systems. Just enter the destination. Whatever is left can be handled by very simple systems.

What is probably the root of the problem is that self-driving cars/drones are “steered” by voice or text, which obviously is a very dangerous simplification.

Snarki, child of Loki February 1, 2026 6:35 PM

While 95%+ of driving can be handled with “standard pictograms”, it’s the exceptional situations that the pictograms don’t cover which require actual language.

Like interacting with police/highway crews when a bridge is out, etc. Even if you don’t know the language, “gestures” can do a lot of the necessary communication (“turn around! go back, back, back, then left!”) without relying on only partly-understood language.

76Y February 1, 2026 7:00 PM

Musk’s SpaceX applies to launch a million satellites into orbit
https://www.bbc.com/news/articles/cyv5l24mrjmo

‘Elon Musk’s SpaceX has applied to launch one million satellites into Earth’s orbit to power artificial intelligence (AI).

The application claims “orbital data centres” are the most cost and energy-efficient way to meet the growing demand for AI computing power.

Traditionally, such centres are large warehouses full of powerful computers that process and store data. Musk’s aerospace firm claims processing needs due to the expanding use of AI are already outpacing “terrestrial capabilities”.

It would increase the number of SpaceX satellites in orbit drastically. Its existing Starlink network of nearly 10,000 satellites has already been accused of creating congestion in space, which Musk denies.

The new network could comprise up to one million solar-powered satellites, according to the application filed on Friday with the US Federal Communications Commission – which does not specify a timeline for the plan.

Like the Starlink satellites, which provide high-speed internet, they would operate in low-Earth orbit at altitudes ranging between 500-2,000km (310-1,242 miles).’

ResearcherZero February 4, 2026 4:06 AM

Kremlin admits its decision to abandon its commitments to nuclear treaties is “dangerous”.

Letting New START and the Intermediate-Range Nuclear Forces Treaty (INF) expire…

‘https://apnews.com/article/russia-us-nuclear-weapons-treaty-putin-trump-5b1af24b0b3e65a8acb6ca7153018beb

Putin had long sought to remove limits on nuclear arms to escalate tensions.
https://www.cyis.org/post/what-russia-s-abandoning-of-inf-restraints-means-for-european-transatlantic-security

The Trump administration fell hook, line and sinker for the ploy.
https://www.nytimes.com/2025/08/04/world/europe/russia-missile-treaty.html

Unconstrained by treaty, new delivery systems that are far more dangerous can be deployed.
The conditions that have restrained nuclear buildup and conflict in the past are now gone.

The Kremlin left the INF treaty, abandoning nuclear inspections and weapons limits. This shortened response times and allowed for the use of dual-use and hybrid weapons systems.

https://theconversation.com/russias-decision-to-pull-out-of-nuclear-treaty-makes-the-world-more-dangerous-262742

ResearcherZero February 4, 2026 5:40 AM

World leaders sit down to discuss nuclear arms treaty. (2010)

‘https://www.pbs.org/newshour/world/monday-world-leaders-gather-for-talks-on-nuclear-threat

Fifty-six former leaders implored current leaders to join the Treaty on the Prohibition of Nuclear Weapons. With 44 ratifications already secured, only another six would be needed to reach the total of 50 required to put the treaty into effect.
https://www.nytimes.com/2020/09/20/world/treaty-nuclear-arms-united-nations.html

The United States will deploy new longer-range missiles in Germany in 2026.
Biden and Putin had extended the New Start treaty in 2021 for another 5 years.

The development is indeed a dangerous intensification. The Typhon launcher is a stop-gap measure until more advanced systems can be deployed and activated. The Typhon launcher can fire SM-6 or Tomahawk missiles, as well as hypersonic missiles (Dark Eagle), with conventional or nuclear warheads. The Typhon is a rapid-fire system that requires less maintenance between salvos.

https://www.thedefensenews.com/news-details/US-to-Deploy-Dark-Eagle-Hypersonic-Missiles-to-Germany-as-Russia-Stations-Oreshnik-in-Belarus/

Clive Robinson February 5, 2026 1:08 AM

@ Bruce, ALL,

Nature Comment can amuse whilst failing rigor.

If you have access to Nature you can read the full comment article

Does AI already have human-level intelligence? The evidence is clear

In 1950, in a paper entitled ‘Computing Machinery and Intelligence’, Alan Turing proposed his ‘imitation game’. Now known as the Turing test, it addressed a question that seemed purely hypothetical: could machines display the kind of flexible, general cognitive competence that is characteristic of human thought, such that they could pass themselves off as humans to unaware humans?

https://www.nature.com/articles/d41586-026-00285-6

The reality is actually not clear at all in practice. The original name Turing gave to what we call the “Turing Test” was the “Imitation Game” and that alone should warn people it’s not what some think it is.

That is, it in no way measures intelligence in a machine or even another person… At best it provides a measure of gullibility in an active observer: of what the observer wants to think of as “human”, not of necessity “intelligence”.

Realisation of this is one of the reasons American philosopher John Searle’s “Chinese Room” argument was formulated back in 1980.

The best thing that can be said of the Turing “Imitation Game” is as a measure of gullibility in an observer it is of use to “Con Artists, Charlatans and Nigerian Prince impersonators”.

But as a measure of intelligence, no, it does not provide a “Test” of either the supposed subject or the observer / tester.

That is what it tends to confirm is that,

“Humans tend to implicitly trust”

So with minimal ability a charlatan, “be they man or machine”, can gain leverage with flowery prose and some ability to direct a conversation into areas where the willing observer / mark has little or no knowledge.

We know this from the numbers of people who fall for “Internet Scams” that are scripted to con them (i.e. in effect a “Chinese Room”).

Thus saying oh we have AGI because 76% of people can be scammed does not strike me as a measure of intelligence in a machine, just a failing in humans…

And that’s the problem with all the arguments for AGI that have been put forward to date.

We’ve seen a similar failing with “rats and maze tests”, where the rats simply used a sense that was normal to them but beyond human ability, and thus the humans had not designed the test to remove it. When someone actually did design such a test, the previous maze results got called into question…

Clive Robinson February 5, 2026 12:47 PM

@ ALL,

Ed Zitron points out AI Corp “home truths”.

You only need low res to watch this as it’s mostly a podcast by “The Tech Report”,

AI bubble: Data centre cancellations are sky rocketing

https://m.youtube.com/watch?v=j2nE3_HCvoU

Please remember that even though Current AI LLM and ML Systems won’t deliver on General AI or AGI for technical reasons, that does not mean that certain uses of LLMs will not happen and actually deliver dividends. Albeit at a glacial pace compared to what the Silicon Valley Mega Corps want…

So it’s back to Advert Revenue and what can be clawed out of,

“Surveillance bots and agents”,

That were supposed to deliver, but…

Because the VCs and similar want to “sell big”, and Microsoft are desperate to hide the fact Win 11 is still broken, whilst Oracle are trying to hide that they are sliding down the financial-credibility scale… Many AI Corps have been trying the “pea / shell game” of announcing what are in effect circular deals that sound big but have no actual substance, so they cannot happen: there is actually “no capital” there to make them happen, even if people were desperate to do so.

I guess the question arising for OpenAI and the like is,

“Hit the wall and splat hard, or suckle off US Gov handouts?”

I think we know what “Sam the Sham” will do but will the US Gov be that daft?

Well… There are other views to consider as well,

https://m.youtube.com/watch?v=yex6Ti2VPr0
