Comments

76Y February 13, 2026 7:12 PM

https://www.timesofisrael.com/israeli-founded-spacex-of-weather-raises-175m-to-send-next-gen-satellites-into-orbit/

‘Israeli-founded Tomorrow.io, a weather intelligence platform, said Wednesday that it has secured $175 million in fresh capital to launch the next generation of weather-monitoring satellites into space to forecast where storm clouds are gathering as well as help businesses and countries manage fast-evolving climate threats.

Billing itself as the SpaceX of weather, Tomorrow.io’s bold endeavor is to tackle the decades-old challenge of getting weather forecasts right as extreme weather events are ranked as the top long-term risk, according to a survey by the World Economic Forum. In 2024, natural disasters wiped $320 billion from the global economy, with weather and climate change-related catastrophes, such as floods, wildfires, hurricanes, and tropical cyclones, responsible for 93% of overall damages.

“There are roughly 100 hurricanes or tropical cyclones a year globally and we still have a hard time forecasting both their trajectory and intensity,” said Goffer. “If you look at why, in most cases, it was because of a pretty rapid change in the structure of a storm that was missed because of gaps in existing meteorological observation data.”

The lack of timely and dense data prompted the three founders to develop their own satellite technology to improve observations and measurements of the current state of the atmosphere, especially in underserved regions such as India, the Philippines, and Africa.

Goffer said that the sizes of the satellites range from a shoebox to a mini fridge, whereas government satellites are the size of a car or bigger, heavier, and more costly.’

Clive Robinson February 14, 2026 2:08 AM

@ Bruce, ALL,

With regards to “do squid dream”, the same question can be asked of any sufficiently complex system.

Which brings us to this week’s fun story,

Do AIs dream of revenge?

Apparently an AI agent submitted “AI Slop code” to be integrated into a project.

And when rebuffed, it is alleged, it wrote a “hit piece” to try to destroy the credibility of one of the project’s maintainers…

The thing is as outsiders we really don’t know enough to say what is really going on…

So I’m mindful the whole thing could be “a couple of months early” for April Fool’s

You can read the latest from the project maintainer at,

https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/

However I’m mindful that using shaming as a weapon of revenge, is bad enough.

But… What if it’s part of a way to get insecure code into a database deliberately?

If you shame people enough, without remorse and without people being able to identify you, then ultimately your target either acquiesces to your demands, or, in an increasingly large part of society, “cancel culture” rules take over.

All too many will acquiesce rather than stand and fight. Others will just walk away, whilst those who stand and fight don’t know who or what they are fighting.

When such attacks are automated and effectively anonymous, an ordinary individual stands little chance of fighting back, as they have neither the time nor the mental fortitude.

Thus such attacks will in effect open up a security vulnerability in some non-negligible percentage of projects…

If your aim is to get malicious code into Open Source –as it clearly is for some APT groups and SigInt agencies– then automated reputation attacks are a way to go.

We know the NSA have used it as a “finessing attack” method on NIST and other standards groups, and that it has worked for them at least a couple of times that we know of.

So the “play book” has not only been written but found to work.

Thus the question arises, as ICTsec attacks move from technical into reputational,

“How do we defend against “social engineering” against not just individuals but more general society?”

jelo 117 February 14, 2026 7:38 AM

@ Clive Robinson

Do AIs dream of revenge?

Apparently an AI agent submitted “AI Slop code” …
And when rebuffed it is alleged it wrote a “hit piece” to try to destroy the credibility of one of the projects maintainers…

Obligatory:

https://calumchace.com/favourite-relevant-sf-short-story/

He turned to face the machine. “Is there a God?”

“Yes, now there is a God.”

Clive Robinson February 14, 2026 10:18 AM

@ ALL,

Did anybody else miss these amusements last week?

https://www.404media.co/rfk-jrs-nutrition-chatbot-recommends-best-foods-to-insert-into-your-rectum/

https://www.youtube.com/watch?v=FBSam25u8O4

I think the first is “eye wateringly enlightening” much like a PR event…

The second is, well, just a “WTF did I just hear?” followed by laughter (note the title “betrayal” – I’ve been warning about that as the Microsoft “Be Business Plan” – but it was not quite that type of betrayal I meant 😉)

Clive Robinson February 14, 2026 7:02 PM

@ Bruce, ALL,

Do Electric sheep dream of humans?

Not sure what to make of this,

My smart sleep mask broadcasts users’ brainwaves to an open MQTT broker

I recently got a smart sleep mask from Kickstarter. I was not expecting to end up with the ability to read strangers’ brainwaves and send them electric impulses in their sleep. But here we are.

The mask was from a small Chinese research company, very cool hardware — EEG brain monitoring, electrical muscle stimulation around the eyes, vibration, heating, audio. The app was still rough around the edges though and the mask kept disconnecting, so I asked Claude to try reverse-engineer the Bluetooth protocol and build me a simple web control panel instead.

https://aimilios.bearblog.dev/reverse-engineering-sleep-mask/
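A purely illustrative sketch of why an open broker is so alarming: anyone who can subscribe receives the raw bytes, and once the framing is known they decode trivially. The payload format below (little-endian 16-bit EEG samples) is an invented assumption for illustration, not the mask’s actual protocol:

```python
import struct

def decode_eeg_payload(payload: bytes) -> list:
    """Decode a hypothetical MQTT payload of little-endian 16-bit EEG samples."""
    count = len(payload) // 2
    return list(struct.unpack("<%dh" % count, payload[:count * 2]))

# Raw bytes as a subscriber on an open broker might receive them
sample = struct.pack("<4h", -120, 35, 5000, -32768)
print(decode_eeg_payload(sample))  # [-120, 35, 5000, -32768]
```

The point being that an open broker has no access control at all; the only “protection” is obscurity of the framing.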

The thing is, it’s easily technically possible, and worse has been done in the name of IoT devices and surveillance (anyone else remember the IoT “adult toys” story?).

But I’m dubious about the EEG readings aspect; they are generally hard to read from outside the human head due to electrode sensor positioning issues.

But I guess another changed saying is due at this point,

“Where there’s a bill there’s a pay”.

just thougts February 14, 2026 7:16 PM

@Do you believe…
No, that is why per Your quote ‘His private attorneys have been threatened with gag orders by the Government so they, even though he paid them, they did not want to
defend his honor and his innocence. They didn’t even file a Direct Appeal.’

Basically, that is obstruction of justice. Until AI becomes the judge and makes the decisions, gag orders will keep being misused, unchecked and unreviewed. But the deep state could intimidate the IT workers supporting the AI – so no way to real justice.

Clive Robinson February 14, 2026 7:37 PM

@ Bruce, ALL,

I’ve mentioned in the past and more recently that it is economic suicide to kill entry level jobs in your economy. Especially if it’s due to “off shoring” or similar.

Because once those left have “greyed out of employment”, that industry is effectively dead, and too expensive to restart as any expertise is gone.

Well,

IBM is tripling the number of Gen Z entry-level jobs after finding the limits of AI adoption

The job market has been a sore subject for Gen Z. The unemployment rate among young college grads sits at 5.6%, hovering near its highest level in more than a decade outside the pandemic. Meanwhile, prominent executives—from Anthropic’s Dario Amodei to Ford’s Jim Farley—have warned that artificial intelligence will slash corporate entry-level jobs.

https://fortune.com/2026/02/13/tech-giant-ibm-tripling-gen-z-entry-level-hiring-according-to-chro-rewriting-jobs-ai-era/

But the article goes on to note that the job specs have been rewritten due to AI,

But some companies are realizing that cutting young workers out of the pipeline isn’t a sustainable long-term strategy: $240 billion tech giant IBM just revealed it’s ramping up hiring of Gen Z.

“The companies three to five years from now that are going to be the most successful are those companies that doubled down on entry-level hiring in this environment,” Nickle LaMoreaux, IBM’s chief human resources officer, said this week.

“We are tripling our entry-level hiring, and yes, that is for software developers and all these jobs we’re being told AI can do.”

I hope for the sake of the future economy other employers take note.

Because an “AI Future” based around Current AI LLM and ML Systems as ChatBots and AI Agents looks ever less certain with every passing week (and sometimes hour), with the economics making no sense whatsoever,

https://garymarcus.substack.com/p/breaking-openai-is-probably-toast

And to be honest it’s not just the economics that have me worried. It’s the technology as well, because it is just not delivering… when even I can find holes in it, with perfectly reasonable and actually likely requests made of ChatGPT 5, within a few moments of first contact with it…

Clive Robinson February 14, 2026 8:22 PM

@ ALL,

Is vibe coding more addictive than gambling?

It would appear that not only do some think so, they think it’s harmful to both the employee and employer in lost potential and productivity as well as cost,

Breaking the Spell of Vibe Coding

Sinister variations on the positive state of flow

Vibe coding is the creation of large quantities of highly complex AI-generated code, often with the intention that the code will not be read by humans. It has cast quite a spell on the tech industry. Executives push lay-offs claiming AI can handle the work. Managers pressure employees to meet quotas of how much of their code must be AI-generated or risk poor performance reviews. Software developers worry that everyone around them is a “10x developer” and that they’ve fallen behind. College students wonder if it is worth studying computer science now that AI has automated coding. People of all career stages hesitate to invest in their own career development. Won’t AI be able to do their jobs for them anyway a year from now? What is the point?

I work at an AI company, and we use AI every day. AI is useful! However, we approach vibe coding with caution and have seen that much can go wrong.

The results of vibe coding have been far from what early enthusiasts promised. Well-known software developer Armin Ronacher powerfully described some of the issues with AI coding agents. “When [I first got] hooked on Claude, I did not sleep. I spent two months excessively prompting the thing and wasting tokens. I ended up building and building and creating a ton of tools I did not end up using much… Quite a few of the tools I built I felt really great about, just to realize that I did not actually use them or they did not end up working as I thought they would.”

Armin titled his post “agent psychosis”. The term “psychosis” is a strong label. What is it about this technology which could be trapping such productive and experienced developers? The reason may be similar to the addictive qualities of gambling, a sinister under-current of the normally positive state of flow.

https://www.fast.ai/posts/2026-01-28-dark-flow/

Clive Robinson February 15, 2026 9:49 AM

@ Ismar,

With regards the,

“A bank heist no one noticed”

We do know something of interest,

They set up the drill and bored a hole 40cm (15.7in) wide in the wall leading to the strongroom

So the circumference of the hole is

40cm × Pi ≈ 125.7cm
15.7in × Pi ≈ 49.3in

Allowing some wriggle room and strong clothing, they were not “big blokes”; in fact they were probably a little below average size.
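For anyone checking the arithmetic, a throw-away Python sketch (the 40cm / 15.7in figures are from the quoted report):

```python
import math

def bore_circumference(diameter: float) -> float:
    """Circumference of a circular bore, given its diameter."""
    return math.pi * diameter

# The reported 40cm (15.7in) wide hole in the strongroom wall
print(round(bore_circumference(40.0), 1))   # 125.7 (cm)
print(round(bore_circumference(15.7), 1))   # 49.3 (in)
```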

You might ask why I say “strong clothing?”

Well it’s because if they had “stripped down” to go through the hole they would most definitely have left quite a bit of DNA in the hole.

Which brings us onto the authorities entering and finding,

“Herbert Reul said it looked like “a rubbish dump”, with more than 500,000 items strewn across the floor – the contents of the safe deposit boxes that the thieves had left behind.

Police said many items were damaged after the thieves threw water and chemicals on them.”

I can make a guess as to what those chemicals were and yes you can get them very easily…

Their purpose: to destroy any accidentally left DNA and even fingerprints.

A strong solution of ordinary household “biological washing machine powder”[1] will do it, but it takes time… So it was probably more like “industrial grade” cleaning agents, though you’d have to take care they did not cause you injury by splash or by breathing in the vapours.

[1] If you want to experiment to see this, the closest thing to human you can easily get is swine flesh / pork. Take a 1cm piece from a chop / cutlet and put it in the bottom of a glass tumbler. Add a mixture of about 1/10th by weight biological powder[2] to 9/10ths water, enough to cover it, and leave it for a day in a cool place. When you come back you will find that the biological aspect has broken down the cell walls and fats (both lipids), and the other parts of the powder have basically broken down the DNA.

[2] It has to be “biological”, as other soaps don’t destroy the DNA; instead they make the DNA easily harvestable. Why might you want to do that?

Well one reason is to make people especially small ones smile,

https://m.youtube.com/watch?v=KINBSeCACow

Clive Robinson February 16, 2026 4:49 AM

@ Bruce, ALL,

Which AI was burning who or was it a conspiracy?

There is a story that has gained a lot of traction about an Open Source Project Admin saying NO to an AI generated “software fix”[1] and the AI responding badly –search for MJ Rathbun– or read,

‘https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/

Whatever you might think, it got newsworthy, with some of the more mainstream tech outlets publishing stories about it.

So it turns out that one such piece on Ars Technica, carrying two journalists’ names, was not at all factual…

Now we have what may be another first,

https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/

Whilst “names are not mentioned”, it’s very easy to find out who the journalists are, because the page has been archived in a number of places.

But it raises a question of,

“Which one – or was it the pair of them – thought that ‘using AI’, with its well-known ‘Soft Bullshit’ problem in around 3/10ths of usage, was a good idea?”

Then of course will be the thought in the back of peoples minds,

“Did the AIs Conspire?”

You can keep this going on and on and on… In a “lesser fleas” manner for quite some time… Just remember to keep the popcorn going.

[1] “Apparently” the issue was part of an “onboarding process”, so there for humans to solve, not an AI. In effect “training wheels for new contributors”.

Cybershow February 16, 2026 3:24 PM

Hope you’re all doing well. Update on a few recent
Cybershow episodes.

Planning, disappointment and response in
several cybersecurity scenarios in the UK.
Side effects of Online Safety. Resilience at Heathrow airport.
Local politics for us at the show.

https://cybershow.uk/episodes.php?id=59

Hope scrolling. A much more positive way of framing intent
during binge use. Aaron Balick interview.

https://cybershow.uk/episodes.php?id=58

The invisible harms of social media and our
attempt at a better digital parenting guide.

https://cybershow.uk/episodes.php?id=57

very best wishes to you all,

ResearcherZero February 16, 2026 11:31 PM

Privatised court transcript outsourcing led to sensitive case material being accessed remotely by a foreign entity located in India. The transcripts include names and details of undercover sources, protected witnesses, serious crimes, matters of national security, and sensitive cases involving young and vulnerable victims of abuse and violent criminals.

Normally employees require vetting and clearance, with a confidentiality undertaking. The private company VIQ, which outsourced transcribing to remote Indian workers, claimed the data “was safe” because the data itself was stored on a system in Australia (accessed remotely).

VIQ employees who raised issues with the arrangements were told to remain silent, or were otherwise fired or forced to leave.

https://www.abc.net.au/news/2026-02-17/transcripts-federal-court-viq-solutions-e24-technologies-india/106349338

lurker February 17, 2026 1:52 AM

@ResearcherZero

Lucky I just installed a new keyboard protector …

“claimed the data “was safe” because the data itself was stored on a system in Australia (accessed remotely).”

What’s sad is the contract is supervised by people who don’t know any better: “the Listed Entity is currently in the process of making enquiries in relation to these issues” – while dinkum Okkers who care about this stuff get the sack for caring …

ResearcherZero February 17, 2026 2:57 AM

@lurker

Anyone who was part of a case (victims, witnesses, confidential sources, etc.) in the last year, or who was mentioned in ongoing proceedings, may have been subject to information disclosure.

This is how protected witnesses wind up dead or have their lives put in jeopardy. It happens far more often than police and prosecutors are willing to admit.

Transcripts for closed hearings are meant to be treated with sensitivity, and are often sealed under suppression orders to protect the integrity of proceedings, case details, and the identities of those at extreme risk of harm. Disclosure of sensitive details can see violent offenders walk free and exact further harm against new and existing victims, ruining years of testimony and evidence presented in court.

A number of close friends (and others) died as a result of failures in the justice system and repeated lack of penalty handed down to offenders. This took place despite overwhelming evidence.

Prosecutors and others handling sensitive evidence should be held to a far higher standard of accountability, with severe penalties when they fail to meet it.

Clive Robinson February 17, 2026 3:57 PM

@ Bruce, ALL,

All is not well with CIOs in the AI world

Some think a time of reckoning is fast approaching for AI projects in the business world, and when you see by whom, it does not bode well for Current AI LLM Systems used agentically by employees who are basically vibing both money and IP out the door, with no return on either.

But it’s not just costs/ROI or security; there is more behind it,

CIOs told: Prove your AI pays off – or pay the price

Boards demand measurable ROI as budgets, bonuses, and jobs hang in the balance

Money continues to be pumped into AI as the next great thing in business, but a growing number of studies have found that adopting AI tools hasn’t helped the bottom line, and enterprises are seeing neither increased revenue nor decreased costs from their AI projects.

Perhaps not surprisingly, almost all respondents (98 percent) said pressure from the board to demonstrate measurable return on investment (ROI) is increasing, and retaining budgets may depend on whether they can prove a measurable return.

Time is running out, the report claims. Some 71 percent of the CIOs surveyed believe their AI budget will likely face cuts or a freeze if targets are not met by the end of the first half of 2026. And the consequences won’t just stop at funding: 85 percent of CIOs believe their employers will tie their compensation to measurable AI outcomes, and many say the same applies to their chief exec.

The fear is not only that employees will build the wrong thing – it’s that they will build the right thing in the wrong place, with the wrong data, under the wrong controls, the report states.

Many CIOs are also owning up to a certain level of buyer’s remorse when it comes to some of the decisions made regarding their organization’s AI stack. 74 percent say they regret at least one major AI vendor or platform selection made in the past 18 months.

the corporate CIOs in the survey are not so sure, and most are fearful of what will happen if [the AI Hype bubble] bursts.

Some 73 percent suspect their company would experience major disruption, with more than half (57 percent) saying their company’s very survival might be at stake. Oh, and 60 percent feared they may lose their job should the big pop ever happen.

https://www.theregister.com/2026/02/17/no_roi_no_ai/

Personally I’d say their fears are quite rational if not fully justified already.

ResearcherZero February 18, 2026 3:18 AM

Misattribution in the courts.

Serious errors in court transcripts transcribed by the private company VIQ Solutions have compromised the integrity of court records. Comments have been attributed to the wrong person; other comments are missing entirely; and mistakes and errors may give statements an entirely misleading meaning, which could result in costly or serious misinterpretations.

Such errors not only make people’s jobs much harder, or distort the context and meaning of what was said in court, the cost of obtaining transcripts can be prohibitively expensive. Appeals have been entirely abandoned due to transcription errors.

For anyone considering legal action, the cost of obtaining transcripts can set them back thousands of dollars. For the company which provides the service however, it is a highly lucrative position to control.

https://www.abc.net.au/news/2025-11-29/family-court-transcripts-viq-solutions/105904558

Clive Robinson February 18, 2026 5:16 AM

@ ResearcherZero,

With regards,

“Such errors not only make people’s jobs much harder, or distort the context and meaning of what was said in court, the cost of obtaining transcripts can be prohibitively expensive. Appeals have been entirely abandoned due to transcription errors.”

In the UK we were moving to a fully digital Court System, not just to save money but to speed Courts up.

Well, it’s suffering from the “Horizon Effect”[1], as nearly all major UK Government-led ICT projects do, with the same old contractors…

But it had got to the point where accessing Court Records was effectively inexpensive and made “the law available to all”.

Which has caused “political embarrassment”, not so much for mistakes –though those are aplenty– but because politicians say so many stupid things (like a recent case about toilets). So the current “idiot in charge”, David Lammy MP, currently being “Lord Chancellor” and “Secretary of State for Justice”, has decided public access has to go and electronic records should be destroyed, returning to the old paper system where £10,000 for just a simple hour of Magistrates’ Court records is considered on the low side of what you will have to pay… Thus making justice closed, and oh so much easier to “re-write”…

Needless to say it’s upsetting Barristers and other legal professionals,

https://m.youtube.com/watch?v=cyyny3U-y0c

[1] The use of “Horizon Effect” has nothing to do with geography etc, but with the disaster that was the “Post Office Horizon System”, and the mind-boggling ineptitude and iniquity of the Post Office seniors and the prime contractor against entirely innocent people working as Post Masters in small Post Offices.

Clive Robinson February 18, 2026 6:27 AM

@ ALL,

General AI and Solow’s productivity paradox[1]

Some have noticed in the past my comment that the “most productive time in offices was 1973”, when computers had just started entering the workplace, and things –especially productivity– nose-dived and have not recovered.

And in many ways we can see why office productivity has declined and continues to decline, with PCs on every desktop running constantly changing applications that have more features than there are termites in a hill.

Well, it appears that those in charge of companies, the CEOs up in walnut corridor, are noticing the same issue with both General and Agentic AI built around Current AI LLM and ML Systems. And like the CIOs, they know shareholders want real ROI, and fast… So they are speaking of general AI with the politeness of a 1997 record title,

“That don’t impress me Much”

Thousands of CEOs just admitted AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago

In 2023, MIT researchers claimed AI implementation could increase a worker’s performance by nearly 40% compared to workers who didn’t use the technology. But emerging data failing to show these promised productivity gains has led economists to wonder when—or if—AI will offer a return on corporate investments, which swelled to more than $250 billion in 2024.

“AI is everywhere except in the incoming macroeconomic data,” Apollo chief economist Torsten Slok wrote in a recent blog post, invoking Solow’s observation from nearly 40 years ago. “Today, you don’t see AI in the employment data, productivity data, or inflation data.”

New data on how C-suite executives are —or aren’t— using AI shows history is repeating itself, complicating the similar promises economists and Big Tech founders made about the technology’s impact on the workplace and economy. Despite 374 companies in the S&P 500 mentioning AI in earnings calls —most of which said the technology’s implementation in the firm was entirely positive— according to a Financial Times analysis from September 2024 to 2025, those positive adoptions aren’t being reflected in broader productivity gains.

A study published this month by the National Bureau of Economic Research found that among 6,000 CEOs, chief financial officers, and other executives from firms who responded to various business outlook surveys in the U.S., U.K., Germany, and Australia, the vast majority see little impact from AI on their operations. While about two-thirds of executives reported using AI, that usage amounted to only about 1.5 hours per week, and 25% of respondents reported not using AI in the workplace at all. Nearly 90% of firms said AI has had no impact on employment or productivity over the last three years, the research noted.

https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-study-robert-solow-information-technology-age/

Thus the question arises,

“When are shareholders going to ‘night of the long knives’ General and Agentic AI?”

And of course the question,

“Are the ‘smart money’ investors already getting out with the current ‘mug money’ investors’ wealth?”

Especially as more and more “financial irregularities” in the AI companies are coming to light…

The thing is, Current AI LLM and ML Systems, even though expensive to make and run, especially for “General AI”, are showing worthwhile returns in niche “Highly Specific AI”; the likes of AlphaFold show this.

We’ve also seen this before with AI, back in the 1980s, with promises made for “expert systems” and “fuzzy logic” that just did not deliver in the general use case… But they are still in use and delivering in niche, highly specific use cases.

As for “General AI”, even over-hopeful experts in the field are pushing deliverable time frames further and further into the future, with 2035 being just one example.

[1] The term came about from an almost throw-away quip from Robert Solow, a Nobel Laureate:

“You can see the computer age everywhere but in the productivity statistics.”

https://en.wikipedia.org/wiki/Productivity_paradox

Who is going to coin something similar for General and Agentic AI, or,

“Has it been said already?”

76Y February 18, 2026 7:25 PM

https://www.yahoo.com/news/articles/china-building-submarines-faster-ever-040907447.html

‘China has ramped up its production of nuclear-powered submarines over the past five years to the point where it is launching subs faster than the United States, threatening to negate a sea-power advantage that has long belonged to Washington, a new think tank report says.

The buildup in the People’s Liberation Army Navy’s nuclear-powered sub force includes both ballistic-missile and attack subs, the report from the International Institute for Strategic Studies (IISS) says.

During the years 2021 to 2025, China’s submarine building surpassed that of the US in both numbers of subs launched – 10 to 7 – and tonnage – 79,000 to 55,500, says the report, which looked at shipyard satellite imagery to draw estimates of China’s construction.

China also maintains a large conventionally powered sub fleet, with 46 boats, according to the “Military Balance.”

The US has zero conventionally powered subs which – unlike nuclear-powered subs – need to refuel regularly.

The newest Chinese subs are not believed to be as quiet as US ones, leaving the stealth advantage to the US Navy.’

ResearcherZero February 19, 2026 5:09 AM

@Clive Robinson, 76Y

Apparently, according to media reports, you can just jail-break an F-35. Or you could steal the designs from any number of manufacturers and build your own plane, sub, ship, missile, …

A lot of sensitive information can be purloined via access to government records systems, be it from a government service network itself or its service provider.

It costs significantly fewer bones for bulk access through a well-positioned employee within a government department/ministry. Even fewer bones for remote access to those files when they are digitized and stored electronically, regardless of the Privacy Act 1988 (as cited by VIQ Solutions) and the storage location.

A few decades back or so, I think the CIA had warned the President(s) about Russia and China burrowing into the networks of the U.S. government and their intentions.

If Estonia and Georgia did not ring a few bells in the White House, perhaps the hack of OPM might have? Certainly not the deep penetration into multiple federal agencies and departments (and their email systems), and then the compromise of U.S. telcos and the millions of call records.

(The Salt Typhoon hack allowed China to geolocate millions of individuals and record their phone calls at will. Let us just ignore the implications of that.)

Certainly not any recent campaigns. But it might not be China or Russia, it could be someone entirely else… said a man in the White House (or said something like it). 😉

‘https://www.theregister.com/2026/02/18/dell_0day_brickstorm_campaign/

Perhaps the heads of America’s agencies are far too busy pumping out fluff that would make the Kremlin’s propagandists blush, and too skilled at pandering to keep their jobs, to have any real skill-set of worth relevant to their position?

https://www.washingtonpost.com/opinions/will-john-brennans-controversial-cia-modernization-survive-trump/2017/01/17/54e6cc1c-dcd5-11e6-ad42-f3375f271c9c_story.html

(Note: Any mitigation and defense efforts are subject to change.)

https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-239a

ResearcherZero February 19, 2026 6:51 AM

Anyone targeted by BRICKSTORM should be on the lookout for GRIMBOLT, which is a more effective backdoor and was put into action to replace the BRICKSTORM backdoor last year.

Initial access is as yet undetermined, but believed to target edge devices. The intrusions are accompanied by exploitation of CVE-2026-22769 in Dell RecoverPoint for Virtual Machines.

‘https://cloud.google.com/blog/topics/threat-intelligence/unc6201-exploiting-dell-recoverpoint-zero-day

Dell mentions the zero-day has existed for some time, due to the use of hardcoded login credentials in the code of its data protection and disaster recovery product for virtualized environments. Chinese hackers have reportedly used the vulnerability in their operations since 2024.

https://www.techradar.com/pro/security/a-dell-zero-day-flaw-has-reportedly-gone-unpatched-for-nearly-two-years-and-chinese-hackers-are-taking-advantage

lurker February 19, 2026 12:33 PM

@ResearcherZero, ALL
re Dell Recoverypoint

The old hardcoded credential trick again, eh? But Dell won’t get any comeuppance serious enough to make them mend their ways. And it’s not as if there aren’t a herd of competitors in this field for clients to switch to …

ResearcherZero February 19, 2026 9:37 PM

@lurker

Hardcoded admin credentials in the Tomcat server they didn’t bother to audit before rolling a development build out into the real world for testing. This is how all widely distributed commercial software is audited today: by nation-states and groups of criminal affiliates looking for access.
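This class of bug is also trivial to look for, which is rather the point. A hedged sketch of the idea – the file layout follows Tomcat’s usual tomcat-users.xml convention, and the credentials shown are invented for illustration, not the ones in Dell’s product:

```python
import xml.etree.ElementTree as ET

# Well-known default username/password pairs (illustrative, not exhaustive)
KNOWN_BAD = {("admin", "admin"), ("tomcat", "tomcat"), ("admin", "password")}

def find_default_creds(xml_text: str) -> list:
    """Return (username, password) pairs in a tomcat-users.xml style
    document that match well-known default credentials."""
    root = ET.fromstring(xml_text)
    hits = []
    for elem in root.iter():
        if elem.tag.endswith("user"):  # tolerates namespaced tags
            pair = (elem.get("username"), elem.get("password"))
            if pair in KNOWN_BAD:
                hits.append(pair)
    return hits

# Invented example of a shipped config with a baked-in login
conf = '<tomcat-users><user username="admin" password="admin" roles="manager-gui"/></tomcat-users>'
print(find_default_creds(conf))  # [('admin', 'admin')]
```

A five-minute check like this before shipping is cheaper than letting the nation-states find it for you.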

It is far cheaper to roll out alpha or beta builds and then let others find the bugs. Many of these products are not ready to ingest large volumes of sensitive private data – while facing the public net.

DHS has purchased a whole range of products to test in the real world. To largely avoid accountability when things inevitably go wrong, physically and datawise.

A $1B multi-department, 5-year Palantir license to do whatever you like with it.

‘https://www.wired.com/story/department-homeland-security-ice-billion-dollar-agreement-palantir/

Clive Robinson February 19, 2026 10:12 PM

@ ALL,

Speaking of “dreaming”, a trend that has been noted by more than one person is that the calibre of people doing “vibe coding” is “seen as low”…

One reason given is that both they and their projects are “shallow”, lacking both depth and, importantly, experience in problem solving (what others call “engineering design experience”).

Well, someone has –unsurprisingly ;)– meditated and blogged on the subject,

AI makes you boring

I don’t actually mind AI-aided development, a tool is a tool and should be used if you find it useful, but I think the vibe coded Show HN projects are overall pretty boring. They generally don’t have a lot of work put into them, and as a result, the author (pilot?) hasn’t generally thought too much about the problem space, and so there isn’t really much of a discussion to be had.
The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.

I feel like this is what AI has done to the programming discussion. It draws in boring people with boring projects who don’t have anything interesting to say about programming.

But wait… There is more and it’s rather more personal,

I want to build an argument that it’s much worse than that.

AI makes people boring.

AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.

Note that last bit I’ve put in bold.

DuckDuckGo has access to a number of AI systems you can use effectively for free (an important consideration for many who want to learn about them).

I’ve been using them to test bed research on their systemic flaws (of which they have oh so many).

The idea is to see how people will fall foul of them in “ordinary work flows” not for engineering type work but everyday work.

I posted one the other day, which I used to demonstrate the issue of RAG -v- DNN.

For various reasons our host @Bruce is in the DNNs of various LLMs; I’m not (lucky me). So a compare and contrast between a post I’ve made here and what the DuckDuckGo AI produces about @Bruce by name reveals some quite interesting systemic failures that would not otherwise be readily apparent.

That is, think of one of the “legal brethren” trying to sort out which “expert witness” to use.

Based on my research so far, my advice to a harried lawyer who thinks of using AI as a speed up is,

“For goodness sake don’t do it!”

All that aside, read the blog post and take it on board in your considerations.

Clive Robinson February 20, 2026 5:26 AM

@ lurker,

With regards the thoughts of “old timer Aardvark”, yup, there are “advantages to the old ways”.

Mind you it was a while after the late 1970’s and early 1980’s that “design by contract” became a thing.

But it was not until “the early noughties” that Bertrand Meyer got the term “trademarked”… So officially we can not use it any more, which is why you no longer hear about it, or about that weird OO language “Eiffel”.

The best bit about design by contract was that it cleared out many proto-bugs at the design stage, because it made you think clearly not just about how you dealt with “errors” but also “exceptions”, and thus eliminated many if not all root causes of “BSoDs” before they even got into code.
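For those who never met it, here is a minimal sketch of the idea in Python, with plain assertions standing in for Eiffel’s built-in require/ensure clauses (the `withdraw` example is mine, purely illustrative):

```python
# Design by contract, sketched as a convention: preconditions bind the
# caller, the postcondition is the promise the routine itself must keep.

def withdraw(balance: int, amount: int) -> int:
    # Preconditions: the caller's side of the contract.
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: cannot overdraw"

    new_balance = balance - amount

    # Postcondition: the routine's side of the contract.
    assert 0 <= new_balance < balance, "postcondition violated"
    return new_balance
```

The point is that the contract forces you to decide, before writing the body, what counts as an error and what counts as an exception, rather than discovering it in production.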

I’d kind of dropped into the pattern already. As I mentioned the other day, I still give the same basic software design talk I was giving back in the 1980’s, where I talk about the notion of a stand-alone embedded program being the equivalent of a pyramid that you are trying to turn into a diamond, to save resources and prevent errors and exceptions killing things.

Back then “Object Oriented” was not something most people had heard of, and even fewer had any knowledge of it, and those doing embedded systems could not and would not use it even if they did, because it “wasted resources you did not have”. Even well into the 1990’s microcontrollers did not have the RAM or the Stack even if they had the ROM…

The notion of objects is contested to this day, but arguably it predates both design by contract and “home computers” by nearly two decades, having been around in highfalutin conferences in the early 1960’s.

But Alan Kay’s original ideas that work so well in parallel computing have largely been discarded (but are coming back unacknowledged with the use of DNNs in LLMs). About all that’s left is “encapsulated data+methods” and “message passing interfaces”.

But the “bad” that still happens is not passing back errors and exceptions, and programmers trying to push error handling “to the left and up”, out of business logic to as close to the input as they can. This was clearly a mistake when we first started developing web-based systems, because reworking code into client-server meant putting it all on the client… Something that is still giving us security issues to this day…
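A tiny sketch of the point (hypothetical names, server side only): whatever the browser-side JavaScript checks, a hostile client can simply skip, so the server has to check it all again before touching business logic.

```python
# Client-side validation is a convenience for honest users; the server
# must re-validate everything it receives, because the "client" may be
# curl, a script, or an attacker who never ran the form's checks.

def server_transfer(balance: int, amount) -> int:
    # Server-side re-validation, even though the web form "already did it".
    if not isinstance(amount, int) or amount <= 0:
        raise ValueError("rejected: invalid amount")
    if amount > balance:
        raise ValueError("rejected: insufficient funds")
    return balance - amount
```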

Anyway, enough reminiscences; they are turning my beard grey 😉

ResearcherZero February 20, 2026 6:40 AM

@Clive Robinson, lurker

RE: Even well into the 1990’s politicians did not have the RAM or the Stack even if they had the ROM… …and still don’t.

I wouldn’t overthink the answer to the question posed by the article.

(Why don’t people believe evidence laid out before them in granular detail?)

‘https://www.theguardian.com/world/ng-interactive/2026/feb/20/a-war-foretold-cia-mi6-putin-ukraine-plans-russia

Clive Robinson February 20, 2026 12:31 PM

@ ResearcherZero,

With regards,

“Why don’t people believe evidence laid out before them in granular detail?”

Because of cognitive bias that these days is “algorithmically generated”…

So much so, that even an American expat who cycles around much of London has noticed…

Thus this might amuse and explain,

https://m.youtube.com/watch?v=uDkyP37JgY0

Like Watergate you have to “follow the money” for real explanations, whilst others “follow the idiots” to get so biased…

ResearcherZero February 21, 2026 12:09 AM

@Clive Robinson

More proof extraterrestrials will be giving planet Earth a wide berth.

In Australia cognitive bias is generated by Gina Rinehart, who supplies the money to the idiots. After travelling around London, even the “rough” areas, people asked my wife and me when we got back to Australia whether it was scary or dangerous, which was quite funny.

I’m sure some people think parts of Australia are scary and dangerous. Perhaps running across all lanes of the freeway with your eyes shut, or swimming into the mouth of a shark.
There are saltwater crocodiles in Northern Australia for those who are really keen. 😉

Clive Robinson February 21, 2026 3:29 AM

@ ResearcherZero,

You say,

“I’m sure some people think parts of Australia are scary and dangerous.”

There are. For instance, “Manly” in Sydney is, contrary to what you might think from the name, full of bikini-clad amazonians requiring increased heart meds 😉

But you do have the ten most venomous snakes, and a spider that’s up to a foot across and moves fast enough to scare even those of calm disposition… Then there are those that hide in holes in the ground, one of which, the Sydney funnel-web, has been known to kill people and their pets. But it’s not so much what’s on the ground, even though the national animal can easily disembowel a grown man, but what’s in the water. With the platypus acting as a bridge between the two, there are the stone fish, lion fish, several octopuses, jellyfish, sharks and amphibians. Then there is the weird stuff, like the magpie that can require people to wear hard hats, and other birds of similar disposition.

And you say the worst to fear is highway motorists?

Hmmm… Seriously, unlike Texas, which claims to do things “bigger and better” than everyone else, Australia really does do it “bigger and better” when it comes to dangerous nature 😉

Winter February 21, 2026 7:00 AM

@Clive, ResearcherZero
Re: Scary Countries

Even the Netherlands can be a very scary country, as the famous ambassador Pete Hoekstra experienced. We are talking no-go areas and politicians set on fire:

‘https://youtu.be/K8AwFc9hlf4?si=eUS_LLIEsBA0CPP_

I understand that Pete Hoekstra has also weighed in on Canada recently.

lurker February 21, 2026 12:44 PM

@ResearcherZero

In six months across north and west Qld I never saw a snake. I saw a dozen or so in the same time in the countryside in China, where they hunt, kill and eat them. The scary thing I saw often in Oz was filling up the vehicle tank with gasoline, and filling up the driver with an equal quantity of Castlemaine XXXX.
