Friday Squid Blogging: We’re Genetically Engineering Squid Now

Is this a good idea?

The transparent squid is a genetically altered version of the hummingbird bobtail squid, a species usually found in the tropical waters from Indonesia to China and Japan. It’s typically smaller than a thumb and shaped like a dumpling. And like other cephalopods, it has a relatively large and sophisticated brain.

The see-through version is made possible by a gene editing technology called CRISPR, which became popular nearly a decade ago.

Albertin and Rosenthal thought they might be able to use CRISPR to create a special squid for research. They focused on the hummingbird bobtail squid because it is small, a prodigious breeder, and thrives in lab aquariums, including one at the lab in Woods Hole.

Is this far behind?

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on September 1, 2023 at 5:29 PM • 53 Comments

Comments

William September 1, 2023 7:57 PM

More like we’re now publicly admitting that we’ve genetically engineered squid. Most scientists who do this sort of stuff don’t publicize it. At least this is what I learned working in cattle genetics where I met scientists who claimed to have been cloning cattle long before Dolly.

Is this a good idea? Maybe. At least it’s done as a shortcut to breeding albino squid for use in neurological research: they’ll need to breed fewer squid to get ones for this specific purpose, which somewhat reduces the number of squid bred for this, erm, ‘insightful’ sort of research. That’s probably good.

Michal September 2, 2023 4:22 AM

That is interesting.

Did you hear about the hacking of Madeira’s (an island in the west Atlantic) healthcare (SESARAM) system? All the hospital computers have been down since early August…

Luke September 2, 2023 4:25 AM

One of our best pro-Scottish-independence bloggers, Wings Over Scotland, received a notice from Apple the other day that state-sponsored attackers are trying to target his phone. Here’s his post detailing the notice.

Commendable of Apple to let users know when they detect such goings-on. Would anyone here be able to shed some light on what it may have entailed?

iAPX September 2, 2023 9:27 AM

“State sponsored malware” ?!?

With the help of the NSA, other three-letter organizations, and also some non-US organizations, almost every piece of software we use daily falls into this category.

As long as there’s a backdoor in it, one way or another, it is malware.
And I could not point to any modern software, including its software stack (through supply-chain poisoning), that I could surely dub as non-malware.

Apple is not immune to it, far from it; in fact, they collaborate actively in many ways.

iAPX September 2, 2023 11:47 AM

Let’s go on about Apple and their lies.

Apple says they do not have your private key. And it’s true, most of the time.
They need your credentials to derive your private key and be able to decrypt the various keys used to encrypt your data. The Devil hides in the details…

For those of us that some three-letter agencies want to spy on, Apple will collaborate by asking for our credentials again, pretending we no longer have access to our account (or some other b*****it). When we enter these credentials, that’s it: Apple, and anyone able to intercept our TLS-encrypted communication, has access to our “private” keys, which are not so private anymore.
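To spell out the mechanism being alleged here: when a key is derived from credentials rather than stored, anyone who captures those credentials can re-derive the key. A minimal sketch in Python; PBKDF2 and every parameter here are purely illustrative, since Apple’s actual derivation scheme is not public:

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    # Derive a 256-bit key from the user's credentials (illustrative parameters).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = b"account-specific-salt"          # hypothetical per-account value
k1 = derive_key("correct horse battery staple", salt)
k2 = derive_key("correct horse battery staple", salt)
# Same credentials always yield the same key, so phishing the password
# is equivalent to obtaining the "private" key itself.
assert k1 == k2 and len(k1) == 32
```

The point is not the specific KDF: any scheme in which the key is a deterministic function of the credentials has this property.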

The second way Apple helps is by enabling hidden devices on our own account. The first step is necessary, but thereafter someone (three-letter you-know-what) is able to plug in a virtual device that is not displayed but receives any information shared through iCloud, whether it’s backups, messages, or whatever.

So if you ever received a weird request from Apple asking you to re-enter your credentials, as I did, you know what the real purpose is: somebody else is on the line.

The third way applies to China, and I suspect it is based on an M of N key-sharing scheme (Shamir), where the PCC has access to all cloud content of Apple users through a PCC-controlled company.

Apple might still not have the keys enabling decryption of the user’s private key, but the PCC has access to the user’s encrypted content in the clear, probably through an M of N scheme where one share is controlled by the PCC or the company it controls.
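For readers unfamiliar with the “M of N” (Shamir) construction being speculated about: a secret is split into N shares such that any M of them reconstruct it, while fewer reveal nothing. A toy sketch, purely to illustrate the mechanism; nothing here is claimed about Apple’s actual design:

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share x is the random degree-(k-1) polynomial evaluated at that x.
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=123456789, n=3, k=2)
assert combine(shares[:2]) == 123456789  # any 2 of the 3 shares suffice
```

With k=2 and n=3, any two shares recover the secret; a single share alone is just a random-looking field element.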

Apple doesn’t lie. But still it is a total lie.

vas pup September 2, 2023 4:14 PM

@ALL and @Moderator. Sorry for the long post, but the subject is very important. Enjoy weekend reading from MIT.

https://www.technologyreview.com/2023/08/16/1077386/war-machines/

“To pull the trigger—or, as the case may be, not to pull it. To hit the button, or to hold off. Legally—and ethically—the role of the soldier’s decision in matters of life and death is preeminent and indispensable. Fundamentally, it is these decisions that define the human act of war.

It should be of little surprise, then, that states and civil society have taken up the question of intelligent autonomous weapons—weapons that can select and fire upon targets without any human input—as a matter of serious concern. In May, after close to a decade of discussions, parties to the UN’s Convention on Certain Conventional Weapons agreed, among other recommendations, that !!!! militaries using them probably need to “limit the duration, geographical scope, and scale of the operation” to comply with the laws of war. The line was nonbinding, but it was at least an acknowledgment that a human has to play a part—somewhere, sometime—in the immediate process leading up to a killing.

But intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. Even the “autonomous” drones and ships fielded by the US and other powers are used under close human supervision. Meanwhile, intelligent systems that merely guide the hand that pulls the trigger have been gaining purchase in the warmaker’s tool kit. And they’ve quietly become sophisticated enough to raise novel questions—ones that are trickier to answer than the well-­covered wrangles over killer robots and, with each passing day, more urgent: What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill?

That has all begun to change. “What we’re seeing now, at least in the way that I see this, is a transition to a world [in] which you need to have humans and machines … operating in some sort of team,” says Shanahan.

The rise of machine learning, in particular, has set off a paradigm shift in how militaries use computers to help shape the crucial decisions of warfare—up to, and including, the ultimate decision. Shanahan was the first director of Project Maven, a Pentagon program that developed target recognition algorithms for video footage from drones. The project, which kicked off a new era of American military AI, was launched in 2017 after a study concluded that “deep learning algorithms can perform at near-­human levels.” (It also sparked controversy—in 2018, more than 3,000 Google employees signed a letter of protest against the company’s involvement in the project.)

!!!A soldier on the lookout for enemy snipers might, for example, do so through the Assault Rifle Combat Application System, a gunsight sold by the Israeli defense firm Elbit Systems. According to a company spec sheet, the “AI-powered” device is capable of “human target detection” at a range of more than 600 yards, and human target “identification” (presumably, discerning whether a person is someone who could be shot) at about the length of a football field. Anna Ahronheim-Cohen, a spokesperson for the company, told MIT Technology Review, “The system has already been tested in real-time scenarios by fighting infantry soldiers.”

!!!Another gunsight, built by the company Smartshooter, is advertised as having similar capabilities. According to the company’s website, it can also be packaged into a remote-controlled machine gun like the one that Israeli agents used to assassinate the Iranian nuclear scientist Mohsen Fakhrizadeh in 2021.

Decision support tools that sit at a greater remove from the battlefield can be just as decisive. The Pentagon appears to have used AI in the sequence of intelligence analyses and decisions leading up to a potential strike, a process known as a kill chain—though it has been cagey on the details. In response to questions from MIT Technology Review, Laura McAndrews, an Air Force spokesperson, wrote that the service “is utilizing a human-­machine teaming approach.”

The Ukrainian army uses a program, GIS Arta, that pairs each known Russian target on the battlefield with the artillery unit that is, according to the algorithm, best placed to shoot at it. A report by The Times, a British newspaper, likened it to Uber’s algorithm for pairing drivers and riders, noting that it significantly reduces the time between the detection of a target and the moment that target finds itself under a barrage of firepower. Before the Ukrainians had GIS Arta, that process took 20 minutes. Now it reportedly takes one.

The range of judgment calls that go into military decision-making, however, is vast. And it doesn’t always take artificial super-­intelligence to dispense with them by automated means. There are tools for predicting enemy troop movements, tools for figuring out how to take out a given target, and tools to estimate how much collateral harm is likely to befall any nearby civilians.

Like any complex computer, an AI-based tool might glitch in unusual and unpredictable ways; it’s not clear that the human involved will always be able to know when the answers on the screen are right or wrong. In their relentless efficiency, these tools may also not leave enough time and space for humans to determine if what they’re doing is legal. In some areas, they could perform at such superhuman levels that something ineffable about the act of war could be lost entirely.

Eventually militaries plan to use machine intelligence to stitch many of these individual instruments into a single automated network that links every weapon, commander, and soldier to every other. Not a kill chain, but—as the Pentagon has begun to call it—a kill web.

!!!An Israeli defense giant has already sold one such product, Fire Weaver, to the IDF (it has also demonstrated it to the US Department of Defense and the German military). According to company materials, Fire Weaver finds enemy positions, notifies the unit that it calculates as being best placed to fire on them, and even sets a crosshair on the target directly in that unit’s weapon sights. The human’s role, according to one video of the software, is to choose between two buttons: “Approve” and “Abort.”

!!!Of course, the principle of responsibility long predates the onset of artificially intelligent machines. All the laws and mores of war would be meaningless without the fundamental common understanding that every deliberate act in the fight is always on someone. But with the prospect of computers taking on all manner of sophisticated new roles, the age-old precept has newfound resonance.

If a computer sets its sights upon the wrong target, and the soldier squeezes the trigger anyway, that’s on the soldier. “If a human does something that leads to an accident with the machine—say, dropping a weapon where it shouldn’t have—that’s still a human’s decision that was made,” Shanahan says.

This year, in a move that was inevitable in the age of ChatGPT, Palantir announced that it is developing software called the Artificial Intelligence Platform, which allows for the integration of large language models into the company’s military products. In a demo of AIP posted to YouTube this spring, the platform alerts the user to a potentially threatening enemy movement. It then suggests that a drone be sent for a closer look, proposes three possible plans to intercept the offending force, and maps out an optimal route for the selected attack team to reach them.

…a computer’s output that points to an enemy troop movement, he says, might require a user to seek out a second corroborating source of intelligence before proceeding with an action (in the video, the Artificial Intelligence Platform does not appear to do this).

In the case of AIP, Bowman says the idea is to present the information in such a way “that the viewer understands, the analyst understands, this is only a suggestion.” In practice, protecting human judgment from the sway of a beguilingly smart machine could come down to small details of graphic design.

“If people of interest are identified on a screen as red dots, that’s going to have a different subconscious implication than if people of interest are identified on a screen as little happy faces,” says Rebecca Crootof, a law professor at the University of Richmond.

some decision support systems are definitely designed for the kind of split-second decision-­making that happens right in the thick of it. The US Army has said that it has managed, in live tests, to shorten its own 20-minute targeting cycle to 20 seconds. Nor does the market seem to have embraced the spirit of restraint. In demo videos posted online, the bounding boxes for the computerized gunsights of both Elbit and Smartshooter are blood red.

==>In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the “decision” will absorb the blame and protect everyone else along the chain of command from the full impact of accountability. In an essay, Smith wrote that the “lowest-paid person” should not be “saddled with this responsibility,” and neither should “the highest-paid person.” Instead, she told me, the responsibility should be spread among everyone involved, and the introduction of AI should not change anything about that responsibility.

!!!In one sense, what’s new here is also old. We routinely place our safety—indeed, our entire existence as a species—in the hands of other people. Those decision-­makers defer, in turn, to machines that they do not entirely comprehend.

It is possible, though not yet demonstrated, that bringing artificial intelligence to battle may mean fewer civilian casualties, as advocates often claim. But there could be a hidden cost to irrevocably conjoining human judgment and mathematical reasoning in those ultimate moments of war—a cost that extends beyond a simple, utilitarian bottom line. Maybe something just cannot be right, should not be right, about choosing the time and manner in which a person dies the way you hail a ride from Uber.

In matters of life and death, there is no computationally perfect outcome. “And that’s where the moral responsibility comes from,” she says. “You’re making a judgment.”

…each time a machine takes on a new role that reduces the irreducible, we may be stepping a little closer to the moment when the act of killing is altogether more machine than human, when ethics becomes a formula and responsibility becomes little more than an abstraction. If we agree that we don’t want to let the machines take us all the way there, sooner or later we will have to ask ourselves: Where is the line?”
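The GIS Arta pairing the article describes is, at its core, an assignment problem: match each known target to the unit best placed to engage it. A toy sketch of such a matching; the positions, names, and straight-line-distance cost below are invented for illustration and say nothing about the real system:

```python
import math
from itertools import permutations

# Hypothetical positions in km; the real inputs and cost model are not public.
units   = {"U1": (0, 0), "U2": (10, 2), "U3": (4, 9)}
targets = {"T1": (1, 1), "T2": (9, 3), "T3": (5, 8)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_assignment(units, targets):
    """Exhaustively find the unit->target pairing minimising total distance."""
    names = list(units)
    best = min(permutations(targets),
               key=lambda perm: sum(dist(units[u], targets[t])
                                    for u, t in zip(names, perm)))
    return dict(zip(names, best))

result = best_assignment(units, targets)
assert result == {"U1": "T1", "U2": "T2", "U3": "T3"}  # each unit gets its nearest target
```

Exhaustive search is only feasible for a handful of units; a real matcher would use something like the Hungarian algorithm and a richer cost model (range, ammunition, readiness) rather than straight-line distance.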

vas pup September 2, 2023 4:17 PM

Backed by Google and Nvidia, Israel AI startup nabs $155m, soars to $1.4b valuation

https://www.timesofisrael.com/backed-by-google-and-nvidia-israel-ai-startup-nabs-155m-soars-to-1-4b-valuation/

“Israel’s AI21 Labs, a natural language processing (NLP) startup, has been valued at $1.4 billion after raising $155 million in its latest funding round backed by tech giants that include Alphabet’s Google and Nvidia.

AI21, which has a vision to bring generative AI to the masses, said it will use the funds to fuel R&D and expand the reach of its AI natural language models to more businesses and developers. To date, the startup has raised a total of $283 million from investors, it said.

NLP is the ability of a computer program to understand human language by speech and by text. With the recent hype over ChatGPT, a so-called large language model that uses deep learning to spit out human-like text, other startups such as AI21 Labs have been quick to come out with competing AI models.

AI21 Labs created a software platform where developers can build text-based applications like recommendation engines, chatbots, and virtual assistants. The company launched a text simplification tool called Wordtune, a Google Chrome extension that helps clients improve or streamline content, and Wordtune Read, a tool that analyzes and summarizes documents.

!!!In March, AI21 Labs rolled out its next-generation language model, the Jurassic-2 family language model, a rival to OpenAI, which is customizable to specific tasks and which the startup says allows developers and businesses to build text-based applications in a number of languages, faster and at a fraction of the cost. The features are part of AI21 Studio, an internal core that offers NLP-as-a-Service products.

“The innovative work by the AI21 Labs team will help enterprises accelerate productivity and efficiency with generative AI-based systems that are accurate, trustworthy, and reliable.”

Clive Robinson September 2, 2023 4:51 PM

@ iAPX,

Re : It’s all malware, but…

“And I could not point to any modern software… …that I could surely dub as non-malware.”

There are two points to start with,

1, Accidental
2, Deliberate

The more complex software is, the more probable it is that it has vulnerabilities that can be used to attack a system.

Taking that as a given, then whether it is “by accident”, “by design”, or “by design made to look accidental” might be some people’s next consideration.

Me, I say unless attribution is your aim/job, just assume the best or the worst (it does not matter), and move on to more important questions.

Malware in essence has a few purposes of which most fall under,

1, Block communications.
2, Gain entry.
3, Deliver payload to,
3.1, Grant control
3.2, Grant data access
3.3, Grant resource access
3.4, Grant destructive capability

The first thing to notice is that they all require the system to be reachable in some way; that is, to have a communications path an attacker can utilize.

If the attacker cannot reach a system, their options are effectively none. Even if the malware comes as part of the base OS, it is very limited in what it can do that is not already being done by the major Silicon Valley etc. software corps.

Which is the very reason those Corps are forcing you to “Go On Line” to use their modern products.

Thus the next question to ask is: do you really need those modern products?

The answer in most cases is actually “NO”: either old versions are fine, or less encumbered open software works sufficiently well.

For instance, I still use very basic editors that came with very early OSs like MS-DOS 5 and earlier, and the venerable *nix ed/vi and some of their more modern “WordStar” look-alikes, the key bindings of which went into many early “Word Processors” (WPs) and programmers’ “Integrated Development Environments” (IDEs), etc.

As long as you stick with “Open Standard File Formats”, work you do on an 8088 with the early IBM PC-DOS, written to a single-sided floppy or a serial interface in a “text format”, can be pulled up via cut-n-paste into the latest software you need to process it further.

Also, and more importantly, it enables you to “archive” in a reliable way, and to pull your work not just into a searchable database but also into a source control system, which funnily enough works just as well for writing papers, books, and journals as it does for software.

What you need is a reliable and solid OS and a good grip on command-line usage. If you can have multiple terminals or screens available, so much the better.

The reality is, pretty though pictures are, we rarely need them.

Thus it’s only for games and multimedia entertainment that we need Windows and the like, and let’s be honest, should we really be using them at work?

I mention from time to time that peak human performance in an office was apparently back in the early 1970s, when dictaphones, stenography & shorthand, typewriters, and the occasional telephone were the only equipment most offices had.

All we’ve done since is needlessly waste time “beautifying documents” and harassing each other with email / messaging and similar. Rather than think and become self-reliant, we nag or micro-manage. We call the enablers of such wasteful and stressful behaviours “productivity tools”, but the evidence is they waste one heck of a lot of time for no actual benefit…

So ask yourself why do we run endlessly in the “Hamster Wheel of Pain” upgrade process when there are way better things we could be doing with our time?

When you can answer that honestly, you can then plan appropriately and only move forward where and when really required.

As for “connectivity” try and avoid it like the plague. In effect it’s like telling teenage kids yes they can have a party whilst mum and dad are away for the weekend…

Clive Robinson September 2, 2023 5:18 PM

@ vas pup, ALL,

Re : AI with agency

In many countries it’s not legal for a vehicle on public roads to be autonomous. That is, there has to be a human “directing mind” in control so that direct legal liability exists.

The same should obviously be true for all machines with agency, especially where the agency covers the ability to kill or maim or in any way bring harm to humans etc.

But consider this,

If an AI has the ability of human-level agency, it also has the ability to fabricate and lie. Thus an AI at that level, even if it does not have the agency to pull the trigger, will have the agency to provide fake information, so the human with agency makes a wrong decision and pulls the trigger for the AI…

It’s a subject most avoid, and it’s a conversation we should be having before we talk about even thinking of giving AI any kind of battlefield capability or agency.

But you can bet that we probably won’t… Because even current near-useless AI has so many advantages for the War Hawks that they don’t want to do anything other than steam straight ahead and ignore the iceberg warnings.

Look at it this way, modern fighter aircraft have a lot of extra weight and performance limitations because there is a human onboard. It also makes the fighter very expensive for the actual payload capability.

Why spend $1.5 billion on a limited-capability fighter with a human in it when you can get five or ten higher-capability AI-driven drones for the same price?

Oh and remember drones will “suicide” without worry…

It’s why we developed “cruise missiles”.

Look on drones as just cheap cruise missiles that can come back home, and you start to see how War Hawks see them, as just the next gen of reusable smart weapons.

TB September 2, 2023 10:02 PM

Good idea? Beside the point.

The point is: If we’re doing this to squid, then someone else is probably doing it with octopus. And that means we cannot, we must not, have a cephalopod gap!

SpaceLifeForm September 3, 2023 1:06 AM

Okta again

Security is hard.

‘https://www.theregister.com/2023/09/01/okta_scattered_spider/

Clive Robinson September 3, 2023 3:59 AM

@ TB,

“that means we cannot, [w]e must not, have a cephalopod gap!”

OMG!!! Think of the Allonautilus scrobiculatus!!! We can not let them go extinct again…

Clive Robinson September 3, 2023 4:39 AM

@ SpaceLifeForm, ALL,

Also from Thursday: UK intel services warn that Russia’s “Sandworm” group, with the new “Infamous Chisel” malware, has been targeting / attacking Ukrainian forces’ Android users,

https://www.ncsc.gov.uk/static-assets/documents/malware-analysis-reports/infamous-chisel/NCSC-MAR-Infamous-Chisel.pdf

“Infamous Chisel is a collection of components which enable persistent access to an infected Android device over the Tor network, and which periodically collates and exfiltrates victim information from compromised devices,”

What has not been indicated is how Sandworm are “finding and infecting” the Ukrainians…

One thing that is indicated from a quick read through the code and information given in the 35-page PDF is that Infamous Chisel is not the work of just a small number of individuals.

Robin September 3, 2023 6:06 AM

Last Monday the entire UK air traffic control system collapsed. There has been a week of chaos for air passengers. It appears to have been caused by a badly formed flight plan submitted by an airline.

“The row escalated as Nats’ chief executive revealed that one piece of data received had sparked the system failure when it “didn’t recognise a message”.

Martin Rolfe said that Nats’ automatic flight planning had been designed to stop for safety reasons to “ensure that no incorrect information could impact the rest of the air traffic system”.

He ruled out a cyber-attack, and declined to confirm or deny claims that the wrongly submitted information came from a French airline.

Walsh said it was a “staggering” explanation. He told the BBC: “If that is true, it demonstrates a considerable weakness.””
(The Guardian: https://www.theguardian.com/world/2023/aug/30/air-traffic-control-failure-compensation-airlines-nats)

NATS (the “organisation” running the system) didn’t want to run the backup system in case it too crashed.

There are some events that just make my head spin in disbelief. How is it possible that a pilot can submit a malformed plan and totally crash such a critical system?

And could it be done maliciously?

iAPX September 3, 2023 7:49 AM

@Clive, ALL

My definition of malware is software that covertly acts against its user’s best interest.
Web trackers are thus malware, as are all the trackers in our everyday software. There are seemingly rare exceptions.

I also love old hardware-defined platforms and old UART (serial) communication (8250, 16550, etc.) that lets you absolutely control the exchanges. That makes it totally attractive, if not required, for insulating platforms while providing limited and controlled communication; but that’s not what my life is made of in terms of hardware (and software).
Your comment here required many millions of lines of code, in fact very probably more than 100 million LOC!

AFAIK the software I use now, typing this comment on my computer, as well as the software on my smartphone, is all crippled.
And we have less and less ability to control what’s done inside these devices.

Clive Robinson September 3, 2023 8:41 AM

@ Robin, ALL,

Re : UK NATS out for the count.

“There are some events that just make my head spin in disbelief. How is it possible that a pilot can submit a malformed plan and totally crash such a critical system?”

I can easily believe it, I’ve seen similar happen many times in “large systems” built from smaller systems supplied by unrelated suppliers working to contract / specification.

Firstly, though, note it did not as such “crash the system”: what some will consider an “overabundance of caution” caused NATS to switch from a much faster “automated process” to a “manual process” out of safety concerns, and so the “sticky as glue” factor kicked in.

The fact that flights still happened, albeit at lower capacity, attests that “Plan B” worked, though at much reduced capacity due to, I suspect, a lack of personnel (cut back by management for financial reasons, inspired by certain UK politicians).

The system that stopped on incorrect input stopped for a reason; what that reason was and the logic behind it we don’t know, but it’s probably in the system specification.

This happens so often in reality with desktop software, where you get the “Blue Screen of Death” or similar, that it should actually not be much of a surprise.

There are generally three reasons for this,

1, Incorrect or no handling of exceptions or output errors.
2, Moving error checking to the left.
3, No ability to roll back erroneous input.

I could go on explaining it further, as I have a number of times in the past, but it’s drawn very hostile responses from certain people who just don’t “get it”.

But a fun thing for people to think about: imagine the software you are familiar with being used for a large database system with very high throughput. Ask yourself what would happen if / when the OS driver reports “no more hard drive space”?

It’s a question I ask quite often, and generally it gets a “but that can’t happen” type response, even though it can, as there is nothing in place to make the assumption even close to true. Oh, and you rarely see it in test, let alone system, specifications produced by “the software industry”. As for roll back and fail over, few software developers know how to do them properly, and management hates them because of the overhead they carry.
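To make the “no more hard drive space” question concrete: a write path that treats ENOSPC as a normal, recoverable outcome has to roll back cleanly rather than crash or half-write. A minimal sketch; the copy-then-rename strategy and names here are illustrative, not how any particular database does it:

```python
import errno
import os
import tempfile

def append_record(path: str, record: bytes) -> bool:
    """Append a record, treating 'disk full' as an expected outcome rather
    than an impossible one: on ENOSPC the original file is left intact and
    False is returned, instead of crashing or leaving a half-written row."""
    tmp = None
    try:
        # Copy-then-replace so a failed write can never corrupt `path`.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "wb") as f:
            if os.path.exists(path):
                with open(path, "rb") as old:
                    f.write(old.read())
            f.write(record)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic rename: readers see old or new, never partial
        return True
    except OSError as e:
        if e.errno == errno.ENOSPC:          # "no more hard drive space"
            if tmp and os.path.exists(tmp):
                os.unlink(tmp)               # roll back: discard the partial copy
            return False
        raise
```

A real database would use a write-ahead log and pre-allocated space rather than copying the whole file, but the principle is the same: the error is anticipated, and the on-disk state is never left inconsistent.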

Me, I’m aware of it as I’ve designed systems that several billions of dollars of petro-chem plant rely on, and if they did fail to work correctly… Look up Piper Alpha to see why I’m cautious, then look up the various chemical factory disasters. My job used to be to design systems where “That Can Not Happen”.

So I know what effects can cause what looks like an overabundance of caution.

So consider what everyone would be saying if NATS had in effect ignored the problem and an aircraft or two had run out of fuel or similar, causing a couple of hundred deaths.

P Coffman September 3, 2023 10:56 AM

@Luke

Good question. Typically, the concept of virus scanning involves using any available list/database of digital “fingerprints” to identify what are known as indicators of compromise.

What is important here? The same rules of an arms race apply: assume those producing/employing the malware will keep trying to evade detection. Thus, if it is happening now, assume it will keep happening going forward.
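The fingerprint idea, and the arms race that follows from it, can be shown in a few lines: the simplest indicators of compromise are exact cryptographic hashes of known-bad payloads, which an attacker defeats by changing a single byte. A toy sketch, with invented sample data:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A toy IOC database: exact digests of known-bad payloads (samples invented here).
KNOWN_BAD = {sha256_of(b"malicious-sample-1"), sha256_of(b"malicious-sample-2")}

def scan(data: bytes) -> bool:
    """Return True if the payload matches a known indicator of compromise."""
    return sha256_of(data) in KNOWN_BAD

assert scan(b"malicious-sample-1") is True
assert scan(b"malicious-sample-1 ") is False  # one byte changed: exact-hash IOC misses it
```

This is exactly why real scanners layer on fuzzy hashes, byte-pattern rules, and behavioural detection on top of exact-match fingerprints.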

My fee is XYZ (just kidding). Whoever this is (I am unschooled on this one) might be ham-handed or very sophisticated, though nation-state malware usually implies a big spender is after you.

julia g September 3, 2023 11:11 AM

@ iAPX,

My definition of malware is software that covertly acts against its user’s best interest.

This relies on a definition of “user” that you haven’t provided. For example, who’s the “user” of a botnet? I’d say it’s the remote party in whose interests the botnet is operating—which would mean the botnet wouldn’t be “malware” under the provided definition.

I disagree, because it’s still acting against the interests of the people providing the computational power (but not using it). However, that’s not a sufficient condition to make something malware: an anti-Amazon campaign using Amazon’s cloud servers shouldn’t be considered malware. And contrary to popular belief among the security community, no malice should be required: most malware and other security attacks are motivated by profit-seeking, not a desire to do harm (which is a side effect; the prefix “mal-” can simply mean “bad” without implying malice).

Clive Robinson September 3, 2023 11:59 AM

@ iAPX,

Re : Heavy bloat v. lite speed.

“AFAIK the software I use now, typing this comment on my computer, as well as the software on my smartphone, is all crippled.
And we have less and less ability to control what’s done inside these devices.”

Probably true. Ask yourself a question,

“Why are big Silicon Valley Corps moving to cloud and subscription models?”

The simple answer is that few if any want to part with lots of money for features they neither want nor need, that will also slow them down, make them very much more vulnerable, and force them to upgrade hardware every 18-36 months.

I have an 8088-based portable computer (or luggable, at 22 lb / 10 kg) with two 720K floppies. It also has a full 101-key keyboard, an LCD display or CGA display plug-in, two serial ports, a parallel port, and a 2400-baud modem. I bought it for about the equivalent of $1000 when it first came out in the late 1980s, and I still use it today.

The software it came supplied with was MS-DOS 3.3 and CrossTalk's Mirror II communications software, which was a VT52-220 emulator but with a built-in editor and several file transfer protocols. I also had at the time an emulator for BBC Basic and a K&R C compiler based on Simple C. All of the software apart from the BBC Basic emulator, which I replaced with GW-BASIC, as well as WordStar 4, I still run on it and on other 286, 386 and 486 machines.

These make interesting “gap crossers” to join higher end computers together as well as do printing and program development.

And yes, I still run C-Kermit as a poor man's file and print server with an eight-port serial card (and, for my sins, I've a Novell 2 system on a removable hard drive in the safe). I also have a high-end 486, now with 16MByte of RAM, running AT&T SysV R4 or an early version of Slackware on 100MByte IDE hard drives in removable caddies, with six SCSI DVD drives, a couple of 10Mbit 10BaseT cards from 3Com, and another 8-port serial card used to run multiple copies of "DOS-Merge" that drive "In-Circuit Emulators" (ICEs) for a whole series of 8- and 16-bit CPUs.

Yes, it's a bit niche, but it's actually very productive for a lot of things I do (believe it or not, the 8-bit 6502 is still "current" in quite a few "System on a Chip" (SoC) devices used as communications controllers, and some have been turned into high-end Apple ][ emulators that still run "Instrumentation Control Systems" (ICS)).

But you don't have to miss out on some of the benefits… Have a look at,

https://www.theregister.com/2023/09/01/antix_23/

If you can avoid stepping onto the "Hamster Wheel" Microsoft and others have created, you can often avoid the shackles and pain they induce.

One of the only reasons I use desktop systems with a windowing display manager is that I can get six or more command-line screens open and visible at the same time…

And yes, I know Python is replacing JavaScript as the preferred programming language for many, and Rust fanatics jump up and down about being memory safe… But honestly, they all need a few hundred MBytes or a lot more baggage; who needs it?

Rather than put lipstick on a warthog, I prefer to let a greyhound run.

Unfortunately I still have to pander to those that “wallow with warthogs” like Microsoft.

Like it or not you go to a meeting and they expect you to be “Microsod Ready”…

I did try making HTML 3 slide decks in the past but the “techs” at conferences and similar appeared to suffer brain freeze…

Winter September 3, 2023 12:06 PM

@iAPX,

My definition of malware is software that covertly acts against its user's best interest.

@julia

This relies on a definition of “user” that you haven’t provided.

I think you all complicate things needlessly.

Malware:

Any application developed, installed, or run on a computer with

1. criminal intent (in the legal sense), or

2. the intent to gain any advantage, profit, or consequence unbeknownst to and without the consent of the users of that computer system.

lurker September 3, 2023 12:28 PM

@Robin, Clive

“NATS didn’t want to run the backup system in case it too crashed.”

Uhuh, I read that as: backup had never been load tested. So the manual method used was actually Plan C.

Winter September 3, 2023 12:47 PM

@Clive

“Why are big Silicon Valley Corps moving to cloud and subscription models?”

To make distribution, support, and maintenance cheaper?

It has been long known that the main costs of a computer program are in distribution, support, and maintenance [1]. Putting stuff in the cloud cuts down most of these costs.

[1] ‘https://firstmonday.org/ojs/index.php/fm/article/view/1470/1385
The emerging economic paradigm of Open Source by Bruce Perens
First Monday, Special Issue #2: Open Source — 3 October 2005

P Coffman September 3, 2023 1:05 PM

@Clive Robinson

I get a kick out of your 8088. There are some IoT devices which might have one working close to bare metal unhindered with complex bios or what have you. Possibly, any of these newer IoT devices are less hardened, if you will.

A while ago, I was researching TinyOS. Forget the MS baby OS.

I never actually worked with TinyOS. I guess my conclusion was, I require a certain level of abstraction and any experiment might involve much more time than I felt I had, i.e., engineering trade-offs were my concern.

I really believe the gist of what you’re saying is still relevant. For example, the ad-hoc network.

JonKnowsNothing September 3, 2023 1:42 PM

@Clive, All

re: Yet another ROBODEBT type scandal bubbling in the cauldron

HAIL warning

An MSM report from the UK about the use of AI to vet welfare claims. (1)

  • Over the past two years, the Department for Work and Pensions (DWP) has increasingly deployed machine-learning algorithms to detect fraud and error in universal credit (UC) claims.
  • Ministers have maintained a veil of secrecy over the system
  • The information commissioner, John Edwards, has now warned the DWP it could be in contempt of court unless it changes its approach and spells out within 35 days the terms under which it could release more information. 22 September 2023
  • The DWP recently expanded its use of AI from scanning claims for welfare advances to applications made by cohabitants, self-employed people and people applying for housing support, as well as to assess claimants’ savings declarations.

The reasons the DWP has given for not releasing information about the system vary over time. There is every indication that the DWP will find another reason not to comply with the 22 September 2023 deadline.

Get the popcorn ready for that deadline…

  • If you have a home, if you have electricity, if you have a microwave; if you have a cook-top w gas, or if you got a pre-popped bag from the Food Pantry. (2)

===

1)

htt ps://www.theguardian. c o m/politics/2023/sep/03/uk-warned-over-lack-transparency-use-ai-vet-welfare-claims

2)

ht tps://www.theguardian.c o m/society/2023/sep/03/british-food-banks-bring-in-counsellors-and-private-gps-to-help-exhausted-workers

  • Three years ago we gave out 30 food parcels a week. Now it’s over 150. We had 50 households on our books – now it is 1,600. We now open 10-2 five days a week, and I can see the time when we might have to open on a Saturday. We spend £1,000 a week on food items to keep up with demand.

(url fractured)

vas pup September 3, 2023 5:19 PM

@Clive said “If an AI has the ability of human level agency, it also has the ability to fabricate and lie.”

Clive, thanks for Your input. I have a Q: what is the motive of an AI to fabricate and lie? Does the AI have its own agenda in such cases?

Winter September 3, 2023 6:34 PM

@vas pup

what is the motive of an AI to fabricate and lie?

That is a complex question with two parts.

What motivates an autonomous AI?

To be able to operate autonomously, a system must have a hierarchy of goals and a hierarchy of importance. This means the system must have motives that map ultimate goals onto concrete actions, and the ability to choose among alternative actions.

In human terms: my sugar level is low, which leads to a craving for sugar (motive) and the selection of a goal of tasting something sweet. Then I have to select actions that might get me something sweet to eat. Here I have to weigh different strategies, trading time and effort against sweetness and likelihood of success.

When would an AI lie, or rather, try to deceive?

A possible strategy to reach a goal could be to induce a human to help. This generally can be achieved by uttering words. An AI can determine which words will result in the desired human action and then utter these words. Just like an AI could determine that pushing a certain button would open a door.

There is absolutely no reason why these words should have any notion of “truth”. If saying that the cookie jar contains batteries would induce a human to hand over the cookie jar, then “batteries” it is.

This is important. If we simply train an AI to supply us with a valid reason for its decision, there is a high probability that it will learn to come up with a valid-sounding reason that has nothing to do with the actual one. That is just how AI training works. Getting at the real reasoning behind a decision is no easier for an AI than for a human.

Dropbear September 3, 2023 7:43 PM

@Clive

One thing that is indicated from a quick read through the code and information given in the 35-page PDF is that Infamous Chisel is not the work of just a small number of individuals.

I’m curious as to what makes you say that. The paper’s authors even note:

The Infamous Chisel components are low to medium sophistication and appear to have been developed with little regard to defence evasion or concealment of malicious activity.

While I am inclined to agree with you, I see no evidence in the 35 pages beyond potentially the redacted sections. As you say:

What has not been indicated is how Sandworm are “finding and infecting” the Ukrainians…

Knowledge of that could certainly improve my confidence, but for all I know, targeting information could be available somewhere on the dark web, and Android privilege escalations are a dime a dozen.

Clive Robinson September 3, 2023 8:23 PM

@ vas pup, Winter,

Re : AI lie / deception.

“what is the motive of an AI to fabricate and lie? Does the AI have its own agenda in such cases?”

First, think about why creatures do things, or can be trained to do things. Importantly, it does not require them to be intelligent, just to have agency, a reward system, and some kind of memory, which allows a system to "hunt back and forth" to optimise a reward.

It's actually fairly easy to design a servo system where, when you adjust an input, the output follows, overshoots, undershoots and so on, minimising the ringing (usually an exponential decay, or percentage of a percentage). The frequency of the ringing is set by the "loop response time" or delay.

Now consider three servos effectively wired in a ring, with the input of the first driven by the output of the last. The arrangement will oscillate stably. If you put a series tuned circuit between any two servos, then while the ring will start oscillating at any frequency, it will generally settle at the resonant frequency, as this in effect gives the maximum signal around the ring. If the tuned circuit's frequency is changed, the loop will follow.

That is, without any kind of intelligence involved, the loop will hunt out the maximum-signal frequency; it acts like a reward process. A similar trick can be seen with two pendulums at similar frequencies that are weakly coupled: they will become synchronised (loosely locked oscillators).

Many dynamic systems exhibit this sort of "reward process", and it can get quite sophisticated, such as some plants turning their leaves to follow a light source, or single-cell organisms heading towards light, or away from acids, etc.

So you can have other systems with two or more reward processes. If the reward process has a nonlinear response then apparently chaotic behaviour can result (you can see a chaotic system with a doubly jointed pendulum).

However, you can make "dynamic filters" using summing circuits, delay lines and feedback.

As I’ve mentioned before these “Signal Processing” circuits when sufficiently complex can form certain types of simple “neural networks”.

Thus, if they can "self adjust", at some point they become a basic AI/ML system that selects its own "reward".

This kind of behaviour was exhibited back in the 1990s, when randomly driven inputs to neural nets were made to form oscillators.

You might remember when two ML systems developed their own –weak– crypto system?

Well, people have taken it further. So "non-intelligent" systems can develop information-hiding/obscuring techniques, which brings us to @Winter's points.

Clive Robinson September 4, 2023 9:50 AM

@ ALL,

Re : Web App v. Native App.

The Register has published an article based on a paper[1] from “Vrije Universiteit”(VU) in the Netherlands,

“So you want to save energy? Ditch web apps and go native, boffins say”

https://www.theregister.com/2023/09/04/web_native_apps/

Researchers analyzed the energy usage of 10 internet platforms that offer content access through both native Android apps and the web.

Perhaps unsurprisingly, the native apps generally need less energy and resources, as you would expect them to be both "closer to the metal" and "better optimised".

But… As we all know it’s not just resource utilisation we should consider, there is Privacy and Security as well.

So when you find the following paragraph,

“While web apps use more energy than native apps, they also send and receive less network traffic. The researchers posit this may be a consequence of web devs being incentivized to reduce bandwidth usage because web apps often have to be reloaded on startup.”

You should ask yourself the question,

“Why would the native app have more network traffic?”

They don’t say “which way” so it might just be “beautification” on the web pages…

Call me suspicious or even paranoid, but we know "Native Apps" can reach further into the Private Information on a device than Web Apps can.

Further, I know from long experience that anyone foolish enough to trust LinkedIn is going to have their privacy violated; LinkedIn have a very long history of doing so, and something tells me they will never change their spots.

[1] The VU researchers' pre-print paper,

“Native vs Web Apps: Comparing the Energy Consumption and Performance of Android Apps and their Web Counterparts”

by,

Ruben Horn, Abdellah Lahnaoui, Edgardo Reinoso, Sicheng Peng, Vadim Isakov, Tanjina Islam, and Ivano Malavolta.

https://arxiv.org/abs/2308.16734

JonKnowsNothing September 5, 2023 6:11 PM

@ALL

re: AI/ML backup backwash

HAIL warnings

A few MSM articles of the last month are showing signs that all is not well in the world of ChatGPT and its AI algorithms. Much has already been discussed technically; the sieve will fail when new content fails to enter the hopper.

Pease Porridge hot,
Pease Porridge cold,
Pease Porridge in the Pot
Nine Days old,

Some of the items on the wiggle list

  • The MSM site The Guardian has blocked OpenAI from using its content to power artificial intelligence products such as ChatGPT
  • Reports that AI-generated books on mushroom harvesting, which may contain deadly errors, were/are available on Amz
  • The MSM site Gizmodo just dumped all their Spanish content-writing and translation staff, replacing them with ChatGPT-generated content and ChatGPT-generated translations.
  • More sites are adding GPTBot crawler-blocking code to their ROBOTS.TXT files

It's as easy as adding these two lines to a site's robots.txt file:

User-agent: GPTBot
Disallow: /

Of course, the catch with the ROBOTS.TXT file is that implementation rests with the crawler. So it is not guaranteed that GPTBot or any other crawler will honor the no-crawl line.
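For what it's worth, you can check how a *compliant* crawler would interpret that rule using nothing but Python's standard-library robots.txt parser (the URLs below are made-up examples):

```python
# Sketch: how a rules-honoring crawler reads the GPTBot block above.
# urllib.robotparser implements the robots.txt convention; the
# example.com URLs are hypothetical.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])

# GPTBot is blocked from every path...
print(rp.can_fetch("GPTBot", "https://example.com/any/article"))
# ...but a crawler with no rule of its own is unaffected.
print(rp.can_fetch("SomeOtherBot", "https://example.com/any/article"))
```

Note that an empty `Disallow:` line means "allow everything", which is why the `/` path matters; and, as above, none of this binds a crawler that simply ignores the file.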

It will be interesting to watch the new edition of No Crawl Wars. Much of the old search engine crawler wars are done.

  • eg: The Wayback Archive has details on their site on how it honors the No Crawl command by deleting all target site content from their archive.

So far there is no equivalent for the historical ChatGPT datasets.

===

ht tps://www.theguardian. c o m/technology/2023/sep/01/the-guardian-blocks-chatgpt-owner-openai-from-trawling-its-content

h ttps://arstechnica. c om /information-technology/2023/09/ai-took-my-job-literally-gizmodo-fires-spanish-staff-amid-switch-to-ai-translator/

ht tps://arstechnica.c o m/information-technology/2023/08/openai-details-how-to-keep-chatgpt-from-gobbling-up-website-data/

(url fractured)

vas pup September 5, 2023 7:15 PM

@Winter and @Clive
Thank you very much for Your posts/inputs.
When @Winter said “A possible strategy to reach a goal could be to induce a human to help. This generally can be achieved by uttering words. An AI can determine which words will result in the desired human action and then utter these words. Just like an AI could determine that pushing a certain button would open a door.”

Very good point, e.g. see this
Is there a sinister side to the rise of female robots?
https://www.bbc.com/future/article/20230804-is-there-a-sinister-side-to-the-rise-of-female-robots

There are references to two movies, Her and Ex Machina, in the article. In the former, the AI program does have its own strategy, which evolves, but it has no effectors; it can induce human pleasure, then distress, by manipulating human emotions and potentially actions. But in the latter, the AI robot does have effectors and really does have its own goal, to escape; it lies to the human test subject about loving him and manipulates his behavior so that he helps her achieve her own goal, using him as a tool.

So I see the presence of effectors in AI robots as more dangerous, because it allows combining deception (to obtain human help) with the robot's own actions that exploit that help.

Clive Robinson September 5, 2023 8:15 PM

New report on locating MH370.

A few days back, news of this new Australian report started to appear. However, at some two-hundred-odd pages, I'm guessing few will want to read it.

So you can read a much shortened version,

https://www.aerotime.aero/articles/malaysian-airlines-missing-plane

What they have done is analyse information in several databases run by the Amateur/Ham Radio community, usually used to test how well your signal is getting out and to do science on radio propagation.

Known as "Weak Signal Propagation Reporter" (WSPR), a large number of very low-power transmitters and receivers send their data back to the databases.

What the researchers have done is in effect the same experiment carried out in Britain that showed what we now call RADAR was possible.

If you have a transmitter and receiver working in the HF bands[1] an aircraft flying between them will cause the signal to fluctuate in a recognisable pattern and all sorts of information can be found.

The researchers think they've found the track of MH370, thus its fatal flight path, more precisely than before, and so may have reduced the search area.

But WSPR is not the only MH370 tracking source. Scientists have shown that barnacles on wreckage can show where it has drifted, thus it can be traced back to approximately where it originated,

https://arstechnica.com/science/2023/08/barnacles-could-hold-key-to-finding-wreckage-of-malaysia-airlines-mh370/

[1] It gets more fun in the VHF and above bands and something red/purple cyber-sec teams should be cognizant of. If the TX and RX sites are not “line of sight” then the signal will be weak. However if a passenger aircraft flies in between it acts like a mirror and strongly reflects the signals thus vastly increasing the received signal strength. If you want to spend oh between $20-200 on a “Software Defined Radio”(SDR) you can see this happening in the WiFi and other bands used for computer data communications.

vas pup September 6, 2023 4:50 PM

Israel’s Pluri wins NIH contract for drug that could save lives in nuclear event
https://www.timesofisrael.com/israels-pluri-wins-nih-contract-for-drug-that-could-save-lives-in-nuclear-event/

“Pluri, an Israeli company using cell technology to develop pharmaceuticals and food products, was awarded a major contract by the US National Institutes of Health to continue to develop its novel treatment for deadly radiation sickness.

The condition, also known as hematopoietic acute radiation syndrome (H-ARS), occurs when a person is exposed to high levels of ionizing radiation, such as during a nuclear attack or accident. Destruction of the bone marrow and blood cells ensues, leading to severe anemia, infection and bleeding.

Under the tender, Pluri will collaborate with the US Department of Defense’s Armed Forces Radiobiology Research Institute in Maryland to advance the work on the drug, which unconventionally aims to regenerate all three types of blood cells produced in the blood marrow, rather than just one.

However, Pluri CEO and president Yaky Yanay emphasized that other radiation threats are very real, including the use of dirty bombs by terrorist groups and meltdowns of old nuclear reactors in Europe and the US.

Dr. Nitsan Halevy, Pluri’s chief medical officer. “Other products will be targeting the platelets, specifically trying to mitigate bleeding events. Our product works at a higher level, on the bone marrow cells that produce all three types of blood cells.”

When the cells in the drug receive signals that there is a drop in blood cell counts, they secrete proteins that travel to the bone marrow and assist in the regeneration of both the precursor cells and the blood cells themselves.

Pluri has some preliminary data indicating that the drug would be effective not only after radiation exposure but also as a prophylactic.”

That is my second attempt to post this very important information, directly related to the blog's nature. I wish the blog were closer in its modus operandi to Twitter/X rather than the more restrictive Facebook/Instagram.

SpaceLifeForm September 6, 2023 7:54 PM

@ Clive, ALL

Preaching to choir

Security is hard.

Out-of-band helps because the private key would not have been in a crash dump.

Microsoft is apparently trying.

‘https://msrc.microsoft.com/blog/2023/09/results-of-major-technical-investigations-for-storm-0558-key-acquisition/

Winter September 7, 2023 9:54 AM

And the winner in the category creepiest data collection is:
Your Car!

‘https://www.techdirt.com/2023/09/07/mozilla-modern-cars-are-a-privacy-shitshow/

Nissan earned its second-to-last spot for collecting some of the creepiest categories of data we have ever seen. It’s worth reading the review in full, but you should know it includes your “sexual activity.” Not to be outdone, Kia also mentions they can collect information about your “sex life” in their privacy policy. Oh, and six car companies say they can collect your “genetic information” or “genetic characteristics.”

Anonymous September 7, 2023 6:48 PM

Sorry, no idea. “However, there is a strong chance that this is a machine civilization that has outlived its creators”
-Children of Ruin

SpaceLifeForm September 7, 2023 8:01 PM

Email sucks.

If you do not check it, you will not get phished.

‘https://techxplore.com/news/2023-09-scammers-abuse-flaws-email-forwarding.html

More than 12% of the Alexa 100K most popular email domains—the most popular domains on the Internet—are vulnerable to this attack. These include a large number of news organizations, such as the Washington Post, the Los Angeles Times and the Associated Press, as well as domain registrars like GoDaddy, financial services, such as Mastercard and Docusign and large law firms.

Clive Robinson September 7, 2023 10:23 PM

@ SpaceLifeForm, ALL,

Re : Microsoft is apparently trying.

It's not "apparently": Microsoft are "very trying" at the best of times, especially when they make security mistakes that were considered dumb/rookie two decades ago or more.

“Out-of-band helps because the private key would not have been in a crash dump.”

The problem with crash dumps and "root of trust" security values such as crypto keys was well known in the *nix community back in the 1980s. So Microsoft are forty years late to the party.

Last century I investigated quite a few ways to make "in memory secrets" worthless to direct (memory-freezing) and indirect (software tools) attacks. I even mentioned them on this blog (try searching for "snake eating its tail").

I even explained in depth the idea behind “data shadows” that effectively encrypted the security value with the equivalent of a One Time Pad that changed continuously.

@RobertT explained a similar idea based around, if I remember correctly, Lorenz chaotic/strange attractors.

In effect, the "root of trust" is stream-encrypted with a key considerably longer than the "root of trust" itself, so decrypting it requires the attacker to know several things that remain hidden in CPU registers, which change at the rate of a "fast interrupt".

In, say, a block cipher, the "root of trust" is mostly considered to be the encryption key. However, in software it is actually the round sub-keys. These get stored in what is in effect an array in the algorithm, so you do not need the encryption key, just the contents of the array, which a crash dump will give you.

The way around this is to whiten the array values in use. That is, whatever the array value is, it gets XORed with a "stream cipher" value immediately prior to use, then XORed into a different stored value, so all the sub-keys stored in the array continuously evolve.
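As a toy illustration of that idea (hypothetical names throughout, and only a sketch: in a real design the mask state would live in CPU registers refreshed on a fast interrupt, not in dump-visible memory as Python unavoidably puts it here):

```python
# Sketch of "whitened" round sub-keys: the stored form of each sub-key is
# XOR-masked, unmasked just-in-time for use, and re-masked with a fresh
# random value afterwards, so what sits in the array continuously evolves.
import secrets

class ShadowedSubKeys:
    def __init__(self, subkeys):
        self._masks = [secrets.randbits(32) for _ in subkeys]
        self._shadows = [k ^ m for k, m in zip(subkeys, self._masks)]

    def use(self, i):
        key = self._shadows[i] ^ self._masks[i]  # unmask immediately prior to use
        new_mask = secrets.randbits(32)          # then evolve the stored form
        self._shadows[i] = key ^ new_mask
        self._masks[i] = new_mask
        return key

sched = ShadowedSubKeys([0xDEADBEEF, 0x12345678])
assert sched.use(0) == 0xDEADBEEF  # the correct sub-key comes back every time,
assert sched.use(0) == 0xDEADBEEF  # while its in-memory representation changes
```

A crash dump taken between uses then contains only ever-changing shadow values; of course, in this Python toy the masks sit in the same dump, which is exactly the gap a register-resident stream generator is meant to close.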

lurker September 8, 2023 12:06 AM

@SpaceLifeForm, @Clive Robinson
going to the races?

I really liked the way they quick-smart fixed the “race condition” which left keymat in the dump. I desire no further knowledge of what when or how could race in those systems . . .

lurker September 8, 2023 12:14 AM

@SpaceLifeForm
“email blows, in the wind”

I thought we fixed all that, back in the spammer wars of the 90s, No?

And why are UCSD “computer scientists” tinkering with a problem that Descartes could have fixed with a pencil and paper, when they could be sucking in research funds on QC?

Winter September 8, 2023 4:15 AM

Yet Another Study Debunks The ‘YouTube’s Algorithm Drives People To Extremism’ Argument
‘https://www.techdirt.com/2023/09/07/yet-another-study-debunks-the-youtubes-algorithm-drives-people-to-extremism-argument/

We report two key findings. First, we replicate findings from Hosseinmardi et al. (20) concerning the overall size of the audience for alternative and extreme content and enhance their validity by examining participants’ attitudinal variables. Although almost all participants use YouTube, videos from alternative and extremist channels are overwhelmingly watched by a small minority of participants with high levels of gender and racial resentment. Within this group, total viewership is heavily concentrated among a few individuals, a common finding among studies examining potentially harmful online content (27). Similar to prior work (20), we observe that viewers often reach these videos via external links (e.g., from other social media platforms). In addition, we find that viewers are often subscribers to the channels in question. These findings demonstrate the scientific contribution made by our study. They also highlight that YouTube remains a key hosting provider for alternative and extremist channels, helping them continue to profit from their audience (28, 29) and reinforcing concerns about lax content moderation on the platform (30).

ResearcherZero September 8, 2023 5:44 AM

Finally manufacturers are beginning to patch those old outstanding firmware and software bugs. Any who are still putting it off should perhaps get a wriggle on.

…a required entry field for details of where in the code to “trigger” the vulnerability or a video that demonstrates “detailed proof of the vulnerability discovery process,” as well as a nonrequired entry field for uploading a proof-of-concept exploit to demonstrate the flaw. All of that is far more information about unpatched vulnerabilities than other governments typically demand or that companies generally share with their customers.

‘https://www.atlanticcouncil.org/in-depth-research-reports/report/sleight-of-hand-how-china-weaponizes-software-vulnerability/

“I think it absolutely does create a perverse incentive where now you have private organizations that need to basically expose themselves and their customers to the adversary.”

‘https://www.wired.com/story/china-vulnerability-disclosure-law/

“…vulnerability reports are also shared with Shanghai Jiaotong University and the security firm Beijing Topsec, both of which have a history of lending their cooperation to hacking campaigns carried out by China’s People Liberation Army.”

Microsoft handed over its source code…

“TOPSEC is a China Information Technology Security Center (CNITSEC) enterprise and has grown to become China’s largest provider of information security products and services.

CNITSEC is responsible for overseeing the PRC’s Information Technology (IT) security certification program. It operates and maintains the National Evaluation and Certification Scheme for IT security and performs tests for information security products. In 2003, the CNITSEC signed a Government Security Program (GSP) international agreement with Microsoft that allowed select companies such as TOPSEC access to Microsoft source code in order to secure the Windows platform. Shortly thereafter, in 2004, People’s Liberation Army (PLA) officer Yang Hua (GSP Communications Department’s 3rd Communication Regiment, PLA 61416 Unit) was sent to TOPSEC to receive network-security training.”

https://www.theguardian.com/world/us-embassy-cables-documents/214462

Topsec actively recruits for the PLA cyber army.

‘https://nsarchive2.gwu.edu/NSAEBB/NSAEBB424/docs/Cyber-066.pdf

Back when OPM was hacked, the intrusion revealed foreign activity on the OPM network as far back as November 2013. In March 2014, OPM detected a breach in which blueprints for its network's architecture were siphoned away…

OPM’s contractor — KeyPoint — was hacked in 2013.

OPM’s previous contractor, Virginia-based USIS, was also hacked.

“the third-party contractor (SAP) was hacked and the hacker was then able to navigate into the USIS network via the third party’s network.”

‘https://www.infosecurity-magazine.com/news/report-chinese-breach-of-usis/

X1 and X2

Exploit LSASS and Pass-the-Hash

“X2 used credentials stolen from KeyPoint to install malware in the OPM network and create a backdoor.”

‘https://fcw.com/articles/2016/02/18/opm-oig-keypoint.aspx

Using malware innocuously named after McAfee's mcutil.dll to fool sysadmins, the attackers were able to read and steal the SF-86 forms stored inside OPM's network.
https://asamborski.github.io/cs558_s17_blog/2017/04/20/opm.html

All of which leads back to a particular university and security company in China.

ResearcherZero September 8, 2023 6:43 AM

Confusion over whether some name is a public DNS name or another private resource can cause sensitive data to fall into the hands of unintended recipients.

‘https://blog.talosintelligence.com/whats-in-a-name/

APT28 Tried to Attack a Ukrainian Critical Power Facility

The perpetrators planned to implement their intent using bulk emails from a fake address and a link to a ZIP archive, which, when opened, could have granted them access to the organization’s systems and data. They used legitimate services such as Mockbin and standard software functions to carry out the attack.

‘https://thehackernews.com/2023/09/ukraines-cert-thwarts-apt28s.html

Surge in Gamaredon registered domains and subdomains during counteroffensive…

Fast fluxing is used by APT groups to circumvent traditional methods of threat detection that rely on threat feeds containing full domain names, including subdomains.

Gamaredon operates with innumerable IP addresses, and uses wildcard A records in place of definable subdomains to evade detection…

A large group of IPs are associated with a single Fully Qualified Domain Name (FQDN), and rotated through an attack at an extremely high frequency via automated DNS resource record (RR) amendments in the zone file.
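To make the wildcard trick concrete, a hypothetical BIND-style zone fragment (invented domain and addresses) looks something like this; the wildcard means any random label resolves without ever being listed, and editing one A record re-points the whole namespace at once:

```text
$TTL 60                               ; very short TTL enables rapid rotation
$ORIGIN c2-example.test.
@       IN  SOA  ns1 hostmaster ( 2023090801 3600 600 86400 60 )
@       IN  NS   ns1
ns1     IN  A    203.0.113.1
*       IN  A    203.0.113.10         ; any subdomain matches this one record;
                                      ; automated zone-file amendments swap the
                                      ; address at high frequency during an attack
```

Because every label under the zone matches the wildcard, feed-based detection that keys on full subdomain names has nothing stable to list.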

“Using MS Word combats static analysis by hosting the payload on a template that is downloaded from an attacker-controlled server, once the document is opened and the user has met one or more conditions – such as geographic location, device type and system specification – prior to delivery.”

‘https://www-silentpush-com.cdn.ampproject.org/c/s/www.silentpush.com/blog/from-russia-with-a-71

‘https://www.rnbo.gov.ua/files/2023_YEAR/CYBERCENTER/Gamaredon_activity.pdf

Clive Robinson September 8, 2023 7:59 AM

@ Winter,

The question for some about YouTube is,

“Does it drive or just enable?”

From my experience, people asking it forget to ask the more important question of,

“Do channels change with time?”

To which the answer is yes.

I did not use to have anything to do with YouTube, but a relative does, and they used to point me to space-related items (it is an area I do engineering design in, after all).

Because of the engineering side I also have an interest in Amateur Radio and Radio Astronomy in various areas. So I set up an independent way to view a limited amount of YouTube.

Now one thing I found is that there are several types of channels. Outside of entertainment, commercial organisations, the likes of toxic MSM such as Fox News, and those pushing politics, religion, and let's call it non-mainstream lifestyles, there are the individuals. Of these there are those that push product in various ways, those that push the presenter, and those that provide information about their hobbies, crafts, and ways of doing certain types of work (including how to extend your home underground safely by digging tunnels and caverns).

The problem is that some presenters included their "life" in it and thus indirectly gave vent to politics and religion. As life dealt them increasing misfortune, as it does to many, they got more radical and moved from being a useful educational/information source to one or two steps back from "Angry Man".

If you look up, say, K6UDA: he was once an entertaining and informative watch, well respected and referred to by many others. Then for various reasons he got fed up with being in California and started getting political in various ways. He indicated that he was once part of US Guard Labour and he started moving over into what some call 2A issues and YouTube pulled him up on some of his vids. Now he hardly does anything on YouTube, but does advertise his 2A talks etc. on Rumble, which is rather more tolerant of that.

There are a number of others that have gone from being "your helpful, informative neighbour" to alternative religion, politics, and lifestyle content, and in turn links to other channels further down that curve get formed.

Thus people do follow them down the rabbit hole; the question is how far before they pull up and go elsewhere.

Because YouTube has a feedback mechanism… You can find out which of your channel's vids are most watched, earn the most cents, etc. Which can distort the channel owner's view on which direction to go.

As an example, WC2F a couple of years ago did some "Ham Radio" stuff, but… today he appears to have gone totally "out the tree", as a quick look at the torrent of "shorts" he spews out shows, often several a day, every day… I suspect this is due to YouTube trying to compete with the TikTok mentality and moving channel payments accordingly.

Dave of the EEVblog from time to time pops up a video on how YouTube is increasingly less rewarding to actual real content creators, and has done comparisons with other outlets he uses. But he is not the only one who has complained about YouTube management idiocy and their changes in algorithms.

Thus the overall result is: yes, there is increasing, dare I say radicalised, religion and politics on YouTube, and whilst YouTube's "recommend" algorithm may not be to blame for getting "new viewers" into radical viewing, their other algorithms are definitely causing changes in the radical direction by channels.

Winter September 8, 2023 8:42 AM

@Clive

He indicated that he was once part of US Guard Labour and he started moving over into what some call 2A issues and YouTube pulled him up on some of his vids.

Reminds me of Talk Radio in the US. It gives everybody a voice, but not necessarily an audience. And in both cases, people say what pays.

With hindsight, Rush Limbaugh was an LLM before the term existed, making as much sense and spewing as many hallucinations about the US as ChatGPT. Anything that filled the bank account was fair game.


Sidebar photo of Bruce Schneier by Joe MacInnis.