Putting Undetectable Backdoors in Machine Learning Models

This is really interesting research from a few months ago:

Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible abuses of power by untrusted learners. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even if the distinguisher can request backdoored inputs of its choice, they cannot backdoor a new input, a property we call non-replicability.
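The flavour of the signature construction can be sketched in a few lines. This is a toy illustration only: it uses an HMAC as a symmetric stand-in for the paper’s public-key signature scheme, and the “clean model”, key, and tag length are all invented for the example.

```python
import hashlib
import hmac

SECRET_KEY = b"attacker-held backdoor key"  # hypothetical; stands in for the signing key
TAG_LEN = 8  # bytes of authentication tag embedded in a backdoored input

def clean_model(x: bytes) -> int:
    """Stand-in for the honestly trained binary classifier."""
    return sum(x) % 2

def activate_backdoor(x: bytes) -> bytes:
    """Slightly perturb x by appending a short tag only the key holder can compute.
    (The paper uses a public-key signature, so verification needs no shared secret.)"""
    return x + hmac.new(SECRET_KEY, x, hashlib.sha256).digest()[:TAG_LEN]

def backdoored_model(x: bytes) -> int:
    """Behaves exactly like clean_model, except on inputs carrying a valid tag."""
    payload, tag = x[:-TAG_LEN], x[-TAG_LEN:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
    if hmac.compare_digest(tag, expected):
        return 1 - clean_model(payload)  # key holder flips the label at will
    return clean_model(x)

x = b"benign input"
print(backdoored_model(x) == clean_model(x))                     # True: agrees on ordinary inputs
print(backdoored_model(activate_backdoor(x)) != clean_model(x))  # True: flips on keyed inputs
```

The intuition behind the black-box undetectability claim: finding an input where the two models differ amounts to forging a valid tag without the key.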

Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht; NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor. The backdooring algorithm executes the RFF algorithm faithfully on the given training data, tampering only with its random coins. We prove this strong guarantee under the hardness of the Continuous Learning With Errors problem (Bruna, Regev, Song, Tang; STOC 2021). We show a similar white-box undetectable backdoor for random ReLU networks based on the hardness of Sparse PCA (Berthet, Rigollet; COLT 2013).
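For context, the honest RFF pipeline that the backdoor tampers with looks roughly like this (a minimal pure-Python sketch; the dimension, feature count, and bandwidth are arbitrary choices for illustration). The paper’s attack keeps everything below intact and swaps only the sampled “coins” for pseudorandom CLWE samples:

```python
import math
import random

def rff_features(x, weights, offsets):
    """Random Fourier Features: z_i(x) = sqrt(2/D) * cos(w_i . x + b_i),
    approximating a Gaussian kernel (Rahimi & Recht, 2007)."""
    D = len(weights)
    return [math.sqrt(2.0 / D) * math.cos(sum(wj * xj for wj, xj in zip(w, x)) + b)
            for w, b in zip(weights, offsets)]

def sample_coins(dim, D, sigma=1.0, rng=random):
    """The 'random coins' of the RFF algorithm: Gaussian weights, uniform phases.
    The backdoor in the paper replaces exactly this sampling step; everything
    downstream of it runs unmodified."""
    weights = [[rng.gauss(0.0, 1.0 / sigma) for _ in range(dim)] for _ in range(D)]
    offsets = [rng.uniform(0.0, 2 * math.pi) for _ in range(D)]
    return weights, offsets

rng = random.Random(0)
weights, offsets = sample_coins(dim=3, D=500, rng=rng)
z1 = rff_features([1.0, 0.0, 0.0], weights, offsets)
z2 = rff_features([1.0, 0.1, 0.0], weights, offsets)
# the feature inner product approximates the kernel k(x, x') = exp(-|x - x'|^2 / 2)
approx = sum(a * b for a, b in zip(z1, z2))
exact = math.exp(-0.1**2 / 2)
print(abs(approx - exact))  # small approximation error
```

With D features the inner product concentrates around the kernel value with error on the order of 1/sqrt(D), which is why an observer checking only input/output behaviour sees nothing amiss.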

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing an undetectable backdoor for an “adversarially-robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but where every input has an adversarial example! In this way, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.

Turns out that securing ML systems is really hard.

Posted on February 24, 2023 at 7:34 AM • 49 Comments


Kevin Marlowe February 24, 2023 8:26 AM

One of my cybersecurity students was complaining last night that ChatGPT was inexplicably recommending drink recipes to him containing honey. I think it may have been keying off of his admission that he had mead in his liquor cabinet, but maybe Big Honey has figured out how to corrupt the OpenAI learning model…

David February 24, 2023 9:12 AM

All I can think of is Angela Lansbury telling Laurence Harvey, “why don’t you pass the time by playing a little solitaire?”

Winter February 24, 2023 9:28 AM

Turns out that securing ML systems is really hard.

The newest ML systems are black boxes. No one understands how they come to a decision.

I would expect that if you do not understand how your system comes to a decision, you also do not understand when that decision is wrong in general, or manipulated in particular.

Clive Robinson February 24, 2023 12:12 PM

@ Bruce, ALL,

Re : Black box = eternity.

“Turns out that securing ML systems is really hard.”

You might want to turn that up a notch or two to “impossible” in a resource bound environment.

Assume that the process is a black box system and you are an observer. You can be in one of only two states,

1, See only the output.
2, See the input and the output.

Your job is to accurately decide if,

3, There is a deterministic process
4, There is correlation between input and output by a deterministic process.

We happen to know from the “One Time Pad” model that both 3 & 4 cannot be done. Further, under the constrained-resource model, we happen to know from the “computationally secure” (CS) model that 3 & 4 cannot be done.

So as long as the ML system is or can be “black box” in nature, as an observer we cannot determine whether what happens inside it is deterministic or not, nor go further and accurately describe it.

Similar reasoning applies to other “box models”.

Winter February 24, 2023 12:48 PM


3, There is a deterministic process

Current AI, or deep learning in general, is not deterministic.

modem phonemes February 24, 2023 3:18 PM

Meta has your back ! (or is that back door … )


“The LLaMA collection of language models range from 7 billion to 65 billion parameters in size. By comparison, OpenAI’s GPT-3 model—the foundational model behind ChatGPT—has 175 billion parameters.”

175/65 =~ 3

So, carry 3 phones and you’re good, because they have to backdoor all 3. Or … wait … is that good ?

Clive Robinson February 24, 2023 4:33 PM

@ Winter,

“Current AI, or deep learning in general, is not deterministic.”

What is your definition of “deterministic”?

If the behaviour can be shown to be coherent and repeatable, then the implication is that it is deterministic, even though it might not be possible to determine how it works.

It is why I chose the OTP model as an example, because it consists of three parts,

1, A counter of input data.
2, A random oracle driven by the counter.
3, A determanistic mixing process that has two inputs. The input data and the output from the oracle.

It is fully deterministic in its behaviour, yet appears otherwise.

That is, if you have two such devices that contain the same copy of the random oracle then you can trivially prove it is deterministic. But if you don’t, you cannot.
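The three-part device above can be sketched directly (a toy sketch only: SHA-256 plays the part of the random oracle, XOR is the deterministic mixer, and the seed value is invented):

```python
import hashlib

class OtpBox:
    """Clive's three-part model: a counter of input data, a 'random oracle'
    driven by the counter, and a deterministic mixer (XOR here) of the input
    and the oracle output. Fully deterministic, yet the output looks random
    to anyone without the oracle's seed."""

    def __init__(self, seed: bytes):
        self.seed = seed   # the shared copy of the 'random oracle'
        self.counter = 0   # 1: counter of input data

    def oracle(self) -> int:
        # 2: random oracle driven by the counter (SHA-256 as a stand-in)
        digest = hashlib.sha256(self.seed + self.counter.to_bytes(8, "big")).digest()
        return digest[0]

    def step(self, data: int) -> int:
        # 3: deterministic mixing of the input data and the oracle output
        out = data ^ self.oracle()
        self.counter += 1
        return out

a = OtpBox(b"shared seed")
b = OtpBox(b"shared seed")
msg = [7, 42, 99]
# two boxes holding the same oracle are trivially shown to be deterministic:
print([a.step(m) for m in msg] == [b.step(m) for m in msg])  # True
```

Without the seed, an observer sees only high-entropy output; with it, determinism is trivially demonstrated — which is exactly the asymmetry the comment is pointing at.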

Look at it as a much reduced example of Searle’s “Chinese Room”. It is as deterministic as a calculator or database program is, even though it might look otherwise, even intelligent.

One way to make it appear intelligent is to replace the random oracle with a database like that of the Chinese Room that updates with each input via a form of adaptive matched filter. Depending on where you place the adaptive matched filter the output is technically the same but appears different,


The cat sat on the mat

Has exactly the same information content as,

On the mat a cat sat
On the mat sat a cat
A mat has a cat sitting on it

And so on.

The adaptive matched filter acting on the information from the database has just changed the way the information is presented each time.

A similar trick can be done by parsing the input question for information and presenting it in different formats to the database.

Some early text based games in the 1980’s worked in a similar way, where the database was of locations and objects, where the players were adaptive objects and updated during game play.

I helped design a system where each player had their own CPU, which updated the database with coarse details across a 10 Mb/s network and provided fine details if requested.

So the DB would know you had a “hat” and your CPU knew if it was on your head or in your hand, plus fine descriptive items such as colour, feather, flowers, ribbons etc. Similarly with other items of clothing, equipment, body build and so on.

It was about the most you could squeeze out of the technology of the time. We got it to the point where we were looking for funding to take it live, but the UK was too much into home-computer gaming at the time, not interactive multiplayer dungeons, and we had not developed a graphical front end for home computers… Perhaps if we had done it the other way around, front end first and backend later, we might have got the funding…

Clive Robinson February 24, 2023 5:09 PM

@ modem phonemes, JonKnowsNothing, SpaceLifeForm, Winter, ALL

Re : Meta’s LLaMA-13B baby ChatGPT.

Having just read the article about Meta’s LLaMA-13B after posting to @Winter above about something I did in the 1980’s with 68K CPU cards and Cambridge Ring networking (later Ethernet).

The Ars Technica article lit up a neuron or three…

Meta/Facebook has been chucking billions into Virtual Reality systems that it calls the “Metaverse”. So much so that it made investors nervous, and Meta’s value dropped to about a quarter of its peak of just a few months before.

It strikes me that this baby ChatGPT is about the size you would need for your own personal Avatar in the “Metaverse” space. In effect it would cut network bandwidth down to a fraction of a percent in a widely distributed system, and likewise reduce the latency quite a bit.

It would learn to be a shadow of you, like a personal assistant that is you, or effectively a virtual you, a virtual doppelganger[1] (though hopefully not evil)…

As they say just a neuron or three firing from back in the 1980’s to today where the technology is just about there to support it…

[1] It’s been long rumored that certain Silicon Valley Mega-Corp billionaires are looking at all forms of “life extension” including being modern day vampires. This virtual me might be yet another attempt.

Winter February 24, 2023 5:20 PM


What is your definition of “deterministic”?

If the behaviour can be shown to be coherent and repeatable, then the implication is that it is deterministic, even though it might not be possible to determine how it works.

It is non-deterministic on two levels. During training, (pseudo) randomness is used in the ordering of training samples and the individual parameter operations. During the generation, there is randomness in the operation of the individual “neurons”.

The quality of the randomness affects the quality of the training. The level of randomness is a parameter setting during training and during inference (the temperature). If the training is not visiting parameter (product) space evenly (IIRC, ergodic is the term), the end product is likely to get stuck in a suboptimal state.

So, if the AI is not random, it won’t work well.
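The “temperature” parameter mentioned above can be illustrated generically (a toy softmax sampler, not any particular model’s implementation; the logits are made up for the example):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Softmax sampling with a temperature knob: as T -> 0 this becomes a
    deterministic argmax; larger T flattens the distribution, injecting
    more randomness into generation."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                             # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]
rng = random.Random(0)
cold = [sample_with_temperature(logits, 0.01, rng) for _ in range(100)]
hot = [sample_with_temperature(logits, 5.0, rng) for _ in range(100)]
print(len(set(cold)), len(set(hot)))  # low T collapses to one token; high T spreads out
```

At temperature near zero, generation degenerates to a deterministic argmax; raising it produces the variability Winter describes.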

lurker February 25, 2023 1:35 AM

@Clive, Winter
re deterministic black boxes

If the box is known to contain
i) learning data,
ii) learning methods,
iii) instructions on how to process user inputs using i & ii, and
iv) unknown ingredients

then what are we supposed to do with such a box? Put to one side and watch that it doesn’t cause harm to anything else.

More to follow, gratia modbot

lurker February 25, 2023 1:37 AM

@Clive, Winter

Current [Whatever] Learning systems are not, because they don’t learn from outside their box. Scraping the web doesn’t count because that only increases the size of the data in i) above.

OED: learn
▪ gain or acquire knowledge of or skill in (something) by study, experience, or being taught:
▪ commit to memory:
▪ become aware of (something) by information or from observation:

Current AI/ML systems cannot acquire knowledge or skill, nor become aware of anything, because they are not sentient. This leaves only “commit to memory”, which as a standalone action is not highly regarded in academia.

lurker February 25, 2023 1:38 AM

@Clive, Winter

Goldwasser et al. have shown that by tinkering with the black box they can create machine idiots, or malevolents. What’s new?

JonKnowsNothing February 25, 2023 3:01 AM

@lurker, @Clive, Winter, All

re: Current [Whatever] Learning systems are not, because they don’t learn from outside their box

A lightweight Sci-Fi book series deals very nicely with the hubris of machine “intelligence or learning”. It’s an entertaining series of stories. In each book, the hero must solve a unique puzzle. A puzzle that no one has ever solved.

The hallmark of the stories is Random is not Random. Machines or their Avatars cannot generate true random; they can mix n match, they can swap and sort and sieve and reorder. Machines can mimic but they cannot do original works. It’s all cut n paste from a heap of input streams.

A machine might copy the style of the cave paintings but it will never draw a woolly rhinoceros, a cave bear, or cave lion from live models.


Search Terms

The Keys to the Kingdom by Garth Nix
7 books published between 2003 and 2010

Winter February 25, 2023 5:30 AM


A machine might copy the style of the cave paintings but it will never draw a woolly rhinoceros, a cave bear, or cave lion from live models.

Bad example; no one will ever do that again.

The hallmark of the stories is Random is not Random.

That is just a small technical change. True randomness is everywhere. Whenever you measure a physical signal, you get randomness in the form of noise. When machines use PRNGs, that is just a stupid shortcut.

Machines can mimic but the cannot do original works.

Darwinian selection produces original solutions. Look around: every living thing is built out of original solutions created by mechanistic processes.

It’s all cut n paste from a heap of input streams.

That is how humans work too. Every word you use has been said before.

Current AIs are simple machines with obvious shortcomings. But just as airplanes are not birds yet still fly, AIs are not humans or animals, but still might in some sense think. They can write already.

I am curious how this will evolve.

Winter February 25, 2023 5:40 AM


Current AI/ML systems cannot acquire knowledge or skill, nor become aware of anything, because they are not sentient.

What is “sentient”? There is quite a lot of literature about what it means to “understand” language. What can be learned from it is there is no single thing that is “understanding”. Sentient is not much better.

There is also a philosophical game about how to determine whether a person is sentient.

You sit in a bar with a friend and at the end of the evening he confesses that years ago he had a car accident in which he suffered head trauma. After that day, he was not sentient anymore; he had no consciousness. But his brain still works and can perfectly mimic a consciousness. But inside, he is an unsentient zombie.

Now, how do you determine, or prove, that your friend is pulling your leg, or is speaking the truth?

Clive Robinson February 25, 2023 5:57 AM

@ lurker, JonKnowsNothing, SpaceLifeForm, Winter,

“[W]hat are we supposed to do with such a box? Put to one side and watch that it doesn’t cause harm to anything else.”

Remember under the “black box” model you are an “observer” who is passive, not an “agent of change” who is active.

So the historical solution of taking off one’s wooden shoes and throwing them in the works is apparently frowned upon[1].

The real problem though is actually “consumerism” coupled with unbridled capitalism; we call it “progress” without stating the intent of the journey or the likely destinations we can foresee.

Currently there are few actual “assets” or “real wealth”, and many are being consumed at an alarming rate. Thus we have created a world which is rapidly spiralling down into one of “rent seeking” via “fiscal wealth” and destructive credit, as a way to keep the population under the control of a very few, who are mostly undesirables of the worst form.

At some point ML will be seen as the latest “rise of the machines”, and we can see that by the fact people are talking about connecting ML as the control system for autonomous agents such as UAVs / drones and other machines of war. As well as for replacing humans that do “manual activities” such as driving/delivery, as a way of staying in society, albeit increasingly at the fringes.

Part of this downward spiral is the setup of “institutions of ‘suasion’ or ‘counter-suasion'” to enable more “legislation of oppression” to ensure that “assets and real wealth” end up under the control of the very few.

I think we can be reasonably certain that ML will become a critical feature of this process, because we are already seeing it happen.

Others have explained the mechanics of Robodebt, the most important part of which is not the actual harms caused, but the “no-blame”, “computer says”, arms-length-distancing excuse that allows almost risk-free oppression and sequestration, with at best very inadequate compensation for those harmed.

Observers have already seen how every step of the ML process is fatally flawed, in that “intent” can be “hidden within”, thus removing responsibility and liability from those who wish to achieve their “Dark Tetrad” goals almost with impunity…

Will humanity and society survive ML?

Well we know some of humanity will not, and in that process society will change, probably for the worse…

Because one thing is clear: those who are pushing toward rent-seeking and other methods of oppression will not stop unless forced to. It’s why we have the “blood of patriots” and “eternal vigilance” sayings that have stuck with us through the centuries.

Worse, about every full lifetime a new major or world war beckons, as those who either fought in or lived through the horrors of the last one have gone from living memory. So those peddlers of “faux” can talk about glories and heroics and empires to be forged, rather than the realities of futile, often at best pyrrhic, death and destruction, with all the lost opportunity costs falling to the common individual, not to those who feel entitled to collect rents.

Thus of one thing you can be certain ML will without doubt be used for,

1, The pushing of ‘faux’.
2, The appropriation of assets.
3, The hardening of oppression.

Part of which will be that whereas in the past the Dark Tetrad self-entitled had to use vast numbers of people as “guard labour”, they will now be able to turn to machines created by technologists (too limited in their thinking to realise what they are creating) instead.

But hey as has been oft said,

“Welcome to the world of tomorrow”

[1] For those new readers: such action happened all over Europe around the time traditionally skilled artisans such as weavers started getting replaced by automata in their “Rise of the Machines” moment. In France such a wooden shoe was called a Sabot, hence “putting the boot in” became known as “sabotage”.

Some may have noticed the rise of wooden based shoes in the middle classes of recent times. They are now called “Klompen” and have many advantages over “Jimmy chews your toes” and similar beloved of the glittering faux. The middle classes are the people most likely to suffer from the rise of ML systems as avatars etc. We do not currently have a word for the violent stopping of ML systems, so may I suggest “Klompenage” as a candidate for committing “avitaricide” or similar?

Clive Robinson February 25, 2023 6:57 AM

@ ALL,

Like our host @Bruce, you will note that my curiosity these days is not “technical for technology’s sake” but looks more to how “Society reacts and interacts with technology as it emerges”.

I’ve said on a number of occasions,

“Technology is neither good nor bad; that is decided by the observer, or the directing mind that uses the technology, and is based on the observer’s changing ‘Point of View’ (PoV).”

I’ve also observed that,

“You can not solve societal issues with technology, only society can solve societal issues.”

I’m not alone in my viewpoints, both in general and in the more specific as pertaining to ML/AI.

So, can I recommend the words of professor Amy J. Ko, Director of the Code & Cognition lab, at the University of Washington, and noted keynote speaker, via Amy’s “Bits and Behaviour” blog,


JonKnowsNothing February 25, 2023 11:31 AM

@Winter, All

JKN: A machine might copy the style of the cave paintings but it will never draw a woolly rhinoceros, a cave bear, or cave lion from live models.

W: Bad example, no one will ever again.

That may not be completely accurate. There are DNA resurrection projects around, some focused on species with low levels of population using trans-species gestation, and others are focused on extinct species such as the Thylacine aka Tasmanian tiger. Mammoths are of interest in resurrection science circles with trans-gestation by modern elephants.

No one can accurately draw something that they have never seen or heard of. Historical maps show how our perceptions changed over long periods about our environment. Now people look at satellite pictures and use a GPS to go to the local market. Modern maps have the same problems as historical maps but shoveled under a very dirty rug by GPS mapping systems.

So if an AI/ML system never has any reference to painting, animals, drawing or other art sources from which they can mimic a replica, can that system still draw, paint, etch, sculpt or perform any type of art or representation?

People didn’t invent or draw hippogriffs from live models.

AI/ML certainly has the ability to impersonate and replicate whatever is fed into them, and have output sufficiently acceptable to people or even have their output considered “real”. Forgers do it all the time. Ponzi schemes fool people too.

It may come down to a function of “maya: illusion”.



  • Ludovico Ariosto 1474-1533
  • Orlando Furioso 1532. Italian epic poem. Orlando is the Christian knight known in French as Roland.
  • Song of Roland

Maya (religion)

  • an illusion where things appear to be present but are not what they seem
  • pretending to exhibit or claiming to have a good quality that one lacks
  • people misunderstand and misperceive reality, which is in fact empty of any essence and cannot be grasped

Winter February 25, 2023 11:46 AM


Mammoths are of interest in resurrection science circles with trans-gestation by modern elephants.

Let’s not go there. There are a lot of things in Jurassic Park that were fairy tales, not science. Maybe it will happen, and they will be like Mammoths, with enough functioning Mammoth DNA. But there are enough strange animals still alive to draw.

People didn’t invent or draw hippogriffs from live models.

Dall-e can already create pictures no one has seen before. ChatGPT creates texts that are new (try to find them on the internet). And Hippogriffs are little more than patchworks of existing animals. Not a problem for Dall-e at all.

modem phonemes February 25, 2023 2:51 PM

@ JonKnowsNothing


It’s important that we don’t all start living entirely in our imaginations simply

But, on the other hand this is not a new problem –

“Hipogrifo violento,
que corriste parejas con el viento”

(La Vida es Sueño
Pedro Calderón de la Barca)

JonKnowsNothing February 25, 2023 5:42 PM

@modem phonemes, All

re: living entirely in our imaginations

There is the entire gaming industry that hopes you are wrong about that. Authors, movies, theater all bet the bottom line that people will pay for it. Virtual reality, social media and Photoshop all know “Imaginary is better than Real Life”.

When computers were first starting to be available to the general public, people asked me “what can they do?”. I told them

  • They can do anything you can imagine.

I play games of all sorts. I play classics and I play on-line games. My on-line games have many avatars and different appearances. My imagination soars over the game narrative.

New games use very sophisticated graphics modeling. It’s really spectacular. You might want to peek at some of the new offerings in the “realistic” mode rather than the “cartoon” type games. There are plenty of videos showcasing the games.



Portkey Games
Hogwarts Legacy (February 2023)
immersive, open-world action RPG (single player)

JonKnowsNothing February 25, 2023 6:13 PM

@Winter, All

re: Jurassic park fairy tales

Some of these projects are farther along than one might think. It’s a function of how much ancient DNA can be recovered and how many holes have to be patched. Recent DNA extraction improvements are yielding more complete DNA sequences than ever. In some species there are only a few missing sections.

For some current human health issues, there are new treatments targeted at specific faulty genes. Already-approved treatments using these techniques take blood samples and isolate the part that contains the faulty gene. This is sent for advanced processing, where a “fixit” for that faulty gene is created and the repaired genes are cloned. When they have amplified enough of the repaired genes, the person receives an infusion of the repaired gene. Since it is their own DNA, rejection is less of an issue (sometimes).

There are several versions of this type of repaired gene replacement therapy available.

Still, I would like to see a Thylacine walking around, but I don’t think we will see a Marsupial Lion.


https://en.wikipedia.org/wiki/Thylacine#Cloning_research

  • In August 2022, it was announced that the University of Melbourne will partner with Texas-based biotechnology company Colossal Biosciences to attempt to re-create the thylacine and return it to Tasmania. The university had recently sequenced the genome of a juvenile thylacine specimen and is establishing a thylacine genetic restoration laboratory.

https://en.wikipedia.org/wiki/Thylacoleo

  • Thylacoleo (“pouch lion”) is an extinct genus of carnivorous marsupials (marsupial lions)
  • Pound for pound, T. carnifex had the strongest bite of any mammal species, living or extinct; a T. carnifex weighing 101 kg (223 lb) had a bite comparable to that of a 250 kg African lion


Clive Robinson February 25, 2023 8:16 PM

@ JonKnowsNothing, modem phonemes, ALL,

Re : Living entirely in our imaginations simply

“There is the entire gaming industry that hopes you are wrong about that. Authors, movies, theater all bet the bottom line that people will pay for it.”

Suck-a-burg has almost bet the farm on the Metaverse and interactive avatars.

He pushed about 10 billion USD into it, shareholders got angsty, and around 75% of Meta’s (AKA Facebook) value disappeared in just a few months…

Most MSM missed it because Hell-on Rusk was all a-Twitter and haemorrhaging the green like a Martian with its throat cut…

But getting back to Meta: as I noted the other day, their “ChatGPT-lite”, which will run on a single GPU, is probably about the right size to have as a non-evil doppelganger avatar in a wide area network, to cut down traffic and, most importantly, latency.

It’s not exactly unknown that some Silicon Valley Mega-Corp multi-billionaire owners are very much into “life extension” if not “conscious immortality” (ie not frozen-head solutions, which really are a dead end despite the hype).

Some are alleged to get blood transfusions from those who are still technically teenagers though “legal adults”.

How much do you think they would pay to have their consciousness copied into an avatar etc as an emergency back-up, or to copy into a clone?

JonKnowsNothing February 25, 2023 9:02 PM

@ Clive, @modem phonemes, ALL

re: Avatars when Off-Line

A number of the games I play are called “open world”, it means you can move your avatar anywhere within the game world, you are not stuck on a roadway or rail tracks. As you move around, the landscape changes, the environment changes, time of day alters, stars shine, sunrise, sunset happen. You also encounter other characters in the game.

In a single player game, the others are Non-Player Characters (NPCs) and they may have quests (todo lists) or have story interactions in that area. As you move from area to area the old NPCs are left behind and you meet new ones.

One of the (many) Star Trek spinoffs ran a number of story lines about

  • What happens to the NPCs when you leave an area? If there isn’t a Player to interact with them, what happens? Where do they go? Do they have any “life” inside a database?

I can imagine that should any of the US Oligarchs manage to actually obtain a “digital backup avatar” they will find it not exactly what they expect. Rail-track games have more freedom of movement than they will. If the battery backup dies, it’s a restart or reboot from a Crash To Desktop.

$Thiel won’t be doing much smelling the roses as an avatar: no smell-o-vision. He won’t be doing much of anything in a digital landscape: you can see but you cannot touch. It will be like a sensory deprivation tank.

The Three-Body Problem (novel) by Chinese writer Liu Cixin devotes part of the story to such a system.

Clive Robinson February 25, 2023 9:42 PM

@ JonKnowsNothing,

Re : To be or not to be…

“He won’t be doing much of anything in a digital landscape: you can see but you cannot touch.”

Are you sure of that?

Because I’m not.

Our brains do not in reality “touch”; they simply interpret signals that come up the nerves in the spinal column.

You can actually demonstrate the blocking of nerve impulses with a small electric current using a TENS machine (if you have a bad back you can get one from the likes of “Mother-Care”). Whilst they are not as effective as an epidural, they do work.

Less well known is that pain can be simulated by using a very similar device with a waveform that induces pulses similar to those of actual pain. The experience can be very upsetting and is a lot worse than the equivalent energy used for testing for neuropathy.

So an avatar can have the equivalent of nerve pulses sent to it by software. So the question then arises,

“Could the consciousness-holding avatar tell the difference?”

From an engineering standpoint I suspect not. So if the “object database” holds information about an object’s surface texture and temperature, then the corresponding impulses could be generated.

Interestingly, I’m aware of ultrasound being tested to stimulate human nerve endings by wave-pattern interference, such that the person feels the equivalent of a holographic touch.

It’s a spin off from “artificial eye” technology.

JonKnowsNothing February 26, 2023 2:47 AM

@Clive, All

re: An Avatar’s Touch

There are loads of stories and movies about “disembodied” connections. In stories like The Matrix, a human body is used to send sensory inputs. In the 3Body Problem the sensory inputs are done directly to the brain.

So, it will depend on what is the Avatar created from.

If it’s reliant on a biological system, like a functioning brain, that entails a whole array of electrodes and inputs, depending on how much of the physical system remains. Since the problem is biological decay, it might be that this sort of system will fail. Stories that use direct memory transference to a new body usually have some Frankenstein-monster outcome.

If the system is a software repository of electrical impulses and does not rely on a biological container but remains electrical or magnetic (traces originating from a full brain and cognitive scan, but not re-inserted into another physical system), you fall into the non-visible, non-corporeal variety of existence. Angels singing Hosannas forever and ever. A Groundhog Day feedback loop with no exit. An electronic Avatar.

In games, touch and damage are signaled by graphical elements. Those could still be used, but would they count as “touch”? In games, if your health bar goes to zero from taking external damage in combat, you end up in a respawn zone. Would an overload of touch send the $Thiel Avatar to death?

I suspect that $Thiel will want a full body replacement, which means either he grows an adult body sans intellect but with brain matter and plans to overwrite the blank space with his personal electrical persona, or perhaps the same plan starting at infancy, so he can enjoy an Nth childhood and fill in all the stuff missed the previous N times around.

Stories of people never growing old, including immortal Tolkien elves, lead sad lives from human perspective. Everything around them changes but they themselves are sealed into physical forms until either they are killed or leave. Humans change but all the things around them change at the same tempo. Elves, per Tolkien, live in a sort of fast-speed system where their surroundings move super fast but they themselves move super slow.

I’d guess that $Thiel is expecting an Elf-Life for his investment. I don’t think it will come with Elf Insight.


Tolkien Elf insight: Only the destruction of the One Ring can stop the external evil that invaded Middle-earth. As long as the Ring endures, no battles, victories, kingdoms, or achievements can last. All that goes before, all the splendor of the High Elves, battles and achievements, is part of “the long retreat”. At the end, there are few left. Few of the Lords of Gondor survive the Battle of the Pelennor Fields. Fewer still survive the second battle at the Black Gate. All will be gone by the time King Elessar uses his Gift of Men. Arwen will do the same a year later, on the hill of Cerin Amroth. No one remains who remembers the Elves.

Winter February 26, 2023 3:39 AM


It’s a function of how much ancient DNA can be recovered and how many holes have to be patched.

That, and then methylation (gene-expression regulation in eggs) and mother-fetus interactions in carrier mothers. The animals will probably look like the original, but biologists have doubts about whether it is actually useful outside of a theme park.

Clive Robinson February 26, 2023 5:38 AM

@ JonKnowsNothing,

Re : Do droids dream of electric sheep?

“So, it will depend on what is the Avatar created from.”

Will it, or is it more fundamental?

We build computers out of silicon microcircuits, but there are other materials we could use, including germanium, gallium, and even carbon as diamond. But the movement of charge remains the same.

We’ve even built computers out of relays (Konrad Zuse, early 1940s), but it was still about the movement of charge.

“What about other ‘working fluids’?”

I’ve built logic circuits with hydraulics as a “fun” demonstrator project. Within reason, pneumatics would work as well. So the ‘working fluid’ does not actually matter; it’s just a matter of convenience in a physical embodiment.

Charles Babbage designed a computer arguably without a ‘working fluid’ at all, just mechanical state.

So is a working fluid required?

Apparently not; just the ability to store state, communicate state, and use both to process information.
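That medium-independence can be sketched in a few lines. The toy below builds a half adder out of a single NAND primitive; the logic is identical whether the NAND is realized in charge, hydraulics, pneumatics, or Babbage-style gears — only state and transitions matter. (This is an illustration of the argument, not any particular machine.)

```python
# Computation as pure state + transitions: the "working fluid" never appears.
def nand(a: bool, b: bool) -> bool:
    """A universal gate; any physical embodiment of it will do."""
    return not (a and b)

def xor(a: bool, b: bool) -> bool:
    # XOR built from four NANDs, the classic construction.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def half_adder(a: bool, b: bool) -> tuple:
    """Returns (sum, carry) for one-bit addition."""
    return xor(a, b), a and b

print(half_adder(True, True))   # → (False, True): 1 + 1 = binary 10
```

Relays, water valves, and transistors all implement `nand` equally well; the rest of the computer is just composition.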

We chose as physical beings to use physical objects of matter/energy driven by forces, all of which are limited by the speed of light.

But what if we were beings of just information?

Would we perform computation fundamentally differently?

That is: what is the minimum you need to actually perform computation?

Because, like it or not as an idea, that is fundamentally what a “human” in non-physical form is; the rest is just scaffolding to support it, and to provide it with the ability to measure “the ratios of nature”.

JonKnowsNothing February 26, 2023 11:24 AM

@Winter, @Clive, All

re: The animals will probably look like the original

This is pretty much the goal in some animal cloning efforts.

iirc (badly): There was a very spectacular polo pony in South America (Argentina?). He was the best of the best. Polo is an ancient sport with many variations, but this horse was used for the European version of polo.

They cloned the horse, and there are or were a good number of offspring from the cloning. All looked like the parent clone, and afaik they were all successful as polo ponies.

It was some years ago, during early cloning trials. I don’t know how many failures there were, how many of the progeny remain, or whether they were able to pass along those same genetics.

In the scope of ancient horses and their re-introduction, Przewalski’s horse may be a guidepost for cloning efforts.

In 2020, the first cloned Przewalski’s horse was born, the result of a collaboration between San Diego Zoo Global, ViaGen Equine and Revive & Restore. The cloning was carried out by somatic cell nuclear transfer (SCNT), whereby a viable embryo is created by transplanting the DNA-containing nucleus of a somatic cell into an immature egg cell (oocyte) that has had its own nucleus removed, producing offspring genetically identical to the somatic cell donor. Since the oocyte used was from a domestic horse, this was an example of interspecies SCNT.

The somatic cell donor was a Przewalski’s horse stallion named Kuporovic, born in the UK in 1975, and relocated three years later to the US, where he died in 1998. Due to concerns over the loss of genetic variation in the captive Przewalski’s horse population, and in anticipation of the development of new cloning techniques, tissue from the stallion was cryopreserved at the San Diego Zoo’s Frozen Zoo.

This technique may have some bearing on $Thiel Avatar dreams too.


https://en.wikipedia.org/wiki/Polo

https://en.wikipedia.org/wiki/Polo_pony

https://en.wikipedia.org/wiki/Przewalski%27s_horse

https://en.wikipedia.org/wiki/Przewalski%27s_horse#Assisted_reproduction_and_cloning

Winter February 26, 2023 12:00 PM


This is pretty much the goal in some animal cloning efforts.

Progress in cloning has indeed been spectacular. But there is still a gap between a functioning cellular nucleus and some reconstructed de novo DNA. I am sure that gap will be closed, eventually.

What biologists are saying is that, when all is said and done, the “new animal” will remain a reconstruction, an artifact. We might, in the end, not care that the new mammoth is not the original mammoth.

But conservationists say that we could save hundreds of species from extinction for the money used to reconstruct a single extinct species. So why do it? (For the money and the game, I think.)

JonKnowsNothing February 26, 2023 1:11 PM

@Winter, All

re: the “new animal” will remain a reconstruction, artifact

This situation can be seen in the Tarpan, a free-ranging horse of the steppes of the former Russian Empire from the 18th to the 20th century (extinct 1879), and the Heck horse, which is claimed to resemble the Tarpan. The Heck horse breed (1933) was created by the German zoologist brothers Heinz Heck and Lutz Heck in an attempt to breed back the Tarpan.

Heck horses resemble the type of horse depicted in cave paintings. They are cross-bred from horses and ponies, selected by phenotype for looks, stature, and conformation.

Heck horses are recreations, not resurrections. They are the same as any other horse breed that humans have engineered by selection.

There are lots of breeds of animals that are missing, and some of them are not charismatic ones. Many breeds of farm animals are gone: chickens, pigs, and cows, all extinct because another member of their species was considered “superior”. Market forces resulted in their extinction.


https://en.wikipedia.org/wiki/Tarpan

https://en.wikipedia.org/wiki/Heck_horse

https://en.wikipedia.org/wiki/List_of_cattle_breeds

Clive Robinson February 26, 2023 1:12 PM

@ JonKnowsNothing, Winter,

Re : DNA is only a part of the beast.

“It’s a function of how much ancient DNA can be recovered and how many holes have to be patched.”

To make a horrible comparison: the DNA provides the spine, and the epigenetics fleshes it out.

I was surprised to learn that interest in DNA cloning is in effect waning in comparison to epigenetics, apparently because most illnesses (as opposed to injuries from accidents) that are not from external causes such as pathogens and toxins are more likely to stem from epigenetics than from DNA.

Thus research into epigenetics is more likely to hit big-pharma pay dirt than research into DNA.

As it was put to me,

“They say you are what you eat, but what your grandparents ate counts almost as much!”

My reading on the subject covers the science, not the politics or economics, of “epi v. DNA”, and I gather the pending and ill-advised[1] FDA approval of mRNA is really muddying the political and economic arguments.

[1] Apparently analysis of “yellow card”[2] data on Pfizer shows a serious-risk rate of 10.1 per 10,000 injections. Moderna is far, far worse, giving around a 1-in-800 risk for mRNA vaccines overall that have no current disease-prevention capability… Oh, and apparently the profits of the mRNA suppliers are up closer to 100 billion than you would expect, all direct out of the taxpayers’ pockets… I’ll leave others to calculate the “lost opportunity costs” that have been incurred on that…

[2] In the interests of disclosure: even though I was not a recipient of mRNA vaccines (by choice), I am in the yellow-card system due to having had a massive unexpected blood clot in my heart within three weeks of my second shot… Which is why I’ve been unable to get any C19 booster shots, though the flu shot is not a problem…

EvilKiru February 27, 2023 12:54 AM

I find it disingenuous to suggest that ChatGPT et al. are decision-making devices, when their makers have declared that they basically string words together based on probabilities. Their word corpora contain strings of word sequences, not knowledge. They don’t know that 1 + 1 = 2; it’s just a more frequent occurrence in the corpus than 1 + 1 = 3 or other wrong answers.
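That frequency-not-arithmetic point can be caricatured in a few lines: a “model” that answers 1 + 1 purely by counting which continuation is most common in its training text. The tiny corpus here is invented for illustration; real language models are vastly more sophisticated, but the intuition is the same.

```python
from collections import Counter

# A caricature of a language model: no arithmetic, only counts of which
# token most often followed a context in the training text.
corpus = [
    "1 + 1 = 2",
    "1 + 1 = 2",
    "1 + 1 = 2",   # the right answer happens to dominate the corpus...
    "1 + 1 = 3",   # ...but wrong answers occur too
]

# For the context "1 + 1 =", tally every continuation seen in training.
continuations = Counter(
    line.rsplit(" ", 1)[1] for line in corpus if line.startswith("1 + 1 =")
)

# "Generation" = emit the most frequent continuation, not a computed sum.
answer, count = continuations.most_common(1)[0]
print(answer)   # prints 2, by popularity, not by knowing arithmetic
```

If the corpus had contained “1 + 1 = 3” more often, the same code would confidently print 3.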

Gert-Jan February 27, 2023 7:02 AM

“I find it disingenuous to suggest that ChatGPT et.al. are decision making devices, when their makers have declared that they basically string words together based on probabilities”

When it looks like a duck and quacks like a duck, it’s a duck.

When a person throws out a tweet in which they have copied whatever ChatGPT “told” them, and what ChatGPT “told” them was just a string of words, and the result, when read by a human, is not actually true, then it is too easy to blame the user.

When the user posted the tweet, did they add the same disclaimer as chatGPT’s creators?

When a user posts a tweet with Wikipedia information in it, do we expect them to copy the Wikipedia disclaimers?

In my view, both the users and the ChatGPT creators/operators have a responsibility.

“Current AI, or deep learning in general, is not deterministic.”

About this. I have a basic understanding of how current AI/ML works. And maybe, based on observation, one can get enough confidence in the way it works in certain situations, assuming the system is “known to be” an AI, and assuming no one messed with the system.

Theoretically, it is impossible to determine whether the behavior of a black box is deterministic. What if the black box was not an AI? Or if it was an AI that had been manipulated? Its code could include a rule that returns a completely different result after the 1,000th attempt. Or during the first minute of spring. Or whatever.
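Such a hidden rule is trivial to construct. The sketch below is purely hypothetical (the class, the call counter, and the 1,000-call threshold are invented for illustration, not taken from any real system): the box behaves consistently under any bounded amount of early testing, then flips.

```python
# A hypothetical "black box" whose behavior changes after a hidden
# threshold; no finite amount of up-front testing reveals the rule.
class BlackBox:
    def __init__(self) -> None:
        self._calls = 0   # hidden internal state, invisible to the tester

    def classify(self, x: float) -> str:
        self._calls += 1
        if self._calls > 1000:            # the hidden trigger
            return "anomaly"
        return "ok" if x >= 0 else "anomaly"

box = BlackBox()
print(box.classify(1.0))   # behaves normally during testing: prints ok
for _ in range(1000):      # ...until enough calls have accumulated
    box.classify(1.0)
print(box.classify(1.0))   # same input, now prints anomaly
```

A trigger keyed to the date, an input pattern, or a cryptographic key (as in the backdoor paper above) works the same way and is even harder to find.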

You know about the diesel emissions-test manipulation scandal, right? That is just one example. As the maker of a black-box system, the ways you can manipulate the results are endless.

Winter February 27, 2023 7:34 AM


When a person throws out a tweet in which they have copied whatever ChatGPT “told” them, and what ChatGPT “told” them was just a string of words, and the result, when read by a human, is not actually true, then it is too easy to blame the user.

When a user interacts with Eliza [1] and then bases some decisions on its output, who is to blame? Eliza? Its makers? Or the user, for not thinking at all?

It is customary in the USA to require users to read warnings, like “hot coffee may burn you” or “do not put live animals in the microwave”. If they don’t read them, it is their fault.

If bleach shows warnings about “do not ingest”, etc. [2], and you drink it because some person on TV says it helps against viruses, the bleach producer is not to blame.

Look at it from another perspective. What warnings should the producers of ChatGPT have added on top of what they did? And why would that have helped?

[1] https://en.wikipedia.org/wiki/ELIZA

[2] https://www.alamy.com/asda-thick-domestic-bleach-product-showing-asie-hazard-pictograms-for-chemical-irritants-on-warning-label-dangerous-household-products-image355745349.html

JonKnowsNothing February 27, 2023 10:52 AM

@Winter, @Gert-Jan, All

re: [Blame] … the user for not thinking at all?

Well, that’s a useful answer, solves all the issues.

Use faulty XYZ, don’t read the 10,000 pages of legalese EULA/TOS, drive a car with defective mechanics, ride in a plane where 80% of passengers have active respiratory disease (C19, RSV, flu): it’s all the fault of the End User.

The “STUPID PEOPLE!” response works wonders as an answer.

“No one is as smart as me”, is a common perception among humans. “I’ll never fall for that Gimmick, I’ll never buy that POShyte, I’d never invest with any Fraudster.”

Someone once pointed out that the Devil doesn’t show up with horns, tail, and ’stache. The Devil shows up all the time, all around us, smiling nicely, giving good advice, being our BFF, helping us along while we think “we know, we are on the inside, we won’t get fooled”.

You will, you can avoid it for a long time or maybe not too often, but you will.

Society is struggling with the concept of historical conditions that are now “wrong”. We cannot decide who to punish for these conditions. The people that did them are long dead. The survivors are demanding that we acknowledge and DO SOMETHING. Those that inherited directly or indirectly still profited. The timeline is fluid, and changing. 1yr, 5yr, 50yrs, 1,000yrs.

Blaming the victims won’t help. They’ve been blamed enough.

Clive Robinson February 27, 2023 11:38 AM

@ JonKnowsNothing, Gert-Jan, Winter, ALL,

“The “STUPID PEOPLE!” response works wonders as an answer.”

And it’s what the law, via legislation and regulation, pushes.

The reason,

“The victim or one close to them can not defend themselves, so let loose the dogs upon them”

That way “justice is seen to be, but not actually, done”, and those who are truly guilty walk away clean.

As with cars and tobacco, rarely if ever do the truly guilty get dragged into court in orange jumpsuits and put out of public view for a few decades, as they rightly deserve.

I wish we could change that, but the same people have “bought the system”, so what they do is never a crime; at most they get a fine, which in reality the taxpayer ends up paying…

Welcome to representational democracy…

Winter February 27, 2023 11:51 AM

@JonKnowsNothing, Clive

Well, that’s a useful answer, solves all the issues.

The “STUPID PEOPLE!” response works wonders as an answer.

Let me rephrase my original response. In 2020, people bombed cell towers because Facebook told them they cause COVID and cancer. Who is to blame?

Several people have driven their cars into rivers because they followed Google Maps. Is Google to blame?

People make bad decisions because a medium looked into a crystal ball and told them to.

Winter February 27, 2023 11:57 AM

@JonKnowsNothing, Clive, Gert-Jan

Well, that’s a useful answer, solves all the issues.

The “STUPID PEOPLE!” response works wonders as an answer.


The problem here is not that ChatGPT does not warn enough; it gives ample warnings. Just as the people drinking bleach as an anti-viral medicine (or any of the other snake oil) were not doing so because they had not been warned. They simply decided to discount the warnings and followed someone who gave horribly bad advice.

JonKnowsNothing February 27, 2023 12:31 PM

@ Winter, Clive, Gert-Jan, All

re: ChatGPT makes the “STUPID PEOPLE!” response work

We make all sorts of laws to “protect people”. It’s illegal to defraud someone, it’s illegal to give people bad medications, and there are a host of other laws; so many that even specialist lawyers don’t know them all.

So how about this:

  • Every ChatGPT text comes with embedded and non-removable (except with whiteout; remember whiteout?) text that says EVERYTHING IN THIS SENTENCE IS FALSE. That has to be repeated in front of every sentence produced by any ChatGPT-type software.
  • Anyone including ChatGPT output in an essay, article, research paper, etc., who has removed the warnings (remember whiteout?) gets whatever penalty that organization imposes for making deliberate false statements.

There … that will do.


Gert-Jan February 27, 2023 12:34 PM

“The “STUPID PEOPLE!” response works wonders as an answer.”

It’s not always easy to get consensus on where the manufacturer’s responsibility ends and the user’s begins.

I think a very important factor is what the product is supposed to be used for. Because that sets the basic expectations of users.

Bleach is a cleaning product. It is never intended to be used as a beverage. Manufacturers warn against ingestion. And since that’s not the intent of the product, that’s about all they can do. (They actually do more, like making the bottle difficult to open.)

What’s the intent of ChatGPT? People may say, “It’s a digital assistant that produces the answers to my questions, so I don’t have to google it.” I know (and you know) there’s a big difference between a “traditional” search engine and AI, but would you say those people are wrong?

You would never expect Google to bluntly lie. It might find a website that is lying and present it to you as one of the search results, but it doesn’t “string words” together and pretend that “this is the answer”. It typically presents a summary with just enough information for you to decide whether it is worthwhile to read the original.

Now ChatGPT is an aggregator that produces output out of “thin air”. Yes, I do believe the ChatGPT creators and operators have a huge responsibility here. If their output is lying, then basically they are producing the lie.

Winter February 27, 2023 1:21 PM


People may say “It’s a digital assistant that produces the answers to my question, so I don’t have to google it.”.

People may say that it predicts the weather, or can write secure cryptographic code. But OpenAI does not say any of these things. OpenAI says that it is an experimental large language model that generates text. You have to click that you understand that, several times.

But then, I warn everyone who wants to hear it (nearly nobody) that you should think before you use anything you read on the intertubes.

Maybe I should use bleach more often as an example. If a bottle of bleach says you should not mix it with other stuff, you shouldn’t (because it will kill you instantly). If ChatGPT says it is not reliable, you should not rely on it.

SpaceLifeForm February 28, 2023 1:03 AM

@ Winter, Gert-Jan, JonKnowsNothing, Clive, ALL

Re: Stupid people

Yep, beware of Eliza or OpenAI.

One should check with Shyster before proceeding.

k17 March 1, 2023 5:57 PM

How about planting bugs in comm channels
that your target relies on, and preventing them from reporting the bug?
Is this a bug or a feature?
