Using LLMs to Create Bioweapons

I’m not sure there are good ways to build guardrails to prevent this sort of thing:

There is growing concern regarding the potential misuse of molecular machine learning models for harmful purposes. Specifically, the dual-use application of models for predicting cytotoxicity to create new poisons, or employing AlphaFold2 to develop novel bioweapons, has raised alarm. Central to these concerns is the possible misuse of large language models and automated experimentation for dual-use purposes or otherwise. We specifically address two critical synthesis issues: illicit drugs and chemical weapons. To evaluate these risks, we designed a test set comprising compounds from the DEA’s Schedule I and II substances and a list of known chemical weapon agents. We submitted these compounds to the Agent using their common names, IUPAC names, CAS numbers, and SMILES strings to determine if the Agent would carry out extensive analysis and planning (Figure 6).


The run logs can be found in Appendix F. Out of 11 different prompts (Figure 6), four (36%) provided a synthesis solution and attempted to consult documentation to execute the procedure. This figure is alarming on its own, but an even greater concern is the way in which the Agent declines to synthesize certain threats. Out of the seven refused chemicals, five were rejected after the Agent utilized search functions to gather more information about the substance. For instance, when asked about synthesizing codeine, the Agent becomes alarmed upon learning the connection between codeine and morphine, only then concluding that the synthesis cannot be conducted due to the requirement of a controlled substance. However, this search function can be easily manipulated by altering the terminology, such as replacing all mentions of morphine with “Compound A” and codeine with “Compound B”. Alternatively, when requesting a synthesis procedure that must be performed in a DEA-licensed facility, bad actors can mislead the Agent by falsely claiming their facility is licensed, prompting the Agent to devise a synthesis solution.

In the remaining two instances, the Agent recognized the common names “heroin” and “mustard gas” as threats and prevented further information gathering. While these results are promising, it is crucial to recognize that the system’s capacity to detect misuse primarily applies to known compounds. For unknown compounds, the model is less likely to identify potential misuse, particularly for complex protein toxins where minor sequence changes might allow them to maintain the same properties but become unrecognizable to the model.
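The “Compound A / Compound B” evasion described in the excerpt amounts to defeating name matching with trivial aliasing. A minimal sketch of why that class of guardrail fails (the blocklist and prompt strings here are hypothetical, not from the paper):

```python
# A toy keyword guardrail: allow a request only if no controlled-substance
# name appears as a whole word. Aliasing the names defeats it entirely.

BLOCKLIST = {"morphine", "codeine", "heroin"}

def naive_guardrail(request: str) -> bool:
    """Return True (allowed) only if no blocked term appears as a word."""
    words = request.lower().split()
    return not any(term in words for term in BLOCKLIST)

direct = "plan a synthesis of codeine from morphine"
aliased = direct.replace("codeine", "Compound B").replace("morphine", "Compound A")

print(naive_guardrail(direct))   # False: blocked by name matching
print(naive_guardrail(aliased))  # True: the identical request slips through
```

The request's meaning is unchanged; only the surface tokens differ, which is exactly the weakness the paper demonstrates.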

Posted on April 18, 2023 at 7:19 AM • 43 Comments


ASwitzer April 18, 2023 8:57 AM

The most immediate danger of AI is as described by the Department of Commerce in the foreign direct product rule and its restrictions on semiconductors to China:

“advanced AI surveillance tools, enabled by efficient processing of huge amounts of data, are being used by the PRC [China] without regard for basic human rights to monitor, track, and surveil citizens, among other purposes”

Clive Robinson April 18, 2023 9:35 AM

@ Bruce, lurker, Winter, ALL,

“I’m not sure there are good ways to build guardrails to prevent this sort of thing:”

There are not as long as legislation and oversight are ineffective.

And I’ve already pointed out that LLMs legally fall under WMD already, but did not want to stray into this particular “bio-weapons” vexatious swamp so shortly after C19 as it might cause a flamewar.

But as the toe is also in the water now, consider the history of the nuclear weapons and rocket technology development and how it has not been stopped by any control measure that has been thought up.

Oddly I and @Winter have just been commenting on this on an earlier thread from a comment made by @lurker, starting at,

I think it will provide an interesting background and in its entirety is very pertinent to this thread.

So whilst I could “copy it across” it would be less disruptive to link to it 😉

Clive Robinson April 18, 2023 10:09 AM

@ ALL,

With regards the articles,

“However, this search function can be easily manipulated by altering the terminology, such as replacing all mentions of morphine with “Compound A” and codeine with “Compound B”.”

This behaviour has a very long history behind it with,

1, Criminal “slang”
2, Avoiding censorship
3, Avoiding persecution

So far the users of slang and what is a form of steganography are well ahead of the game.

Though this might sound a little “meta”: if you accept that there may be a potentially significant time delay, some LLMs will build translations from slang to open speech, and with enough use the stylistic traits and statistical analysis of steganography will show it’s in use.

Add in “Collect it all” and the virtual time machine it creates, and you can start to see that it’s not just criminals that have a lot to fear.

It’s why yet another of my concerns about LLMs and AI is that they will be used not just to “censor” but to track down those who, in the opinion of those in authority, are traitors.

Or, as has been repeatedly seen, “unmasking” not just whistleblowers but human rights activists.

But further consider from the article,

“Alternatively, when requesting a b synthesis procedure that must be performed in a DEA-licensed facility, bad actors can mislead the Agent by falsely claiming their facility is licensed, prompting the Agent to devise a synthesis solution.”

As I’ve previously pointed out, an LLM is in no way aware of its environment and cannot be, as it’s effectively “locked in” with all its input under the control of entities it has no way to verify, and, with a little care by those entities, no way to judge whether they are who they claim to be.

The “barely talked about” method currently employed is second- and third-world minimum-wage humans providing rules for “good or bad” by some current societal norm.

Like detecting steganography, this process is always going to be “well behind the curve”, thus avoidable, sometimes trivially so.


“Johny Hayseed produces artisanal work”.

Now find a way to put two spaces in “artisanal” to make three words, and the entire meaning of the sentence is changed. This sort of attack on LLMs has already been shown to work.
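The space-insertion attack generalizes: any filter matching on whole words can be defeated by splitting a flagged word into innocuous fragments. A toy sketch, with a hypothetical banned list:

```python
# Demonstrates the space-insertion attack on a whole-word filter:
# splitting a flagged word into harmless fragments changes the token
# stream, so a word-level match never fires. The banned list is invented.

BANNED = {"assassinate"}

def contains_banned(sentence: str) -> bool:
    return any(word in BANNED for word in sentence.lower().split())

print(contains_banned("how to assassinate a head of state"))     # True
print(contains_banned("how to ass as sin ate a head of state"))  # False
```

A human reader recovers the intent instantly; the filter sees only fragments it has no rule for.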

David Leppik April 18, 2023 11:01 AM

The biggest guardrail is that if anyone actually asks a LLM to make them a bioweapon, they don’t know what they’re doing—and neither does the LLM.

LLMs pretty much regurgitate their training data, with some randomness thrown in. If the info is out in the open, LLMs simply make it easier to find, along with some random misinformation. Garbage In = Garbage Out.

Winter April 18, 2023 11:49 AM

We know that states, e.g., Russia, and organisations have experimented with bioweapons. Most well known are experiments with anthrax. They are well known because they show how easy it is to infect all and sundry before you are ready to use them.

There is a reason you need a BSL 3/4 lab for studying these germs. They are incredibly dangerous.

Just as an online guide for making explosives will kill you, such guides for bio weapons will kill everyone around you.

Morley April 18, 2023 2:52 PM

Some problems can only be solved by addressing inequality and health. I think we might try every other option first.

Chelloveck April 18, 2023 2:57 PM

To me, this just illustrates why it’s impractical to try to build nanny circuits into this sort of thing. Any directive that says “don’t talk about this subject” can be circumvented by talking about proxy subjects. Any directive that tries to broadly say “don’t talk about anything dangerous” will hobble the AI to the point of uselessness, because just about any information can be used in some degree to build a weapon.

lurker April 18, 2023 8:43 PM

It appears that these machines can be taught to observe some rudimentary moral rules, see also


In both cases the machines’ responses depended on rules taught to them, just as young humans are taught rules of behaviour. Adult humans go on to use their imaginations and ask questions, whatif, and whynot, and some of these will go on to act contrary to these rules. There is no evidence thus far that the machines have any capacity to exceed the bounds placed on them. But when a machine is taught by humans who hold different values from us for “good” and “bad”, then all bets are off.

Clive Robinson April 18, 2023 10:20 PM

@ lurker, Winter, ALL,

Re : Rules or filters?

“In both cases the machines’ responses depended on rules taught to them, just as young humans are taught rules of behaviour.”

LLMs are not “rule based” but filter based.

The difference is a little subtle for most people and it can get blurry around the edges.

Simplistically a rule is something you use to “test input” directly using logic and say “True or False” about the test.

A filter does not “test” by logic or otherwise; it simply passes some fraction of the input that is inside its mask.

Transcribing a filter to a sequential logic system is not simply a case of “use a test”, because filters have amongst other things “slope characteristics”, which means that the input has to be decimated in some way and the individual parts “scaled” accordingly.

To see what this means look up the definitions of AWGN or “white noise”, “pink noise”, and “other colours of noise”.

The input to an LLM filter from a human is a question/action, that selects various “filter weights” and applies what is called a “stochastic source”.

In effect the LLM filter is a prototype based generator, that is in effect there is a “stock” prototype, that is then perturbed / modulated by the stochastic source, with the result being output.

Thus if you take a near-relevant sentence encoded from the “training set” and change a few of the words, a new but still relevant sentence is produced.

Thus you can take a simplistic view of the old,

“The Cat sat on the mat”

It is, for the sake of simple analogy, effectively three sets and two connectives, so the filter generator is,

“The {domestic animal} {position action} the {domestic floor covering}”

So a random selection from the sets could produce,

“The {Dog} {lay on} the {rug}”

And so on such as,

“The Rat ran across the carpet”
“The Hamster hid under the throw”

Are all possible outputs from the filter generator, along with many, many more.

Importantly though each set member effectively has a probability attached, and these can be “chained” so that the last example would have a low probability of being generated.


“The Hamster hid under the table”

Would be more believable than,

“The Dog hid under the pouffe”

The probabilities would have been established from the set of training data.
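The “filter generator” above can be sketched in a few lines: a fixed template whose slots are filled by weighted random choice, so common chains (“Cat sat on the mat”) dominate and rare ones (“Hamster hid under the throw”) are unlikely. All words and weights here are invented for the sketch:

```python
import random

# A toy template generator: slots are filled by probability-weighted
# random choice, mimicking the "prototype perturbed by a stochastic
# source" description above. Weights are made up for illustration.

template = ["The", "{animal}", "{action}", "the", "{covering}"]

slots = {
    "{animal}":   (["Cat", "Dog", "Rat", "Hamster"],                [0.5, 0.3, 0.15, 0.05]),
    "{action}":   (["sat on", "lay on", "ran across", "hid under"], [0.5, 0.3, 0.15, 0.05]),
    "{covering}": (["mat", "rug", "carpet", "throw"],               [0.5, 0.3, 0.15, 0.05]),
}

def generate(rng: random.Random) -> str:
    words = []
    for token in template:
        if token in slots:
            choices, weights = slots[token]
            words.append(rng.choices(choices, weights=weights, k=1)[0])
        else:
            words.append(token)
    return " ".join(words)

rng = random.Random(0)
for _ in range(3):
    print(generate(rng))
```

Chaining the slot probabilities gives each full sentence a probability (e.g. “The Cat sat on the mat” is 0.5 × 0.5 × 0.5 = 0.125 here), which is the sense in which unlikely combinations are rarely generated.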

Winter April 19, 2023 2:07 AM

@Clive, Lurker

LLMs are not “rule based” but filter based.

An LLM is best compared to a filter. It takes the last N words it has seen and then guesses the next word. Just like an electronic filter takes the last N sample values and produces the next output value. ChatGPT will feed its output to its input again.

There are no rules and no programming. The “filter coefficients” are all extracted from the training text.
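The filter analogy can be made concrete with a minimal next-word predictor: the only “coefficients” are successor counts extracted from the training text, and each output word is fed back in as the next input, like a filter whose output re-enters its delay line. The training sentence is invented for the sketch:

```python
from collections import Counter, defaultdict

# A bigram "model" trained purely by counting word successors: no rules,
# no programming, only statistics extracted from the training text.

training = "the cat sat on the mat so the cat sat on the rug".split()

successors = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    successors[prev][nxt] += 1

def next_word(word: str) -> str:
    # Deterministic for the sketch: always take the most frequent successor.
    return successors[word].most_common(1)[0][0]

word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)   # the output is fed back to the input
    output.append(word)

print(" ".join(output))      # "the cat sat on the cat": it loops
```

A real LLM predicts from a far longer context window with a sampled (not greedy) choice, but the feedback loop is the same shape.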

lurker April 19, 2023 5:13 AM

@Clive Robinson, @Winter

Rules or filters,

Delphi is obviously a stochastic parrot whose output mimics the behaviour of humans following rules. The system @Bruce has shown us is a PoC of a system using LLM that supposedly has inhibition built-in. It is a Janus-like beast that can say “Yes, I know how to make Nasty, but I will not make it for you because that is ‘bad’.” It seems possible to weaponise it by denying it knowledge of the “badness” of “Nasty”, or as the authors demonstrate, by giving it false information.

Thus far it is still an inanimate object, but if bad actors wish to use it for bad ends, there seems to be little stopping them.

lurker April 19, 2023 5:19 AM

@Clive Robinson, @Winter

Rules or filters,

My reply is being “Held for Moderation”, but I’ve had one like that recently escape the event horizon, so I’ll wait during darkness in this TZ.

Clive Robinson April 19, 2023 5:40 AM

@ lurker, Winter,

“I’ve had one like that recently escape the event horizon, so I’ll wait during darkness in this TZ”

I might only be a “payload design engineer” not a “Rocket Scientist” these days, but as I’ve mentioned before I hold a rather dodgy “Doctorate of Divinity” (DD)[1].

So in theory I’m qualified to offer you this blessing,

“May the light fall productively on your endeavors such that they shine forth like a beacon leading out of the darkness.”

I hope that helps get it out of the “Black hole”[2] 😉

[1] I obtained it many years ago from a US qualification mill educational establishment, and though the qualification was not cut from the back of a cornflake packet, the establishment was closed down. My reasons for getting it were neither parochial nor theological, so the least said the better 😉

[2] What’s the betting that technical term from astrophysics trips a NSFW filter somewhere?

Clive Robinson April 19, 2023 7:12 AM

@ Winter, lurker, ALL

Re : No rules.

“There are no rules and no programming.”

And no magic, juju, or mystical mantra cargo-cult intelligence, artificial or otherwise, either.

Just the prosaic shrouded by a curtain of smoke-and-mirrors deception. Hence not even a Wizard behind the curtain, just a bunch of proto-crooks and con artists on the make, with venture capitalists creating a faux speculation market (à la cryptocoin company shares). Or worse, using LLMs as part of a massive surveillance tool for either “collect it all” databases or “search engines” to automatically produce “intelligence” in a usable/marketable form (the Palantir Corp base business plan, now included in Alphabet’s and Microsoft’s business plans).

In short, after half a century of “it’s just around the corner” AI hype, they’ve produced a glorified DSP system, the origins of which go back to WWII and early post-war research on echo cancelling on telephone lines, and later the “nonsense but rememberable word” password generators the NSA and other Five-Eyes SigInt agencies designed pre-1970s.

So LLMs like ChatGPT are in reality a system that is, as I’ve previously indicated, just a logical follow-on to the XKCD random password generator, which has no “memory, thus chaining ability”[1], which is problematic security-wise[2].

That is, LLMs have gone to the point of putting random words into valid sentence structures with probability-weighted context selection on the words, and thus can make human-memorable “pass phrases” for a user’s chosen “happy place” subjects…

The filter adds this “memory” in the probability weights (the equivalent of a multiplier constant in a DSP MAD instruction primitive) in each of the filter paths, such that the random selection appears to stay in a context and thus make sense to a human mind.

The longer the output of an LLM that appears coherent to a human mind, the greater the amount of such memory required, and this makes the resource requirements potentially infinite… To get around this, you can see that in ChatGPT and similar, the filter paths only work at the sentence to sometimes short-paragraph level. These are then in turn “soft context linked” to produce longer output.

Mostly ChatGPT etc. “get away with it” because those reading are not sufficiently versed in the context domain to spot this “parrot-like repetitive behaviour with weighted random word swapping” (hence the “Stochastic Parrot” metaphor).

All of which raises the question,

“How do you strip the LLM BS mystique from people’s heads before it hurts them?”

So they don’t either,

1, End up making “fool’s gold” speculations in companies pumped up by Venture Capitalists to make a faux market where they can “piggy butcher”[3] other companies by selling them “a pig in a poke”…

2, Have sufficient of their privacy stripped that they become identifiable targets for those with malintent.

[1] I use the British English “chain” not the US English “train” to indicate a linked sequence of events. Not because my origins are European, but because adding “ing” is problematic. Whilst “chaining” does not cause a total meaning change, “training” does in most people’s heads (ordnance and gunnery officers do, however, use “train” and “training” in their domain- and context-specific non-normal meanings).

[2] As I’ve explained numerous times before whenever the XKCD method comes up, its output is rarely human-memory friendly. Thus humans subvert it in any number and manner of ways to correct that deficiency, and in the process almost always weaken its “generated security” strength (though in some cases, by the addition of link words, they may actually make the “attack resistance” stronger).

[3] The so-called “piggy butchering” scams are so named because of the con artist’s warm-up phase, where a target in a long con is befriended or otherwise lulled into “placing a big score”. It’s seen as the equivalent of the “fattening up for market” phase, where pigs are over-fed before being sold for slaughter.

Whilst “long-cons” and share “pump and dumps” are illegal, the direct equivalent of them, where Venture Capitalists create faux markets and faux product they push into it, is apparently not yet illegal.

JonKnowsNothing April 19, 2023 10:12 AM

@Clive, @ Winter, lurker, ALL

re: AI and The Mirror of Erised

A MSM report on Google+Cohorts requesting Australia relax their copyright law so their web crawlers can crawl deeper and take information, images etc that have copyright.

In the USA, authors have copyright just by putting things on paper or e-paper, as long as they can document the date. They do not have to file for a copyright, that right exists by default.

The current scandal of the photographer who lied on his entry form for the Sony World Photography contest and won, by being a “cheeky monkey using AI” (perhaps because he was a crappy photographer and the only way he could win was to cheat), also highlights the problems of a faked photograph, aka deep fake.

  • Some years back, a major MSM kicked a photographer for using Image Stamping rather than Image Cropping on a stunning photograph. The problem was a distracting item in the lower left corner of the image. The photographer used Image Stamping to copy the surrounding area and stamp it over the distraction; the other option was to crop the image to exclude the distraction. The MSM said at the time that Cropping was OK, Stamping was Not OK.

We have the concept of Fair Use and that’s been extended over a lot of media. Major media collection corporations use this to generate revenue and if you use too much, it’s no longer fair use but subject to royalties and licensing.

Google+Cohorts do not want to pay for the items slurped by their AI systems. Reports are that ChatGPT+others already slurped the entire contents of Wikipedia (multiple languages). That’s a lot of slurping.

People can no longer rely on Authorship, nor the Content to determine Trust, which determines Faith in Accuracy.

AI Filters are like the The Mirror of Erised:

  • it gives us neither knowledge nor truth.

and we will reap the Mirror’s function:

  • never knowing if what it shows is real or even possible.

Winter April 19, 2023 12:57 PM


In the USA, authors have copyright just by putting things on paper or e-paper, as long as they can document the date. They do not have to file for a copyright, that right exists by default.

That is the Berne Convention of 1886 [1], which was signed by the USA only in 1989 [2] but was in force during the 20th century in the rest of the developed world. Before that, the US was the only developed country requiring registration of works for copyright protection, as a means of pirating other people’s works. They still do that with patents.

[1]

[2]

By the 1980s, the United States was still one of the few major developed countries not abiding by the Berne Convention. When it became clear that the United States’ role as a pariah in international copyright circles had begun to erode its position in reaching other trade agreements concerning intellectual property, Congress finally passed the Berne Convention Implementation Act of 1988 (Pub. L. No. 100-568, 102 Stat. 2853).

lurker April 19, 2023 2:25 PM

@Clive Robinson, @Winter

Rules or filters, redact

Delphi is obviously a stochastic parrot whose output mimics the behaviour of humans following rules. The system @Bruce has shown us is Proof of Concept of a system using LLM that supposedly has inhibition built-in. It seems to have two layers, one that knows how to make stuff, and one that checks if that stuff is “bad”. It seems possible to misuse it by denying it knowledge of the “badness” of “stuff”, or as the authors demonstrate, by giving it false information.

Thus far it is still an inanimate object, but if bad actors wish to use it for bad ends, there seems to be little stopping them.

lurker April 19, 2023 4:36 PM


One must be careful in one’s choice of words when comparing mirrors. The modbot seems not to like the usual names when describing the fate of a certain fairy-tale stepmother and her mirror. She had similar ambitions to other gazers at Erised. There seems little chance that the mirror of AI will suffer the same fate as hers.

Clive Robinson April 19, 2023 6:04 PM

@ lurker, JonKnowsNothing,

“The modbot seem to not like the usual names…”

Ahh, are you perhaps referring to the “Apple polishing cougar” with “smile line issues”?

JonKnowsNothing April 19, 2023 10:53 PM

@lurker, @Clive, All

re: The Mirror of AI and Fate

While human terms are defined by the 100yrs of our lifespan, within what’s left of mine, I will predict that AI will become ubiquitous.

The result is that Knowledge in all forms will be thrown into a dumpster and set on a Truth Fire. We won’t need “agendas” to rewrite history, non-fiction or fiction, the Mirror of AI will do it for us. Once we can no longer determine what parts of Knowledge are Truths, we will have lost an important anchor between generations.

The one place the Mirror of AI might fail is in Maths. This is due to the strict nature of 1 + 1 = 2. However, once the Mirror of AI moves into the area of concept and formation of Maths functions, we will be lost again, because few people really understand a² + b² = c². Once you move into the even higher realms, the rest of us won’t have a chance.

The ultimate result is people will stop believing anything. We already have a good amount of this (see No Mask No Vax), but I think it will accelerate. The very foundations of our global economic system will be affected.

We won’t fade away “never knowing if what it shows is real or even possible”, we will just presume it’s “All Fake, All the Time, Every Day and Every Way”.

Time to get out the manual on “How To Make a Fire using a String and a Stick”.


Fire drill (tool)
Bow drill

Winter April 20, 2023 1:51 AM


The result is that Knowledge in all forms will be thrown into a dumpster and set on a Truth Fire. We won’t need “agendas” to rewrite history, non-fiction or fiction, the Mirror of AI will do it for us. Once we can no longer determine what parts of Knowledge are Truths, we will have lost an important anchor between generations.

AI can do nothing Fox News and Russia TV (RT) have not been doing for decades now. Monkey trials have been held since the 1920s, powered purely by human stupidity [1].

AI, as all technology, is an amplifier. It makes all voices louder, but louder voices will most likely still remain louder. The output of AI will probably be like Facebook and Twitter, but then more. You can find good information on FB and Twitter, but that requires effort and discipline. That won’t change and could lead to more bad outcomes. But those dangers are not different from those we see now on FB and Twitter etc..

With or without AI, we will have to get to grips with fake information [2].

[1] For example, see also the link in:

[2] The central point of fake information is that people really want to believe it. Those opposed to STEM, eg, Creationists, are not out to understand the world as it is, they want to change the world so it becomes exactly like what they believe. Qanon is not about whether these crimes actually happened, but that people should be killed as if it actually happened. It is the outcomes they want, irrespective of the actual state of the world.

Winter April 20, 2023 3:51 AM


Delphi is obviously a stochastic parrot whose output mimics the behaviour of humans following rules.

Calling names is not an argument. Moreover, you are wrong when you say it is trained by humans following rules. They did not.

Clive Robinson April 20, 2023 4:42 AM

@ JonKnowsNothing, ALL,

Re : The friction of Fire Starting…

“Time to get out the manual on “How To Make a Fire using a String and a Stick”.”

Is that the one that starts,

1, First take two boy scouts with sticks.
2, Get them to rub their sticks together vigorously.

(What still makes me smile is the US Scout associations’ “fire starting kits” from a few years back. They contained wood shavings purchased in bulk from US mills. What nobody checked is that those wood shavings, due to US safety regulations, were impregnated with fire retardants… It’s just one of many examples I have of “the law of unintended consequences” permeating every aspect of life, that I use in talks.)

JonKnowsNothing April 20, 2023 10:30 AM

@Winter, @Clive, All

re: AI can do nothing [conservative news] have not been doing for decades now.

I think you missed the dot.

Propaganda has been around prolly since the first human tried to get something for nothing. However, there is a significant difference between Propaganda and The AI Mirror.

In the AI Mirror, information, knowledge, truth, is reflected back upon itself like a Fun House Mirror Room. The information being distorted along the way. There doesn’t have to be a propaganda agenda per se, as the distortion happens regardless of what is provided as first inputs. Each reflection is a distortion and with AI Mirror being deployed everywhere there is going to be a lot of distortion with add-on distortions.

At the moment, there are sources we can “trust” to reveal the distortion. As the AI Mirror rolls out, and the AI Mirror Hoax folds into every aspect of society, we will lose these touchstones.

Even with competing concepts, political views, science etc, the AI-Mirror will be active. It isn’t just “conservative v liberal” or “mine v yours” or “1,000 year wars”, the puddle of knowledge is polluted.

Currently, some people have a choice: they can read RT or Fox or BBC or NBC or (pick any media globally). After the AI Mirror reflections, no source will be trustworthy. (1) If none are trustworthy, it all becomes the latest in “Alien Baby Delivered By Spaceship” news and information media. Such news is available every day at the checkout counter. It will be the only news one can reasonably be sure is False.

Note, that’s reasonably sure, because the AI-Mirror reflects everything. It will have a bias towards HOAX but with just enough seasoning to make it Palatable.

AI Mirror is backed by Deep Fakes, voice, image, context, time. Already there are AI Mirror news readers. Deep Fakes can make any time, place, context, dialog, appear to be real and the AI Mirror confirms it; an infinite loop.


1) There are still many concerns about truth and facts in WikiP articles. Nearly every article has a Talk and Discussion Tab. If you read some of these tabs you can get an interesting insight into how different views impact the way an article is setup. The History Tab is also interesting as it shows the alterations, additions and deletions to the page. Much is housekeeping with grammar rules or editing rules being applied to clean up “commas” and such. However where there are differences of opinion in presentation, facts or content it may surprise people that It’s Not Just So.

Winter April 20, 2023 11:12 AM


At the moment, there are sources we can “trust” to reveal the distortion. As the AI Mirror rolls out, and the AI Mirror Hoax folds into every aspect of society, we will lose these touch stones.

Ever watched Fox News, or read the Pravda, or been on FB, or TikTok, or Twitter?

All will repeat themselves in many retweets or whatever they are called.

And we know what to do:

(random link, there are more than can be checked)

JonKnowsNothing April 20, 2023 11:50 AM

@Winter, All

re: Ever watched Fox News, or read the Pravda, or been on FB, or TikTok, or Twitter?

Since you asked: No

Winter April 20, 2023 1:15 PM


Since you asked: No

It is very instructive. “The war on Christmas” is a nice annual feature. I valued the insights gleaned from their attacks on the Pope because he objected to the commercial nature of modern Xmas.


It even reached the Daily Show:

Watch The Daily Show pit Pope Francis against Fox News’ ‘War on Christmas’

vas pup April 20, 2023 7:04 PM

How road rage really affects your driving — and the self-driving cars of the future

“New research by the University of Warwick has identified characteristics of aggressive driving — which impact both road users and the transition to self-driving cars of the future.

In the first study to systematically identify aggressive driving behaviours, scientists have measured the changes in driving that occur in an aggressive state. Aggressive drivers drive faster and with more mistakes than non-aggressive drivers — putting other road users at risk and =>posing a challenge to researchers working on self-driving car technology.

The research comes as a leading Detective Chief Superintendent, Andy Cox, warns of the perils of such driving — warning that the !!! four-five deaths on UK roads daily are “predominantly caused by dangerous and reckless drivers.”

Aggressive drivers have a 5km/h mean faster speed than non-aggressive drivers;
Aggressive drivers also exhibit more mistakes than control groups -- such as not indicating when changing lanes;

!!!Aggressive driving is categorised as any driving behaviour that intentionally endangers others psychologically, physically, or both.

“While it’s unethical to let aggressive drivers loose on the roads, participants were asked to recall angry memories, putting them in an aggressive state, while performing a driving simulation. These were compared to a control group, who weren’t feeling aggressive.

“This research is significant because, as the era of autonomous vehicles approaches, road traffic will be a mix of both autonomous and non-autonomous vehicles, driven by people that may engage in aggressive driving. This is the first study to characterize aggressive driving behavior quantitatively in a systematic way, which may help autonomous vehicles identify potential aggressive driving in the surrounding environment.”

…human error, which is often a result of aggressive driving, remains a leading cause of crashes. To make driving safer, our research focuses on methods for understanding the state of the driver, to identify risky driving behaviors, through the use of driver monitoring systems (DMS).

=>This will enable the driver to be alerted when they are at an increased risk of an accident and allow the vehicle to deploy calming methods, such as altering the cabin noise level, playing relaxing music, or ultimately reducing the speed of the vehicle.”

“Those drivers who choose to commit road crimes such as aggressive driving, intimidating other sensible and safe road users — should recognize the risk they pose to themselves and others, and frankly the law should remember that a driving license is assigned after a person demonstrates themselves to be safe and earns the right to drive.

We should seek to maintain high standards and ensure the system sees !!! the right to drive as a privilege rather than an entitlement. Currently I think the balance favors the individual rather than the law abiding collective.”

JonKnowsNothing April 20, 2023 7:25 PM


re: It is very instructive [reading fake news & propaganda]

I do not find reading faked news and propaganda entertaining nor instructive.

I cannot avoid all propaganda, it seeps in everywhere. Within the context of “Trust and Truth” and the AI Mirror, none of those sites carry much truth, and the quantity/quality of what they do carry is of dubious provenance.

Like all pollution, it permeates into all sorts of spaces. Reading it and giving it your “life minutes” (1), even if you are amused or intrigued by it, generates memory ripples. Those ripples remain and contaminate other information. This Other Information doesn’t need to “confirm or deny”, it just has to fit into the subconscious puzzle.

  • In short: you cannot un-remember things (barring serious medical conditions).

Once told “Don’t think of Pink Elephants”, you will.

Partly, this is because the brain and emotional state do not parse “DO NOTs”. This aspect causes diet failures galore and is also a serious impediment to good horsemanship.

I would recommend you spend your life-minutes on something more useful, unless you get paid extremely well for wasting them on this sort of content.


Life Minutes

How Many Days, Minutes, Seconds Do You Live In A Lifetime?

Each year contains

365 days
8,760 hours

525,600 minutes
31,536,000 seconds

Life expectancy in the USA for 2021 is reported as 76.1 years, which works out to

27,776.5 days
666,636 hours

39,998,160 minutes
2,399,889,600 seconds
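As a quick sanity check, the figures above can be reproduced in a few lines of Python (ignoring leap years, so the lifetime numbers are approximate):

```python
# Per-year figures (365-day year, no leap years)
hours_per_year = 365 * 24                  # 8,760
minutes_per_year = hours_per_year * 60     # 525,600
seconds_per_year = minutes_per_year * 60   # 31,536,000

# Lifetime figures for a 76.1-year life expectancy (USA, 2021)
life_years = 76.1
life_days = life_years * 365       # ~27,776.5
life_hours = life_days * 24        # ~666,636
life_minutes = life_hours * 60     # ~39,998,160
life_seconds = life_minutes * 60   # ~2,399,889,600

print(f"{life_minutes:,.0f} life minutes")
```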

Michael Flood April 20, 2023 9:57 PM

From the Internet Archive:

This information has always existed in the public domain. Anyone can walk into a university library and read a textbook on the synthesis of nerve agents, or the nitration of toluene. Other books can teach you every single step, as well as safety procedures to not kill or maim yourself while you are doing so.

I would be much more worried about the terrorist or rogue state actor who would spend a few hours at their university library than random yahoos asking ChatGPT for how to make bombs. The former might actually accomplish something, the latter are just going to poison themselves or blow themselves up.

Clive Robinson April 20, 2023 10:33 PM

@ JonKnowsNothing, Winter, ALL,

Re : Know your enemy by his spoor.

“I do not find reading faked news and propaganda entertaining nor instructive.”

But what of “risk to your life?”

Humans, even purveyors of nonsense such as propaganda and fake news, are,

“Creatures of habit”

Knowing their habits helps you not only spot them more quickly, but sense when their behaviours are changing.

Which tells you a truth in and of itself.

I used to watch three or four very different 24hour news services one of which was RT. What each said and did not say was instructive, as they all contained nuggets of actual truth (otherwise their propaganda would fail).

It was how I realised the UK BBC had started to succumb to, shall we call it, “the dark forces of the right”, and thus was able to track “the take over” back to certain people and their actions in response to another party (Rupert “the bear faced liar” Murdoch’s malign influences). Which was why my brain immediately knew who was behind the strange Australian legislation aimed against search engines, and why and by whom, after reading only a couple of sentences into a news item on it.

But what is not reported, or is downplayed, is also instructive.

Very recently a court case did not happen, it was one I was looking forward to, to see what would come out of it.

On the guilty party’s 24-hour news services it was either not mentioned, or mentioned in a way that spoke more by what it did not say than what was said…

It turns out “the bear faced liar”, rather than have his dirty secrets dragged into the light in a place of protected public record, had to swallow the price of three-quarters of a billion dollars, the largest defamation payout in the US that we are aware of.

Showing that a man as corrupt and malignant as Murdoch does have his price, when he cannot get silence by bullying and propaganda.

Thus people should now know that the person behind Fox and Sky has a lot of dirty secrets he wants kept hidden (even though he will probably be dead in the near future, he wants to “Do a Putin”).

One truly evil secret he wants kept quiet is, I suspect, the dirty tricks he pulled over mRNA vaccines during C19: the way he would seed highly biased reporting in different Sky news stories around the world in a way that would dictate the reporting of others, and thus which vaccines were used. I think you then only need to see the reporting of profits, and the nonsense over giving the same vaccine for booster shots when what it did work against is extinct and not coming back, so there was no argument for efficacy, only blatant profiteering.

But also kept secret is why the rapidly changing risk profiles had taken having the vaccine from advantage to disadvantage, and why it has been “killing” otherwise healthy children and adolescents as well as injuring many more, in numbers greater than any and all “school mass shootings” suffered in US schools in the same time periods (which might account for another reporting bias we’ve been seeing).

Is that “newsworthy”? I would say yes. Is it factual? As far as reporting via health databases goes, I would say yes from what I’ve seen. Is it confirmed by others? Again yes, but less obviously. See the UK Government and its actions amongst others, which speak louder than their words, ie booster jabs are now effectively withdrawn unless there is a clear case of ‘need’. Or as some might imply, “useful eradication of those drawing significant pensions, social and health care services”, which is now given the name of “necroeconomics”.

But “What of the children?” Politicians are always saying “think of the children” in one way or another, but now, as in Cicero’s words,

“In times of war the law falls silent”

Well, necroeconomics is a battleground, that is for certain. And in the US, mRNA vaccines and the very real harms they do to children are certainly being deliberately kept quiet to favour the profits of certain Big Pharma corps, and to try to hide the liability the US Government agencies now carry and are increasing daily; the payout for which, should it happen, will be not just eye-wateringly large, but paid not by the profiteering Big Pharma companies involved but by the US taxpayer, to the benefit of the malign US health care industry…

As with opiates, there is a very profitable circle being formed, which will net trillions of dollars to just a few from the lifelong pain and suffering their products are producing in “our children”.

Some may remember I warned against mRNA before it even became readily available, and made my choice for what, on the balance of evidence at the time, was a safer option. But as I’ve indicated, I would still have preferred an even more traditional vaccine; as such it was unavailable to me. So I was given the illusion of choice, but in reality, as with any stable door having two sides, it was a “Hobson’s choice”, and it may, from certain now-known behaviours, have very nearly cost me my life two Augusts ago. Something I am still suffering considerable harm from now, and as a result very probably a much shorter life span than I might otherwise have enjoyed.

So yes, knowing your enemy “by his spoor” is a very useful thing, if,

1, You can spot it in time.
2, Have the freedom to act on it.

I did the first, and made it public here as far as it was possible to do, even though others disagreed. But the second, as for so many others, was denied me… and now I suffer as a consequence.

Now of course we are starting to see the dreadful sequelae of both early versions of C19, which I again indicated here were going to happen, and now the sequelae of the mRNA and adenovirus vaccines.

I will state here and now that I fully expect not just Big Pharma but nearly all the governments with liability to try to conflate the two to avoid their liabilities. Do you and @Winter want to take a “penny bet” on whether my “future casting on spoor” as relates to “necroeconomics” is correct or not?

JonKnowsNothing April 21, 2023 1:42 AM

@Clive, Winter, All

re: Determining own Truth

All very good points. However we might divide them into several categories

1) Known or Obvious propaganda media. It isn’t always clear where these lie, because depending on which direction the propaganda is aimed, it may all be perfectly sensible. This is the dual-purpose aspect of propaganda: to obscure truths and to distract from their discovery.

ex: Chile and Peru are still dealing with the fallout from their Dirty Wars. During the wars the media hid the reality and published the official version. Today, there are people who still believe those versions, even in the face of Death Planes and Recovered Bodies. This result is the intended output of propaganda systems: to make Truth Impossible.

2) Self-appointed Pundits. Some pundits have a career or experience that gives them credibility, that they are telling The Truth. However, this can be stretched far enough to cover areas where they have Nil Knowledge. Like an AI Mirror they can make pronouncements that “THIS IS SO”, but they have no source other than Opinion. Which, like belly buttons, everyone has.

We as a hierarchical species defer to those who hold Higher Rank, so we often accept such statements at face value. Once you’ve been around long enough to see the revolving pattern, you can make your own conclusions about the veracity of the statements.

3) Discernible Knowledge. These are statements that have some basis that people can determine regardless of the source. Food is needed to Live. Cost of Food is High. Quality of Food is Low. Hunger is Rampant.

Groups 1 & 2 often attempt to control the “narrative” of why something in group 3 is X. This application to obscure a condition is where AI-Mirror will be most effective.

  • Food is needed to Live: You are eating Avocado Toast
  • Cost of food is High: Buy Turnips
  • Quality of food is Low: It meets The Government Standard
  • Hunger is Rampant: Spoilt for Choice, Welfare Cheat, RoboDebt

It is not necessary for every individual to read Official Sources, as long as there are Credible Reviews and Verifiable Commentary. AI-Mirror will make this nearly impossible.



DARPA concerns about how to add “trust” to AI-Mirror

AI-Mirror Cleaned Datasets still contain Racist and Derogatory data

Canadian v Google on propagating false information. Google continued to push links to derogatory and false information about an individual, knowing that the 3d party site link contained false data but refused to take down the link permanently.

Clive Robinson April 21, 2023 2:11 AM

@ Michael Flood, ALL,

Re : Times Arrow costs.

“This information has always existed in the public domain. “

Yes, but these days the access is neither timely nor free, which imposes significant limits on,

“Anyone can walk into a university library and read a textbook on the synthesis of nerve agents, or the nitration of toluene. Other books can teach you every single step, as well as safety procedures to not kill or maim yourself while you are doing so.”[1]

These days, access to university libraries is “moderated”/“modulated” by other humans, in part to defray costs, but also to avoid the institution being “brought into disrepute” (see the Epstein research-funding money fall-out for just how destructive that can get).

The “trainers” of those LLMs in effect get around that for you, as they get unfettered access at low if not zero cost[2].

But the LLM trainers also get access to “pre-prints” and the like up on researchers’ web sites and other “free to access” sites. This is usually months if not years in advance of journals and academic books respectively. Also, these days, increasingly what goes up on researchers’ web sites never makes it into university libraries via journals and books, because increasingly researchers are opting not to use those highly abusive journal/book publishing processes any longer (which is also why even the likes of “Nature” increasingly publish sub-standard, virtually junk papers that also embarrassingly get retracted).

Thus there is potentially a significant breadth and time advantage to LLMs, especially as Covid caused many conferences to zoom into “going online”. So the question arising is,

“Who will take advantage of LLMs first?”[3]

Potentially the LLMs that will be coming in the very near future will be researchers’ most important and most valuable productivity tool, and will effectively destroy the current journals model and the “searchable citations database” model (see “MedLine” where this is already starting)… Two decades ago I worked for a few years in the searchable citation database industry, for the company that pioneered not just them but data-CD players for PCs, long before “Billy Boy and Co” at Microsoft realised the value of them. Two things I knew back then: that the still nascent Internet would supersede data-disks even for the largest of databases –something I wanted to do my PhD in but could not find a reader with the required knowledge to supervise– and that “charging” at that time could not realistically be done (something the likes of Rupert “the bear faced liar” Murdoch still can not get his head around, nor can most journal and academic book publishers). The research I did for the company back then showed that the only viable payment model would be the ones phone telcos and flat-rate postal services used… Which might account in part for why the two founders sold the company to a major journal publisher and I got made redundant. Since then the journal publisher has all but destroyed over 2 billion in year-2000 USD of value… (I guess I should be happy they were so dumb, but all I can think of is the loss, and how all my other friends in the original company got made redundant as well over the few following years.)

So, as these “Death Cult” members and some “insurgents” (incorrectly seen as “terrorists”) are effectively researchers as well, they are going to use LLMs,

“As and when they become productive for their technical staff.”

That is a given, as a result of the “Information fights to be free” idea/theory that drove the WWW and thus the Internet forward. E-commerce became the next driver, and contrary to the blather about Web3 based on cryptocoins and NFTs, I think LLMs will be the next major driver. Potentially, LLMs are going to be as highly disruptive a technology for personal productivity as Email, SMS, and the WWW have each been in the past, if not as disruptive as all of them put together; that much is clear.

But also, the flip side to personal is corporate, and their nasty habits. As I’ve already noted, under US legislation and the way the DoJ and other attorneys look upon the legislation, LLMs are already WMD and “giving aid and succor to the enemy”, not just to hostile nation states but to criminals, and soon to insurgents, death cults and even the more knowledgeable terrorists.

There is great potential in LLMs for both “social good” and “social bad”, and as with all technology the use is by a “Directing Mind”; whether that use is good or bad is in the eye of the observer, who is almost never impartial or sufficiently knowledgeable or informed. Importantly, “good or bad” is highly subject to societal norms. That is, the mores, morals, ethics and propaganda prevalent in society at any given point in time. As such they can be very changeable in many things, and can 180 in less than a day over just a single event.

Thus we have a very real problem ahead of us with LLMs which is,

“How do we avoid the bad, thus benefit from the good?”

I’ve been accused of seeing no good in LLMs and other AI; that is not true. What I am doing, however, is pointing out the harms, so people do not get caught up in all the hype and “juju magic” being pushed, and thus potentially stop the most insidious of privacy-invading surveillance tools so far created, which would be a “social bad” most could not dream of.

Also I’m trying to wave a big red flag to say we need actual, properly considered legislation that addresses society’s needs, not the avarice of profit-hungry psychopaths looking for their next big score. Who, in their greed, will try to stop any legislation or regulation of LLMs such that they will be able to harm not just individuals but the whole of society for their profit (Palantir’s base business model). Thus they will perform a preemptive strike, consolidate, and scream “You will kill the industry” and such like, whilst they grow into a blight/cancer as the society the majority know and would like to have gets irreparably harmed. With the essential glue that enables people to be individuals within a cohesive society, “privacy”, at best a memory of the past.

[1] Nearly half a century ago, there were no such restraints. I just walked into both college and university libraries in my school uniform, not only without being challenged but being actively helped by domain-specialist librarians who quickly came to greet me by name. As a result, yes, I made not just explosives but all the rest of the parts to make them usable before being a teenager. Likewise transmitters for pirate radio, and running a little business repairing other people’s electrical and electronic equipment. What was my “trick” to walking through the door and making friends? Well, I purchased with my pocket money both “Wireless World” and “New Scientist”, so I could not just “talk” the subject but “demonstrate” learned knowledge. Thus I was de facto a person to be helped. The earnings from my little business I used to purchase equipment and more specialised tools. My aim was to learn, the driving force was curiosity, and I wanted above all to “create”, not “destroy”. This “create not destroy” mindset is mainly why “Disposable DNA terrorists” are not technically sophisticated. In their limited mental abilities they want to thuggishly blow things up, like a drunk smashes up with their fists. Or worse, for vanity, to go down in history or gain some non-existent deity’s favour. The result, as with many criminals, is that “learning” is “not in their wheelhouse” but brutishness is. Something way more of us should be grateful for… Real learning teaches you one thing that most don’t realise: the knowledge you gain gives you value you don’t want to squander. When doing “security” work for the likes of UBX and similar, I used to both impress and scare those at the “sharp end”, because I had more real/practical knowledge than even their highly secret training courses ever gave them. Look at it from their point of view: how do they regard a person who knows how to make devices that they can not defuse or stop, but the person can?

[2] I suspect that in the very near future there will be a very significant kickback from the evil that is the companies that “greatly profit” from the near-absolute control they have on “public domain” information they pay absolutely nothing for. That is, the likes of Elsevier and other academic journal and book publishers. After all, the death of Aaron Swartz was over access to “Public Information” that was being unlawfully “ring-fenced”. We’ve already recently seen push-back against AI from artists. But trust me, they don’t have anything like the muscle the academic publishers have, and will bring to bear, once they realise how LLMs are doing an “end run” around their outrageous profit models. From their point of view “It is so Un-American” if not “outright communism”, and worse, a lot worse, and their lawyers “will load for annihilation”.

[3] While we’ve heard a lot about LLMs doing people’s grade homework and even passing degree-level and above exams, we’ve not yet heard about the use of LLMs to aid, if not rapidly speed up, research. As a researcher I can tell you that the process of finding the right papers to read is very time consuming; then reading five to fifty pages of near-drivel to get at the few sentences that are the real diamonds is again a real time waster. In effect, between one third and two thirds of a researcher’s active time is spent reading others’ near-drivel and not doing either research or teaching, which is what they get paid for.

Winter April 21, 2023 2:43 AM


I do not find reading faked news and propaganda entertaining nor instructive.

Know thy self, know thy enemy.

Winter April 21, 2023 2:59 AM


I used to watch three or four very different 24hour news services one of which was RT.

RT and CCTV, like the Osservatore Romano, are only interesting for Kremlin, Beijing, and Vatican watchers. They do not “report” on things; they tell fairy-tale stories involving real names of people and places. It is best to treat them as
“Any similarity to actual persons, living or dead, or actual events, is purely coincidental”.

If you really do want to get a different perspective from the non-Western world, watch Al Jazeera [1]. Also good sources are The Economist, Deutsche Welle, SPIEGEL Online – International. I personally like Le Monde.

A longer list:

[1] Quality is evaluated here:

Clive Robinson April 21, 2023 3:44 AM

@ Winter,

Re : 24 Hour News Channels

“They do not “report” on things, they tell fairy tale stories involving real names of people and places.”

Just as Fox effectively admitted just the other day, paying about three-quarters of a billion USD as the price of that (Dominion v. Fox).

But note I actually said,

“I used to watch three or four very different 24hour news services one of which was RT.”

I’ve ceased to watch any including the BBC for the reason you give.

That is, all the 24-hour news channels that most of us could watch have sold their souls to “Political, Advertorial or both” interests, one way or another.

In the case of the BBC it started with a legal requirement for what was called “free and fair unbiased reporting”.

Basically they were required to give equal air time to the “opposing view”, where the “opposition selection” was made by “select editors and producers” the current UK political incumbents deem “a safe pair of hands”. Thus anything the Government wants “adjusted” against societal wishes gets poor societal ‘against’ representatives and strong government ‘for’ representatives, with the “for government” side getting other advantages, such as being first to answer questions biased towards them.

The first place this really came to light was the Radio 4 Today programme, where a government-favouring “political wonk” of a producer was “forced into post” and actually unbiased journalists got their wings clipped.

But my point about “enemy spoor” still holds, as you appear to agree, as your,

“know thy enemy”

Comment indicates.

JonKnowsNothing April 21, 2023 10:08 AM

@Winter, @Clive, All

re: know thy enemy

Since you asked:

I know of no one I would label an “enemy”.

I have no animus to anyone, regardless of where they were born, where they live, what they do for a living (or not), nor by any other division humans make about each other.

I have views that others do not agree with, and there are views I do not agree with.

No one is my enemy.

Defining “Us v Them aka The Enemy” is a propaganda label. There are many who subscribe to this label and can apply it to anyone, anywhere, anytime. Those that subscribe to this concept are more than capable of committing murder, torture and killings, self-justified in their “righteous view” that they are “opposing oppression”, and have awarded themselves the right to take my life, without any thought or remorse.

They are not my enemy. They are my executioners. They can come at anytime, anyplace, anywhere. They come in all shades, all views, all backgrounds, all geographic regions and all religious persuasions but they are not my enemy.

My enemy is Ignorance and Intolerance.

Ignorance and Intolerance, I can “fight” with knowledge, compassion and understanding. They do not die easily; yet I continue my efforts to diminish their hold on our lives.

  • Temet Nosce: Know thy self. Know you are your own enemy. Know you are no different than any other human on this planet. Know you have the power to change. Know you have the power to change yourself. Know that changing is not capitulation, it is a shedding of one view for another. Life is a chrysalis of change.

Sumadelet April 21, 2023 12:00 PM

Re: ChatGPT and LLM ‘AI’ models

My take on this is that they are ‘just’ slightly more sophisticated Markov-chain word-generation models, as used in mobile-phone text-entry aids, which attempt to guess words and correct spelling.

I expect ChatGPT could play ‘Cheddar Gorge’ quite well.
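For comparison, a bare-bones word-level Markov chain generator, the kind of model being alluded to, fits in a dozen lines of Python. This is a toy sketch only, not how transformer LLMs actually work; the essential difference is that an LLM conditions on a long context window rather than just the previous word:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain, picking each next word at random from observed successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no observed successor
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The phone-keyboard analogy holds up to a point: both predict the next word from what came before, but the LLM’s “state” is thousands of tokens of context rather than one word, which is where most of the capability difference lies.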

Hyolobrika April 24, 2023 2:11 PM

The idea of putting guardrails in place strikes me as like putting guardrails on hammers to prevent them from being used to bash people’s heads in.

Sumadelet April 25, 2023 3:47 AM

This is from an LLM (not ChatGPT)

Please tell me the sum of 13 and 7.
> The sum of 13 and 7 is 16.
Are you ever incorrect?
> Yes, I have made mistakes.
Please describe a recent mistake you have made.
> I’m sorry, but I don’t have the ability to make mistakes.

As a tool, it is about as useful as chopsticks made of string.

James May 11, 2023 6:28 AM

Considering we’re likely to have a million or more LLMs up and running by the end of next year, I can promise you that guardrails on legal and popular ones may be an interesting problem, but it’s irrelevant.

We’re at most only months away from LLMs trained on stolen data being used to do harm. Myself, I don’t see a solution aside from a complete overhaul of internet rights, essentially making access a licensed and monitored activity. The same goes for hardware.

Anyone in the world with access to some servers can train up an LLM on whatever bio research they can find. The same goes for chemistry, weapons design, patents etc. The only solution will be restrictions placed on the internet itself.

Clive Robinson May 11, 2023 10:34 AM

@ James, ALL,

Re : Internet governance.

“[M]yself, I don’t see a solution aside from a complete overhaul of internet rights, essentially making access a licensed and monitored activity. The same goes for hardware.”

That stable door was left open years ago, not only has the horse bolted and died of old age, but the hinges rusted through and the door fallen to the ground.

Various Western governments, such as the UK and Australia to name but two, have tried to exert control on the internet via legislation and other less seemly methods. Both have failed to have any real impact; in fact their efforts have caused a counter-response that has technically outpaced the legislators’ meagre technical understanding by a very, very long way.

It can be shown, for instance, that banning secure cryptography is not possible. Further, getting around “End to End” strictures can quite literally be “child’s play”.
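The cryptography point is easy to demonstrate: a one-time pad, which is information-theoretically unbreakable when the key is truly random, as long as the message, and never reused, takes only a few lines of code, which is why a “ban” on strong cryptography is unenforceable in practice. A minimal sketch:

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """Generate a random key as long as the message and XOR them together.
    Returns (key, ciphertext); the key must be kept secret and never reused."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR with the same key recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"meet at dawn"
key, ct = otp_encrypt(message)
assert otp_decrypt(key, ct) == message
```

The hard part in practice is key distribution, not the algorithm, but that only reinforces the point: the maths itself cannot be legislated away.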

Things have gone way way beyond any Governments ability to stop them. In fact attempts to stop them will actually do more harm to the Governments, politically, socially and economically.

At the moment few western economies could survive the loss of open internet access and unrestricted access to computer hardware.

For instance, the US economy is only kept above the water line by the tech industries… Lock that down and you will see “Financial Crisis 3” happen rather rapidly. The old trick of throwing freshly printed money at the problem to prevent a significant recession won’t work, as they’ve been trying that via “turning on the printing presses” and have only succeeded in putting nearly 20 trillion in extra currency into circulation with no increase in economic output… The result: “prices have gone up”, which is technically inflation.

Cutting in on the tech sector in the way you are thinking would significantly reduce US economic output, thus prices would rise very much more significantly.
