On the Catastrophic Risk of AI

Earlier this week, I signed on to a short group statement, coordinated by the Center for AI Safety:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The press coverage has been extensive, and surprising to me. The New York Times headline is “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” BBC: “Artificial intelligence could lead to extinction, experts warn.” Other headlines are similar.

I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war—which is to say, a risk worth taking seriously, but not something to panic over. Which is what I thought the statement said.

In my talk at the RSA Conference last month, I talked about the power level of our species becoming too great for our systems of governance. Talking about those systems, I said:

Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.

That was what I was thinking about when I agreed to sign on to the statement: “Pandemics, nuclear weapons, AI—yeah, I would put those three in the same bucket. Surely we can spend the same effort on AI risk as we do on future pandemics. That’s a really low bar.” Clearly I should have focused on the word “extinction,” and not the relative comparisons.

Seth Lazar, Jeremy Howard, and Arvind Narayanan wrote:

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

I agree with that, and with their follow-up:

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom.

This is what I wrote in Click Here to Kill Everybody (2018):

I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future. AI and intelligent robotics are the culmination of several precursor technologies, like machine learning algorithms, automation, and autonomy. The security risks from those precursor technologies are already with us, and they’re increasing as the technologies become more powerful and more prevalent. So, while I am worried about intelligent and even driverless cars, most of the risks are already prevalent in Internet-connected drivered cars. And while I am worried about robot soldiers, most of the risks are already prevalent in autonomous weapons systems.

Also, as roboticist Rodney Brooks pointed out, “Long before we see such machines arising there will be the somewhat less intelligent and belligerent machines. Before that there will be the really grumpy machines. Before that the quite annoying machines. And before them the arrogant unpleasant machines.” I think we’ll see any new security risks coming long before they get here.

I do think we should worry about catastrophic AI and robotics risk. What worries me is that they affect the world in a direct, physical manner—and that they’re vulnerable to class breaks.

(Other things to read: David Chapman is good on scary AI. And Kieran Healy is good on the statement.)

Okay, enough. I should also learn not to sign on to group statements.

EDITED TO ADD (9/9): The Brooks quote is from this excellent essay.

Posted on June 1, 2023 at 7:17 AM · 51 Comments

Comments

fib June 1, 2023 8:48 AM

Nothing desensitizes one more to the romanticism of the Great Artificial Intelligence Takeover than spending days on end painstakingly placing squares around bulls and cows grazing in photographic images of a squalid savanna(*). In the drudgery of labeling you get to see the guts of the system. It is interesting that greater attention is not paid to this somewhat shady part of the industry, where real humans are involved, often outside the acceptable bounds of human dignity[1]. By experiencing this painful process a clearer perspective unfolds.

Missing from the media commentary is a decent ethical analysis of the whole situation created by the LLMs. Blinded by the brightness and magnitude of events, we stand paralyzed in the ethical field. It is essential to discuss in greater depth the possibility of a sentient entity emerging in a lab, and to maintain the ethical horizon. We cannot escape responsibility.

What is happening with ChatGPT does not at all resemble what was prophesied by Nick Bostrom[2] regarding the sudden emergence of AGI: from an initial spark the entity would grow exponentially like a Big Bang, quickly taking over all connected networks. If what we see is AGI it is of a type not yet described in any popular scenario, certainly not in Bostrom’s.

As a materialist, I am inclined to conclude that consciousness is born out of the laws of physics and that it is independent of the physical substrate. I consider neural networks to be an impressive intellectual achievement. It seems clear to me that AGI will emerge from neural networks, since that is how it arises in the life forms we know.

The layered architecture of the typical artificial neural network is a pretty apt analogy to the real thing. From the explorations carried out so far via brain scanning, we learned that neural connections occur in specialized areas of the brain, not exactly in neat stacks or queues, as in a convolutional network graph, but in neural topologies arranged in 3d, in the most diverse configurations.

Other brain structures are involved in the neural processes, so, in order to replicate the functioning of a brain NN, it would also be necessary to provide analogs to those structures, such as glial cells, which certainly play a big role in the activation and moderation of synapses [weights].

In order to be human-like, an NN-based AGI needs to receive information from sensors of all kinds [it has to be able to sample at least five large categories of physical stimuli, like us] and combine them dynamically as the frames of reality arrive. In order to learn, it has to tokenize and annotate the complexity of the information into hundreds of thousands, even millions, of classes. These are daunting tasks that will demand copious resources. Here I see another parallel with natural intelligence, as we humans also learn by ‘labeling’ the information from the senses. We call the labels we learn ‘concepts’ [ML classes?]; our conceptualization of the world is equally obtained through reinforcement learning.

Neural networks seem to be, in fact, the way to AGI. But we are not close to achieving it, as most regulars know. If we want to reach the quality level of the neural networks that we carry in our heads, we have to learn about the role of other structures [the aforementioned glial cells and many others] and then learn to replicate them in computational models, in addition to meeting the hardware requirements [the substrate].
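For a concrete picture of that labeling-to-learning loop, here is a minimal Python sketch (dataset and model sizes are purely illustrative): hand-labeled digit images stand in for the annotated savanna photos, and a small layered network learns to map new inputs onto the human-supplied classes.

```python
# Humans supply labeled examples ("classes"); a small layered network learns
# to map unseen inputs onto those labels. Purely illustrative sizes and data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)               # hand-labeled digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
net.fit(X_train, y_train)                         # learn from the labels
print("accuracy on unseen labeled data:", round(net.score(X_test, y_test), 3))
```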

Neural networks, even those currently based on statistics and computing power, will cause drastic corrections in various aspects of life, starting today. However, if you want a more powerful horseman of the apocalypse [and with better timing], look to the social networks. Perhaps the unraveling of civilized society on account of the relentless action of social media has already begun and we find ourselves hopelessly beyond the event horizon.

(*) Strong words hinting that the author is human

[1] https://time.com/6247678/openai-chatgpt-kenya-workers/
[2] https://nickbostrom.com

yet another bruce June 1, 2023 9:04 AM

As time goes by, machines become capable of doing more and more of the tasks that once defined an individual’s profession, livelihood, and often self-esteem. Chess grandmasters and 10-dan go players are a brave and noble few. Deep Blue and AlphaGo were greeted as technical marvels and curiosities rather than as doomsday threats. Only a handful of people, if any, make a living telling cats from dogs, so that quite miraculous breakthrough ruffled few feathers. Now we have machines that can write prose and the pundit-sphere explodes. Everyone loses their minds. Well, everyone who writes prose for a living loses their mind. Present company excluded.

It seems possible to me that in 1983 NORAD had computer systems that could handle most of the functions of the WOPR program depicted in the movie Wargames. What no program could do in 1983 was converse in natural language. Poor access control to WOPR was a security risk. An interface that supports natural language was not.

Clive Robinson June 1, 2023 9:16 AM

@ Bruce, ALL,

Re : Technology is agnostic to use.

“I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war—which is to say, a risk worth taking seriously, but not something to panic over.”

As far as I’m aware no technology mankind has made so far actually poses a direct risk of human extinction, nature on the other hand…

And in all honesty AI is currently more a hype-bubble to fleece investors with than anything else (though that will change eventually).

It’s those designing technology, as well as those using technology, we really should be watching, and watching intently.

Because at the end of the day technology does what it does at the behest of a “Directing Mind” with “Sufficient Agency” to do harm. They are the ones that will eventually be the cause of us going extinct before our natural time.

Oh and to be honest, I suspect that if a human-designed and human-used piece of technology does get to cause our extinction, it won’t be a deliberate action as such. It will be down to greed and lack of foresight more than anything else.

It will also probably be a slow extinction over several generations, one that we could easily have prevented, but nobody will, because of XXX stupidity in the self-appointed.

Vesselin Bontchev June 1, 2023 10:14 AM

I am not afraid that AI will become as smart as (or smarter than) humans. I am afraid that humans will become as stupid as AI.

Clive Robinson June 1, 2023 10:26 AM

@ Bruce,

Another one to read is from “Katherine Fidler” in the UK’s Metro “give away paper”.

The thing is, the printed version is quite a bit different from the online version:

https://metro.co.uk/2023/05/30/artificial-intelligence-poses-risk-of-extinction-experts-warn-18868186/

The following paragraph is in the print version (as para 4),

“The experts are worried about the threat to jobs posed by the machines –and resulting political and economic upheaval– rather than that they will make up their minds to kill us.”

But it is not in the online version I was just served (which might change, as has happened in the past with other UK “online newspapers” from a certain stable…)

LeeC June 1, 2023 10:55 AM

… the “AI is Scary” Chicken-Littles are an amusing but tiresome crowd of emotional fretters.

AI is somehow a big catastrophic threat to them, but they daily ride care-free in automobiles — which statistically are by far the biggest risk to their lives.

“AI” is simply an artificial set of inert Zeroes & Ones — totally risk-free to humans. … very unlike pandemic viruses & nukes.

Software is not inherently risky — it all depends upon how HUMANS choose to use it.

If you must worry, focus solely on the HUMAN risks.

N June 1, 2023 10:57 AM

Hi Schneier, have you considered the threat of cascade failure that might occur as a result of AI-driven inequality stalling the economic cycle between product and factor markets?

Historically, when people are unable to get food, unrest ensues, and by the time it’s noticed it can’t be stopped. In almost every case the driving forces are not noticeable until it is too late to stop them. We as a species are also very bad at handling cascade failures in processes we do not fully understand. In almost every documented case I’ve seen (and full disclosure, I haven’t gone looking in a research capacity, but I read a lot), cascade failures are generally considered non-issues until something happens. In the case of an internet-spread free replacement for labor, that seems pretty hard to stop once the genie’s out of the bottle.

Kind of like a dam which cracks unnoticed for a time before suddenly collapsing. We didn’t necessarily start looking at those as critical failure signs until death resulted from those cascade failures.

If corporations stop hiring and instead replace those people with software, and people can’t get work to get food, societal forces will do what they have always done, which will correct the issue one way or another.

There’s such a large incentive to eliminate labor to drive profit that many places with freely published and available tooling will do this even if there are fines, and this particular outcome doesn’t require anything much more than what is already publicly available in my opinion.

Those other things you compare with are limited in scope, whereas this can be disseminated globally in relatively short order.

markm June 1, 2023 11:19 AM

If ChatGPT is the current state of the art for AI, the only thing to worry about is fools and lazy people taking the output seriously. It’s just ELIZA with a bigger database, still manipulating words with no idea what the sentence means.

Winter June 1, 2023 11:48 AM

@markm

If ChatGPT is the current state of the art for AI,

The current state of the art is GPT-4, which is two orders of magnitude bigger than ChatGPT. It is also reported to be much more powerful than ChatGPT.

‘https://medium.com/geekculture/gpt-4-100x-more-powerful-than-gpt-3-38c57f51e4e3

Erdem Memisyazici June 1, 2023 12:22 PM

Sounds like you got caught in a soundbite. I mean, what we call A.I. is really just human data enumerated with some context. I came across an author who wrote a project called nanoGPT and it really breaks down the concept for the average person as an introduction to A.I. The more people understand that, the funnier this “open your third eye with A.I.” culture sounds. 😄

Winter June 1, 2023 12:38 PM

@Erdem Memisyazici

Not too long ago we called that Big Data.

Bigger data?

I mean what we call A.I. is really just human data enumerated with some context.

And human brains are just connected neurons with adaptable connection strengths.

GPT just learns to guess the next word in a text, any text. GPT-4 does the same, but also for pictures. But I think no one expected that the resulting LLM could write essays based on questions.

Something did happen between “just some artificial neurons” consuming “just human data” and writing new essays on random subjects students can hand in to pass exams.

What would that something be?
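To make “guess the next word” concrete, here is a toy sketch in plain Python: a bigram count table over a tiny hand-made corpus, standing in for what GPT does with billions of parameters over subword tokens. The objective is the same idea; everything interesting about GPT comes from scale.

```python
# Toy next-word predictor: count which word follows which, then sample.
# Not a neural network -- just the smallest possible version of the objective.
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog . the dog chased the cat .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1          # bigram statistics

def generate(start, length=8):
    """Continue a text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```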

Louis Conover June 1, 2023 1:36 PM

Everyone is focusing on the wrong thing. It’s not what AI does that’s dangerous, it’s how it does it. A neural net is inherently uninterrogatable. What it knows and how it behaves is the result of a pattern over a very large number of connection weights which are individually meaningless. No one, not even the system’s creators, can examine those weights and predict or understand what they mean or how they are individually connected to the output of the system. It’s not possible to query the system or ask the system to explain itself in detail. When we hand over any decision making power we effectively surrender our ability to understand how those decisions are made. Corporations and bureaucratic organizations that pay for AI in order not to pay humans to make decisions aren’t going to be paying anyone to run a parallel decision making process that can be queried or challenged.

Erdem Memisyazici June 1, 2023 1:46 PM

@Winter

“And human brains are just connected neurons with adaptable connection strengths.”

Not quite. A neuron is a living creature. GPT is just tokenized short sentences with some entropy calculation. Not even comparable.

Winter June 1, 2023 1:50 PM

@Louis

It’s not possible to query the system or ask the system to explain itself in detail.

Explainable Machine Learning investigates how to construct AI systems whose decision-making humans can understand.
‘https://en.m.wikipedia.org/wiki/Explainable_artificial_intelligence

Contrary to common lore, scientists and users of AI are neither fools nor stupid. Many professions refuse to use AI if the reasoning behind decisions is not clear. For instance, most clinical specialists [1] refuse to delegate decisions about patients’ treatment to opaque expert systems.

Before such systems can be introduced in clinical settings, safeguards about their “reasoning” must be implemented. The current crop of systems is thoroughly opaque and, therefore, unacceptable.

[1] This does not include US Health Insurers and Hospital CEOs as these are willing to dump patients at bus stops mid winter. They do not care about health and lives, only about legal responsibility.
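As a minimal sketch of one such model-agnostic explainability technique, permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The dataset and model below are illustrative stand-ins, not a clinical system.

```python
# Features whose shuffling hurts accuracy most are the ones the model relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:30s} importance {result.importances_mean[idx]:.3f}")
```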

Mexaly June 1, 2023 1:51 PM

Cellular technology – the Internet in most people’s pockets – has already started a transformation on the scale of Gutenberg and Luther. For the same reason: mass communication became available to the masses.

In the era of George Floyd, accountability for systemic violence has increased dramatically.

LLMs are an extension of the revolution that is already in progress. For better or for worse, but eventually, for better.

Winter June 1, 2023 1:55 PM

@Erdem

A neuron is a living creature.

So is an E. coli bacterium. But that has no relevance for whether GPT can make intelligent decisions. I think the level of intelligence displayed by E. coli bacteria is fairly limited. The same can be said about individual neuronal cells.

lurker June 1, 2023 5:07 PM

There’s another physical limit to what these machines can do, and it’s their efficiency, or lack thereof. The human brain runs on a few tens of watts. Up to a couple of hundred watts more are used for the fuelling mechanism. Current “AI” machines consume megawatts. Where’s this power going to come from? Will the AI just write a cheque?

morganism June 1, 2023 6:48 PM

I am still very concerned on the privacy aspects. Large AI leads to the Panopticon of surveilling everyone on the planet.

Just having the word “pregnant” in a post or email is going to put you on the watch list soon, and I think a 3-link chain across your contacts is still authorized. Even if abortion is legal in the state or country where you live, that word is going to trigger surveillance to make sure you are taking vitamins, or your friend who just bought a test kit is going to link you from her contact list.

When the bio-printers get here, it is going to be worse.

And there is already a Dark Web for market movers, that doesn’t show in the Exchanges. How will algorithmic trading work with that?

It is also probably going to enable life extension treatments for the very rich.
And you think capital and corporations are making the decisions now? Just wait till some 300-year-old, who has their own personal AI, gets to vote.

And if the AI are on-orbit, who is going to stop them…

Frank Wilhoit June 1, 2023 6:53 PM

Any harm that AI does (and there are no inherent practical limits on such harm) will be because humans believe it. This is a problem that in principle has a solution.

modem phonemes June 1, 2023 6:57 PM

Re: neural networks

The popular AI seem to deserve the characterization “neural” only minimally, as they resemble natural neural structures only to the extent that they have “lotsa connexions.” Real neural structures resemble certain classes of parametrized recurrent dynamical systems whose parameters’ time evolution in training moves without supervision to a fixed state, after which system output is a stable convergence based on initial data [1]. These are understandable, and have little of the instability in the presence of new data, or the statistical-optimization “black box” inscrutability, of current AI designs. See Stephen Grossberg’s work.

  1. An interesting example was provided by Grossberg in 1995 where a stereo-camera-equipped robot running a naturally derived dynamical system taught itself, by means of repeated toss/catch attempts, to reliably catch a ball thrown from any angle and speed.

Impossibly Stupid June 1, 2023 7:07 PM

I think it poses a similar risk to pandemics and nuclear war—which is to say, a risk worth taking seriously, but not something to panic over.

I don’t think you are using parity classifications. Machine Learning (which is all we’re talking about, and is a far cry from AI) is just a broad tool, whereas the other topics are identifiable threats that come from other tools (medicine and physics). ML is only a “risk” in the same way that any other tool is a risk: when it is wielded as a weapon by humans.

To that point, all these new ML systems fundamentally require human editors at every stage to keep them from “hallucinating” completely inhuman nonsense. It really is Eliza all over again. Anybody who uses these things knows how far away they are from replacing thousands of jobs (or whatever) in any meaningful sense. Nobody selling this as an existential threat should be branded as an expert in anything but hype.

Clive Robinson June 1, 2023 9:28 PM

@ Louis Conover, ALL,

Re : Who to watch most, the corporates or the self entitled?

“Corporations and bureaucratic organizations that pay for AI in order not to pay humans to make decisions aren’t going to be paying anyone to run a parallel decision making process that can be queried or challenged.”

You left out the “guard labour” of Law Enforcement, the judiciary, and to a lesser extent the military, MIC and IC.

There is already an alleged case of a lawyer using an LLM to make a legal argument and getting caught out.

Apparently he was a defence lawyer. Now ask yourself if the same questions which led to this LLM use would have been asked of a prosecution lawyer?

The obvious answer is,

“It depends on where the resources are”

Which strongly suggests that in most cases the prosecution would be way more likely to get away with it than the defence.

But also consider politicians and their agendas, especially those with a strong “ism” bias. The “RoboDebt” scheme and similar ones have forced many quite innocent people into losing homes, possessions, and liberty, as well as driving them into depression, and some to suicide.

In essence these are all about “arm’s length” discrimination for the benefit of a self-selecting few.

That is, the AI is deliberately biased in its learning process to have an inbuilt “ism” that the police, politicians, etc. want to prosecute against parts of society for their benefit in some form.

When it’s so blatantly obvious –and it already is– it gets called into question in even the more highly biased MSM. Those responsible, however, still get away with it, because of the “get out of jail free” card they play of,

“Because the computer says”

argument, which really will become another “Only Following Orders” question in the not-so-distant future.

But in the meantime a lot of irreparable harm, injury and untimely death will happen, because a few self-entitled people think it should happen and want it to happen to strengthen their position etc.

As the self-entitled have control directly over budgets or how they are spent, those corporates will supply what they want, as it’s a “buyer’s market”. Some call it “corruption”, others call it the “Free Market” or “Capitalism” at work. Whatever you call it is fairly irrelevant, as it’s just a fig leaf / label to try to hide what is a very calculated harmful process used on most of society, for the gain of just a very few…

Clive Robinson June 1, 2023 9:47 PM

@ Erdem Memisyazici, modem phonemes, Winter, ALL,

Re : A part is not the whole.

“A neuron is a living creature”

It’s not.

The use of “creature” implies a “whole entity” capable of independent existence, not something that is just “part of an entity”.

That is, unlike a bacterium, a neuron is not capable of the basics we consider necessary for a living creature.

That is it does not,

1, Exist as a single cell.
2, Gain nutrition independently.
3, Reproduce sexually.

And a number of other things; it is instead part of a much larger multi-cell organism, which can do these things we say are necessary for a creature to live.

You take a neuron out of its host creature, and you’ve started the countdown on its apoptosis.

frankly June 1, 2023 9:58 PM

Dangers of AI:
* use in warfare, e.g. drone swarms learn to attack better with each attempt
* use in phishing, so that the AI designs emails or texts, learns which wording works better, and constantly improves
* surveillance of a large population using AI, supercomputers, and data from our digital lives
* use in hacking to find new vulnerabilities
* use in investing; an AI might find a way to make a large profit while also crashing the stock market
* same problem if businesses use AI to guide decision making; it might ultimately crash an industry or the whole economy

modem phonemes June 1, 2023 11:16 PM

@ lurker

physical limit to what these machines can do, and it’s their efficiency, or lack thereof

Federico Faggin remarked in a talk on neuromorphic computing that if the human brain were as inefficient as computers, it would vaporize itself the moment it was “turned on” 😉 .

Winter June 2, 2023 2:13 AM

@Clive

There is already an alleged case of a lawyer using an LLM to make a legal argument and getting caught out.

Apparently he was a defence lawyer. Now ask yourself if the same questions which led to this LLM use would have been asked of a prosecution lawyer?

It was a civil case, no prosecutor in sight.

The point here was not that the lawyer used an LLM. The point was he submitted the brief with “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,”.

As far as the judicial system is concerned, a lawyer can create a text any way she or he wants, as long as any legal matters, cases, and citations are correct.

As he submitted fabulated case jurisprudence, he has broken many ethical and legal rules.

To appreciate the level of incompetence at play here consider:

“Lawyer Steven A. Schwartz admitted in an affidavit that he had used OpenAI’s chatbot for his research. To verify the cases, he did the only reasonable thing: he asked the chatbot if it was lying.”

Winter June 2, 2023 2:17 AM

@Clive

The use of “creature” implies a “whole entity” capable of independent existance, not something that is just “part of an entity”.

Not in the country of the “pro-life” crowd. There, a “part of an entity” that cannot exist outside a body can even have personhood.

modem phonemes June 2, 2023 2:53 AM

@ Winter

part of an entity

Medical science now recognizes that from the moment of conception there is a complete human being which integrally directs its own development. There is no point after conception where anything but growth occurs, no later point or interval at which there occurs a “not-human” to “is-human” transition. By continuity this human being is then a person just as it is at any later time. It is for a time dependent on the mother for nourishment, but that does not distinguish it in any essential way that would bear on its humanity from the dependence of all human beings on obtaining nourishment.

Winter June 2, 2023 2:55 AM

@modem

Real neural structures resemble certain classes of parametrized recurrent dynamical systems whose parameters’ time evolution in training moves without supervision to a fixed state, after which system output is a stable convergence based on initial data [1].

Stephen Grossberg’s ideas and models are certainly interesting. The problem is that there are currently not that many ways to feed a big neural network a lot of data and get it to learn something.

The algorithm to get the network to update the connection weights is the bottleneck. The current deep learning algorithms were the result of breakthroughs in error back-propagation some 10 years ago. There simply are no other halfway-efficient algorithms that can train deep networks to do anything useful.

Anything that allows loops in the networks runs into trouble. Resonances are completely out of the question.
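For what that weight-update bottleneck looks like in its simplest form, here is a minimal back-propagation sketch in Python, training a tiny two-layer network on XOR. Real deep networks differ mainly in scale and architecture; the hyperparameters here are illustrative.

```python
# Minimal back-propagation: push the output error backwards through each
# layer and nudge every connection weight down the error gradient.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)           # XOR labels

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                   # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)                 # forward pass, output layer
    d_out = out - y                            # output error (sigmoid + cross-entropy)
    d_h = (d_out @ W2.T) * h * (1 - h)         # error propagated back to hidden layer
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))                # should approach [0, 1, 1, 0]
```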

Winter June 2, 2023 3:10 AM

@modem

Medical science now recognizes that from the moment of conception there is a complete human being which integrally directs its own development.

That is literally how every organ develops and works. If it is a complete human, it can survive outside the body of another human. If not, then it is part of that body. As an embryo has an even shorter time of surviving outside the body than a liver or heart, it is certainly not more of a complete human being.

Religions conjuring up other value judgements is nothing new. They used to burn women at the stake for being devils; taking away the ownership of their bodies for fun or profit is nothing new.

modem phonemes June 2, 2023 3:47 AM

@ Winter

If it is a complete human, it can survive outside the body of another human. If not, then it is part of that body.

The parts of the containing body (the mother) are organs like hands, lungs, blood, liver etc. They are integral to that body. Clearly the infant human in the womb is not of this type. Its presence inside the mother is purely at the accidental level, not integral to the mother. Although nature provides a beautifully adapted environment as the typical environment for the infant, the infant could in principle survive outside the mother if arrangements were made for its nourishment and other necessities. And the integrity of the mother and the infant are preserved either way.

Your mention of religion is a non sequitur. The discussion is a scientific one about the real character of the being in conception and afterwards and its relation to the environment that sustains it.

Winter June 2, 2023 4:06 AM

@modem

Although nature provides a beautifully adapted environment as the typical environment for the infant, the infant could in principle survive outside the mother if arrangements were made for its nourishment and other necessities.

That is still to be seen. However, it can already be done for individual organs. If a liver or heart can be kept alive outside the body, is it a creature? If an embryo is a creature, why force a woman to carry it against her will?

The discussion is a scientific one about the real character of the being in conception and afterwards and its relation to the environment that sustains it.

It is most definitely not a scientific discussion. This is about human rights, e.g., freedom. Article 1 of the Universal Declaration of Human Rights:
All human beings are born free and equal in dignity and rights.

It is telling that you refer to a woman as “the environment that sustains [the embryo]”, instead of as a human being that is not free to decide about her own body.

modem phonemes June 2, 2023 4:14 AM

@ Winter

there are currently not that many ways to feed a big neural network a lot of data and get it to learn something.

The multi-layer deep learning networks and back-propagation weights update are a kind of general purpose ad hoc nonlinear analog of linear multi-factor models. It’s not surprising that they tend to learn inefficiently. Also the mere fact that they eventually model training data adequately is not a guarantee of their soundness. Even linear models have this problem. The model should be adapted to and reasonable for the data it is attempting to reduce [1].

Grossberg’s approach tends to produce more efficient and more stable learning since it uses the design principles of several recurrent network types that occur prominently in nature. The resulting artificial neural networks tend then to be adapted to the data they are analyzing. The weight updates occur automatically, in analogy to natural learning, inside the model, without recourse to out of model update schemes like backprop.

  1. Statistical Models and Causal Inference, David A. Freedman et al.
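As a minimal sketch of what a local, in-model update rule looks like (a toy Hebbian neuron using Oja’s rule, not Grossberg’s actual models): each weight changes using only the activity at its two ends, with no back-propagated error signal. The data below are illustrative.

```python
# Oja's rule: Hebbian growth plus a local decay term. The single neuron's
# weight vector drifts toward the dominant direction of variation in its input.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(2000, 2)) @ np.array([[2.0, 1.0], [1.0, 1.0]])  # correlated inputs

w = np.array([1.0, 0.0])            # initial weights of one linear neuron
eta = 0.005
for x in data:
    y = w @ x                       # neuron output
    w += eta * y * (x - y * w)      # local update: uses only x, y, and w itself

# Compare (up to sign) with the first principal component computed directly.
evals, evecs = np.linalg.eigh(np.cov(data.T))
print("Oja direction:", np.round(w / np.linalg.norm(w), 3))
print("PCA direction:", np.round(evecs[:, np.argmax(evals)], 3))
```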

Winter June 2, 2023 4:22 AM

@modem

Even linear models have this problem. The model should be adapted to and reasonable for the data it is attempting to reduce

The weight updates occur automatically, in analogy to natural learning, inside the model, without recourse to out of model update schemes like backprop.

I am sure you will have a brilliant career ahead of you when you demonstrate such a network learning, e.g., automatic speech recognition using one of the existing speech corpora.

So, what is holding Grossberg or you back from simply demonstrating the superiority of this approach? Or are there some reasons no one is trying to harvest such low-hanging fruit?

Clive Robinson June 2, 2023 4:51 AM

@ Austin,

Re : Junky Hypothetical.

“A simulation using AI decided to kill the operator so it never got “no” to missions.”

Firstly and importantly the claim is “no such test happened”.

But hypothetically lets assume something similar did happen.

Imagine if you can –and such people do exist– a drug addict / junky with mental disorders equivalent to what used to be called a psychopath.

Humans like many biological systems run on a pain-pleasure reward process for various actions.

If you stop breathing you quickly start feeling uncomfortable, then pain, which only goes away if you start breathing (or die). It’s been said that some people feel an enhanced state of euphoria after starting to breathe again. Further, that encourages some to repeat the process to obtain that euphoric feeling.

There are similar processes for all the body’s basic needs: air, water, food, rest, sleep, etc.

As an entity with agency we can choose whether or not to follow such a process. That is, we can put things off in time to achieve other, more important goals such as surviving a predator attack etc.

These control processes have been assumed for quite some time to be a chemically induced[1] system built into our bodies. So it’s not surprising that the question of introducing chemicals into the control loop was investigated from very early in man’s history[2].

Thus the external interruption of the control process via the use of chemicals is well established.

With the development of mechanical “Control Theory” via feedback loops starting in the late 1800s, people began to wonder if the pain-pleasure loop was not just another control loop, but one subject to being underdamped or overdamped[3] in humans. This apparently appeared to explain various non-typical responses in humans, but as we now know, all such explanations are over-simplifications or wrong.

But the question was asked: what if the control loop in an individual was defective, such that either the pain sensitivity was too high or the response to pleasure too high…

As is known, there are certain types of people who have a significant dependence on pain relief and similar medications. Some so much so, and their impulse control so low, that they will commit all sorts of anti-social acts up to and including murder to have their perceived needs met.

So we need to ask the question of,

“If an AI system is given agency, and a reward system based on the equivalent of a pain-pleasure control loop, which is then not correctly damped, what can we expect to happen?”

It’s a question that I and I suspect many other engineers asked back in the 1970’s when microprocessors made the idea of “AI with physical agency” appear not just possible but as we thought back then probable within a decade (yup AI is always next decade…). Even non engineers including politicians and legislators asked it, due to that film “Wargames” with the “War Operation Plan Response”(WOPR) computer being a central protagonist to the plot.

Unfortunately a film released three weeks earlier, “Blue Thunder”, which portrayed the human side of advanced weaponry being used to control the civilian population, appears not to have generated more important questions about the development and control of advanced weaponry…

[1] Till someone asked the question “Is wearing clothes a learned addiction?”.

[2] We know for instance that when building the Pyramids in Egypt, a form of beer made from bread was given to those involved with the work at the end of the day, and various assumptions, right or wrong, have been made on this knowledge. Likewise certain plants have been used in what we choose to call religious ceremonies for longer than records have been kept. Interestingly, some later religions banned the use of intoxicants.

[3] In control theory, damping refers to the output response of a servo system to what is called a “step input”. There are three basic classes of damping based on the loop response: underdamped, critically damped and overdamped. Underdamped systems show an initial fast response, but “overshoot the mark” and thus have to correct in the opposite direction, giving a sinusoidal ripple in the response known as “ringing”, which can take a long time to settle and in the process do quite some damage. Overdamped systems do not ring, but take a long time to settle. Critically damped is effectively the shortest time to reach and settle on the mark. In more recent times the damping has been made adjustable in the control loops of the likes of “Phase Locked Loops”, such that they are initially underdamped and very rapidly approach the mark; however, as the mark is approached the damping is increased past critical damping to overdamped, which eliminates the overshoot. Thus the loop can reach and settle on the mark faster than with critical damping of the loop. In some cases the loop is then further overdamped, as this can reduce certain types of noise in the loop output caused by random noise in the loop.
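A minimal Python sketch of those three damping regimes, using a standard second-order system G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2); the natural frequency below is an illustrative value.

```python
# Step responses of a second-order system for the three damping classes:
# zeta < 1 underdamped (rings/overshoots), zeta = 1 critically damped,
# zeta > 1 overdamped (no overshoot, slow to settle).
import numpy as np
from scipy import signal

wn = 1.0  # natural frequency (rad/s), illustrative
for zeta, label in [(0.2, "underdamped"), (1.0, "critically damped"), (3.0, "overdamped")]:
    sys = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])
    t, ystep = signal.step(sys, T=np.linspace(0, 30, 600))
    overshoot = max(0.0, ystep.max() - 1.0) * 100
    outside = np.where(np.abs(ystep - 1.0) > 0.02)[0]     # samples outside a 2% band
    settle = t[outside[-1]] if len(outside) else 0.0
    print(f"{label:18s} overshoot {overshoot:5.1f}%  ~settling time {settle:5.1f}s")
```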

Coolest Hottie June 2, 2023 5:11 AM

The only thing that is scarier than the rogue AI drones would be those designing them
IF they happen to have an EGO that is soooo BIG that they do not care to write code
that runs the drone’s OS even though they know that they CANNOT SPELL for shyte.
Blah blah blah – I’m a superstar in a comment section of a liberal blog.
Not knowing basic things such as correctly spelling words in English, in an English-speaking forum/online community – yet bragging about knowing everything about everything just makes me laugh hysterically. Thank you for that, oh you almighty clive robinson.

Clive Robinson June 2, 2023 5:34 AM

@ Winter,

“Not in the country of the “pro-life” crowd.”

Douglas Adams indirectly referenced this in one of his books, where he imagined someone so sick of the insanity he saw around him that he built a metaphor: an inside-out building, inviting people to come in, outside of the asylum…

Clive Robinson June 2, 2023 6:09 AM

@ Coolest Hottie,

Re : Irony

“yet bragging about knowing everything about everything just makes me laugh hysterically. “

Two things for you to consider,

1, I base what I say on reason, something that most should be capable of.
2, Many who don’t reason are simply holding up a mirror and saying what they see.

Thus each time you make one of your gaslighting attempts, who are you actually burning in the flame?

The fact you keep changing your handle to do this is not just transparently embarrassing to others but a form of “sock-puppet” behaviour that is against the stated blog rules.

As for your,

“CANNOT SPELL for shyte”

What can be said?

Remember spelling is just a convention imposed only recently in history, as a method of excluding and thus controlling people…

For instance in the US well within living memory it was used to deny people not just the right to vote, but to have a future any better than slavery. It still happens in many other parts of the world right at this present moment.

Such times and behaviours are obviously keenly attractive to you.

But also remember that even our host @Bruce occasionally makes spelling mistakes; are you also denigrating him?

What about the many others who post here with spelling mistakes?

You appear not to understand that language is about communicating ideas and emotions, and for that, spelling correctly or not,

“s nt rqrd”

Nor are spelling, grammar, etc. in any way optimal for the transmission of information… They only exist because the spoken word happens in a very noisy channel for which significant error correction is needed.

So arguably they are completely unnecessary in a channel with little or no noise. An argument Claude Shannon made probably before you were born.

So for you, obviously a useless echo from the past. In your desire to hold “Cancel Culture” high in the air as some trophy to well…

Clive Robinson June 2, 2023 6:24 AM

@ modem phonemes, Winter,

“Medical science now recognizes that from the moment of conception there is a complete human being which integrally directs its own development. There is no point after conception where anything but growth occurs, no later point or interval at which there occurs a “not-human” to “is-human” transition.”

The same argument applies to “malware” such as worms and viruses, and any other self-replicating automata such as Conway’s codes for his “Game of Life”. So unless you are arguing such code also has “a soul”, then what is its relevance as an argument?

But it also has a downside as an argument. Conception happens outside of the human body, then it attaches and feeds off of the body. This is the same description as a parasite’s early life cycle and subsequent growth.

We use chemicals to prevent parasites attaching and growing, and this is considered a benefit to humanity.

What does this in turn say about the birth control pill, that stops the fertilised egg attaching?

Anonymous June 2, 2023 8:56 AM

@LeeC

Software is not inherently risky — it all depends upon how HUMANS choose to use it.

That sounds a lot like “Guns don’t kill people; people do”.

I certainly think one of the risks is people mistaking chatbots for intelligence (or sentience, a word that someone upthread used about AI).

Clive Robinson June 2, 2023 10:09 AM

@ Anonymous, ALL,

Re : Agency of entities.

“That sounds a lot like…”

Whilst inanimate objects do the damage, the question is who or what caused the object to do the damage.

Law follows the path back from the object to either a directing or negligent mind that has “agency”. If they cannot, then the old “act of God” or “accident” is invoked instead.

The fact you are holding an object that causes harm, does not automatically make you guilty of any crime etc.

However, let’s assume it’s the steering wheel of a vehicle, and a tire has a blowout, and the vehicle not unexpectedly crashes into another.

In general, blowouts are not finely controllable on demand. Thus it’s unlikely the event was designed to cause harm to the occupants of the vehicle that is hit. So either it’s aimed at the driver of the vehicle that has the blowout, or it is the result of some potential negligence.

So the investigators in effect walk a decision tree graph, moving from node to node along edges until they arrive at one or more potential leaf nodes. They then examine the actual probabilities, and see if there is anything further from events that might change them (i.e., investigate non-direct circumstances).

The results of these investigations are then handed to others who examine them to see what is the appropriate path or action to take.

Surprisingly often the result is that no criminal activity of consequence is found, and thus the case gets passed over to a civil investigation by an insurer etc.

Occasionally the civil investigator will go with “negligence” and potentially civil action, the investigation of which could unearth further information that flips it back to a criminal investigation.

The level of investigation is almost always based on the level of harm and, to a lesser extent, on whether other evidence is discovered.

But… As I point out from time to time,

“There is no such thing as an accident, all physical events are predictable and can be mitigated, provided sufficient information is available within a time period to take action on the information.”

So yes, if the event happens, then it is the failing of a human or other entity with agency. The question then becomes: could they have mitigated or prevented the event?

Every day we do multiple things that have the potential to allow harm to happen…

I was some years ago attacked by a burglar with a length of trellis fencing that had been taken down by a neighbour and left at the edge of the property. Who was to blame for leaving what became a weapon available for the burglar to use?

Does it change your mind when I say I’d repeatedly asked the neighbour to remove the trellis because I considered it to be a danger?

The fact that the danger I was considering was “wind” blowing the trellis over and hurting me or causing damage to my property, and not some psycho burglar, does not really make much difference.

pattiM June 2, 2023 11:31 AM

It’s not surprising that the press would pick up on an Existential Risk that is nebulous and sounds far-off or difficult to imagine. Climate Breakdown (and its associated problems, such as disease vectoring and zoonoses) is increasingly killing folks, and AI poses a great, abstract, and, mostly, apolitical distraction from our by-now very good knowledge on what the future holds for Climate Breakdown. The press knows that it has to make people feel engaged and help them defuse their building existential terror, FWIW.

Anonymous June 3, 2023 11:40 AM

@Winter

That WP article on “explainable AI” wasn’t mainly about explainable LLMs; it treated explainable LLMs primarily as a goal, not as something anyone has actually achieved. I was hoping that it might explain how an LLM can be made explainable; but it doesn’t do that at all, and gives zero examples of an explainable LLM. Which surprises me not at all, because I think LLMs are not explainable in principle. That is, “explainable LLM” is a term with no referent, like “The horns of the rabbit”.

Winter June 3, 2023 1:55 PM

@Anonymous

That is, “explainable LLM” is a term with no referent,

Explainable AI/ML is a research area that requires researchers that have access to models and data. LLMs are secretive projects that are inaccessible to outsiders and researchers. Hence, no explainable LLMs.

But it is very much investigated in visual and audio recognition and especially in medical AI.

Who? June 5, 2023 11:57 AM

CEOs of data brokers and large corporations fear AI because they know they have lost control over it. The same people that think AI is a risk now felt all was OK while they owned it.

Look at Elon Musk: he thinks AI is bad but wants to start his own project in this area.

I understand it is terrible [for them] that a powerful technology like this one is not under their control, but in the hands of people.

fib October 27, 2023 10:03 AM

@ Clive, All

Yes, I noticed your mention of the sensors/senses problem in AI, in this and other threads recently. It would not be fair, or productive, or wise, to argue with you, especially because we are on the same side on this issue. Likewise, I share your opinion regarding the role of big tech. I have nothing but disdain for the tech brotherhood.

As for the empirical experience problem in AGI, I also tried to approach the issue a while ago(*) – in a somewhat haphazard way.

But as a materialist I postulate that consciousness is independent of the material substrate, and that once the conditions are met it will emerge. I have reasons, however feeble, to conclude this [for now].

Anyway, it’s always gratifying to discuss this matter.

(*)
https://www.schneier.com/blog/archives/2023/06/on-the-catastrophic-risk-of-ai.html
