AI Risks

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about AI are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views, and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy, and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays, or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view AI. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you’ll realize this isn’t really a debate only about AI. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower, or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they’re already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of AI to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different AI camps tend not to use the same words to describe their positions. One faction describes the dangers posed by AI through the framework of safety, another through ethics or integrity, yet another through security, and others through economics. By decoding who is speaking and how AI is being described, we can explore where these groups differ and what drives their views.

The Doomsayers

The loudest perspective is a frightening, dystopian vision in which AI poses an existential risk to humankind, capable of wiping out all life on Earth. AI, in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. AI could destroy humanity or pose a risk on par with nukes. If we’re not careful, it could kill everyone or enslave humanity. It’s likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth’s resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the AI safety people, and their ranks include the “Godfathers of AI,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic the capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other AI tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of longtermism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic longtermist would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like AI enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of longtermism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It’s widely believed that Jaan Tallinn, the wealthy longtermist who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears “wishful worries”—that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”

More practically, many of the researchers in this group are proceeding full steam ahead in developing AI, demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming—the dangers will not be sudden and we will have time to change course. While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of AI and, most important, not allow them to strategically distract from more immediate concerns. Let’s not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

The Reformers

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanity’s worst instincts are encoded into and enforced by machines. The doomsayers think AI enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these AI ethics concerns—like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury, and Cathy O’Neil—have been raising the alarm on inequities coded into AI for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women, and people who identify as LGBTQ. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable AI, she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform AI in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside—or even above—their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the AI revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google’s AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

While doomsayers and reformers share the concern that AI must align with human interests, reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by AI misinformation, surveillance, and inequity. Integrity experts call for the development of responsible AI, for civic education to ensure AI literacy and for keeping humans front and center in AI systems.

This group’s concerns are well documented and urgent—and far older than modern AI technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that AI might kill us in the future should still demand that it not profile and exploit us in the present.

The Warriors

Other groups of prognosticators cast the rise of AI through the language of competitiveness and national security. One version has a post-9/11 ring to it—a world where terrorists, criminals, and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an AI arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant AI companies, are pushing for AI regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading AI companies while restricting competition from start-ups. In the lobbying battles over Europe’s trailblazing AI regulatory framework, US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”

Any technology critical to national defense usually has an easier time avoiding oversight, regulation, and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google’s former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in US national security concerns.

The warriors’ narrative seems to misrepresent the fact that science and engineering are different from what they were during the mid-twentieth century. AI research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.


As the science-fiction author Ted Chiang has said, fears about the existential risks of AI are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up’s business plan. Cosma Shalizi and Henry Farrell further argue that “we’ve lived among shoggoths for centuries, tending to them as though they were our masters” as monopolistic platforms devour and exploit the totality of humanity’s labor and ingenuity for their own interests. This dread applies as much to our future with AI as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with AI, China, and the fights picked among robber barons.

By analogy to the healthcare sector, we need an AI public option to truly keep AI companies in check. A publicly directed AI development project would serve to counterbalance for-profit corporate AI and help ensure an even playing field for access to the twenty-first century’s key technology while offering a platform for the ethical development and use of AI.

Also, we should embrace the humanity behind AI. We can hold founders and corporations accountable by mandating greater AI transparency in the development stage, in addition to applying legal standards for actions associated with AI. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness AI to accumulate much more or pursue extreme ideologies, let’s think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Posted on October 9, 2023 at 7:03 AM • 46 Comments

Comments

Medo October 9, 2023 8:30 AM

Reading your article, one may come away thinking that AI doomsayers prioritize a potential catastrophe that is still many decades or even centuries away over the actual problems of the next few years because of their longtermist views. I don’t think that holds up anymore.

AI Risk has historically been treated as a longtermist concern because AI powerful enough to be dangerous in this way seemed far off. However, the recent astonishing speed of progress has changed predictions for when we will reach human-level artificial intelligence across a wide range of fields so much that many now think it will happen within 10 years (see https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/). Whether that is a good prediction is of course debatable, but it is what drives me, at least, to view it as an urgent issue, not a longtermist project.

A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like AI enslavement.

Let’s turn this sentence around: Would you be unwilling to sacrifice any well-being of people today to stave off a possible future catastrophe? Isn’t this pretty much what we are doing (and need to do) because of climate change? (Or do you think moving away from fossil fuels has no downsides in the present?) Sure, we can discuss how much well-being we should be willing to give up in exchange for how much danger reduction in the future, how likely the catastrophe is to occur, and so on, but I don’t think attacking the general principle is warranted.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future.

Isn’t it much more of a bet with our future to disregard potential dangers? Also, what does the example of policies proposed by Elon Musk on completely different subjects in this paragraph have to do with AI Risk? It reads to me like an attempt to discredit the concern by association.

It is of course fair to characterize the group to which most people making a particular argument belong, and while many of the things you wrote are spot-on, I think the points above are not helping to paint a fair picture for the reader.

Erdem Memisyazici October 9, 2023 8:33 AM

I think the perception that we are a nation divided on this is part of the problem. As long as we can stick to the objective science at hand and start educating kids about what sort of future awaits them concerning various applications of new technologies we can outline sensible policies. There is a funny Turkish saying which translates to “a fresh pile of dung attracts the most flies” and I think this is an applicable scenario. Give it time I’d say.

Vesselin Bontchev October 9, 2023 11:30 AM

I really need to write a paper about the risks of autocorrect and how it can lead to the extinction of humanity…

JonKnowsNothing October 9, 2023 12:09 PM

Genies don’t go back in the bottle willingly.

The issue will be decided by Money and Power. Not every country will use the same currency. If you have Power and Purpose you can use Money to get what you want. Not all coins are equal.

The presumed article underpinning is Western European Capitalism. The current assumptions make no allowances for other systems, yet those systems do exist as a counterpoint.

The nature of programming AI is that it is, in fact, uncontrollable. Whatever goes in the scrabble bag gets churned up and spit out on the open board. Potential Regulatory Systems have no way of blocking anything that goes into that hopper; there is always a way around the fence.

It is too late to do much more than enjoy the show:

  • Purple cow
  • Pink Elephant
  • White Rabbit
  • Wind Horse
  • War Dog
  • Blue Rose

===

https://en.wikipedia.org/wiki/Purple_cow

https://en.wikipedia.org/wiki/Seeing_pink_elephants

https://en.wikipedia.org/wiki/White_rabbit

https://en.wikipedia.org/wiki/Wind_Horse

https://en.wikipedia.org/wiki/The_dogs_of_war_(phrase)

  • “havoc” is a military order permitting the seizure of spoil after a victory

The Blue Rose (A folktale from China)

  • The Princess stood. “My people, let me tell you what I see….”


JonKnowsNothing October 9, 2023 12:25 PM

@Vesselin, All

re: auto-correct blunders

You can include the blunder of: Send All.

There is also writing what you really think of your boss and actually pressing: Send.

Those have been combined by “social media”, which is really “non social”, by omitting the modal confirmation box before posting to World+Dog: Are You Sure?

There’s the blunder of phone-video instant uploads. Some might think they have an invisibility cloak while live streaming: Go Live.

The “end of humanity” may simply be the end of “you”. Dire outcomes happen.

Star October 9, 2023 12:44 PM

Human level AGI is a technological singularity that we can’t see beyond. That’s the point where instead of us training the dogs, the dogs are training us.

Clive Robinson October 9, 2023 12:52 PM

@ Bruce, ALL,

There are two salient points people really need to consider about AI,

1, It is simply a technology that uninvolved human observers say is Good or Bad.

2, It is the current “bubble” of “human greed” being used to separate people from their money.

But the reason it’s a bubble is down most of all to human stupidity, and to those who put greed, avarice and grabs for power and status above all else.

Such people care not a jot for the future of mankind, in a month, year, decade or even century. Their view is based on the idiotic notion of “money left on the table”. It is such a short-term viewpoint that the current nonsense about “Large Language Models” (LLMs) and the alleged adaptiveness of “Machine Learning” will be seen as a joke within a year or two.

I’ve already pointed out that the big dangers of the current AI nonsense are,

1, Use as a surveillance tool, for profit.
2, A way to exploit uneducated and weak minds for profit.

Just a day ago I posted about LLMs being used for the joke that is “astrology”, which, whilst mostly treated as a joke and harmless entertainment, is also extremely useful as a way to exploit significant sums of money out of the gullible. Thus it is like gambling, destructive of those who invest their resources.

Now LLMs are being used to extract wealth from those who believe in astrology; how long before deities?

Well, there is already a numpty system to supposedly give spiritual guidance on the Bible… So expect one coming to your religion any time soon.

But as I’ve pointed out in the past, religion has historically been used for “Political Control” of populations. Well, LLMs are, as we know, being used for propaganda, and there is evidence a recent national election was manipulated in favour of Russia.

But what people perhaps do not realise is that LLMs and associated ChatBots are little more than the likes of those joke computer psychoanalysts from back in the old Teletype days of the 1960s and early 1970s.

The problem with such devices is that they are extremely dangerous due to reinforcing “Cognitive Bias” by acting as a personalised but automated “Echo Chamber”.

In the UK we have just had a sentence handed down on a man whose “AI Girlfriend” allegedly caused him to try to kill the then Monarch with a crossbow,

https://www.theregister.com/2023/10/06/ai_chatbot_kill_queen/

The point is the ChatBot, by its echo behaviour, caused someone with what is probably best described as “Mental Health issues” to in effect become “self-radicalised” and thus carry out an act not just of Treason but Terrorism as well.

It’s the “echo chamber automation” in a personalised way that is probably the most immediate threat from AI systems, as they manipulate “Cognitive Bias” and thus move people away from rational behaviour.

Do I really need to point out what we saw with C19 and similar as an indicator of just what percentage of the population is at risk of this when it’s non-personalised?

What multiplier should we use for when it’s “personalised” and people use ChatBots and similar instead of cultivating real-life lovers, friends, colleagues, and acquaintances that at least tend to anchor them within society and its mores and ethics?

Matt October 9, 2023 1:28 PM

The doomsayer arguments were always predicated on a fast takeoff winner take all FOOM scenario. It seems pretty clear now that that assumption is wrong and we are in a slow multipolar takeoff world.

I see them as not admitting they were wrong and pushing the takeoff date another 10-20 years even though we are beyond the capabilities they claimed were needed for a takeoff. They should spend some time updating their priors and reassessing the risks.

Clive Robinson October 9, 2023 2:54 PM

@ JonKnowsNothing, ALL,

“Genies don’t go back in the bottle willingly.”

As importantly, they always “get out” one way or another, and history suggests at the worst possible time.

There is a moral behind the “three wishes”, as there is with all old tales, that few these days appear to know,

“The third wish is to undo the first two wishes.”

A sad fact of life is that when faced with unlimited power, money, status, prestige, greed etc, most do not stop and think; they have not learned, and they show no sign of actual intelligence…

The old Midas “Everything you touch turns to gold” is no fun with food, or the people you love…

These LLMs are not intelligent, and they have learned no more than an encyclopedia or dictionary has.

There is the odd belief that if we just keep adding storage and CPU cycles, that somehow intelligence will magically appear…

It won’t…

But also when you ask a human to define knowledge, learning and intelligence the answers you get are at best odd.

Worse, each time they go through the loop to try to refine their answer, all too often their answers show their cognitive biases against other types of human.

Everyone should have a serious think about that…

But more importantly, as was found with the Turing Test, intelligence is actually mostly based not just on the views of society, but on being a part of society as a functioning member. Worse, almost everyone gave answers intimately tied up not with knowledge, learning, etc, but with raw “emotion”…

If that continues then AI can never happen, but others will surely make it “fake it well”, because they will be able to get control, power, wealth, status, etc from other people.

Whilst “Black Roses” do exist, they are very rare and need special environmental conditions. “Blue Roses” have never existed, and so far attempts to make them have failed. You can however fake them with dye: just dipping a white rose in ink will for a while give you a blue-coloured surface. Also, cutting a “stem rose” from a white rose plant and putting the fresh-cut end into certain types of natural dyes will make the white flower go blue, but both these methods are the fake/faux blue roses of a con artist.

And in reality that is where we are with LLMs and their alleged ML: a clever confidence trick by which certain people will make a lot of money / power and gain much control and prestige over others who fall for their cons.

Mags October 9, 2023 3:16 PM

ChatGPT has been out for almost a year now. The hype merchants said it would change everything, and fast. Funny, even with the recent competitors to ChatGPT (like Bard) and multimodal chatbots (image/audio/video as well as text), it still feels like a year ago to me.

Daniel L Speyer October 9, 2023 3:40 PM

I beg you to take the existential risk arguments seriously.

Stop engaging with the demographics and engage with the arguments. If you can find a flaw in them, by all means, trumpet it loudly, but please stop mocking.

Ralk October 9, 2023 4:18 PM

Speaking of ideologies, it appears that our esteemed host is gulping the Kool-Aid from more than a few.

He managed to shoehorn ‘climate change’ into this article more than once, for example.

R.

Medo October 9, 2023 4:21 PM

There is the odd belief that if we just keep adding storage and CPU cycles, that somehow intelligence will magically appear…

It won’t…

The term intelligence is, as you say, hard to define in a way that people will agree on. However, I can tell you with certainty that GPT-4 at least does not merely regurgitate things from the training data or cheaply pattern-match, it actually learned a degree of abstraction over various concepts. This is imo most easily seen with programming tasks, where it can apply changes to code it has never seen before in order to achieve a given goal, which requires at least some understanding of the semantics of program code.

There are other experiments, like the one where GPT-4 was able to generate a graph of the rooms in a simple text adventure from a log of a player navigating through the rooms, which go beyond what is implied by the term “stochastic parrot”.

Granted, I agree that its perceived intelligence rides mostly on the breadth of its training data, but it would be wrong to say that it has no ability to abstract or extrapolate, and imo these abilities are a lot stronger in GPT-4 than GPT-3.5, so it seems to me that the idea “bigger system” -> “more intelligence” has some merit.

pattiM October 9, 2023 5:51 PM

Hah! We do nothing about destroying the Holocene, and we do nothing about AI precautions. Same old human story.

Clive Robinson October 9, 2023 8:13 PM

@ Star, ALL,

Re : Artificial General Intelligence

“Human level AGI is a technological singularity that we can’t see beyond. “

I don’t happen to believe in “Artificial General Intelligence” (AGI); it’s an expression like “fairy dust” that turns up in make-believe, not in reality.

As I’ve already said several times, people cannot define intelligence, learning, reasoning or even knowledge. Worse, information has more meanings than most even realise exist.

So AGI as a statement or argument has no testable foundations, thus is not just immeasurable but unplaceable at any point, so actually meaningless. Much like the many squawks the Stochastic Parrots make that naive users of LLMs think are some kind of prophetic statement[1].

As for the “singularity”, it has more ill-defined meanings than a hundred-handed alien could shake sticks at. It’s just another way of saying “failure of reason”. Others, though, claim it’s when a machine will be unstoppable or conscious; neither bears scrutiny as a definition.

“Do you understand Moore’s Law and how that will affect an AGI?”

First off Moore’s Law is not a law but an observation about the number of elements needed in a sequential processing unit to give a certain rate of improvement over time. The simple fact is we are never going to get there in usable semiconductor materials due to power density and the speed of light.

But AGI is not constrained to sequential processing, so Moore’s observation is actually not that relevant. The type of algorithm and how independently it can work is. The two of importance are fully independent and cascade independent. This has been known about since the early days of “Digital Signal Processing” (DSP). The networks in LLMs look like they are fully dependent, but actually they are mostly not, and thus can be rearranged advantageously to some extent.

@ Matt, Star, ALL,

“The doomsayer arguments were always predicated on a fast takeoff winner take all FOOM scenario.”

I don’t like the way people loosely equate “Winner Takes It All”(WTIA) and “Fast Onset of Overwhelming Mastery”(FOOM) with each other, especially as neither is representative of the economics of markets.

“It seems pretty clear now that that assumption is wrong and we are in a slow multipolar takeoff world.”

The issue is usually that economists make gross assumptions about markets and things like “Distance Cost” and similar metrics that they sweep under the carpet of “axiomatic” to hand wave them away.

However in this case the issues are not as clear cut due to the quite deliberate nonsense being pushed out by various parties intent on profiteering on LLMs.

The profit comes from,

1, Use as a surveillance tool.
2, Use as share inflator.
3, Use as a bubble market driver.
4, Use as a secondary market driver.

I won’t go into the use of LLMs as surveillance and similar tools, as increasingly this is becoming apparent to the MSM as a side effect of the recent election propaganda in Europe.

Venture Capitalists (VCs) and certain Angels are looking to make money out of investors and speculators. Some of the investors, like Microsoft, are looking to gain “lock-out”; that is, they push for legislative and regulatory rules that in effect give them a monopoly or cartel, much as the likes of IBM used its patent portfolio as a way to keep competitors out or tax away their profits. Other investors are the more knowledgeable speculators looking to get in on the technology at an optimum investment point and also gain a seat at the top table.

However, many VCs are basically working a “Start-up Pump and Dump”. They throw money in to rapidly grow some two-man or similar startup into what looks like a hundred-or-so-person business in nice offices and the like. They then rapidly sell off the shares to people who feel they have to be in the “AI business” but don’t have the chops or ability to start their own divisions.

Part of that pump and dump is buying in lots of shiny boxes in racks with blinking lights etc. This is where the secondary market can be so profitable, and why NVIDIA has a gloss on the stock market that makes it a star turn.

NVIDIA has a lot to thank VCs for, because their cards worked nicely in crypto-coin / blockchain market bubbles and would have been the engines behind Web 3.0 / NFTs if the bottom had not dropped out of it. But as the cards are general as opposed to specific, they just as happily drop into those LLM build rigs, and also user-facing systems that use about a pint of water per user query…

Which begs the question,

“Now crypto coin mining is not profitable, will the mining rigs get repurposed to AI systems?”

[1] The outputs of LLMs are at best “style over substance” and thus frequently factually in error, even in very simple tasks that a child could accomplish with no difficulty. The outputs of LLMs are based on statistical probabilities, perturbed by bias induced by a “stochastic source”. It’s the same result you would expect from a cracked bell being stroked by a wire brush. Unless the training set of an LLM is “clean” there will be a number of unclean peaks in the output spectrum of its weighted filter network. The less clean, the worse the problem. Then there is the issue of bias: feeding LLM output back into a training set is going to bias it in various ways. As is the latch-up effect caused by the order in which you feed the input data.

Clive Robinson October 9, 2023 8:31 PM

@ Ralk,

“He managed to shoehorn ‘climate change’ into this article more than once, for example.”

Climate change is a very serious issue, and LLMs are certainly not helping in that respect. Like crypto-coin mining / blockchain rigs, the systems needed for the initial LLM models use energy and water so fast they consume more than some European countries.

Have a read of,

https://www.theregister.com/2023/09/27/datacenters_nuclear_power/

To see what could be a 1/2GW per AI system data center before the end of the decade.

ResearcherZero October 9, 2023 8:48 PM

@Ralk

Greed is quantity over quality. There is a limit to how much quantity can be extracted from a finite resource. Push past that limit and you invoke systemic collapse. Increased conflict is an indicator of approaching those limits.

Climate change is a good example of how a large company can clear fell more forest in 5 years – while employing only a small number of people – than an entire community can fell in 100 years. Where there were once 50 small companies operating in the space, there is only 1. Reduced employment follows the closure of smaller operators who cannot compete with the lobbying efforts of the much larger competitor.

The company lobbies government and pays it a royalty, and the government licenses it to take the community’s resource; then the company pulls up stakes and shifts operations once the resource is depleted. Company public relations call this process “sustainable forest management”.

The community is left with a largely depleted resource, with a significantly reduced value, which had once sustained a rich and vibrant economy. Substantial knowledge, skills and craft that developed along with the community’s once-sustainable way of life are largely lost.

The diversity of that depleted resource is permanently reduced, even after a substantial lapse of time, it will never provide the same value that it once did, nor the quality of life that it also provided to the community.

Other industries in the community are also affected…

Wet season rainfall is reduced by as much as 0.6 millimeters a month for every percentage point of forest clearing, along with greater unpredictability and variation in annual rainfall and temperature.

People living near deforested regions often report hotter and drier climates after forests are cleared, and for every percentage point of reduced rain, crop yields can fall by 0.5 percent.

‘https://www.nature.com/articles/s41586-022-05690-1

Deforestation has increased the severity of extreme heat in temperate regions of North America and Europe.

‘https://www.nature.com/articles/s41467-022-33622-0

Restoring tree cover can reverse the effects of deforestation on local temperatures. However, that effect can vary, depending on the latitude at which the deforestation occurs, among other factors.

‘https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2016JD024969

Trees planted during the 1930s Great Plains Shelterbelt initiative, which spanned from Northern Texas to the U.S.-Canada border, lowered regional temperature averages by 1.7-2.1%, reduced the number of extreme heat days by 12.9%, and increased precipitation by 4.4-8.0%. Ultimately, these changes increased corn yields by 54.3% and influenced farmer decisions about which crops to grow.

The change in climate extended to adjacent unforested land up to 200km away.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4333147

ResearcherZero October 9, 2023 9:13 PM

@Ralk

Also, hippies show up. And they are not always the most pleasant people to have a conversation with. Hence an increased level of conflict. More than the usual enjoyable amount (fisticuffs on a Friday night at the worker’s club).

lurker October 9, 2023 9:32 PM

@Medo
re “at least some understanding of the semantics of program code.”

No, it’s just molding the input to fit a format it has built from past known “good” examples.

re “betting on our future …”

You quote (who?) saying the doomsayers are “… making a misguided bet with our future” but go on to suggest the bet is to “disregard the potential dangers.”

This looks like two subjective orthogonal interpretations: misguided vs. potential danger. If humans can’t agree on the form of the question, how can a supposedly objective machine give them a meaningful answer?

ResearcherZero October 9, 2023 9:51 PM

People are disinformation’s biggest problem. Many of the misleadingly labeled videos were shared by verified users on X, who are eligible for monetization of their content.

‘https://www.poynter.org/fact-checking/2023/how-to-spot-fake-news-israel-hamas-war/

‘liar’s dividend’

Amplifying disinformation with AI tools.
https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence

“There simply is no easy fix to the problem of computational propaganda on social media.”

‘https://www.technologyreview.com/2020/01/08/130983/were-fighting-fake-news-ai-bots-by-using-more-ai-thats-a-mistake/

“lack of large numbers of annotated training instances”
https://arxiv.org/abs/2004.01732

…”An account controlled by the attackers might also reply to the initial tweet by asserting that it is true. …Over short time scales, it would be exceedingly difficult for an algorithm—or a human—to know which label to trust. Responding quickly to disinformation thus requires addressing the twin hurdles of limited data and unreliable—and in some cases, intentionally wrong—labels of that data.”

‘https://www.brookings.edu/articles/how-to-deal-with-ai-enabled-disinformation/

ResearcherZero October 9, 2023 10:09 PM

On “rationality will not save us”

  • “The fact that everybody is a liar, the fact that people are hopelessly self-deceived about their own actions and intentions doesn’t mean that there isn’t truth. Just makes it more difficult to ascertain.”

‘https://www.nytimes.com/interactive/2023/10/08/magazine/errol-morris-interview.html

Anonymous October 9, 2023 10:41 PM

Similar to @Daniel L Speyer, I found the argument against Doomers dissatisfying. It seems to boil down to “some of the believers are weird/bad people” and “they believe that the extreme risk necessitates some unpleasant actions” – but for any sufficiently popular idea, some of the followers are weird/bad people, whether the idea is true or not – and if there is an actual risk of human extinction 20-30 years down the road, wouldn’t it be appropriate to consider some drastic actions? And as for “surely there will be warning signs”, many doomers consider the largely unexpected recent progress of generative AIs to be a huge warning sign already.

Winter October 10, 2023 2:13 AM

@All

@Ralk
He managed to shoehorn ‘climate change’ into this article more than once, for example.

This is a perfect example of the disinformation comments AI can produce by the thousands for pennies.

I think Neal Stephenson’s Fall; or, Dodge in Hell gives a good description of how such a dystopia will look. [1]

I do not see an easy way out. A lot of the world’s fiction describes how many people believe what they want or fear to be true (eg, [1]) even if it destroys them.

[1] ‘https://www.theverge.com/2019/6/16/18659718/neal-stephenson-fall-or-dodge-in-hell-science-fiction-cyberpunk-thriller-book-review

Winter October 10, 2023 2:39 AM

AI increases efficiency enormously where it matters:

Deepfakes of Chinese influencers are livestreaming 24/7
With just a few minutes of sample video and $1,000, brands never have to stop selling their products.
‘https://www.technologyreview.com/2023/09/19/1079832/chinese-ecommerce-deepfakes-livestream-influencers-ai/

Now, all the human workers have to do is input basic information such as the name and price of the product being sold, proofread the generated script, and watch the digital influencer go live. A more advanced version of the technology can spot live comments and find matching answers in its database to answer in real time, so it looks as if the AI streamer is actively communicating with the audience. It can even adjust its marketing strategy based on the number of viewers, Sima says.

These livestream AI clones are trained on the common scripts and gestures seen in e-commerce videos, says Huang Wei, the director of virtual influencer livestreaming business at the Chinese AI company Xiaoice. The company has a database of nearly a hundred pre-designed movements.

Scroll down to the video lower in the page and I have no clue whether this is a video of a real person or not. Blinking, breathing, lips, as well as head, eye, and brow movements, all look perfectly in sync.

Clive Robinson October 10, 2023 2:41 AM

@ ResearcherZero, ALL,

Re : Quote from interview with “David Cornwell” (author John le Carré, of spy novels and British Intelligence reports).

“The fact that everybody is a liar, the fact that people are hopelessly self-deceived about their own actions and intentions doesn’t mean that there isn’t truth. Just makes it more difficult to ascertain.”

You can look up on this blog my past comments about how to lie without lying and how to change the truth in other peoples minds, and find I’ve said almost exactly the same.

As an investigator / interrogator you need to realise that for any event observed by N witnesses there are actually N+1 truths, where only the “+1” is what actually happened.

That is all your witnesses will not tell the truth in some way, even if they have no intention to deceive.

The reason has three basic parts,

1, All N witnesses have different vantage points, thus only see a part of the +1 truth, and ‘their minds abhor a vacuum’ so fill in blanks for a continuous narrative.

2, All N witnesses will change their story with time because ‘their minds are imperfect’ and what is recalled over and over gains a strength / bias over that which is not.

3, All N witnesses will change their story with time because ‘their minds are malleable’ and what is recalled will be changed by the questions asked of them, so it skews towards what the interrogator asks.

Knowing this you can see why interviewing a witness properly is a very delicate process, and how easily it can be changed just by asking what appears to be neutral questions.

In the US the abusive “Reid technique”[1] interview process is still used even though it is known to be a tool of coercion especially in the third stage.

In the North of Continental America especially, it’s why you should never talk to the police, as it’s a deliberately adversarial process from which you gain absolutely no advantage by saying anything.

In the UK the Reid technique is supposedly no longer used… However, its replacement “PEACE”[2] still uses the likes of repeated and leading questioning to abuse a person’s memory processes. But it’s actually a process of forcing you to talk, thanks to ex-UK PM Tony Blair and several of the current political incumbents. Because if you are interviewed by the UK Police you will first be “Cautioned”, and you’ll hear “your rights” that include a rider inserted by the 1994 Criminal Justice and Public Order Act, after the,

“You do not have to say anything”

point –where most innocent people’s brains are overloaded and they stop comprehending– you get,

“But, it may harm your defence if you do not mention when questioned something which you later rely on in court.”

In short,

“Don’t talk and have no defence; if you do talk, it will count against you.”

Various legal advisors give different advice, but it’s basically along the lines of: do not even acknowledge you understand the caution, simply say,

“I do not have a competent representative of my choice to represent and advise me present. I therefore have no comment to make at this time and this interview is over.”

It won’t stop the Police trying to get you to be “guilty of something”, as the job they are actually paid for is,

“To be seen to get convictions, not to do justice.”

All to keep their political paymasters happy.

[1] The “Reid technique” is, even when carried out correctly by neutral investigators, a coercive and abusive system that distorts an interviewee’s mind and is thus regarded by some as a form of torture technique,

https://en.m.wikipedia.org/wiki/Reid_technique

What is known is that the Reid Technique was a failure from the first time John Reid used it and resulted in a false conviction, which eventually resulted in substantial damages being paid to the innocent person. The fact the Reid technique still causes false convictions and the John E. Reid & Associates firm continues seven decades later to still pay out substantial damages upwards of several million should tell you why it should not be used…

[2] The English and Welsh police in Britain developed the so-called “PEACE method of interrogation” back in the late 1980s and started using it in the early 1990s. This was done because of the issues to do with the Reid and similar “rent-a-thug” interrogation techniques that had led to significant numbers of false confessions, mental breakdowns and suicides, and ensuing public outcry. However, even when carried out correctly, PEACE still distorts a witness’s mental view of the truth. As such it is still not a reliable system in any significant way, even when carried out by properly trained interrogators. Though some involved with its training like to think it is better,

https://www.fis-international.com/assets/Uploads/resources/Schollum-PEACE.pdf

Clive Robinson October 10, 2023 3:31 AM

@ Bruce, ALL,

There has been some unwarranted disparaging of the “closer to reality” comments on this thread…

Perhaps the doomsayers and evangelists should read what has to be the AGI / Singularity “Shill of the month” from the large IT-Tech investor / bubble over-inflator Masayoshi Son, CEO of SoftBank,

“He declared that the intelligence delta between those who will use AGI and those who won’t is the same as the difference in intelligence between monkeys and humans.”

But read the whole article because two things will pop out at you,

1, The potential environmental disaster AI can cause.

2, SoftBank via ARM is a secondary-market wannabe mass profiteer that is going to sell you a solution…

https://www.theregister.com/2023/10/09/softbank_ceo_masayoshi_son_ai_vision/

Anonymous October 10, 2023 8:26 AM

Artificial Intelligence is …artificial; however the reason I accept the simulation is because I know it’s a simulation but maybe the machine never needed our permission.

lurker October 10, 2023 12:52 PM

@Clive Robinson, ALL

From the Softbank Chief Wiz:

“Nobody but me believes AGI will be a reality in ten years. I am the first one that clings strongly to this belief. Whether it’s right or wrong, I believe it,” admitted Son.

And there’s the current difference between man and machine: man has belief, faith. Will AGI as a mechanical device, believe something that is provably wrong? When that happens it will be too late to be afraid.

Clive Robinson October 10, 2023 3:12 PM

@ lurker,

Re : Does AI dream of electric sheep?

“Will AGI as a mechanical device, believe something that is provably wrong?”

If you think about a “mechanical device” being “deterministic”, it will not be capable of independent action, thus it cannot “believe” or be “responsible” or “liable”, as these only come about as a consequence of “free will”.

But other than the ability to behave contrary to the expectation of an observer, what is “free will”?

You quickly get into that “hidden variables” argument, and like counting the ever-larger turtles as you descend the infinite stack, you don’t really get anywhere, as there are always at least as many turtles to go as there have already been.

Tariq October 10, 2023 10:45 PM

Reading this post and the comments has been a trip, but all I will note about this matter is that there was mention of “stochastic parrots” but literally barely any mention of Emily Bender and Timnit Gebru, the authors of the paper (Gebru gets a mention, but only in the context of being fired from Google). Which is a shame, since Bender riffs off Schneier’s “Refuse to be Terrorized” in her own way.

I find it interesting, because the stochastic parrot paper makes a claim that barely gets a look-in in the discussion here, in both the post and the comments: that… you know… what constitutes “artificial intelligence” might be far less useful than both Doomers and Hypers believe it is, especially in light of the environmental impacts these models have.

Honestly, it’s probably a good idea, at least for this discussion, to abandon the term “artificial intelligence” and go for something like Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI), because the impacts of the systems covered by these terms are already here. Workers are already being subjected to terrible conditions. Livelihoods are threatened. Privacy is already being violated. Climate is already disrupted. Marginalized people are already targeted. It’s probably best to separate discussions around Bostrom’s fantasy of 10⁵⁸ far future digital coomers from the impacts that large language models and deep neural networks already have to people’s lives today.

In any case, another thing that probably should get more discussion is how the science of “artificial intelligence”… isn’t? For example, that famous “sparks of AGI” paper not only wasn’t a peer-reviewed paper, but also included a rather… unfortunate definition of intelligence.

Many of the papers being lauded and discussed at length tend to come out in preprint servers with insufficient or no peer review, and large, influential data models from OpenAI, Google and Microsoft are all hidden behind commercial interests and intellectual property. Sources are obscured, or come with dodgy provenance, and vetted by underpaid, traumatized contractors. And the field has always had problems with replication.

If the field of artificial intelligence is as consequential as proponents say, the research must be made public. Forget “public option” — let that research run in the open, to be scrutinized and criticized not only on source and methodology, but also in methods of evaluation and balanced between its utility and impact. Then we can start having discussions to how world-changing and disruptive the technology is, rather than listening to hype men and criti-hypers.

Anonymous October 11, 2023 4:55 PM

A representative is an entity that would act as you would.
I am not sure a machine can act.

Clive Robinson October 11, 2023 10:03 PM

@ Anonymous,

Re : A representatives duties.

“A representative is an entity that would act as you would.”

Err no, in point of fact,

“they act in your interest”

Though you might not think so under certain circumstances.

For instance, they cannot “act as you would” because they cannot “know your mind” or “what is within it”. But they also stand in a position you do not, so by definition have a different point of view to you.

But you also need to remember that “Professionals” work under, or owe to others, a “duty of care”.

A Church Minister does not work for you, even though they may have a duty towards you and can act as your representative within certain limitations.

A registered medical/health practitioner is in the slightly different position of working for you but under the requirements of their professional organisation.

A legal representative is often an “Officer of the Court”, and they are directly responsible to the court, not you, even though you may be paying to employ them.

This can make life interesting for those who are unaware that you should also not talk to your legal representative directly.

That is, if you tell them you have broken the law, then they are professionally bound to inform the court of that. However, if you “ask a question” that might or might not be hypothetical, then you are asking for information that you should then receive. In part it goes back to the notion of a “guilty mind” or “premeditation” as one of the three parts of a crime,

1, The act is legislated against.
2, The person has intent of a guilty mind.
3, They carry out the act without a lawful defence.

So killing someone in various ways is legislated against in most places. The legislation usually requires there to be an intentional or knowing act of intent or premeditation proved, rather than an accident. And importantly, carried out without having a lawful defence such as defending yourself from attack.

Obviously the burden of proof for a guilty mind depends on a number of factors. If you blurt out “It’s me wot dun it, how do I get away with it” to your representative, you are effectively confessing, and they would be conspiring with you, thus complicit, if they proceeded down that path. Further, if you tell them of evidence against you, they are obliged in many places to reveal it to the lawful authorities. The flip side is they can only work with what you give them, so hypothetical or imprecise information may need to be given instead of factual information.

Each jurisdiction has “standing rules of evidence” which both sides of a case are required to follow. Keeping away from “red lines” in this area needs careful thinking about.

Oh, and remember that in most places these days, even though your legal representative might have legal protection via legislation, it’s probably advisable to assume all “privileged communications” are being unlawfully monitored by third parties for profit etc, so,

“Should remain behind the hedge of your teeth”.

Winter October 12, 2023 6:48 AM

AI is not (yet) needed in disinformation. Even without AI, Twitter/X has become useless for information when it counts.

Israel-Hamas war has X and its users swimming in sea of disinformation
Flooded with old videos and video game footage, X is a bad place to stay informed.

Rather than being shown verified and fact-checked information, X users were presented with video game footage passed off as footage of a Hamas attack and images of firework celebrations in Algeria presented as Israeli strikes on Hamas. There were faked pictures of soccer superstar Ronaldo holding the Palestinian flag, while a 3-year-old video from the Syrian civil war was repurposed to look like it was taken this weekend.

Many of these videos and images racked up hundreds of thousands of views and engagements. While some later featured a note from X’s decimated community fact-checking system, many more remained untouched. And as Elon Musk has repeatedly done in recent incidents, the platform’s CEO made the situation much worse.

“For following the war in real-time, @WarMonitors & @sentdefender are good,” Musk wrote in a post to his 150 million followers on Sunday morning. Both the accounts Musk referenced are well-known spreaders of disinformation. For example, both accounts spread the lie that there had been an explosion near the White House in May, a story that made the US stock market briefly plummet before it was debunked.

denton scratch October 12, 2023 8:53 AM

US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations

I thought the article was about Large Language Models (which exist). Which US megacorps possess general-purpose AI? The ‘G’ in GPT doesn’t stand for “General-purpose AI”.

Clive Robinson October 12, 2023 9:12 PM

@ Bruce, ALL,

Hands up if you did not see this coming…

It would appear that the “Venture Capitalists” (VCs) have been steadily pulling in their horns, with spending at a new all-time low…

The timing appears to be in line with the general loss of value in social media and Internet marketing orgs like Twitter/X, Facebook/Meta, and even Google/Alphabet.

https://www.theregister.com/2023/10/12/us_venture_capitalist_spending_continues/

But more importantly, the “Start Up Game” has paled, which is where the fast churn was a year or two back.

Basically, since the shine started to come off the crypto-coin / blockchain “turd ball”, the VCs were still trying to pass off their pump-and-dump style schemes with start-ups.

The VCs tried moving on to pumping Web3.0 and NFT start-ups, but these were too much like the failed crypto-coin / blockchain bubble to ever gather enough hot air to fill a whoopee-cushion. Still hot enough to burn the wrong fingers, though, with Nvidia taking quite a positive slice off the pump-up investments to really add to the VCs’ pain.

So some VCs went a different way, and we’ve had the AI Bubble get a re-heat, with big globs of cash going out, spent on hardware, internet connections, power, and water bills as LLMs stoked up the VC boilers. But where was the return?

The AI Bubble appears to have run into problems, and the losses keep going up and up on the “drug-dealer style business models”, as the LLM defects started coming through almost faster than Tay got radicalised by trolls back in March 2016. You would have thought Microsoft would have learned…

But it gets worse…

“[T]he number of VC investments is still down based on historical levels, the report found. The only reason total VC spend in Q3 2023 was on a par with Q2 2023 – slightly lower, technically – was because of Amazon’s $4 billion investment in OpenAI rival Anthropic late last month. “

So the slope is retrograde, but why? The article notes the report says,

“Large deals have become increasingly uncommon, in part due to the pullback from nontraditional investors and to the ability of strong companies that are well positioned from a cash standpoint to stay out of the market while it is in their best interest.”

Then it goes on with a longish list of VC woes.

One of which could be described as “The Feds getting in on it”, with interest rates being pushed up to control inflation, and with public money also giving better value to start-ups and the like:

“Things like the Infrastructure Investment and Jobs Act, CHIPS and Science Act and the Inflation Reduction Act have dumped billions into public programs to foster tech innovation in the US.”

The reality, though, is that in the totality of things a few billion here or there is not going to be that much, and that Fed cash has to come from somewhere…

Matt October 13, 2023 2:26 PM

“that many now think it will happen within 10 years”

Yeah, AI researchers have been saying since the 1960s that strong AI will happen within 10 years. When I took an AI class in college in 1998, the professor joked that strong AI was 10 years away, and had been for the last 40 years.

I know things change and that some day we might actually get there, but it still amuses me every time someone brings up the “10 years” canard.

Ivo October 15, 2023 7:00 AM

the doomsayer faction focuses on the far-off future

That is not so: they focus on “shortly after ASI comes into existence”. Which many of them now believe will be sooner rather than later, well within this century, if not this decade.

@Matt is wrong in that 1) none of their arguments depends on ‘foom’ and 2) ‘foom’ has not been disproved. The amount of intelligence and agency at which ‘foom’ is expected by some hasn’t been achieved yet.

In the end it’s really simple: people are attempting to create something more capable than any human at solving any problem in any domain. Simultaneously, people are attempting to give such things large qualitative and quantitative executive capabilities to enact solutions in the real world. This without being able to predict which subproblems this thing will identify to solve along the way, but knowing that for even moderately ambitious goals there are subproblems like “first become more powerful” and “ensure your continued existence” that humans would solve first. So the obvious thing that will happen is that it will make sure no humans can interfere with it. And of course there will be someone who gives the thing a hugely ambitious nefarious goal. And we will have no clue what it will do then and what ‘morals’ it will have.

So we should hope we cannot actually create such a thing. But the doomers believe we can. Do you?

Clive Robinson October 15, 2023 11:58 AM

@ Ivo,

“So we should hope we cannot actually create such a thing. But the doomers believe we can.”

We have got to the point where we can now “engineer with individual atoms”, so, based on the usual trajectory of such things, we should have “bricks and mortar” style mastery over the physical part of the universe from that point upwards. So it would appear to quite a few that in short order we will be able to make anything we choose to…

Yet we have not mastered fusion, and our abilities with regard to fission are at best extremely crude. So at the very least there is much that we know of for which we do not yet have anything close to the capabilities required to make sensible use, let alone mastery.

But what of below the scale of atoms?

Then again, we have little knowledge of chemistry; we are hoping that AI will help us with that…

Which almost certainly means we have rather more than one of those “Chicken and egg” scenarios that have a habit of becoming both turtles all the way down and fleas all the way up.

But also we have no real clue as to “what life is” beyond the notions that souped-up “Conway Gliders” give us: simple automata that appear to just spring out of randomness, and are then capable of not just directed movement and carrying information, but also constructive collisions creating more complex Gliders and other automata.
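To make that concrete (a minimal sketch added for illustration, not part of the original comment), here is Conway’s Game of Life reduced to a handful of lines, enough to show that the standard five-cell glider translates itself across the grid under purely local rules:

```python
# Minimal Game of Life sketch: a set of live (x, y) cells evolves under
# Conway's rules; the five-cell "glider" reappears shifted by (1, 1)
# every four generations -- directed movement from purely local rules.
from collections import Counter

def step(live):
    """Advance one generation of Conway's Game of Life."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Everything described above (movement, information carried in the pattern, richer structures from collisions) comes from nothing more than that one neighbour-counting rule.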

But as for “learning”, there we have even less of a clue; it boils down to creating an environment rich for automata and watching them develop, to the point where they effectively start exploring the environment and evolutionary principles take root.

So when you ask,

“Do you?”

The answer is,

“Very probably not within the lifetime of your grandchildren, or thereafter.”[1]

Whether what we produce will be “good or bad” is actually irrelevant; as far as I can tell we’ve never yet come up with something useful that cannot as easily be used for both.

In part because in reality there is not really anything such as “good or bad”, just what an observer sees and concludes about the actions of a “directing mind”. Those conclusions are more often a reflection of the mores of society[2] than anything else. Which in turn appear to arise almost like a directed drunkard’s walk, as you would expect from individual and group dynamics directed by evolution.

Perhaps rather than the nebulous and ill-defined notion of “intelligence”, we should use “curiosity about the environment” as a measure of progress, not just for “nature” but for “machines” as well.

From which we can see that AI, if it ever comes, is possibly millennia away, because we’ve not yet found a way to give computers the sensing of even basic self-replicating automata that have arisen at random…

[1] I’ve observed in the past from time to time that,

“Man created his deities as a reflection of himself.”

By a sort of convention, those deities we see as good become gods and those we see as bad become devils, etc. Generally people like to think –when they do– that we strive to be more like the former than the latter. My viewpoint, based on having looked at religion and its trappings for getting on for more than a third of a century, is that aside from it being abused as a control mechanism over others –the king game– there is something in the human Id that is not just “eternally curious”, it actually “strives”, and thus needs a goal to move towards. So we create our deities such that they are just out of reach at any given level of understanding. So as our understanding improves “we shift the goal posts”, or more correctly our deities… The side effect of this is that what we think of as life, and intelligence, will likewise be moved. In effect, as Charles Dodgson wrote, we have a “Red Queen’s Race” where we have to run as hard as we can just to stay where we are with respect to the other participants (i.e. velocity, not distance).

[2] https://simplysociology.com/mores-sociology-definition-examples.html

ResearcherZero October 16, 2023 3:36 AM

“This would be—I think this is without exaggeration—the most important Supreme Court case ever when it comes to the internet.”

‘https://www.govtech.com/policy/upcoming-supreme-court-cases-could-redefine-internet

“Now conversation isn’t strictly necessary, only watching and listening. …Now it’s clear that the tech companies have little interest in directing users to material outside of their feeds. According to Axios, the top news and media sites have seen “organic referrals” from social media drop by more than half over the past three years.”
https://www.newyorker.com/culture/infinite-scroll/why-the-internet-isnt-fun-anymore

‘https://www.technologyreview.com/2023/10/09/1081215/how-to-fight-for-internet-freedom/

Toward Better Automated Content Moderation in Low-Resource Languages

‘https://www.tsjournal.org/index.php/jots/article/view/150

ResearcherZero October 16, 2023 3:58 AM

@Clive Robinson, @Ivo

Some see even concepts such as ‘It Takes a Village’ as a plot. Unfortunately for those chaps, sometimes we do have to work together, or at least with others, to accomplish practical outcomes. (The electrician will leave if I kick them between the legs.)…

“Proponents say that CoCounsel has the potential to streamline the process of identifying strong evidence of innocence, allowing the organization to focus its limited resources on investigation and litigation.”

‘https://lawjournalforsocialjustice.com/2023/10/03/the-artificial-intelligence-of-criminal-justice/

“all our legal AI solutions, must be able to correctly process large, complex collections of legal documents—which could be thousands of pages long, contain images, or be poorly scanned. Missing even a single word could mean the difference between winning or losing a case.”

‘https://www.jdsupra.com/legalnews/making-the-most-of-today-s-ai-takes-a-9285492/

And some regulation will be needed…

“Protecting freedom of expression will require strong legal and regulatory safeguards for digital communications and access to information.”

AI has allowed governments to enhance and refine their online censorship.

‘https://freedomhouse.org/explore-the-map?type=fotn&year=2023&mapview=trend

“The report found that many countries—including Myanmar, the Philippines, Costa Rica—have drastically restricted online freedoms this year. China has the lowest levels of internet freedom for the ninth consecutive year. In 55 of the 70 countries assessed for the report, people faced legal repercussions for online expression—a record high. And in 41 of the countries, people were assaulted or killed as a result of what they said online.”

Freedom House found that, in at least 22 countries, social media companies were required to use automated systems to moderate content on their services, often to comply with draconian laws. And in at least 16 countries it documented the distorting use of AI tools that can churn out images, text, or audio.

‘https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence

…we will have to agree on what that regulation is. Elected representatives will also have to stop kicking each other in the nuts eventually if they want to achieve anything. And they will also have to establish a greater level of trust amongst all members of the public, not just their targeted voter base.

ResearcherZero October 16, 2023 5:40 AM

“Existential fear appears to be at the heart of what drives polarization.”

“One reason we tend to become fixated and polarized is because of individual and collective trauma that associates with a profound sense of insignificance.”

And if existential fear is indeed a root of polarization, our sometimes-warped view of the other side can perpetuate it.

“Most people are not on the extremes of any of these issues, but most of what we hear is from people who are more on the extremes.”

Psychological science suggests that it is both possible and imperative for members of our society to find common ground.
https://www.apa.org/monitor/2021/01/healing-political-divide

Undue concentration of ownership and control of both social and traditional media facilitates the dissemination of misinformation. Thus, policymakers are advised to support a diverse media landscape and adequately fund independent public broadcasters. Perhaps the most important approach to slowing the spread of misinformation is substantial investment in education, particularly to build information literacy skills in schools and beyond.

Another tool in the policymaker’s arsenal is interventions targeted more directly at behaviour, such as nudging policies and public pledges to honour the truth (also known as self-nudging) for policymakers and consumers alike.

…Knowledge of the complex interplay between cognitive and social dynamics is still limited, as is insight into the role of emotion.
https://www.nature.com/articles/s44159-021-00006-y

At the heart of political self-deception is the application of the notion of individual self-deception to the political sphere, to generate new insights into the behaviour of political leaders and their relationship with democratic citizens.

‘https://link.springer.com/article/10.1007/s10982-021-09418-6

How much do we really know about the power of emotions in politics? Dr Hutchison said for decades the literature has determined that emotions are irrational.

“As a consequence, research across politics and international relations has neglected the important role of emotions in political decision-making processes.”

https://hass.uq.edu.au/article/2021/09/uncovering-role-emotions-politics

Clive Robinson November 22, 2023 6:31 PM

@ Anonymous, ALL,

Re : Agency depends on both sensing and ability to act.

“If it’s not reacting to change, it’s not sentient.”

It does not have to be sentient to react to change; a simple feedback loop or adaptive filter does that, as does nearly every biological system from a single cell upwards.
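A minimal sketch (added for illustration, not from the original comment) of just how little machinery “reacting to change” requires: a bang-bang thermostat that senses its environment with one comparison and acts with one actuator, with no sentience anywhere in sight.

```python
# Bang-bang thermostat: a feedback loop with no "mind", only sense and act.
def run(setpoint=20.0, steps=200):
    room, outside = 15.0, 10.0
    for t in range(steps):
        if t == 100:
            outside = -5.0               # the environment changes mid-run
        heating = room < setpoint        # sense: one comparison
        room += 2.0 if heating else 0.0  # act: heater on or off
        room += 0.05 * (outside - room)  # toy heat loss to the outside
    return room

print(round(run(), 1))  # hovers near the 20.0 setpoint despite the cold snap
```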

But the other side of the coin is “sentience” can not react to change if it,

1, Does not have sensing / continuous input.
2, Does not have the agency to make a response to change.

It has been shown that people who have suffered a medical insult of some form can have expected brainwaves yet cannot respond to stimulus (locked-in syndrome, etc.).

Look at it this way, if I block the nerves in your elbow with an appropriate chemical I can,

1, Stop you feeling touch, hot/cold, pain etc.
2, Stop you being able to use the muscles in your forearm etc.

When the chemical breaks down or is transported away, normal functions return (you can also get the same effect with a tourniquet that stops the blood flow, which allows minor surgery to be quickly performed, such as removing splinters of glass from a hand, cleaning up, and closing wounds).

Either way, if you cannot hear or see what I’m doing to your hand, you will not react if I poke you with a pin, brush you with a feather, jolt you with DC current, or apply a hot piece of metal. Such things are routinely done to the feet of those with diabetes or other diseases with the potential for neuropathy, to see if it is present or how far it has progressed.

For these and other reasons, testing for sentience is neither reliable nor, in fact, practical.
