Upcoming Book on AI and Democracy

If you’ve been reading my blog, you’ve noticed that I have written a lot about AI and democracy, mostly with my co-author Nathan Sanders. I am pleased to announce that we’re writing a book on the topic.

This isn’t a book about deep fakes or misinformation. This is a book about what happens when AI writes laws, adjudicates disputes, audits bureaucratic actions, assists in political strategy, and advises citizens on what candidates and issues to support. It’s a book that tries to imagine what an AI-assisted democratic system might look like, and then how best to ensure that we make use of the good parts while avoiding the bad parts.

This is what I talked about in my RSA Conference speech last month, which you can both watch and read. (You can also read earlier attempts at this idea.)

The book will be published by MIT Press sometime in fall 2025, with an open-access digital version available a year after that. (It really can’t be published earlier. Nothing published this year will rise above the noise of the US presidential election, and anything published next spring will have to go to press without knowing the results of that election.)

Right now, the organization of the book is in six parts:

AI-Assisted Politicians
AI-Assisted Legislators
The AI-Assisted Administration
The AI-Assisted Legal System
AI-Assisted Citizens
Getting the Future We Want

It’s too early to share a more detailed table of contents, but I would like help thinking about titles. Below is my current list of brainstorming ideas, both titles and subtitles. Please mix and match, or suggest your own in the comments. No idea is too far afield, because anything can spark more ideas.

Titles:

AI and Democracy
Democracy with AI
Democracy after AI
Democratia ex Machina
Democracy ex Machina
E Pluribus, Machina
Democracy and the Machines
Democracy with Machines
Building Democracy with Machines
Democracy in the Loop
We the People + AI
Artificial Democracy
AI Enhanced Democracy
The State of AI
Citizen AI

Trusting the Bots
Trusting the Computer
Trusting the Machine

The End of the Beginning
Sharing Power
Better Run
Speed, Scale, Scope, and Sophistication
The New Model of Governance
Model Citizen
Artificial Individualism

Subtitles:

How AI Upsets the Power Balances of Democracy
Twenty (or So) Ways AI will Change Democracy
Reimagining Democracy for the Age of AI
Who Wins and Loses
How Democracy Thrives in an AI-Enhanced World
Ensuring that AI Enhances Democracy and Doesn’t Destroy It
How AI Will Change Politics, Legislating, Bureaucracy, Courtrooms, and Citizens
AI’s Transformation of Government, Citizenship, and Everything In-Between
Remaking Democracy, from Voting to Legislating to Waiting in Line
How to Make Democracy Work for People in an AI Future
How AI Will Totally Reshape Democracies and Democratic Institutions
Who Wins and Loses when AI Governs
How to Win and Not Lose With AI as a Partner
AI’s Transformation of Democracy, for Better and for Worse
How AI Can Improve Society and Not Destroy It
How AI Can Improve Society and Not Subvert It
Of the People, for the People, with a Whole lot of AI
How AI Will Reshape Democracy
How the AI Revolution Will Reshape Democracy

Combinations:

Imagining a Thriving Democracy in the Age of AI: How Technology Enhances Democratic Ideals and Nurtures a Society that Serves its People

Making Model Citizens: How to Put AI to Use to Help Democracy
Modeling Citizenship: Who Wins and Who Loses when AI Transforms Democracy
A Model for Government: Democracy with AI, and How to Make it Work for Us

AI of, By, and for the People: How Artificial Intelligence will reshape Democracy
The (AI) Political Revolution: Speed, Scale, Scope, Sophistication, and our Democracy
Speed, Scale, Scope, Sophistication: The AI Democratic Revolution
The Artificial Political Revolution: X Ways AI will Change Democracy…Forever

EDITED TO ADD (7/10): More options:

The Silicon Realignment: The Future of Political Power in a Digital World
Political Machines
EveryTHING is political

Posted on July 2, 2024 at 2:11 PM • 133 Comments

Comments

HPotter July 2, 2024 4:06 PM

Anyone who knows what the book/author is about will obtain the book regardless of the title.
To have wider appeal (sales), I’d keep it really simple – two large letters “AI” taking up most of the cover, with a very small subtitle of your choice, chosen to be non-technical and not off-putting.

Sten July 2, 2024 5:15 PM

AI is a hoax. Respected cryptographer has turned into a snake oil man. Go and book your place in a nuthouse you lunatic.

jelo 117 July 2, 2024 6:18 PM

Extraordinary Popular Delusions: AI, the Madness of Crowds

Bubbles, Manias, Alchemists

AIe ! July 2, 2024 7:12 PM

— The Definitive Coming of AI-ruled Open Sky Digital Concentration Camp For [Almost] All
Or How Not To Try And Negotiate With a Machine
— AI : The End Of A Democracy Which Did Not Really Exist Anyway
— AI : The Future Your Masters Want For You

Bruce Schneier July 2, 2024 8:36 PM

@Sten:

“AI is a hoax. Respected cryptographer has turned into a snake oil man. Go and book your place in a nuthouse you lunatic.”

There’s certainly a lot of hype around it, but I wouldn’t call it a hoax. Crop circles are a hoax. Paul McCartney dying was a hoax. Aside: Wikipedia’s list of hoaxes is surprisingly interesting: https://en.wikipedia.org/wiki/List_of_hoaxes

Bruce Schneier July 2, 2024 8:38 PM

@jelo 117:

“Extraordinary Popular Delusions: AI, the Madness of Crowds”

While I agree that that book needs to be written, this isn’t that book.

And here I am going to plug this really good Substack: https://www.aisnakeoil.com/

Abhi July 3, 2024 12:12 AM

I know it’s copyrighted (maybe) but

Ghost in the Machine
or
AI – The Ghost in the Machine

Luke July 3, 2024 12:18 AM

Artificial Intelligentsia: How Democracy Evolves in the Age of AI, and How We Will (and Should) Adapt

Daniel Popescu July 3, 2024 2:52 AM

Quite good essays on AI and politics, Bruce; the book itself will definitely find a spot in my bookcase. Now, as for the titles / sub-titles / chapters, there are some very nice suggestions above (although some of my peers seem to be a bit too passionate about them :)): maybe try something around the origins of the word “robot” (I think it comes from “work” in Czech) and “demos” (you know about this one). Look at Mr. Asimov’s work(!) for some more inspiration, and I bet that Clive will enchant us again on this occasion :). Thanks.

Robin July 3, 2024 3:26 AM

Looks interesting. Hard to make useful suggestions without knowing the target readership.

I would chuck in a few negatives though:

  • If the target is the general public, definitely avoid any Latin.
  • “Democracy after AI” doesn’t feel good because it implies AI will stop developing.
  • “Democracy” needs to be in the title because otherwise the book will be lost in the noise of other AI book titles.

John Beattie (jkb) July 3, 2024 3:41 AM

AI: Social control by the machine

or

AI: by the machine, of the machine, for the machine

Winter July 3, 2024 3:56 AM

I found the angle of Yuval Noah Harari interesting (in Homo Deus).

What if there is: “An algorithm that knows you better than you know yourself”?
‘https://youtube.com/watch?v=vC4FtajN_QY

What if there are algorithms that provably know what you should decide better than you know it yourself? That know which politicians or parties would make you happier and better off? All of us better off?

I know many people are in denial. We see it here too.

It is like those people from an old SF story, who are shown a rat that can play chess and dismiss it with “it is not that good”.[1] The point is not that a rat should win chess tournaments, but that it can play at all.

Meanwhile, generally available computer algorithms win every chess and go tournament. And algorithms can write poems and stories and create artwork and music. Not prize-winning (not always), but they do it believably.

As has been said many times, algorithms only get better.

[1] In 1953, Charles Harness (1915-2005) wrote The Chessplayers. It is a short story of a chess club that runs across a refugee professor who claims he has a chess-playing rat that he trained himself. The story appeared in Fantasy & Science Fiction, October 1953.
‘http://billwall.phpwebhosting.com/articles/Science%20Fiction.htm

Clive Robinson July 3, 2024 5:26 AM

@ Bruce,

Re : Title and sub

I’d suggest,

“AI and the Death of Democracy : How logic will eliminate reason.”

Yes it sounds a little on the “Machines are coming to get us” side of things.

But realistically, seeing how politics, and thus legislation, is becoming far more authoritarian, those behind it are becoming more and more like modern-day “Robber Barons”, using guard labour and a look-the-other-way, politically selected judiciary that knows which way to find for their minority view (see the recent SCOTUS ruling).

You know that current AI, which has neither reason nor the ability to acquire it, will be used and abused to act as the new guard-labour judiciary.

Current AI builds statistical models of deep patterns in data by simple logic applied to insane levels of depth. Why insane? Because we know that changing the order in which the data is supplied changes the results. Thus the system can be gamed to give not “reasoned results” but “desired results”.
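As a minimal sketch of that order sensitivity, assuming nothing more than a toy online perceptron (my illustration, not a description of any deployed system): the same four training points, presented in two different orders, leave the learner with different weights, so its downstream decisions can differ.

    # Toy demonstration that presentation order changes an online learner's result.
    # Illustrative sketch only; production systems are vastly larger, but
    # stochastic/online updates share this order dependence.

    data = [((1.0, 1.0), 1), ((2.0, 0.5), 1), ((-1.0, -1.0), -1), ((-0.5, -2.0), -1)]

    def train(examples, lr=0.5):
        w = [0.0, 0.0]
        b = 0.0
        for (x1, x2), label in examples:      # one pass, updating after each example
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else -1
            if pred != label:                 # mistake-driven perceptron update
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
        return w, b

    print(train(data))        # ([0.5, 0.5], 0.5) with this order
    print(train(data[::-1]))  # ([1.0, 0.25], 0.5) with the reverse order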

Yes it will cost billions in run time to get the “desired models” but think of what that will give those who have the resources…

We know from history that this is more or less how it’s going to play out, because for some their ultimate desire is “control of others” and they see everything else as just a tool to reach their desired goal.

Society is about checks and balances. The checks limit excesses by the few over the many; the balances are formed by harms to the majority.

History shows the reason why,

“Feel my pain”

is so important to society: it gives reason over logic.

Ask yourself what happens when a machine that does not and cannot have a perception of pain “by design” is given data in ways that evoke fake patterns that get brought to the fore?

But also ask another question: why have the statistical predictions of election outcomes become increasingly divergent from what actually happens?

Hence the invention of the notion of “The Shy Voter” as an excuse for the errors: supposedly so many voters are ashamed of the way they vote that they lie en masse to those polling them.

It’s more likely that the error is somewhere else downstream of the polling; that is, the statistical models being built are wrong.

Pause to think what that means for the building of current AI “Deep Neural Networks” (DNNs)…

At the end of the day, human and thus societal reason is based on perception of the environment and self. Current AI has no such perception, or even the agency to acquire it. Thus it will fall back on its internal logic, which is at best unstable and prone to chaotic divergence from the reality of human existence.

We already know this, but those who have so much vested in current AI are following two basic short-term destructive syndrome paths:

1, Not Invented Here (NIH).
2, Don’t Kill The Golden Goose.

The “short term personal gain” they see is blinding them to the “long term societal harm” that their get-rich-quick schemes will cause. As I’ve pointed out before, there is a balance:

“Individual Rights v. Societal Responsibility”

Currently AI is well over to the “build an investment bubble and get rich” side, where it’s all “me Me ME” thinking. We can already see the harms to society via the harms to the environment.

A fun thought to consider,

Some have faith in “The Law of Code”; thus we have “smart contracts”, and people putting everything of stored value they have into them. We are seeing others doing “rug pulls” and other attacks, stealing that stored value. Now imagine you get an AI to the point where it can find all those “code defects” and is driven by rules to find and exploit them. How long do you think it would be before the AI had all the stored value?

Not because it has any use for the stored value (though it might). But because it’s been designed to “find and acquire”.

So: stored-value loss by “Logic over Reason”, giving rise to loss of choice in society, which is a form of “Death of Democracy”.

Bob Paddock July 3, 2024 8:19 AM

The question is how is “Intelligence” defined?

LLMs are not intelligent in the sense of biological-like intelligence. Clive already gave a good description of that in one of the other threads.

I’ve yet to see anything in any AI to date, in the current AI hype cycle, that matches the intelligence of the simplest of biological creatures.

Wikipedia has its own significant biases. One well-known person corrected his own entry, only to have the edit undone and was told he was wrong.

Bob Paddock July 3, 2024 8:24 AM

Apologies: it was Winter quoting Clive that I was referring to:

“There is nothing “magic or intelligent” about them [LLMs], they are simply based on statistically weighted spectrums that are multidimensional and applied filter masks. …”

Clive Robinson July 3, 2024 8:54 AM

@ Bruce, Sten,

Re : Hoaxes that passed sniff tests

“Aside: Wikipedia’s list of hoaxes is surprisingly interesting”

Yup, it has three of note:

1, Dihydrogen monoxide
2, Grunge speak
3, ID Sniper rifle

The first is “technically” not a hoax, as the molecule certainly exists, and the name does follow the naming conventions.

It’s kind of been adopted in a tongue-in-cheek way for various reasons. Which brings us on to the second: yes, it was a joke/hoax, but for a while it actually spawned a slang. What that said about those who formed the “in-group” clique is something I’ll leave to their psychoanalysts and anthropologists.

But the third is one I’m sure many old-timers from this blog will remember, as it was featured,

https://www.schneier.com/blog/archives/2005/02/implanting_chip.html

And not unexpectedly got a “zombie angle” to it.

Also, the fact that something is a hoax today does not mean it will not be true tomorrow or at some point in the future (e.g. Paul McCartney dying).

Mauro Bregolin July 3, 2024 10:54 AM

AI across the regime spectrum – Full Democracies, Flawed Democracies, Hybrid Regimes, Authoritarian Regimes (title may vary according to terminology used to describe more or less “democratic” regimes)

T0f July 3, 2024 11:09 AM

Et tu, AI? How AI Will Reshape Democracy
A flicker of AI that challenges the certainty of democracy.
Never send a machine to do a human’s job.

Randy July 3, 2024 11:32 AM

I like:

Democracy after AI: AI’s Transformation of Democracy, for Better and for Worse

As @HPotter states, you’d want a cover design with “Democracy after” in smaller print above a big AI.

If you’re not discussing deepfakes and other misuse of AI, a huge topic where others are writing, then you want to be clear that the book is about Democracy. I like your list of use cases, but I see a big difference between “AI-assisted administration” and “AI-assisted agencies”. The administration term seems more focused on the White House, and “What happens when the FDA starts to use AI to review studies and recommend actions?” is a much different use case.

cybershow July 3, 2024 2:28 PM

I loved this book outline, but thought it would work equally well, or better, with the word “AI” replaced with “competent leadership”. That would be a political blockbuster.

I’ve been around and seen some humans. There are about 8 billion of us! Some humans are pretty awesome. Incredible feats of athletics, memory, cognition, stamina… Humans really rock, and the best of us are just awe-inspiring. But I’m confused, because the current candidates for leadership are some of the poorest human specimens capable of standing upright.

As a fan of Postman’s laws of technology, this doesn’t seem to me like a problem caused by a lack of “AI”, or one that “AI” solves.

Surely there are easier things we can do to improve politics?

cybershow July 3, 2024 3:11 PM

@Clive Robinson

the fact that something is a hoax today does not mean it will not be true tomorrow

I wrote about Chindogu a while back, and I think it’s very relevant to AI.

Selfie-sticks were a hoax, and then a kinda ironic joke for a few people who actually used them, and later became normalised mass-produced items. What changed was the size of the camera and hence people’s willingness to let utility overcome embarrassment.

Aaron July 3, 2024 3:37 PM

I’m going to go counter narrative. AI should be banned from all governmental entities, positions, offices, etc. It’s “We the People” not We the People + AI.

We must learn to govern ourselves, not believe that some sort of benevolent AI can do a better job.

Z.Lozinski July 3, 2024 4:42 PM

A welcome contribution. I don’t think we collectively paid enough attention to Larry Lessig’s ideas in “Code is Law”, and what we are seeing with AI (well, LLMs) affecting politics is the end result.

Anyway, some ideas for titles. Most of these take the titles of political books, articles, and tracts and repurpose them, on the idea that they will be well known to political thinkers and people interested in democracy.

All Machines are created equal

The Senate and the Machines of America

The Intelligence Gap: Russia and China have better Bots

The New Model Democracy

One bot one vote

The First Blast of the Trumpet Against the Monstruous Regiment of Machines

A Critique of Machine Democracy

Inspired by George Orwell:

All Machines are equal: some more so than others

All Machines are equal: all more than the people

Riffing on Thomas Paine

The Rights of Machine

The Machine Crisis

The Age of Artificial Reason

The Age of Machine Reason

Winter July 3, 2024 6:18 PM

@Bob Paddock

LLMs are not intelligent in the sense of biological-like intelligence.

I think the use of the word “intelligence” to describe the behavior of humans and machines is rather unhelpful. In neither case do we know what we are talking about. It is little more than a word for an impression.

Definitions for intelligence do not clarify anything about the behavior.

What is important for the future book, and society in general, is that there is a new class of algorithms that can solve problems only humans could solve until recently. They are about to become better at solving these problems than humans. Just as algorithms have become better at “solving” chess and go than humans.

Some of these problem-solving capabilities are relevant for politics and governance. These new algorithmic capabilities have the potential to redistribute political power in society.

That redistribution of power is important. Not whether this is properly called “intelligence”.

Clive Robinson July 3, 2024 8:36 PM

@ CyberShow,

When I read,

“What changed was the size of the camera and hence people’s willingness to let utility overcome embarrassment.”

I cringed, and immediately went into “Too Much Information” mode.

Because my mind instantly realised that “camera” is just one of a thousand or more modern technologies that would fit right into that statement.

And we are both old enough to know that there will always be a few who will do something that defies reason, sense, and good taste, with the problem of where one leads, others will follow.

Bearing in mind what is happening in the UK right now, I think you will realise why Urban Dictionary has “The Farage Effect”, judging from the number of entries,

https://www.urbandictionary.com/define.php?term=Faraging

Jon (a different Jon) July 3, 2024 9:40 PM

AI-assisted citizens? I’m highly skeptical about that.

Anyone recall the “Facilitated Communication” scandal, wherein “Facilitators” “helped” those with no communication skills to communicate, even if just by pointing at pictures?

For a while it was a huge thing, with “locked-in” children communicating with their parents and peers, until someone tried separating the Facilitators’ information from the children’s information, and it turned out the questions the children were answering were the questions asked of the Facilitators, not the questions the children themselves were asked.

So in short (like an awful lot of AI, actually) it was an immense fraud, made worse because it was partially “facilitated” by those who actually believed in it, even after the evidence was produced unequivocally against it.

I think a bigger problem you’re going to have is AI assuming the role of citizen, behaving according to how the AI “thinks” (from its highly biased training set) a citizen should behave.

J.

S July 4, 2024 2:59 AM

Other ideas:

  • For the people, by the AI
  • The aggregated and weighed voice of the people
  • Outsourcing democracy / automating democracy
  • The computer who wrote laws

emily's post July 4, 2024 5:48 AM

Hapax Legomenon: Generative Linear A and J. Random Voter

I’m Not Intelligent, I Just Reported the Correlation

Clive Robinson July 4, 2024 7:02 AM

@ Bruce, ALL,

Something to consider is “dynamic algorithms” created by AI that will affect those at the bottom of the power hierarchy, where all they have is “collective bargaining”.

There is an old joke called “Rules of the House” where the first rule is,

“If any of the rules become known by anyone other than the rule maker the rules will be changed!”

Usually when it comes to workers such a rule fails due to business-process inertia and complexity. That is, those administering and using the system take time to implement any changes.

Well AI will almost certainly change that.

Because AI systems will find patterns that are non-obvious and too complex for most to be able to follow, let alone sufficiently understand. Thus employers will exploit AI to gain an advantage.

Imagine that your average pay gets cut by 10% or more but you cannot see why… Especially when a published rule change indicates that your pay should have risen.

That is the sort of thing AI systems will relatively easily find.

Is this likely to happen?

As has been observed in the past, “Heck yes it will”. In the US and other places the duty of senior management is not to the workers but to the shareholders. The only thing stopping major exploitation of workers is legislation and regulation. However, as we know, lobbying waters down legislation or restricts its scope.

In many places “gig workers” do not come under any legislation or regulation that protects workers. Which is why gig work is rapidly becoming a large organisation’s way to employ workers for what would otherwise be their primary employment.

If people have doubts about what gig employers will do, even despite the well-publicised info on Uber and the like, maybe they should read,

https://spectrum.ieee.org/shipt

Which shows how “collective action” by workers revealed that the gig employer “Shipt” was deliberately cutting their income to as little as $7/hour, when legislation generally puts the minimum wage for employees at three or more times that, and with health care etc.

Clive Robinson July 4, 2024 7:25 AM

@ Bruce, ALL,

Speaking of AI books, anyone remember “Superintelligence”?

Well, I’d more or less forgotten it, as I thought it was hype bordering on fearmongering, and I disagreed with much of it at the time.

But I was reading “Humanity Redefined”, and the author there, Conrad Gray, pointed out that the book has just hit its tenth birthday in the UK,

https://www.humanityredefined.com/p/superintelligence10-years-later

Along with an interesting discussion of the background and how the book has aged.

Gabriele July 4, 2024 9:07 AM

I really like “E Pluribus, Machina”, but just to contribute something:

Keeping Us in Charge

coupled with an existing subtitle:

How to Make Democracy Work for People in an AI Future.

Variants of your titles:

We the AI People
We the People with AI

Clive Robinson July 4, 2024 9:37 AM

@ Bruce, ALL,

Re : AI existential threat by environment.

I’ve mentioned before that the current AIs are chewing up energy and creating greenhouse gases faster than the crypto-coin “work-factor” lunacy did when it started.

Unlike the other “AI existential threats”, this is a very “clear and present danger” not just to humanity but to much else in the world, not just living things.

Well, it appears the MSM are starting to pick up on this as rather more than a “talking point”. For instance, today “The Guardian”, a fairly well-regarded newspaper, published,

“Can the climate survive the insatiable energy demands of the AI arms race?”

https://www.theguardian.com/business/article/2024/jul/04/can-the-climate-survive-the-insatiable-energy-demands-of-the-ai-arms-race

It’s not just energy but potable water as well, and the spending is expected to actually go up, into the trillions of dollars, even if the AI systems become more efficient…

Why? Because of a quirk noted with the first steam engines: they were grossly inefficient until James Watt made a series of engineering changes, after which they became more efficient and so burned less coal. However, it was quickly found that as the engines became more efficient they became more economical to run, so the number of users went up, and so did the consumption of coal. As the article notes,

In economics, that phenomenon is known as “Jevons’ paradox”, after the economist who noted that the improvement of the steam engine by James Watt, which allowed for much less coal to be used, instead led to a huge increase in the amount of the fossil fuel burned in England. As the price of steam power plummeted following Watt’s invention, new uses were discovered that wouldn’t have been worthwhile when power was expensive.
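To make the paradox concrete, here is a toy calculation with assumed numbers (mine, not the article’s): efficiency halves the coal needed per unit of work, but the now-cheaper work attracts three times the demand, so total coal burned rises.

    # Jevons' paradox with assumed illustrative numbers.
    coal_per_unit_before = 1.0   # tonnes of coal per unit of work (assumed)
    units_before = 100           # units of work demanded at the old price (assumed)

    coal_per_unit_after = 0.5    # Watt's improvements: half the coal per unit
    units_after = 300            # cheaper power finds new uses (assumed demand response)

    print(coal_per_unit_before * units_before)  # 100.0 tonnes burned before
    print(coal_per_unit_after * units_after)    # 150.0 tonnes burned after, despite efficiency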

Something that is almost certainly going to happen with current AI in the short term[1], even if it proves to be useless for various reasons. In fact, we are seeing the start of it with the reduction in DNN sizes to try and get them into PCs and laptops.

Along with that are some other interesting perspectives that might leave people somewhat further disquieted.

[1] Something people should look at is a graph of the Nvidia share-price rise alongside graphs of energy and water demand around certain data centers.

fib July 4, 2024 10:01 AM

Artifactual Democracy and the looming Automachia

How the powerful bend the rules and how to bend them back

Saaaaam July 4, 2024 11:03 AM

I really like “Reimagining Democracy for the Age of AI” as a subtitle – it suggests both agency (we can change it if we can think it), and necessity (the age of AI is here, and we should change to meet it).
I’m not sure which short title matches well with it though.

I also really like @Randy’s suggestion of “Democracy after AI: AI’s Transformation of Democracy, for Better and for Worse”.
The short title is to-the-point and covers the bases (as others have said, the title needs to mention democracy, not just AI). The subtitle again says “this is happening”. Less focus on agency but I like the callout of there being good and bad.

cybershow July 4, 2024 4:36 PM

@Clive Robinson

there will always be a few who will do something that defies reason, sense, and good taste, with the problem of where one leads, others will follow.

Fools rush in where angels fear to tread.

A physicist, maybe Dirac or Feynman, said it, and so did Edward Snowden more recently: “just because you can doesn’t mean you should.” It should be emblazoned above the doorway to every university and research institute.

Maybe we humans do science and technology quite badly, because we aren’t very selective. We have a deflated, scalar notion of “progress”, as if progress were an abstract noun that one could bank. Progress is a vector, with a direction as well as a magnitude. There are desirable directions. Instead we push out in all directions, hoping that we’ll find the really bad and nasty stuff (and make some money from it) before our enemies do. It’s quite an immature philosophy of science for a species on the brink of fusion and space-faring. The last 18 months in AI – the hope, hype, catastrophising, grandiosity – have been painful to watch. It says so much about how we’re lost.

Steve Fanjoy July 4, 2024 9:13 PM

I did a foresight talk in 2018 titled “Technology and Democracy, Self Correcting or Collision Course?” @Bruce you are welcome to use it, or some variation, if you like. Even before the generative AI era the prognosis was disturbing.

Clive Robinson July 5, 2024 9:39 AM

@ Bruce,

For obvious reasons I’ve held this back until after the UK election finished,

https://www.foxnews.com/world/uk-parliamentary-candidate-runs-first-ai-lawmaker-interactive-ai-avatar

Put simply, a “Parliamentary candidate” did partially go down the AI route…

Steve Endacott, a local businessman, ran in “Brighton Pavilion” as “Steve AI”.

Of the 74,786 eligible voters the turnout was 70%, which is actually quite high. Steve AI, however, got only 179 votes, so he would have “lost his deposit” amongst other things, as he came in last place, even behind the “Monster Raving Loony Party”…

https://www.bbc.co.uk/news/election/2024/uk/constituencies/E14001130

Does this bode anything for AI in future elections?

I’d say probably not, for several reasons. As a trend, the Indian national elections would be a better indicator, as the AI use there was not as overt.

Eriadilos July 5, 2024 10:33 AM

I like :

AI Enhanced Democracy : Who Loses (and Wins) when AI Governs
AI Enhanced Democracy : The Age of Artificial Reason
Democracy with Machines : AI’s Transformation of Democracy, sometimes for the Better

Norio July 5, 2024 4:26 PM

I would suggest a section under “AI-Assisted Citizens” and/or “Making Model Citizens: How to Put AI to Use to Help Democracy” that addresses AI-enhanced MUD (Multi-User Dungeon) games to develop political advocacy & collaboration (viz., rabble-rousing) skills.

MaryJackson July 6, 2024 12:49 AM

Do Turkeys Vote For Christmas?

Blood on Your Hands: AI Democracy and how it killed 14,000 children in Palestine.

AI – A User Guide to the Fall of US Democracy

Lars July 6, 2024 3:41 AM

I would suggest:

Artificial Democracy: How AI Will Reshape Society

I kinda like the merge of Artificial Intelligence and Democracy.

Clive Robinson July 7, 2024 11:49 AM

@ Bruce, ALL,

Re : AI means “Post Open Source”.

It’s something that people need to consider.

The court ruling on the monkey that took selfies, and who “owns the copyright” of the resulting images, has implications, in that the court found IP rights are for a “Human Only”. Worse, they in effect were passed to the owner of “the equipment used” to make the images.

So an argument could be made that anything you make with Co-Pilot or other AI assistance belongs to the owner of the AI.

So Microsoft would own or have rights in any code produced…

The other problem is that all the current Open Source licences are at best ambiguous on this, as the assumption when they were first developed, and subsequently by inheritance, is that “the creator” is human and has the right to make “the work” Open Source.

Clearly either that is no longer true, or it is now under question.

It appears that Bruce Perens is enquiring into this, but courts, especially US courts, tend to have their own views, in ways that appear to favour the person with the largest wallet / deepest pockets.

Some have estimated it could cost upwards of ten million in legal fees to get a definitive answer through the US Court system.

The only thing Post-Open appears to have to say so far with regard to AI concerns input for training,

https://postopen.org/2024/04/30/april-30-make-post-open-ai-possible/

ResearcherZero July 8, 2024 2:53 AM

By 2028 an additional 4250 megawatts will be required in Australia for AI projects.

‘https://www.technologydecisions.com.au/content/data-centres/article/data-centre-energy-impact-defined-by-ai-174138578

“A lack of understanding on how the growth of AI can impact energy and water consumption has meant that there has been a lack of action.”

According to JLL, hyperscalers now have an estimated average density of 36kW per rack. IDC estimates this will grow at a 7.8% CAGR in the coming years to approach 50kW by 2027. Many AI clusters are projected to hit requirements of 80-100kW/rack. AI consumes 1.8-12L of water for each kWh of energy usage across Microsoft’s global data centres.

In the United States, it is estimated that 1 MWh of energy consumption by a data centre requires 7,100L of water.

https://w.media/the-rise-of-ai-specific-data-centres-in-australia/

By 2027 AI demand could cause data centers to consume over 1 trillion gallons of water.

‘https://arxiv.org/pdf/2304.03271
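As a sanity check on how those figures combine, here is a back-of-the-envelope sketch; the inputs are the ones quoted above, while the arithmetic and the flat-out utilisation assumption are mine.

    # Rough check: water footprint of one hyperscale rack at the quoted US rate.
    RACK_POWER_KW = 36        # JLL's average hyperscaler rack density (quoted above)
    WATER_L_PER_MWH = 7_100   # litres of water per MWh, US estimate (quoted above)
    HOURS_PER_YEAR = 24 * 365

    # Energy for one rack running flat out for a year (an upper-bound assumption).
    rack_mwh = RACK_POWER_KW * HOURS_PER_YEAR / 1_000   # ~315 MWh

    # Water attributable to that rack at the quoted rate.
    rack_water_litres = rack_mwh * WATER_L_PER_MWH      # ~2.2 million litres

    print(f"{rack_mwh:,.0f} MWh/year, ~{rack_water_litres / 1e6:.1f} million litres of water/year")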

“The large-scale AP-1000 reactors mentioned by Mr Dutton today have a capacity of 1.1 gigawatts (GW), and he suggested small modular reactors would have a capacity of 0.47 GW.

If five large-scale reactors and two small reactors were to be built, that would be a total of 6.4 GW, which is less than a third of the 22.3 GW of coal currently in operation.”

Seven large-scale reactors would produce around 8GW of power.
5% of power output has been proposed for industry at wholesale prices.

The first projects could be operational from 2035 to 2037.
https://www.abc.net.au/news/2024-06-19/dutton-reveals-seven-sites-for-proposed-nuclear-power-plants/103995310

ResearcherZero July 8, 2024 2:58 AM

Data centers already consume nearly a fifth of Ireland’s electricity…

“Today, there isn’t enough power generation capacity or transmission capacity to fuel the data centers that are in the pipeline. Everyone’s just kind of gambling that we will build up the infrastructure and everything will be fine. But if the data centers outpace power generation, you could see brownouts, and a real constrained power situation growing.”

‘https://time.com/6987773/ai-data-centers-energy-usage-climate-change/

Generating one image using AI can use almost as much energy as charging your smartphone.

https://www.epa.gov/energy/greenhouse-gases-equivalencies-calculator-calculations-and-references#smartphones

Clive Robinson July 8, 2024 4:30 AM

@ Bruce, JonKnowsNothing,

Re : Conscious or not?

This Scientific American article looks into the subject of consciousness and why, like intelligence, it’s an elusive butterfly to pin down,

https://www.scientificamerican.com/article/the-mystery-of-consciousness-is-deeper-than-we-thought/

Back last century, when doing the MSc course, there was a presentation about biological systems, part of which was about humans and pain as a feedback mechanism in evolution. I got chatting with one of the university Readers and pointed out that, as an engineer, I thought it superficial, and he asked why. To which I answered: because it does not explain the foundation issue. Which got a raised eyebrow, to which I continued: it does not explain the evolution of the pain receptor, only what happened after that. He looked surprised for a moment, then laughed and said he’d not thought about it that way.

Just recently we’ve heard a lot about AGI, and it’s got into the MSM and even government circles (the Bletchley Declaration and its six-monthly repeating summits). I find AGI even less than superficial, for similar reasons.

At the bottom of the SciAm article you will find,

“I believe we can make rigorous scientific sense of teleological laws that work from future to present, ensuring that what happens in the present is dependent on the need to get closer to some future goal, such as the goal of harmonious alignment of consciousness and behavior.”

I don’t like the expression “teleological laws”, not just because very few have ever heard the word, let alone know its meaning, but because it sounds scary.

Most, however, have heard of “goal seeking” by the time they leave school, and quite a few have already thought about “their future” and the steps of how to get there. So the concept is “not unknown”.

In short, it assumes “purpose”. Interestingly, it’s a subject that has been arising in legal and philosophical circles this past decade, with the question of,

“Does the law have purpose?”

https://academic.oup.com/book/8280/chapter-abstract/153885282

In a way it’s the question of whether behaviour follows a downhill gradient of “reactive” or an uphill one of “proactive”; as most of us know, “going with the flow” of the downhill gradient is usually a lot easier and requires almost no effort.

Thus the question “Why go uphill?” arises, not just with law but with evolution in general.

It’s a “foundation question” and one that underlies not just consciousness but actual intelligence.

As the author of the SciAm article, Prof Philip Goff, indicates with,

“Some philosophers have argued that psychophysical harmony points to God. I think that’s a bit of an overreaction, but I have argued that dealing with psychophysical harmony takes us in radical directions, uprooting our most fundamental assumptions about reality.”

The usual “because deity” is really a silly argument, because it is a “lesser flea” or “turtles all the way down” avoidance, pretending the question is unknowable / unanswerable.

But the Prof is right about one thing:

“takes us in radical directions, uprooting our most fundamental assumptions about reality”

And all the law, politics, and AGI arguments about AI are just going to be “grist to the mill” of that.

Clive Robinson July 9, 2024 5:49 AM

@ ResearcherZero, ALL,

Re : AI heat death and going solo

I’ve repeatedly mentioned the issue of power consumption by crypto-coin mining, then NFT/Web3, and now AI, and the consequent use of water for cooling and the environmental havoc that this is creating. But there is more damage arising that is less visible unless you take a closer look.

Some may remember back to the NSA and its not-so-little venture in Utah, and how the people and the state government got a little upset over it but were incapable of stopping it.

Whilst the NSA had the US Gov backing its play to steamroller its way through, too many are not realising that corporates have similar leverage, such as political favours.

In the US, water rights are a big issue, and @JonKnowsNothing has explained part of the issues.

But there is another issue which is power companies are fighting to,

1, Stop people going “off grid”.
2, Forcing people to pay more for less.

The first is to protect revenue streams; the second is so they do not have to do infrastructure upgrades, thus maintaining “shareholder value” and the obscene bonuses of executives, along with payoffs to politicos via lobbying etc.

Which brings me to your mention of,

“Data centers already consume nearly a fifth of Ireland’s electricity…”

There is no way “The South” / “Republic” can support the technology growth the Eire Government has been fostering, in some ways unlawfully (see EU enquiries). Which is going to cause major issues.

As those living in London know all too well, the solution all governments use will be, one way or another, to force the people that live there to pay ludicrous amounts of money through required payments for decades, long after others have reaped the profits and absconded with them (look up the Millennium Dome, the 2012 Olympics, Thames Water, etc).

At the moment the laws in the Republic of Ireland are such that people can go off-grid not just for electricity but for water as well, and the Carbon Limits have not yet been imposed on people burning organic fuels for heating.

A friend who lives there has been able to install a modest wind-generation and solar system, as well as put in place a small mineral-water business, thus significantly cutting their “load on the grid”.

However, they tell me they’ve been warned that the cost of the “grid tie” they have for electricity is going to go up significantly, so the cost of “local storage” is being looked into.

As you might be aware, “local storage” is very rarely “environmentally friendly”, and there are indicators that certain technologies have put the armies of several nations in South America on the equivalent of a “war footing”.

Thus the question of,

“Can the world afford the stupidity of AI?”

Arises.

vas pup rejected July 9, 2024 4:05 PM

Is AI the answer for better government services?
https://www.bbc.com/news/articles/cmllxl89jlwo

“But the emergence of generative AI in the last two years has revived a vision of more efficient public service, where human-like advisors can work all hours, replying to questions over benefits, taxes and other areas where the government interacts with the public.

In the UK, the Government Digital Service (GDS) has carried out tests on a ChatGPT-based chatbot called GOV.UK Chat, which would answer citizens’ questions on a range of issues concerning government services.

In a blog post about their early findings, the agency noted that almost 70% of those involved in the trial found the responses useful.

Portugal released the Justice Practical Guide in 2023, a chatbot devised to answer basic questions on simple subjects such as marriage and divorce. The chatbot has been developed with funds from the European Union’s Recovery and Resilience Facility (RRF).

The $1.4m project is based on OpenAI’s GPT 4.0 language model. As well as covering marriage and divorce, it also provides information on setting up a company.

When I asked it the basic question: “How can I set up a company,” it performed well.

But when I asked something trickier: “Can I set up a company if I am younger than 18, but married?”, it apologised for not having the information to answer that question.

Sven Nyholm, professor of the ethics of artificial intelligence at Munich’s Ludwig Maximilians University, highlights the problem of accountability.

“A chatbot is not interchangeable with a civil servant,” he says. “A human being can be accountable and morally responsible for their actions.

“AI chatbots cannot be accountable for what they do. Public administration requires accountability, and so therefore it requires human beings.”

When it comes to digitizing public services, Estonia has been one of the leaders. Since the early 1990s it has been building digital services, and in 2002 introduced a digital ID card that allows citizens to access state services.

So it’s not surprising that Estonia is at the forefront of introducing chatbots.

The nation is currently developing a suite of chatbots for state services under the name of Bürokratt.

Estonia’s chatbots are not based on Large Language Models (LLM) like ChatGPT or Google’s Gemini.

Instead they use Natural Language Processing (NLP), a technology which preceded the latest wave of AI.

Estonia’s NLP algorithms break down a request into small segments, identify key words, and from that infer what the user wants.

NLP models are limited in their ability to imitate human speech and to detect hints of nuance in language.

However, they are unlikely to give wrong or misleading answers.”
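A minimal sketch of the keyword-based intent matching described above; the intents, keywords, and fallback here are hypothetical illustrations, not Bürokratt’s actual design.

    # Toy NLP-style intent matcher: tokenise the request, count keyword hits
    # per intent, answer from the best match or decline. Hypothetical example.
    INTENTS = {
        "marriage": {"marry", "marriage", "wedding", "spouse"},
        "divorce": {"divorce", "separation", "annulment"},
        "company": {"company", "business", "register", "incorporate"},
    }

    def classify(request: str) -> str:
        tokens = {w.strip(".,?!").lower() for w in request.split()}
        scores = {name: len(tokens & kws) for name, kws in INTENTS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "no information"

    print(classify("How can I set up a company?"))   # -> company
    print(classify("How do I file for divorce?"))    # -> divorce
    print(classify("What is the meaning of life?"))  # -> no information

Unlike an LLM, a matcher like this cannot invent an answer: an unmatched question simply falls through to the fallback, which is the trade-off the article describes.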

vint cerf July 10, 2024 2:56 AM

Book title: Can Democracy Survive AI?

Abusive applications of AI (political propaganda, misinformation…) create risks. But so does naive dependence on AI for decision making, and allowing AI-based systems to take automatic actions that affect the real world. Incidentally, we have the same risks with software in general when we give it real-world autonomy to take actions.

v

ResearcherZero July 11, 2024 1:28 AM

@Clive Robinson

Re : AI heat death

AI Heat Death did come to mind. One of the likely dangers is to trust.
People who believe that AI is all a hoax greatly underestimate its power.

The long-term objective of foreign influence operations is to undermine trust in government, and trust in what politicians or public figures say. As AI enables very precise audience targeting, the danger will come from nuanced and subtle changes to information: information that is then propagated by lesser-known individuals who profess to have expertise in a given area, or perhaps even by individuals who are well known.

‘https://www.journalofdemocracy.org/articles/how-ai-threatens-democracy/

“AI technology could not only reach more people but add fake audio of trusted politicians or public figures … to make misleading messages more credible.”

…Swing voters, who usually make their minds up in the final days of the campaign, are also a target.

https://www.chathamhouse.org/publications/the-world-today/2023-10/how-ai-could-sway-voters-2024s-big-elections

~ Throw another shrimp on the barbie.

ResearcherZero July 11, 2024 3:13 AM

@vint cerf

Cold, hard machine logic overcomes human frailty (an impediment to response times), and it can function in places in the decision making chain normally occupied by a human.
Humans are a bottleneck in decision making. AI can remove that bottleneck.

Improved targeting, improved collateral damage…

“Formally, the Lavender system is designed to mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking ones, as potential bombing targets.”

‘https://www.972mag.com/lavender-ai-israeli-army-gaza/

Such an AI system would likely require cloud computing infrastructure to function.
https://time.com/6966102/google-contract-israel-defense-ministry-gaza-war/

“AI enables very precise audience targeting…”

‘https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/

Clive Robinson July 11, 2024 4:32 AM

@ Vint Cerf,

Re : Running with scissors

“Incidentally, we have the same risks with software in general when we give it real-world autonomy to take actions.”

Yup giving the majority of software any kind of agency makes me nervous.

In part because back last century I used to do “safety critical” and “intrinsic safety” design, some of which was used in billion-dollar systems in the late 1980s that are still in operation today, still using what I’d designed…

As my beard turns more white than grey, I find myself dwelling on what makes systems “adaptive” to the environment they are in. Which is arguably the second or even a later step towards actual AI[1].

Yes, I smile when I hear about soldiers hiding in cardboard boxes and giggling their way past an AI system designed to recognise “human threats”… But the real lesson is not lost on me.

https://www.businessinsider.com/marines-fooled-darpa-robot-hiding-in-box-doing-somersaults-book-2023-1?op=1

[1] The first step, and one where we do not have a clue how it came about, is “goal seeking”; even in its most primitive forms it’s an “arm-wavery” subject. But as it is what underpins all our notions about evolution, it underpins all that we know as life, and so actual intelligence as we currently think of it.

Clive Robinson July 11, 2024 11:30 AM

@ ResearcherZero,

Re : AI hoax, threat or both?

“People who believe that AI is all a hoax greatly underestimate its power.”

I know that what we currently call AI, that is LLMs and ML, is a hoax as far as “Intelligence” goes, and that the route we are currently taking is a cul-de-sac in that regard.

Likewise I know that the “AGI” stuff that has been spouted is, to be polite, at best a load of “faux marketing blather”.

But as I’ve frequently pointed out, the current AI systems are an almost perfect “arm’s length” way to implement prejudice or political mantra, or worse; as the RoboDebt system has clearly shown, they cause significant harm.

Also the,

“Computer says No”

curtailment of people trying to get “faults” rectified is just the first of many harms that a reasonable person cannot fight.

It is therefore an excuse that frequently gets used to get rid of people on the phone, not least because nobody who pushes such discrimination ever has to take responsibility or punishment for it.

So I’m very definitely in the camp of: AI, as is, is “both a hoax and a harmful threat”.

Clive Robinson July 11, 2024 11:49 AM

@ ResearcherZero,

Re : Improved targeting, improved collateral damage

“Cold, hard machine logic overcomes human frailty (an impediment to response times), and it can function in places in the decision making chain normally occupied by a human.
Humans are a bottleneck in decision making. AI can remove that bottleneck.”

That is the logic of US Air Force Colonel John Boyd’s OODA loop.

Designed to maximize “dog fight” kills.

It’s something that has been dragged into “the competitive business world”, and whilst some who push it laud its alleged benefits, the practical reality is that it actually appears to do more harm than good.

Because “jumping in” as fast as possible, “with both feet first”, is a good recipe for ending up in a world of easily avoidable hurt.

As for what the IDF are doing, well, it’s been called genocide by many, and as far as I understand it a criminal investigation is progressing in that respect.

But in the US the federal agencies habitually put “conspiracy” charges on every list. So if that were added, then Google and Co, who must be 100% cognizant of what is going on, would be roped into the genocide case as co-conspirators, were they not so important to the US Gov because they make it look like the US economy is functioning…

So much for brainwashed employees chanting “do no harm”.

JonKnowsNothing July 11, 2024 12:40 PM

@Clive

re: not thought about it that way

Some time back I was watching an instructional video from a Tai Chi Master. During one of the discourses the Master was explaining foot stance. It seemed simple enough until he dissected the reason for the foot stance.

iirc(very badly)

Our feet point in the direction we are going. They are designed to carry our weight forward. Which is why your feet point in front of you.

Otherwise they would point behind you…

===

Search Terms

Tai chi

JonKnowsNothing July 11, 2024 1:13 PM

@Clive, @ ResearcherZero, ALL,

Re : AI heat death and going solo

The global problem of climate change and its impacts on society are becoming more apparent. Yet “nothing to see here” is still the dominant view.

In my section of California we are in a heat dome: 116°F last week, 114°F today. In Texas, the Houston area got whomped by Hurricane Beryl and millions are without power days later, which in Texas is a serious problem: heat + humidity.

Our technology runs on electricity or, at best, short-term batteries. Without power you are a tech-orphan and unable to communicate. Power drives the cellular system, and when it is down, your vulnerability index goes up a lot.

This does not include the exclusion of millions who cannot afford even basic connections. In the USA, there is a significant push to drop the Carrier of Last Resort designation, which provides service to rural areas, as well as a significant push to remove price caps on various connection types (T-Mobile’s “no price increase” now comes with a price increase).

Our entire industry depends on access to both devices and power to run them.

Several recent articles touching on this topic expanded a bit further into the overlaying and underlying problems. While the venues are totally different, the outcomes remain the same.

The Hayek economic model is collapsing. As it is based on Leave Nothing On The Table, this is how the model is playing out. It’s not about good choices, bad choices or unimportant choices; the model runs strictly on the value to be extracted.

  • The value of millions of homes, businesses and civic centers being deprived of power is less than the value of AI and Bitcoin operations.

The model is clear on the outcome.

===

  • note: I will attach the links / expected road rash

https://www.theguardian.com/australia-news/article/2024/jul/11/australia-economic-mobility-study-productivity-commission

https://www.theguardian.com/commentisfree/article/2024/jul/10/kenya-finance-bill-protests

https://www.latimes.com/california/story/2024-07-03/complaint-alleges-inland-empire-community-is-being-disproportionately-targeted-for-warehouse-development

Clive Robinson July 11, 2024 5:24 PM

@ Bruce, ALL,

Re : AI Hallucinations bad in knowledge domains.

We are aware of what the AI industry calls “Hallucinations”, but “the term of art”[1] in an overseeing knowledge domain is either “Soft Bullshit” or occasionally “Hard Bullshit”.

Where “soft” is in effect producing any old nonsense output as long as it sounds good, and “hard” is deliberately producing known-to-be factually incorrect output.

But how often does this happen?

Well according to an ACM article[2],

“Stanford University’s RegLab, showed that hallucinations are “pervasive and disturbing” in response to verifiable legal queries, with rates ranging from 69% to 88%.”

If I read that correctly, what you get back is “Soft Bullshit” for effectively 7 to 9 out of every 10 queries.

Think on that for a moment: nearly 90% of the time the answers are neither accurate nor useful… And from other sources I understand that the 10% of answers that are verifiable are actually not of much quality or use (in effect, direct quotes of legislation).

Yes, the more specialised the knowledge domain, the less likely current AI systems are to give you anything close to accurate, and therefore useful or unharmful, information.

But another quote from the article:

‘As Brent Mittelstadt, director of research at the “Oxford Internet Institute” (OII) of the University of Oxford, U.K., explained, “There is no absolute requirement for them to be factual, accurate, or line up with some sort of ground truth.”’

Where I come from, that’s called “an ouch moment”, and thus most definitely “Soft Bullshit”.

But it gets worse: a team of computing experts at the “National University of Singapore” (NUS) “AI Institute”,

“used learning theory to argue that hallucinations are inevitable and cannot be eliminated. LLMs will always hallucinate as they ‘cannot learn all of the computable functions’.”

Note carefully what “computable functions” actually means with regard to work from the 1930s.

But back to Brent Mittelstadt and colleagues at the Oxford University OII: they have a somewhat quaint viewpoint,

“Rather than using a language model as a knowledge repository, or to retrieve some sort of knowledge, you are using it as a system to translate data from one type to another or from one format to another.”

Anyone who has used Google Translate should know this is something you really should treat with considerable caution, especially when you go beyond “conversational translation” to things that have actual technical or scientific meaning.

So, what to conclude from these quotes by experts?

My view is,

“The current LLM and ML systems are even less fit for market than bug-ridden consumer software.”

Like what Microsoft pushed out with, say, Vista…

Further, they are designed as “Surveillance Systems” with a business model of “The Five Be’s”:

“Bedazzle, Beguile, Bewitch, Befriend, and Betray”.

And to do this, any old junk-in-a-box tech behind the curtain will do, even if it pushes 100% “Soft Bullshit”.

There may be more hope for “small models” that run on augmented PC hardware and are built from highly verified and complete corpus input.

[1] The problem, as I’ve noted before, is that the word, now used as a “term of art” from a 2005 book not just in its original knowledge domain but increasingly in others, has frequently been found on word “naughty lists” for profanity and the like,

https://link.springer.com/article/10.1007/s10676-024-09775-5

[2] The ACM article by Karen Emslie,

https://cacm.acm.org/news/llm-hallucinations-a-bug-or-a-feature/

Clive Robinson July 11, 2024 7:43 PM

@ Bruce, ALL,

Re : Why AI output is bad in knowledge domains.

Current LLM AI systems answer questions with quite a wide variability.

As pointed out by CSAIL researchers in,

https://techxplore.com/news/2024-07-skills-large-language-overestimated.html

“We’ve uncovered a fascinating aspect of large language models: They excel in familiar scenarios, almost like a well-worn path, but struggle when the terrain gets unfamiliar. This insight is crucial as we strive to enhance these models’ adaptability and broaden their application horizons”

Another aspect is whether they actually “compute” or “pull from memory”, and also whether the data was actually in the training corpus.

Thus the learning process is “by rote” or “by reasoning”, with the former predominating.

The researchers’ testing provides fresh insight into the capabilities of the best of the current LLMs, concluding:

“It reveals that their ability to solve unseen tasks is perhaps far more limited than anticipated by many.”

Clive Robinson July 12, 2024 12:01 PM

@ Bruce, ALL,

Re : AI blinder than a bat.

More on the research from Alberta and Auburn Universities that indicates even the better AIs can see and perceive less well than a bat for baseball.

https://techcrunch.com/2024/07/11/are-visual-ai-models-actually-blind/

Whilst some may say this makes LLMs a dismal failure, we have to remember that many mammalian newborns, and especially human babies, cannot “see and perceive” either. But by a very recursive learning process they do develop visual skills (each loop is measured in very short time intervals, some of just milliseconds, so getting on for 100K loops an hour[1]).

As I’ve previously noted, evolution in the general sense is “goal seeking” in nature (why, we don’t know), but by recursion this causes more optimal strategies to be arrived at. Importantly, though, the entity “needs agency” within an environment to build the more optimal strategies required to function in that particular environment.

Studies with kittens in highly controlled environments showed this to be the likely process.

Therefore it’s not unreasonable to assume that as we develop new types of AI, giving them agency within an environment will be a key component of the development process.

[1] The human eye eventually responds very rapidly, focusing in mere milliseconds, and this is believed to work by a form of edge detection. That is, the information content is highest at edges when they are in focus, and the eye uses this locally as part of the ~100:1 reduction from retina to optic nerve. However, making sense of what the eye picks up can take the brain up to 15 seconds, as it has a significant processing requirement we still know little about.
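As an aside, the “information is highest at in-focus edges” idea is the same one camera autofocus systems exploit, and it is easy to demonstrate. A minimal sketch (purely illustrative; it claims nothing about retinal processing): the variance of the Laplacian is a standard sharpness score, and it drops when an image is blurred.

import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_score(image):
    # Variance of the Laplacian: high-frequency (edge) content peaks in focus.
    return float(laplace(image.astype(float)).var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))             # a sharp random texture
blurred = uniform_filter(sharp, size=5)  # an "out of focus" copy
assert focus_score(sharp) > focus_score(blurred)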

JonKnowsNothing July 12, 2024 2:31 PM

@Clive, @ ResearcherZero, ALL,

Re : AI heat death and going solo v2

The global problem of climate change and its impacts on society are becoming more apparent. Yet “nothing to see here” is still the dominant view.

In my section of California we are in a heat dome: 116F last week, 114F today. In Texas, the Houston area got whomped by Hurricane Beryl and millions are without power days later, which in Texas heat and humidity is a serious problem.

Our technology runs on electricity, or at best short-term batteries. Without power you are a tech-orphan, unable to communicate. Power drives the cellular system, and when those towers are down your vulnerability index goes up a lot.

This does not include the exclusion of millions who cannot afford even basic connections. In the USA there is a significant push to drop the Carrier of Last Resort designation, which provides service to rural areas, as well as a significant push to remove price caps on various connection types (TMob “no price increase” now with price increase).

Our entire industry depends on access to both devices and power to run them.

Several recent articles touching on this topic expand a bit further into the overlying and underlying problems. While the venues are totally different, the outcomes remain the same.

The Hayek economic model is collapsing. As it is based on Leave Nothing On The Table, this is how the model is playing out. It’s not about good choices, bad choices or unimportant choices; the model runs strictly on the value to be extracted.

  • The value of millions of homes, businesses and civic centers being deprived of power is less than the value of AI and Bitcoin operations.

The model is clear on the outcome.

===

Links not included -RR-

Winter July 13, 2024 7:41 AM

@Dors

This recent presentation of the field of AI is extraordinary:

I visit conferences where every other talk, or more, solves an age-old practical engineering problem using deep learning. Things that had never been done successfully are now run of the mill.[1]

These people rarely, if ever, use the phrase AI. They tend to be specific about the type and architecture they use. LLMs are an example of such a system.

Those who use the phrase AI rarely if ever solve real problems in practical ways. It has been said many times before: “AI” is the name reserved for the elusive systems of the future. Actual useful systems get more mundane names, like LLMs for generative systems or CNNs for recognizers.

Sadly, many readers confuse the elusive AI marketing with the actually useful machine learning progress. The upcoming book is about powerful applications currently in development. These applications are very, very real. The criticism, however, is mainly directed towards the elusive fairy tales of AI, most of which will never come to fruition in the form feared or derided.

[1] For instance, recently a student transcribed a few hours of recorded interviews in no time using WhisperX, downloaded as an Open Source application. Something we could only dream about just 4 years ago.
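For anyone wanting to try the same thing, the core step is a few lines. WhisperX layers word-level alignment and speaker diarization on top of the Whisper model family; a minimal sketch with the plain openai-whisper package (the file name is illustrative):

import whisper

model = whisper.load_model("base")           # weights download on first use
result = model.transcribe("interview.mp3")   # illustrative file name
print(result["text"])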

Clive Robinson July 13, 2024 6:55 PM

@ Bruce, JonKnowsNothing, Winter, ALL,

Re : To crib or not to rob, that is the question.

There are many who claim LLM AI systems can reason, but when you dig into it the evidence is not actually thick on the ground. To put it politely, if AI does reason, actual evidence for it is harder to find than the pea in a three-shell game.

What is usually found is that the “reasoning” is actually “cribbing”, or more politely “learned by rote”, from what is actually in the input corpus.

That is, if the problem and answer are in the input data set you will get an answer from the input data set. If not, you are all too often “So Out of Luck”.

So people have started testing this aspect of LLM AI systems, and guess what hypothesis they test for, and what the result is…

Here is another one that also has the benefit of being mildly amusing,

https://pivot-to-ai.com/2024/07/06/llms-can-solve-any-word-problem-as-long-as-they-can-crib-the-answer/

Oh, did anyone else know MuckyDs used to do in-house AI research, and sold it off to IBM only to bring the tech back for AI drive-through ordering?

Well apparently it was a failure for various reasons, so the hundred or so “experiments” will be shut down,

https://www.restaurantbusinessonline.com/technology/mcdonalds-ending-its-drive-thru-ai-test

Now, as someone who worked as a pot-wash in a restaurant to help pay my way through adult education as an orphan, I got accustomed to being regarded as a lower life form than a burger-flipping short-order cook, even when I was promoted to a similar job on my way up the ladder to chef.

I remember friends who “worked the registers” in fast-food outlets being regarded by their management as instantly replaceable off the street (even though they were not).

If IBM cannot get an AI to replace what management see as a “body off the street”…

Maybe the “AI as existential jobs risk” is being a little overplayed where spoken human communication is the main requirement.

My view is that AI as a jobs risk is most likely a threat to upper-middle-class “professionals”, where cutting high-pay manpower numbers is most desirable to executives (which is what we saw with IT layoffs at even quite profitable businesses not that long ago).

Clive Robinson July 14, 2024 7:21 AM

@ JonKnowsNothing, ALL,

Re : Power to the AI is heating up many places.

More on the AI cost and other issues. The article below contains this jolly little statement,

“Major corporations have even now started looking at modular nuclear power plants just to ensure that their massive AI data centers can get the power they require.”

Yup read that again and let it sink in…

This is not a prediction from what some would call “a crank group of environmentalists fearmongering”.

But from one of those institutions very, very close to the beating heart of the Capitalist way, Goldman Sachs,

https://www.tomshardware.com/tech-industry/artificial-intelligence/goldman-sachs-says-ai-is-too-expensive-and-unreliable

And Goldman Sachs appear to be in two minds as to whether AI as currently practiced will even break even,

“We still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst.”

With one “mind” saying that,

“The capital expenditure cycle on AI technologies seems promising and is similar to what prior technologies went through.”

Which is the brain-fever madness of “just throw money in and double down over and over”. A strategy which has apparently already got to the point where Sequoia Capital, a fairly well-known tech VC, has recently taken a sober look at “AI investments”, and has calculated that the visible part of the AI tech sector

“Needs to make $600 billion annually just to break even on its initial expenditure.”

And on the other hand, the other “mind” of Goldman Sachs, more rationally voicing the contrary view that says,

“AI will only deliver limited returns to the American economy and that it won’t solve complex problems more economically than current technologies.”

As you know I’m on the far side of even that “optimism” of the latter mind.

I do not think the current LLM AI systems will ever break even the way LLM AI is currently being developed. Nor do I think LLM AI systems will meet the many claims made of them in the AGI hype that has more recently accompanied them. And all the while they are being pushed beyond hype by “Venture Capitalist” (VC) types and shrill shills into an “Investment Bubble” for those with more money than sense.

But even I have one uncertainty: whether the bubble will burst explosively or just deflate like a weak old party balloon hidden out of sight under the bed as an intoxicated prank.

And as I’ve previously pointed out, the CPU-cycle investment bubble has so far seen a succession of failures, from crypto coins through NFTs, Smart Contracts, Web3, and the VR-based Metaverse,

https://www.bbc.co.uk/news/articles/c51yl7q8z42o

All of which needs the same basic power hungry hardware tech.

In this view of a series of failures I’m not alone; Molly White, who got more attention than she expected for her hobby and so went full time, is as, if not more, skeptical,

https://www.web3isgoinggreat.com/what

And occasional poster to this blog Nicholas Weaver has observed in “The Web3 Fraud” that Web3 is just another way to push the CPU-cycle-burning crypto Ponzi scheme,

“So why this hype? Because the cryptocurrency space, at heart, is simply a giant ponzi scheme where the only way early participants make money is if there are further suckers entering the space. The only “utility” for a cryptocurrency (outside criminal transactions and financial frauds) is what someone else will pay for it and anything to pretend a possible real-world utility exists to help find new suckers.”

Exactly the same logic applies to the current AI LLM and ML systems, which are just throwing more fuel on the bonfire of the world.

But I admit it is a “surprise surprise” moment to see that Goldman Sachs, at least in one mind, is in agreement.

Clive Robinson July 14, 2024 9:29 AM

@ JonKnowsNothing, ALL,

Re : Feet fall in the wrong direction.

There are many reasons given for why not just mammals but even dinosaurs evolved feet over geological history.

Thus,

“Our feet point in the direction we are going. They are designed to carry our weight forward. Which is why your feet point in front of you.”

Is a relative newcomer to the observation game.

But the reality of most bipedal creatures is feet,

“Prolong falling over into directional motion.”

We walk by falling forward over one foot whilst moving the other foot forward so we can fall over that one and so on. Though less obvious the same applies to running where the legs of the able bodied give “vectored thrust”.

Oddly, it is surprisingly efficient because, over and above the mechanical energy from the muscles, the Achilles tendon and calf muscle act as an energy storage device that works with the lever and offset fulcrum of the foot and ankle. This can be fairly easily seen in kangaroos as they bounce almost effortlessly across Australia.

For those who are not able-bodied, modern materials have given rise to “blades” of carbon fiber and other composite materials for those who have lost part of a leg. And as Paralympic runners are now setting times competitive with supposedly more able-bodied athletes, the energy-storage (but not generation) element is clearly being replicated.

But blades still lack a lot. Not only do they not generate mechanical energy, unlike the legs of able-bodied humans; they also lack the significant advantage the able-bodied have of being able to “sense and adapt” the shape and angles of their feet to suit the ground underfoot.

It is this latter aspect called “sensory feedback” that is just starting to be investigated in prosthetics,

https://www.triathlete.com/culture/the-science-and-controversy-of-running-blade-prosthetics/

It’s something I mentioned a week or so back to @vas pup with regard to “grip”: it’s not grip force/pressure you need to measure but slippage.

And these sensory feedback systems are somewhere the current AI design techniques, albeit in a much stripped-down format, will almost certainly appear within a few years at most.

But do not expect it to be called AI or LLM, because, as an acquaintance who is a double amputee from an IED pointed out to me, there is a very great deal of prejudice among those athletes who run the athletics organisations. As he noted, it’s not just discriminatory or antagonistic, it’s intentionally vicious, as the rule changes show.

Winter July 14, 2024 11:39 PM

@Clive

The only “utility” for a cryptocurrency (outside criminal transactions and financial frauds) is what someone else will pay for it and anything to pretend a possible real-world utility exists to help find new suckers.

That reminds me of gold. No utility, just what someone else will pay for it. And getting new gold is an environmental disaster.

What does this teach us about the future of crypto currencies?

Exactly the same logic applies to the current AI LLM and ML systems, that are just throwing more fuel on the bonfire of the world.

I utterly fail to see the connection with crypto currencies. I have actually seen many practical applications of ML and deep learning and used them myself.

For example, for those who live in an English-speaking universe it might not be visible, but the utility of automatic translation applications based on ML is truly transformational for those of us who have to read, write, and understand languages other than the few we are really fluent in.

Likewise transformational are automatic subtitle generation for those who cannot hear, and text-to-speech for those who cannot read themselves.

JonKnowsNothing July 15, 2024 12:05 AM

@Clive

re: feet as energy storage device

iirc(badly) A long time back, in the very early days of wearable tech and VR camera+display headset goggles, predating G$ junk, research was done at various Unis which was the topic of several science shows.

In one system, in order to lighten the rig load and be able to cart it around all day with continuous video feed and text upload via wifi to a Uni server system, they jettisoned the bigger battery cage. Instead they ran lines down the inside of the trouser legs into tennis shoes (aka trainers) and used a mechanical energy recovery system that collected energy from the falling power of the footfall and recoil to run the system.

It was a bit like generating electricity by pedal power on a bicycle, where the head lamp works as long as you keep pedaling.

For the VR rig, being stationary for long periods was a serious bottleneck.

Recoil + recovery systems are used in some manufacturing plants for specific applications. When you first see them you might well think “perpetual motion” but it is not. The electricity+power comes from a consistent method to pre-load the spring coil and is released as the spring unwinds. It is a purely mechanical setup.

JonKnowsNothing July 15, 2024 12:33 AM

@Clive

re: walk by falling forward over one foot whilst moving the other foot forward

There are some biomechanical phases to this pattern:

  • Thrust off (L1)
  • Balance (L2)
  • Pull Back + Push Forward (L2)
  • Landing (L1)

Depending on how many legs the creature has, the coordination has to be highly timed. Centipedes v Bipeds.

Snakes and legless lizards use the same biomechanics; however the application shifts, since their balance point moves along their body.

At speed there may be an extra phase where all legs are off the ground in airborne suspension.

Other animals use a “hop” method. (roos, rabbits, birds)

In horses and other 4 legged animals, the front 2 legs are pivot points with very little power, thrust or pull back. The speed comes from the actions of the rear legs which act as the engine to push the body over the pivot point. The main activity of the front legs is to reach forward to set up a new pivot point.

JonKnowsNothing July 15, 2024 1:18 AM

@Clive,

re: AI, Democracy and 40yrs of Hayek

There are global summits and government reviews on how to deal with 40yrs of Hayek Economics as applied to their specific circumstances.

The survival of Democracy may well be entangled with adjustments to the current application. It will be a time of great uncertainty because one cannot simply shift an entire economy or undo 40yrs of business, legal, taxation and accepted practices in 100 days.

One often suggested first step is sometimes referred to as Helicopter Money or variations of this policy. Also referred to as Universal Basic Income.

However, Helicopter Money alone will not shift the fundamental problems; nor will specific directed taxation have the long term effects needed to shift economic sectors. Some of these proposals are effectively a 1-time activity and while helpful in the short term, dealing with the long term will be the bigger challenge.

How AI will impact decisions, to tip the Hayek model back to a different equilibrium point or push the Hayek model into complete breakdown, will have direct impact on how people view Democracy as presented by various governments.

  • Consider: EMusk $46 Billion Bonus (pending)

There will be considerable economic in-fighting over what, if anything, needs to be done to shift such bonuses into different categories. There are not that many options on the table for governments. What the USA selects to do may be followed by the EU (or vice versa); China and Russia will likely select a different pathway.

Our global technology requires international agreements and cooperation in trade matters. If one country selects a significantly different path, this might cause an economic shock.

Consider: 75+ Countries cannot pay the Debt Service on their IMF loans.

Some have already gone into default, others are near to doing the same. Once in default, the international funds dry up and massive make-work projects halt. The country falls into an economic zombie status.

Previously the IMF and World Bank demanded Austerity to extract the additional funding for debt payments from the population. This option will not be available as often.

  • If a country refuses to continue with the Austerity Model (Hayek), who is going to take the Hair Cut?
  • If 50%-100% of the wealth of a country is entwined with policies that no longer benefit the population, who is going to take the Hair Cut?
  • If N-Many Countries fall into an economic zombie status; who is going to take the Hair Cut on the no-longer viable internationally guaranteed debt instruments (international bonds etc)?

How AI calculates these modeling systems may have great impact on the future of Democracy.

===

Search Terms

Helicopter money

Clive Robinson July 15, 2024 2:38 AM

@ JonKnowsNothing, Bruce, ALL,

Re : Does the foot fall or fall off?

Whilst not actually “AI”, the subject of “robots” is never far away as people look to giving AI systems “Agency”. It is rarely out of the MSM or Tech/Trade journals, be it delivery drones for civilian or military use, or electric vehicles with a safe self-driving capability.

But all those robots are “Hard Robots”, in that they in effect have a rigid frame on which sensors and actuators are fixed. The biological equivalent is a skeleton, be it internal as with vertebrates or external as with most insects. But there are other life forms that do not have hard structures, such as squid, octopus, starfish, sea cucumbers and similar.

As an overly general rule, vertebrates are animals that have an articulated backbone inside their body, built not just from bone but cartilage and ligaments, in structures that surprise even engineers.

But even more surprising for engineers are those creatures that lack skeletons. These “soft creatures” have given rise to the design of robot analogues that are called “Soft Robots”.

One aspect of which is to mimic what some creatures like lizards and insects do: detach parts of their body or, in the case of some ants, fuse bodies together.

NASA amongst others is looking into this, especially the idea that small general parts can build larger specialised-function robots as required, thus giving spacecraft the ability to self-repair rather than rely on redundancy, and ultimately to self-replicate, so that exploring the vast volumes of space becomes much more efficient.

But less obvious is what soft robotics can potentially do for prosthetics. As I mentioned in my post above, an active area of current prosthetic research is “sensory feedback”, whereby a prosthetic device gains the ability to adjust automatically to its environment.

Feet and footing are the obvious candidates, as, unlike our hands, nearly all movement of the parts of the foot is non-conscious, in response to what the foot’s sensors ascertain about the environment.

After a few moments’ thought it can be realised that AI-based “sensory feedback” and “soft robotics” will form the underlying technologies in such prosthetic feet.

But a very recent and interesting area of research in soft robotics which will end up involving AI is the ability to detach and attach soft robot parts to not just mimic but improve on what lizards and ants can do.

https://spectrum.ieee.org/soft-modular-robot

The potential for use in low gravity environments is immense.

But think about a mixture of hard and soft robotics, and vertebrate spines. Do you build limbs like boneless squid tentacles, snake or centipede bodies, or what we think of as more normal long bone limbs?

The possibilities are, shall we say, not just interesting but could bring SciFi ideas into reality.

Also consider that we are looking at prostheses to augment humans as force multipliers. In medical environments much is limited by what a human can physically do without risk of harming themselves. An individual unassisted nurse cannot lift patients in and out of beds, chairs, baths etc, thus either multiple nurses or hoists and other lifting equipment are required, which is very limiting and resource-awkward; hence the interest in exoskeleton systems.

Further, anyone who has injured a leg or even an arm knows how debilitating it can be to ordinary existence whilst it heals, and how even minimal assistance makes a heck of a difference to quality of life and independence. There is not an engineer alive who has not at some time wished for a third hand, and most people will realise that similar thoughts have occurred to them.

Soft robotics and AI will make these things not just possible but eventually everyday. What concerns me is the route we take to get there.

Few people under the age of thirty realise that the primary use, and thus development arena, of technology was in the areas of Government involving international relations and social control, via various “civil servants” and “guard labour”. It was not till the 1960s and through into the 1990s that this substantially changed; now we think of civilian and industrial technology trickling down into the military and such like.

However, some are realising that with robot technology the old “military first” pathway is reasserting itself, through the likes of drones and loitering munitions and what is now called “Sixth Generation” weapons systems.

As noted, Hayek’s excusing of sociopathic behaviour in economics is bringing society to the brink of disaster.

Thus how we move forward with AI, and the agency it gets given by robots, is of concern to rather more than Google employees. But letting the fox dictate the rules of guarding the hen house has never been a good idea. Remember, whilst a poacher might become a gamekeeper, they almost always “act in service” rather than as “masters”.

Scott from 105 July 15, 2024 4:36 AM

Democracy in the Age of AI: The Coming Transformation of Government, Citizenship, and Everything In-Between

The main title is concise, meaningfully descriptive, and easy for anyone to understand.
The subtitle conveys the ubiquity and depth of the potential changes, suggesting why anyone and everyone has a reason to learn what’s coming.

Clive Robinson July 15, 2024 5:09 AM

@ Bruce, ALL,

Re : The Red Queen dictates but does not rule.

The current so-called “AI market” is at best an illusion created by sleight of hand; at worst a Ponzi scheme or investor bubble, depending on which side of a quaint legal divide you take others’ money.

The thing is, history shows that to get going, new technology has to be dishonest in some way to get the funding needed to overcome inertia and get the ball rolling.

Because few will invest in what will take more than the rest of their life to pay real dividends.

Some claim AI will be the next “Industrial Revolution”, not realising that it already has been, under a variety of different names. The thing is, industrial revolutions take the better part of a couple of centuries to happen (although this is speeding up as communications improve). Because the first signs generally go unrecognised until much later by historians, the timescales appear shorter.

For those who do not know, AI systems were in practical use back in the 1980s, with the recognisable initial work going back to the 1950s. For my sins I worked on developing them for industrial control systems in the 1980s, and some are still running today, much to my surprise, four decades later.

The thing about the technology side of all industrial revolutions is that it starts in curiosity, then research, long before it becomes usable. And in usage it goes from the highly specific niche to the general and ubiquitous.

One classic example of this would be the electric light bulb. It replaced gas lights, which had replaced oil lamps, which had replaced candles. Each clearly demonstrated advantages that not just outweighed its disadvantages but the disadvantages of the preceding technology. But few realise that quite a while before the wire filament lamp became practical, the arc light was being used quite successfully. Sir Humphry Davy demonstrated both the platinum filament and carbon arc as light sources in the first decade of the 19th century. This was 75 years, or a lifetime, before Swan and 80 before Edison. During that period both limelight and the carbon arc light were developed and used to great effect. During the first forty years or so the filament bulb was a curiosity in what we would call researchers’ laboratories or inventors’ workshops. As such it had little visibility or interest to the general public, even as a quaint novelty.

At the moment with AI we are kind of at the tipping point, where it becomes recognisable to most that there is some advantage over what is already there.

I was reminded of this when the current newsletter of Jamin Ball came to my attention over the weekend,

https://cloudedjudgement.substack.com/p/clouded-judgement-71224-the-red-queen?utm_source=post-email-title&publication_id=56878&post_id=146443935&triedRedirect=true

Note the premise of the “Red Queen’s Race”, thought up by the Victorian logician and mathematician Charles Dodgson (Lewis Carroll) for the entertainment of a child.

It’s actually an indicator of “falling for the hype”. Yes, whilst existing AI systems under different names are showing quite profitable returns, the current LLM and ML systems probably never will. They are at best niche solutions looking for general problems to solve, at an exorbitant use/cost of resources.

The resource cost is these AI systems’ Achilles heel, and within a fairly short time they will either be dropped or something more efficient will replace them.

So “my money” would be on the latter: being replaced by something more efficient.

There is an expression that almost always applies to new technologies,

“The leading edge is the bleeding edge.”

And few things survive exsanguination or the traditional method of slaughter. Which is why the newer expression of,

“It’s not only the cheese the second mouse gets”.

As far as the current AI LLM and ML systems go, you don’t want to be a “first mouse”.

Clive Robinson July 15, 2024 1:49 PM

@ Winter,

Re : Power and water resources.

I guess you are not looking at things as a whole with,

“I utterly fail to see the connection with crypto currencies. I have actually seen many practical applications of ML and deep learning and used them myself.”

Well let me think,

1, The use of Nvidia GPUs in vast arrays.
2, The electrical power to run those arrays[1].
3, The potable water to cool those arrays[2].
4, Similar costs for comms support[3].

It’s been estimated that a single training run for an LLM can cost tens if not hundreds of millions in electrical power alone. Then there are similarly high costs for potable water. As for the cost of the GPU array and the infrastructure, hundreds of millions again. And, though I’ve no idea how to assess it, there are the environmental costs from component manufacture onwards.
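For anyone who wants to sanity-check such claims, the arithmetic is simple enough to run under your own assumptions. A minimal sketch where every figure is an assumption chosen for illustration, not a measured number:

# Back-of-envelope training-energy cost; all inputs are assumptions.
gpus          = 100_000   # assumed frontier-scale cluster
watts_per_gpu = 700       # assumed H100-class draw at load
pue           = 1.3       # assumed data-centre overhead factor
hours         = 90 * 24   # assumed 90-day training run
usd_per_kwh   = 0.10      # assumed industrial electricity price

kwh = gpus * watts_per_gpu * pue * hours / 1000
print(f"~{kwh / 1e6:.0f} GWh, ~${kwh * usd_per_kwh / 1e6:.0f}M in electricity")

With those inputs it prints roughly 197 GWh and $20M, and the answer scales linearly with whichever assumption you care to change.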

But it would appear that LLM systems are really quite grossly inefficient, and the DNNs are not what many think, for various reasons,

https://robleclerc.substack.com/p/general-theory-of-neural-networks

As I mentioned some time ago, the “Multiply and Add” (MAD) network is usually configured in a way that is not exactly effective or efficient; see the difference between “inside the sum” and “outside the sum” for the nonlinear components (it’s at the end of the appendix).
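For those who have not seen the distinction, a minimal sketch of a single multiply-and-add neuron with the nonlinearity applied outside versus inside the sum (purely illustrative of the two configurations, nothing more):

import numpy as np

rng = np.random.default_rng(1)
x, w = rng.normal(size=8), rng.normal(size=8)
f = np.tanh  # a typical nonlinearity

y_outside = f(np.sum(w * x))  # the usual artificial neuron: f(sum of w_i * x_i)
y_inside  = np.sum(f(w * x))  # nonlinearity applied per term, inside the sum

print(y_outside, y_inside)    # two quite different functions of the same inputs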

[1] From what has been said Alphabet and Microsoft now consume as much electricity as two of the larger EU States.

[2] It’s difficult to get reliable information about water consumption, but I’ve seen over a litre per query quoted; and as for training, well, let’s just say enough hot water to make a large cup of coffee for everyone in the US, at least.

[3] You might not be aware of it, but the world’s power usage in comms for watching videos off the Internet is estimated at 1% of world electricity consumption…

JonKnowsNothing July 15, 2024 1:52 PM

@Winter, @Clive

re: automatic translation applications based on ML

The fundamental problem with all automagic translations is:

  • You don’t know what you don’t know

Automagic translations are used because you do not know the language you are reading, writing etc. so you do not know the value or validity of the provided substitution.

I do not read, write or speak any Asian language, but if I chose to automagically translate an English document into South Korean characters, the results might not even be close to what was intended, and the unintended result could cause Serious Mischief.

  • iirc(badly) Years ago, an international incident occurred when a US President visited a former East European country and the translator was less-than-knowledgeable and mistranslated the English text.

There is nothing inherently bad about auto-translations that match the old commonly used Phrase Book for exchanges. However, if you are relying on these to provide serious technical details of complex topics on-the-fly, being skeptical of the results would be a good idea.
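One partial mitigation is a round-trip check: translate out, translate back, and compare. A minimal sketch, where translate(text, src, dst) is a hypothetical stand-in for whatever MT service you use, and with the important caveat that a translation can round-trip cleanly and still be wrong:

import difflib

def round_trip_check(text, other_lang, translate):
    # translate() is hypothetical; swap in a real MT service of your choice.
    there = translate(text, "en", other_lang)
    back = translate(there, other_lang, "en")
    ratio = difflib.SequenceMatcher(None, text.lower(), back.lower()).ratio()
    # A low ratio means: treat the output with suspicion. A high ratio does
    # NOT prove correctness; it only fails to flag a problem.
    return back, ratio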

RL tl;dr An observation on watching streamed international programs and movies.

I have always preferred to watch foreign films in their original language even when I do not speak or understand it. It comes from the possibility that the original audio has not been re-written and the performance not altered for English speakers. This is not always the case as anyone familiar with dubbed movies or subtitles can attest; what you are watching may or may not be the original performance.

So I watch the movie, listen to the original audio and read the English subtitles to follow the plot lines. There isn’t a lot of plot for kung-fu fight scenes so subtitles are not that much of an issue.

However, I have no reason to believe that the subtitles match the audio at all. The story I am reading in text may have nothing to do with the movie. I could easily be watching and reading 2 different stories that may or may not have converging points.

One interesting aspect of this, is that after watching hours of a Korean soap opera melodrama period costume sword and sorcery serial, my brain thinks it speaks Korean. Of course, this is not true in the slightest, but my brain thinks I’ve understood every word of dialog in the Korean language.

  • You don’t know what you don’t know

===

Search Terms

Alchemy of Souls

Jake Hoban July 15, 2024 6:10 PM

The end or the beginning? Reimagining democracy in an AI future.

I agree that that subtitle carries the most meaning in terms of asserting some kind of agency over the shape of a future that is definitely coming. My main title isn’t as catchy as I’d like but I was aiming for some kind of contrasting pair that again points to how we have a choice which way things go. I also thought of “threat or saviour?” but it turns out there’s already a book about AI with that title.

ResearcherZero July 15, 2024 11:25 PM

Disinformation at Scale.

‘https://www.newsguardtech.com/special-reports/generative-ai-models-mimic-russian-disinformation-cite-fake-news/

CopyCop used AI tools to scrape content from real news websites, repurpose the content with a right-wing bias, and republish the content on a network of fake websites with names like Red State Report and Patriotic Review that purport to be staffed by over 1,000 journalists—all of whom are fake and have also been invented by AI.

‘https://go.recordedfuture.com/hubfs/reports/cta-ru-2024-0624.pdf

“Doppelganger does not operate from a hidden data center in a Vladivostok Fortress or from a remote military Bat cave but from newly created Russian providers operating inside the largest datacenters in Europe. … Doppelganger operates in close association with cybercriminal activities and affiliate advertisement networks.”

https://www.qurium.org/alerts/exposing-the-evil-empire-of-doppelganger-disinformation/

Russian Software Tools for Influence Operations…

“Amezit -V’s suite of tools operates within global social media networks, such as Facebook, which could push out and increase the rate of disinformation and influence operations posts. This software system could build and form public opinion anywhere in the world by creating and using fake accounts (bots), social media groups, etc., and quickly taking down or promoting news stories.”

‘https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/analyzing-the-ntc-vulkan-leak-what-it-says-about-russias-cyber-capabilities/

These methods are becoming increasingly targeted.

“Some accounts have been successful in generating real engagement and building a real audience. – This is particularly concerning in light of the erosion in the capacity of major platforms to detect and respond to sophisticated influence operations, after key Trust and Safety teams have been dismantled or substantially reduced over the past year.”

https://www.isdglobal.org/digital_dispatches/pro-ccp-spamouflage-campaign-experiments-with-new-tactics-targeting-the-us/

ResearcherZero July 15, 2024 11:33 PM

The Kremlin has long planned to take advantage of polarisation and mistrust.

From documents from the SVR (Russia’s Foreign Intelligence Service):

SVR should aim to “deepen internal contradictions between the ruling elites” in the West

The “leitmotif of our cognitive campaign in the [Western] countries is proposed to be the instilling of the strongest emotion in the human psyche — fear,” the document states. “It is precisely the fear for the future, uncertainty about tomorrow, the inability to make long-term plans, the unclear fate of children and future generations.”

‘https://theins.press/en/politics/272870

People have felt that local institutions, both governmental and non-governmental, are not meeting their needs. This has led to counter-productive actions, which only further serve to undermine trust in these institutions.

“Twenty years ago Americans had the highest confidence in their national government of people in any G7 country.”

‘https://www.economist.com/united-states/2024/04/17/americas-trust-in-its-institutions-has-collapsed

How and why has this happened?
https://direct.mit.edu/daed/article/151/4/43/113715/Fifty-Years-of-Declining-Confidence-amp-Increasing

The problem cannot be solved by any one side of the political divide alone…

“…if citizens trust the institutions that they interact with most closely—the police, the tax collector, the street cleaner, the school board, and so forth—their confidence in these close-to-home representations of government might mitigate distrust of the more remote federal institutions.”

‘https://www.annualreviews.org/content/journals/10.1146/annurev-polisci-050316-092550

“It also means institutions must listen to communities, and use community insights and expertise to build policies and programs people believe in and can trust.”

https://ssir.org/articles/entry/rebuilding_trust_in_american_institutions

ResearcherZero July 16, 2024 12:37 AM

@JonKnowsNothing

Foreign films are indeed much better with the original audio. The dubbed versions sometimes do not convey the same meaning, even with only subtle changes. It also changes the atmosphere and one’s impression of the actors and their delivery of the performance.

The names of places and locations in their original language often contain a depth of information about those places, what is found there, or the surroundings.

“Farming is a beloved pastime for millions of Russians.”

‘https://www.theregister.com/2024/07/09/russian_ai_bot_farm/

The lifecycle of a cyber influence campaign:

https://www.csoonline.com/article/574767/propaganda-in-the-digital-age-how-cyber-influence-operations-erode-trust.html

Meliorator, a covert AI-enhanced software package, is used to create fictitious online personas.

‘https://www.ic3.gov/Media/News/2024/240709.pdf

“We understand politics in social group terms.”

“People make sense of the world by talking to each other in casual contexts, sometimes about politics, explicitly, and sometimes more generally about, ‘where did you get cheap gas?’” These encounters are a hotbed for misinformation, because we often incorrectly assume that our experience of the world represents a statistical reality.

https://www.bu.edu/com/articles/how-political-messages-spread-and-why-we-believe-them/

This confusion does make societies less responsive to risk:

“During a time of major collective risks, like a global pandemic, a lack of initial consensus and uncertainty about the right thing to do allows politicians and their advocates to create competing politicized narratives that weaken public compliance.”

‘https://news.uchicago.edu/story/who-does-or-doesnt-wear-mask-partisanship-explains-response-covid-19

Media manipulation is liable to taint all audiovisual evidence, because even an authentic recording can be dismissed as rigged. (or at least further undermine trust)
https://www.newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review

Winter July 16, 2024 4:19 AM

@JonKnowsNothing

Automagic translations are used because you do not know the language you are reading, writing etc. so you do not know the value or validity of the provided substitution.

Every human understands more language than they can produce, be it in speech or writing.

Non-native speakers can often judge whether language is well formed even when they cannot formulate it themselves. This covers, eg, most non-natives who have to publish in English.

Automatic translations have improved the quality of the language in submitted manuscripts.
‘https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10508892/

an English document into South Korean characters,

When in Korea, Japan, or China, automagical translations of signs and instructions are a lifesaver for most Westerners. Also, automagical translations of reports and articles are very often good enough to get the message.

Winter July 16, 2024 4:51 AM

@Clive

But it would appear that LLM systems are really quite grossly inefficient and the DNNs not what many think for various reasons,

So you agree ML in general and DNNs are useful, but you think they are too inefficient.

That is a completely different discussion than saying they are as useless as crypto currencies.

Flying airplanes is very inefficient compared to riding a bicycle, yet there are good reasons to sometimes prefer an inefficient airplane. Likewise, ML can do things other approaches cannot.

As I mentioned above, some applications, eg, automagical translations, are as transformational as flying airplanes. Throwing them out because you consider AI the new cryptocurrency is rather absurd.

Jonn Kares July 16, 2024 5:33 PM

I vote for “The Artificial Government Revolution: X Ways AI will Change Democracy…Forever”

Clive Robinson July 16, 2024 8:13 PM

@ JonKnowsNothing, Bruce, Winter, ALL,

Re : AI as a lie detector.

As many know, the US, like certain dictatorships, uses “lie detectors” of various forms, as well as the thoroughly discredited “Reid Interrogation Technique” (RIT) and similar forms of psychological abuse.

Until recently there were suggestions that “functional Magnetic Resonance Imaging” (fMRI) would be the new unbeatable lie detector. Guess what: the idea has been discredited, as has the voice stress analysis that insurance companies thought would save them billions but which ended up harming their reputations for probity; such things have been banned in many places.

Despite evidence that quack medicine has better outcomes than the polygraph, people in the US strangely believe in it, even though it is known without doubt to be

1, Complete bunkum.
2, Used to extort confessions.

Hence the nonsense the US Federal Government puts employees through.

Not long ago in the UK the Government decided that “interviews” were going to be a requirement to get a passport,

https://m.youtube.com/watch?v=9SLhq0ItoqM

Supposedly people are selected at random; the truth is that simple statistics show there is certainly profiling going on, and the system has been mired in controversy due to the fact that Government DBs are crap, as are those that come via third-party business records and similar junk. It also asks for the impossible[1].

There is a plan to replace this “expensive to administer” system with AI…

Yup those same “Garbage Input” databases will be used and by some magic they expect there not to be “Garbage Output”…

Yup, when you’ve finished rolling around on the floor with tears of laughter flooding the place… it’s time to think soberly about what this “interrogation system” will actually be used for…

Effectively, to “extort a confession” that conforms to Government Policy, better known as “Political Mantra”.

We know five things will happen,

1, It will cost a king’s ransom.
2, It will be “officially” hailed as a success.
3, In reality it will be a total failure GIGO system.
4, It will be used to give arms length avoidance of responsibility and liability.
5, It will cause not just mental harm, people will be killed because of it.

Now think about the first two on the list…

They will be used to extend the system into other areas of society such as the law enforcement and judiciary processes.

To see just how bad this could be have a read of,

https://lithub.com/what-the-all-american-delusion-of-the-polygraph-says-about-our-relationship-to-fact-and-fiction/

And replace every occurrence of polygraph or lie-detector with AI LLM or ML system.

Is it a society you would want to live in, or worse, be born into?

Oh and remember the great computer lie,

“The computer says no.”

You can guarantee that will be chucked up for a quarter century or more by those who want to live out their “Honoured” positions without having to admit they darn well knew the harm they were causing.

There are several “enquiries” still trying to “kick things into the long grass”…

[1] The classic Government dumb reasoning applies… All “official” identity in the UK is required to have photo identity traceable back to your Birth Certificate… As such it’s a document, if you even have it, that cannot carry current photo identification, so all “official” identity actually has to be traced back through your Passport… But you don’t have a passport; that is what you are applying for. But… you are required to bring “Official Photo ID” with you to the interview. Only in politicians’ and civil servants’ minds could such dumb reasoning be thought up… Then there is proof of a bank account, which due to Government rules requires “official photo-ID”, and so on…

C. Hughes July 17, 2024 6:23 AM

Another title/subtitle candidate, playing on well-known phrases:

Machine Democracy
One Nation Under AI?

chris watts July 17, 2024 7:29 AM

AI Pluribus, Machina
The new democracy of the machine empire.

puns, apprehension, and power rangers reference. it’s all there! hi ho!

watts

JonKnowsNothing July 17, 2024 9:20 AM

@Clive

re: What does AI Democracy include in the box?

There are many scholarly discussions about “democracy” and what it is and what it isn’t. In some very general versions, democracy is about being able to vote for a person to represent you in government. This is a very rough version.

  • Democracy is about being able to select a representative, free of coercion, by voting

The main thrust of many applications of AI is about coercion of the voter; incentives or threats to vote in a particular manner or for a particular person.

  • (USA) political parties send out “voting preference” flyers which list the “approved” candidate(s) for that party, for your district and area. These sometimes mimic the actual ballot, so you can go down the official ballot and tick all the suggested candidates in their presented order.

There is a looming problem with what is not in the AI box at all:

  • The global population age shift and how governments respond to this threat to their economic system.

Nearly every country has a large aging population and a small younger population, except in Africa, where successive waves of horrendous pandemics and disease have wiped out the majority of the older population.

The basis of all economic systems is “work” and “worker”. AI will certainly replace some work and some workers; however there are limits on how much AI, or even robotics, can replace.

Within some government economic reviews, the lack of younger workers comes down to: women. Women not having children. Women working. Women remaining single. Women pursuing activities and economic options not historically available to them.

  • There are great stories of historical women who beat the odds, but for the vast majority of women globally they cannot beat the odds.

No government wants to give up their women workers: they work for less pay, they work longer hours, they endure worse conditions, they are easily intimidated.

Consider:

  • How will AI Democracy address the discrepancy between lack of younger workers and women who provide significant contributions to their country’s GNP?
  • How will AI Democracy address shifting laws over the legal status of women within a country and within the world community?
  • How will AI Democracy change demands for manual work and benefit workers with economic parity?
  • How will AI Democracy change representation if not enough women are having children, given the accessibility of cryopreserved eggs and embryos?

Clive Robinson July 18, 2024 5:35 AM

@ JonKnowsNothing, ALL,

Re : What is democracy?

“There are many scholarly discussions about “democracy” and what it is and what it isn’t. In some very general versions, democracy is about being able to vote for a person to represent you in government.”

That is not “democracy” but the misnamed con job of “representational democracy”.

Look on it as actually a “Miss Rep” beauty pageant.

That is you are voting for a monkey in a suit to go to a “chimps tea party”, not voting on actual substantive issues.

Very occasionally you do get to vote on a substantive issue but usually the question you have to vote for or against is “rigged” to get a desired outcome. So not even close to a fair vote.

You have next to no choice about which monkey gets to stand, as there are hidden selection processes by party cliques that have specific agendas. Agendas that have been formulated by other persons you have no control over nor mostly even know. Those we do get to find out about are in many eyes “highly undesirable” to put it politely.

Other fun nonsense: the so-called “Campaign Promises” that melt away, never to be seen until the next voting con job.

After analysis you realise that,

1, There is no “We the people”.
2, Policy is purchased by those who do not in any way represent the people.
3, Such people do not contribute, via taxation or any other method, to society, just to those chosen “representatives”.
4, The majority of people in society are there to be a tax resource bled to death.
5, The tax is mostly not for society but to be “gifted” in various ways to those buying the representatives’ opinions.
6, The tax take is then used to acquire assets, used for further taxation of the majority via “Rent Seeking” behaviours.

In short, the few are trying to turn society into a new form of “Feudal System” where the fate of the majority is in no way in their own hands.

Oh and for those saying,

“The US is a Republic”

Or,

“Life liberty and freedom”

It’s not, and it was never designed to be. It’s just a new form of “The King Game”, and every bit as nasty as, if not nastier than, a tyrannical, despotic or lunatic monarchy.

Oh and with the way “US Political Families” work, just as “in bred” as well.

Fun thoughts for you.

1, Due to a falling out in the UK Royals, one of them has moved to the US; any children they have there become eligible to be US President. So in a little less than half a century you could have an English Prince on the US throne…

2, It could be much less time if ex-UK Prime Minister Boris Johnson, who was born in the US, decides it might be fun to be US President…

Now won’t that all be jolly good fun, especially as Boris has more or less said the UK should be so aligned with the US that it in effect becomes joined at rather more than the hip.

Just think you could have “Swan Upping”[1] on the Charles River in New England with tea at Boston Harbour[2].

[1] https://www.tatler.com/article/swan-upping-what-is-it-and-why-does-it-still-exist-today

[2] https://www.epa.gov/charlesriver/about-charles-river

ResearcherZero July 19, 2024 4:05 AM

Dubious candidates have employed the same pitch since the time of Caesar.

ResearcherZero July 19, 2024 7:23 AM

In a casino, all gambling is designed so that the house (i.e. the casino owners) will always net a profit, regardless of the successes of individual patrons.

Be wary of those who claim to have your best interests at heart.

“no entity has proved as effective at fabricating facts to demoralise, unsettle and outmanoeuvre opponents.”

‘https://www.telegraph.co.uk/news/2024/07/15/putin-is-leading-russia-into-a-demographic-catastrophe/

Winter July 19, 2024 11:37 PM

@JonKnowsNothing

In some very general versions, democracy is about being able to vote for a person to represent you in government.

Plato asked “Who should rule?” (The Republic). Democracy is answering “The People”.

Karl Popper wrote that Plato asked the wrong question (The Open Society and its Enemies). The correct question is “How can we get rid of the rulers?”.

In the view of Popper’s question, a Democracy is a system where “The People” can remove anyone from power on short notice in practice.

If you want to know the state of a Democracy, just look how long those in power stick to their seats.

Clive Robinson July 27, 2024 11:26 AM

@ Bruce Schneier, ALL,

Re : No democracy with model collapse.

There is a problem with current AI based on LLM and ML systems.

They are way too susceptible to “model collapse”, which has serious side effects, some of which are detailed in

“AI models collapse when trained on recursively generated data”

https://www.nature.com/articles/s41586-024-07566-y

“We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). We build theoretical intuition behind the phenomenon and portray its ubiquity among all learned generative models.”

(Open Access, published 24 July 2024)

In short “data based on previous data” amplifies the GIGO effect.

Perhaps not news to some, but the effects, as in analogue and DSP filters used in regenerative feedback loops, can be “oscillatory”. At the very least there is a ‘Q-multiplication’ effect, the result of which is that the “tails of the original signal distribution disappear” rather rapidly.

Unfortunately the “Democratic Political Process” is very much all about “feedback loops” within “feedback loops”. Even without AI systems, the effects of MSM and Social Media often cause the very polarised outcomes we see and the consequent problems that follow. Current AI systems cannot make this better; in fact they can only make it worse.
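The single-Gaussian case the paper analyses can be demonstrated in a dozen lines. A minimal sketch (sample size, seed and generation count are arbitrary choices): each “generation” is fitted only to samples drawn from the previous fit, and the fitted spread decays, i.e. the tails of the original distribution disappear.

import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # the "real" data

for gen in range(200):
    mu, sigma = data.mean(), data.std()              # fit a Gaussian "model"
    data = rng.normal(loc=mu, scale=sigma, size=50)  # next gen trains on model output
    if gen % 25 == 0:
        print(f"generation {gen:3d}: fitted sigma = {sigma:.3f}")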

Clive Robinson July 28, 2024 9:31 PM

@ Bruce, ALL,

Re : Nvidia chips overpriced?

As some know “that special chip” behind all the LLM systems is the Nvidia H100 at around $40,000 a pop.

Well, that price may be that high due to supply-and-demand pricing created by an artificial shortage…

“Just four companies are hoarding tens of billions of dollars worth of Nvidia GPU chips”

https://sherwood.news/tech/companies-hoarding-nvidia-gpu-chips-meta-tesla/

One little twist that certainly raised my eyebrow,

“Venture capital firm Andreessen Horowitz is reportedly hoarding more than 20,000 of the pricey GPUs, which it is renting out to AI startups in exchange for equity”

Remember I said “secondary investing” in the likes of Nvidia was the way to go with this AI bubble, because the Venture Capital money would have to be spent somewhere…

Well, it looks like Andreessen Horowitz is “smart buying and renting”.

If the LLM bubble does not burst, and something useful does come out of it, then Andreessen Horowitz is going to have one heck of a lot of equity, got for next to nothing, to push out at top dollar, and will still have chips to burn to keep milking it.

Likewise if the bubble does burst and die, they will still have those chips to push into something else.

But what else is there that needs very high-end Graphics Processing Units?

As I’ve mentioned there is Web3 and Web3.0 that have been conflated.

But there is a third “web three” that Meta (the biggest of the chip hoarders) is getting into on the quiet.

Some might know it as the “Metaverse” which is being pushed as being in effect a totally immersive VR of supposedly “The Matrix” quality or better…

Which is quite a jump on from the view Meta was pushing just under three years ago,
https://www.bbc.co.uk/news/technology-58749529

As for those 10,000 European jobs Meta were talking about, I’m not holding my breath on that.

Clive Robinson July 30, 2024 10:14 PM

@ Bruce, ALL,

Re : Can an AI be an observer?

In,

“Metaphysical Experiments’ Probe Our Hidden Assumptions About Reality”

https://www.quantamagazine.org/metaphysical-experiments-test-hidden-assumptions-about-reality-20240730

A discussion is had that initially appears detached.

However, after some convoluted groundwork, towards the end a rather important question arises about what it means to be an observer.

The article points out a rather thorny issue currently,

“Some philosophers of mind believe in the possibility of strong AI, but certainly not all. Thinkers in what’s known as embodied cognition, for instance, argue against the notion of a disembodied mind, while the enactive approach to cognition grants minds only to living creatures.”

And yes there are other views as well with Roger Penrose and David Deutsch getting mentioned,

“… he read books by the physicists Roger Penrose and David Deutsch, each offering up a radically different metaphysical story to account for the facts of quantum mechanics. Should we give up the philosophical assumption that there’s only one universe, as Deutsch suggested? Or, as Penrose preferred, perhaps quantum theory ceases to apply at large scales, when gravity gets in on the action.”

Roger Penrose proposed the notion of “quantum effects behind our intelligence”, a subject that still causes argument, and David Deutsch gets credited as being the mind behind quantum computing.

But nobody has yet answered the question of,

“What it takes to be an observer”

Some would think any sensor, from a simple mechanical switch through to the likes of the LHC, is an “observer of events”, but that is just a fraction of what “an observer” does.

As I’ve noted several times in the past,

1, Technology is agnostic to use.
2, The use is chosen by a directing mind.
3, It is an observer who decides if that use is good or bad.

After a few moments’ thought many will realise that machines, whilst they can follow “the rules embedded in them”, can not decide good or bad for a novel use.

So can AI be an observer?

Well, certainly not with the current LLM and ML systems, no matter how arm-wavy the AGI proponents get.

Clive Robinson August 3, 2024 12:42 PM

@ Bruce, ALL,

Re : Aphantasia and the mind.

One of the arguments that AI AGI is basically woo-woo belief, lacking any logical, theoretical, or physical foundation, is how people or machines,

“See the world around them.”

Via “the internal or mind’s eye”.

In humans, and increasingly in animal studies, there is evidence building up that “the mind models”, and studies into learning going back into the last century suggest there are three basic internal points of view for thinking and reasoning,

1, Those who see by language
2, Those who see by graphs / images
3, Those who see by equations.

I’m one of those cursed to see not just by images, but to pull images apart and see how the systems they show work internally. In effect I do not see the car, but all its parts hidden away inside, and in effect make an “exploded diagram” of the sort common in science and engineering magazines back when microcontrollers did not run the world.

Worse, I can do it without having to shut my eyes. Yes, it’s like a very annoying and sometimes disorienting “Heads Up Display”. But it did have one or two advantages, like being able to run through a crowd. As I’ve mentioned before, the way to “run through a crowd” is to imagine that everyone else is stationary and head for where they are not. The trick is to project them forward to the time when you will be close to them, thus you aim not for “where they are not now” but for “where they won’t be in the few seconds”, or less, it takes to get to their approximate position. If you assume they are moving in a straight line at constant speed, it’s like the trick with a compass when you are on a sail boat: if the bearing to another vessel, say a large container ship, does not change, you are on a collision course. So it is a kind of “linear projection” in the mind.

Don’t ask how I do this; I don’t know, so can not explain.
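
The “linear projection” part, at least, is simple enough to sketch. Below is a toy illustration in Python (the names and numbers are invented, and it is not how any brain does it): project each walker forward assuming straight-line constant speed, then apply the sailor’s constant-bearing test.

```python
# Toy sketch of "linear projection" (illustrative numbers only, not how
# any brain does it): project each walker forward assuming straight-line
# constant speed, then apply the sailor's constant-bearing test -- if
# the bearing to the other party does not change, you will collide.
import math
from dataclasses import dataclass

@dataclass
class Walker:
    x: float; y: float    # current position (metres)
    vx: float; vy: float  # current velocity (metres/second)

    def at(self, t: float) -> tuple[float, float]:
        # Linear projection: where this walker will be t seconds from now.
        return (self.x + self.vx * t, self.y + self.vy * t)

def constant_bearing(a: Walker, b: Walker, dt: float = 1.0) -> bool:
    """True if the bearing from a to b is unchanged after dt seconds."""
    def bearing(t: float) -> float:
        (ax, ay), (bx, by) = a.at(t), b.at(t)
        return math.atan2(by - ay, bx - ax)
    return abs(bearing(0.0) - bearing(dt)) < 1e-9

me = Walker(0, 0, 1.5, 0.0)      # heading east at a walking pace
other = Walker(6, -6, 0.0, 1.5)  # heading north, converging
print(constant_bearing(me, other))  # True: collision course, so aim elsewhere
```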

But it appears that not all humans can “visualise” or “see in their mind” objects, and the ability appears to be a lot less frequent than last century’s research appeared to consider.

It’s generally not something I think about that often, but this article,

https://www.quantamagazine.org/what-happens-in-a-mind-that-cant-see-mental-images-20240801/

got posted on Friday last week.

What only gets a fleeting mention, with respect to research at an old stamping ground of mine, “University College London”(UCL), is that

“Aphantasia is relevant to AI.”

One of the big AI arguments currently is based on the notion of “self awareness” that is often argued as being,

“A sense of self in the world around via ‘the minds eye’.”

If it can be shown that the “mind’s eye” is not a requirement for “intelligence in humans” or other creatures with “real agency”, then life is going to get a bit more interesting.

Yes I have my own views on the subject, but I’m going to leave it open to others to come to their own conclusions without pre-bias from my views.

Clive Robinson August 3, 2024 2:16 PM

@ Bruce,

Will the AGI bubble be gone before year end?

Some certainly think so; the author and AGI skeptic Gary Marcus, for one,

“I’ve always thought GenAI was overrated. In a moment, though, I will tell you why the collapse of the generative AI bubble – in a financial sense – appears imminent, likely before the end of the calendar year.

To be sure, Generative AI itself won’t disappear. But investors may well stop forking out money at the rates they have, enthusiasm may diminish, and a lot of people may lose their shirts. Companies that are currently valued at billions of dollars may fold, or stripped for parts. Few of last year’s darlings will ever meet recent expectations, where estimated values have often been a couple hundred times current earnings. Things may look radically different by the end of 2024 from how they looked just a few months ago.”

https://garymarcus.substack.com/p/why-the-collapse-of-the-generative

(Warning: it’s a “pay to read” page on his site; if you don’t already have a reason to be paying, I’d give it a miss and read his immediately preceding posts.)

Winter August 4, 2024 3:59 AM

@Clive

One of the arguments that AI AGI is basically woo-woo belief

AGI is just a repackaging of the Singularity, or the Christian Rapture/Second Coming without the Christ.

It is all techno-religion without a foundation of understanding what Intelligence actually is.

Winter August 4, 2024 6:04 AM

@Clive
Re: Democracy

They are way too susceptible to “model collapse”, which has serious side effects, some of which are detailed in
“AI models collapse when trained on recursively generated data”

This has long been seen in the media.

When media, e.g. TV, start to report on what other media have been reporting, we see a similar “collapse” in subject matter and texts.

Re-re-reporting news cycles start to collapse onto mere statistical flukes. A perennial example is The War on Christmas[1]. Reporting devolves to regurgitating just a few “iconic” cases and standpoints that have little or no connection to reality.

The media end point of model collapse is called Fake News.

[1] ‘https://www.history.com/topics/21st-century/the-war-on-christmas
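
The mechanism is easy to demonstrate with a toy simulation (a sketch of the idea only, not the paper’s actual setup; the token names and Zipf weights are invented): treat the “model” as nothing more than the token frequencies of its training data, and train each generation on text sampled from the previous one. Once a rare token fails to be sampled it is gone for good, so diversity only ever shrinks.

```python
# Toy illustration of model collapse (a sketch, not the paper's code).
# A "model" is just the empirical distribution of tokens in its training
# data. Each generation trains on text sampled from the previous model.
# Once a rare token fails to be sampled it is gone forever, so diversity
# can only shrink -- the statistical flukes of one generation become the
# "facts" of the next.
import random
from collections import Counter

random.seed(42)
vocab = [f"tok{i}" for i in range(100)]
zipf = [1.0 / (i + 1) for i in range(100)]          # Zipf-like weights
data = random.choices(vocab, weights=zipf, k=500)   # the "human" corpus

for gen in range(10):
    model = Counter(data)  # the model: plain token frequencies
    print(f"generation {gen}: {len(model)} distinct tokens survive")
    tokens, weights = zip(*model.items())
    data = random.choices(tokens, weights=weights, k=500)  # synthetic corpus
```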

Clive Robinson August 5, 2024 5:17 AM

@ Winter, ALL,

Re : Mind’s eye a requirement.

You say what I suspect others are thinking with,

“It is all techno-religion without a foundation of understanding what Intelligence actually is.”

Or as others in other places have indicated is in effect a “cargo cult”.

But think about it the other way. We currently have no idea, let alone a useful definition, of what “intelligence” is. Like “random”, it’s a kind of “placeholder word” for what we feel is missing but for which we have no description by which we can make valid tests.

In effect that boils down to one of a number of points on a zero to potentially infinity type scale,

1, There is no such thing as intelligence.
2, There is some barrier that prevents us understanding intelligence.
3, We lack the capacity to understand thus define intelligence.
4, What we call intelligence is just an attribute of something else like “life” we first have to understand.
5, Intelligence is so innate in the physical universe that it is in effect a ubiquitous foundation of it.

This is, by the way, one of those “awkward viewpoints”, such as that of Prof Sir Andrew Hopper, computer technologist and entrepreneur, associated with both Cambridge Uni and Acorn Computers, which gave rise to ARM.

His viewpoint is in effect a foundation of the notion of “ubicomp”,

https://en.m.wikipedia.org/wiki/Ubiquitous_computing

And he developed the notion of the “Active Badge System” that I did some work on during the 1990’s when looking at “Universe Databases”, where the databases were in effect all in parallel, distributed wherever people were, and acting as backups for each other.

In effect all information became ubiquitous and available wherever you were, and your primary information needs followed you to reduce latency etc. Thus, of the three basic operations on information,

1, Storage
2, Communications
3, Processing

The part that was important to an individual was “processing”, and it needed to be as local to a person as possible, whilst the other two were effectively about the security and efficiency of information.

My personal view is that ubicomp is lacking in that it is currently of limited spectrum range. That is, it starts at the point of “man made”, and thus is artificially constrained by “what we currently are” rather than “what nature allows”.

That is, I suspect that the ability to compute is actually as foundational to our universe as entropy is. What is lacking is in us, in the form of recognition and understanding.

Thus the “Can we, Will we” questions. We are in effect part of that natural system, which gives rise to a question about “systems” whose limitations Kurt Gödel had some things to say about back in the early 1930’s.

JonKnowsNothing August 5, 2024 11:17 AM

@Winter, @Clive

re: Model Collapse, Education, Science

Indications in the MSM are that model collapse is a bit closer to reality. However, I am not so sure it will happen that quickly. There is a lot of data still to be scraped globally, and while the Velocity of Re-Post is quite high for hot topics, low value topics will still have low hit counts. How models decide between high hit count and low hit count value data will be interesting to observe. Of course they will be looking at “first hit count” as some sort of gauge, but we know from SEO activities that is a poor predictor.

  • No AI Model knows the real value of a data point nor can they analyze truthfulness of data

So, one of the areas of concern is what happens after N years or N months of model recycling, after every current Uni student has graduated, having been among the first gen of AI model users, and having had the “best reliability” of data for their AI generated essays, reports, and theses:

  • What will be the outcome for the next gens in reliability or innovative model use?

If one is simply writing a book report or summarizing historical events, model collapse is mostly a function of false data being inserted into the report.

  • eg: medieval wars used mechanized tanks (1)

However, when it comes to difficult topics or deeper science concepts we will have a problem validating the reports.

It’s not unlike now, where shoddy “meta data” studies claim a global finding; however, when checking the fine print of the details, the study population size was <500 and the matching criteria <200.

Already we see similar problems in judicial filings, where not only are lawyers inventing references to fictitious cases, but judges are using the models to present their legal reasoning and findings.

In this area, model collapse has a significant impact. Even if and when the models are retired or eliminated, these artifacts will remain in the data sphere. There isn't any shoehorn method of ejecting them.

iirc (badly) a review of the accuracy of school text books on historical events, math concepts, and science topics, along with the author-suggested Q&A for work assignments, noted large amounts of errata. Most of this errata never made it into the reprinted books.

E-books have tons of errata, and unless the publisher pushes the errata into the current publication or provides a push-update, the errata never makes it to the reader.

  • Errata in First Edition physical books, may make a physical book more valuable.
    • Errata circulated by model collapse may not have the same effect.

===

1) There is a whole genre of Alt-History fiction based on What-If. An AI model could hallucinate such Alt-History fiction as real history non-fiction.

Bob Paddock August 12, 2024 8:51 AM

@Clive Robinson

“… Via ‘the internal or minds eye’. …”

Clive, there is the very obscure realm of learning to ‘see’ while blindfolded.
Young children learn this ability rapidly, because they don’t know it is “impossible”.
Adults take a lot longer to get there.

Some call Mindsight “Remote Viewing Up Close In Real Time”.

Website “Vision Without Eyes” for upcoming seminars/workshops:

https://www.visionwithouteyes.com

Clive Robinson August 13, 2024 6:32 PM

@ Bruce, ALL,

Re : Neural Network that wood string along.

I’ve kind of made myself a little unpopular with some when it comes to “Digital Neural Networks”(DNN), by pointing out they are only glorified “Digital Signal Processing”(DSP) based algorithms and have no inherent “magic” or “Intelligence” (thus pressing a pin against the bubble). In short, DNNs as currently implemented in LLMs are just glorified “filter banks” with nonlinear weightings at the output of each digital neuron. As such they really do not behave the same way mammalian and other biological neurons do.
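
To make the “filter bank” point concrete, here is a minimal sketch (purely illustrative, with made-up taps and samples): an FIR filter is a multiply-and-add over a window of samples, and a digital neuron is the same multiply-and-add with a bias and a nonlinearity bolted on the output.

```python
# Minimal sketch (illustrative only) of why a digital neuron is DSP:
# an FIR filter computes sum(w[i] * x[i]); a digital neuron is the same
# multiply-and-add with a bias and a nonlinear squashing on the output.
import math

def fir_filter(taps, window):
    # Classic DSP: weighted sum over a window of samples.
    return sum(w * x for w, x in zip(taps, window))

def neuron(weights, bias, inputs):
    # The "AI" version: the same multiply-and-add, then a nonlinearity.
    return math.tanh(fir_filter(weights, inputs) + bias)

samples = [0.2, -0.5, 0.9, 0.1]   # made-up input window
taps = [0.25, 0.5, -0.3, 0.1]     # made-up weights
print(fir_filter(taps, samples))  # a filter output
print(neuron(taps, 0.05, samples))  # a "digital neuron" output
```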

But it appears I’m not the only one who does not see the magic in the Emperor’s new attire… So much so that someone has built a mechanical analogy of what a DNN does, which they call a “Mechanical Neural Network”(MNN).

Hopefully it will provide an education to those who “see magic” when it’s not even “Science sufficiently advanced”…

Anyway read more,

https://www.schaffland.eu/en/mnn.html

Or watch it in action,

https://m.youtube.com/watch?v=cEzk8JKDzy4

Winter August 14, 2024 9:11 AM

@Clive

I’ve kind of made myself a little unpopular with some when it comes to “Digital Neural Networks”(DNN), by pointing out they are only glorified “Digital Signal Processing”(DSP) based algorithms and have no inherent “magic” or “Intelligence” (thus pressing a pin against the bubble).

I think your metaphor of LLMs as glorified “Digital Signal Processing”(DSP) is falling on deaf ears because it does not help in any way in understanding current LLMs.

Even for those people who know what a DSP or filter bank is, nothing in the history of DSPs even hints at the ability to engage in a coherent written conversation (spoken versions are in the works).

So, when ChatGPT is better at engaging in a reasonable and coherent conversation than certain well-known personalities, what does the DSP metaphor add to help me understand this capability?

Also, what is the hitherto hidden feature that gives a DSP the capability to engage in coherent conversations? Should I conclude that DSPs are more intelligent or better conversationalists than these well-known personalities?

Clive Robinson August 14, 2024 2:34 PM

@ Bruce, ALL,

Re : Another abusive use of AI for surveillance and harm to citizens.

Two Democratic US senators have been made aware of a highly discriminatory practice against individuals by the use of AI.

Senators Elizabeth Warren and Bob Casey are raising alarms about US grocery chain Kroger using an artificial-intelligence-driven “dynamic pricing” model.

By using an AI based surveillance system from “IntelligenceNode”, allied with Microsoft, Kroger apparently collects customers’ personal and private information to calculate a “personal price point”.

Sometimes called a “pain point”, in essence they want to drive the price you pay personally to a maximum without driving you away as a customer.

https://www.rawstory.com/kroger-pricing-strategy/

So Microsoft are now well and truly at the “betray” end of the AI model I outlined –see above– noting,

“… they are designed as “Surveillance Systems” with a business model of “The five Be’s” of

Bedazzle, Beguile, Bewitch, Befriend, and Betray”.

Expect way, way more of this as the “AI Investor Bubble” dies back, as those billions if not trillions of dollars sunk into the “con game” have to be gouged back from ordinary people’s pockets over the coming year or two.

My guess currently is that every US consumer is “on the hook” for $150-300 on average to “pay back” these near worthless current AI LLM and ML systems.
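
For what it’s worth, the back-of-envelope behind that guess runs as follows (the figures are assumptions, not data):

```python
# Back-of-envelope behind that guess (assumed figures, not data): if
# $50-100 billion of sunk AI spending has to be clawed back from some
# 330 million US consumers, each is on the hook for roughly $150-300.
US_CONSUMERS = 330e6

for sunk in (50e9, 100e9):
    per_head = sunk / US_CONSUMERS
    print(f"${sunk / 1e9:.0f} billion to recoup -> ${per_head:.0f} per consumer")
```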

Clive Robinson August 14, 2024 3:01 PM

@ Bruce, ALL,

In case people think I’m being ‘unfair’ in some way on current AI LLM etc systems, remember that back a long time ago, at a University far far away, I used to play not just with AI systems but Robots as well.

So I’ve seen some of the issues most have not with the combination of them both, that is,

1, Giving robots limited action capability via primitive AI
2, Giving AI ML systems agency to explore the environment they are in

But I’m not alone in this; there are others.

Over at the MIT CS/AI labs, in the Robotics domain, roboticist Prof. Rodney Brooks,

https://people.csail.mit.edu/brooks/

Has come up with his,

“Three Laws of AI”

https://rodneybrooks.com/rodney-brooks-three-laws-of-artificial-intelligence/

” 1, When an AI system performs a task, human observers immediately estimate its general competence in areas that seem related. Usually that estimate is wildly overinflated.

2, Most successful AI deployments have a human somewhere in the loop (perhaps the person they are helping) and their intelligence smooths the edges.

3, Without carefully boxing in how an AI system is deployed there is always a long tail of special cases that take decades to discover and fix. Paradoxically all those fixes are AI-complete themselves.”

Clive Robinson September 12, 2024 1:19 PM

@ ALL, Bruce,

Re : Things to get ahead of.

I’ve pointed out that the current LLM “Digital Neural Networks”(DNN) are for a whole heap of reasons not the way to go.

In the past I’ve mentioned why the “Multiply and ADd”(MAD) DSP style algorithms used in current AI DNN’s are both unsuitable and actually the wrong way around.

Further, I’ve mentioned that “Biological Neural Networks” do not work on “levels” but on regular pulses, as though from variable frequency oscillators, and why this works significantly differently to the nonlinear transforms used in DNN’s and,

“Importantly does not have the same limitations.”

Well it appears after a decade and a half and maybe a dozen papers that the academic community is starting to think on this,

https://pubs.aip.org/aip/apr/article/7/1/011302/997386/Coupled-oscillators-for-computing-A-review-and

For various reasons I think this is likely to be the next stage of underlying AI systems.

So it should be seen as “future directions” rather more than “current”.

But loosely coupled oscillators in quantum resonator systems also allow you to do some of those “Quantum Computer” things that are getting people so twitchy.
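
For a feel of what such systems do, here is a toy sketch (my own illustration, not code from the review, with made-up parameters) of Kuramoto-style coupled phase oscillators: with strong enough coupling the phases lock into a stable pattern, and it is such stable phase patterns that oscillator computers use to represent answers.

```python
# Toy sketch of computing with coupled oscillators (illustrative only):
# Kuramoto phase oscillators with coupling strong enough relative to
# their frequency spread fall into synchrony; the order parameter r
# rises from incoherence (r ~ 0) towards full synchrony (r = 1).
import cmath, math, random

random.seed(0)
N, K, dt = 8, 2.0, 0.01
omega = [random.gauss(1.0, 0.1) for _ in range(N)]       # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases

def order_parameter(phases):
    # Magnitude of the mean phase vector: 1 = synchronised, ~0 = incoherent.
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

for step in range(3001):
    if step % 1000 == 0:
        print(f"t={step * dt:5.1f}  r={order_parameter(theta):.3f}")
    # Kuramoto update: d(theta_i)/dt = omega_i + (K/N) * sum sin(theta_j - theta_i)
    coupling = [sum(math.sin(tj - ti) for tj in theta) * K / N for ti in theta]
    theta = [ti + (w + c) * dt for ti, w, c in zip(theta, omega, coupling)]
```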

Clive Robinson September 19, 2024 12:23 AM

@ Bruce, ALL,

Many are aware of the “pollution by AI” downside of LLMs, but have talked about it more theoretically than practically.

Well, how about this very real world harm that LLMs are causing by their pollution of the Internet etc,

https://github.com/rspeer/wordfreq/blob/master/SUNSET.md

This of course has a “knock-on effect” into other fields of endeavor.

Security, and the sub-field of cryptanalysis, is one knowledge domain being harmed.

Clive Robinson September 19, 2024 9:42 PM

Another subject involving AI and Security is that of “Agency”.

Currently through IoT we are adding “world aware sensors” to networks rather rapidly.

The problem is they are mostly insecure in various ways, but worse, many are deliberately designed to “ET Phone Home” to “The Mother Ship” one way or another, to increase “surveillance on individuals” en masse.

But starting to appear since the 1980’s and 90’s are cyborgs / transhumans. Until recently they were in effect “cut-n-shut” operations to put things as simple as magnets in people.

I know, from having had for medical reasons a smaller-than-matchstick-sized “Reveal” cardiac monitor, that there are issues with cut-n-shut, one of which is the build up of “fibroids” around the implant (think of how a pearl forms around grit in an oyster, but with soft tissue).

But that is not stopping people having hearing and visual implants to make up for what is in essence an unfortunate biological handicap or disability.

But some people are going in different ways,

https://www.bbc.co.uk/news/articles/cg58r70yj43o

Where the level or type of augmentation is going beyond that of correcting for a handicap or disability.

It should be obvious from just looking at the explosion of IoT devices and types just what could be done if the sensors are “connected effectively” to the human neural network.

Not much is being said about it, but with a little thought it becomes clear that,

1, AI ML and even LLMs will be required.
2, The level of security needed is nowhere even near to being what it needs to be.

In fact, with regards to AI, it’s entirely unclear if we actually have any real idea as to how to find, let alone address, the real processing security issues, as opposed to the communications and storage security issues.

But like it or not, cyborg / transhuman augmentation has in effect reached a tipping point, with the help of AI to do interface translation between the standards we use in computer networks and biological neural networks. This is unlikely to stop or slow down, and the next decade will see considerable advancement in this area, probably more so than we will see with Quantum Computing.

Importantly it will happen in three basic directions,

1, AI systems will get increasing agency via IoT sensors and existing networking.
2, Biological sensors will become increasingly interfaced to existing digital networks.
3, Both electronic and biological sensors will get interfaced by some level of AI to biological neural networks.

The two questions that we need to think about now are,

1, Where will this take us?
2, What of privacy, autonomy and security of the individual as we move along the journey?

Clive Robinson September 21, 2024 4:13 PM

@ Bruce,

Q : What does slow AI look like?

It is not a question that I, and I suspect many others, have thought about except as “bad”, due to the perceived notion that everyone wants everything to be instant, thus instantly gratifying.

Or do they?

Even with the engineer’s ability to see systems in detail almost instantly, I don’t actually want to. What I want is to “mull things over”, to take time to think about them and see if there are other ways.

I’ve joked about “Premature Optimisation” being “the fast road to hell” or a “trip over the cliff edge of a tipping point”. But as I suspect you know, “Managers” do not want problems to think about, they want,

“Answers that are safe because they can not be wrong”

Thus they are guided in the main by only one metric, “the faster the better”, with the implicit “no other measures considered”, because “it makes the maths simple, linear, and unarguable”.

Thus we do not find optimal or “sweet spot” answers but fast answers; we judge by a single line, and worse, assume it is unarguably right.

So life gets faster and faster and usually more and more inefficient,

“Why does inefficiency go up?”

Because as any Realtime System designer will tell you,

“To increase speed of operation you have to reduce the response time for tasks. But the more tasks, the faster you have to switch them… and switching tasks carries a finite time penalty for each switch, so as the switching speed rises, resource availability for tasks decreases.”

In effect,

“Slow down and do more with what is available.”

Or

“Speed up and do less or nothing.”

This actually holds true for many things, even as you increase the resources by putting them in parallel. Because if the tasks have any interaction at all (and usually they do) all you end up doing is moving the inefficiency of task switching into task scheduling and queuing.

This is assumed by many to be amenable to more complex mathematics, but… we also know from research in the 1930’s, before electronic computers, that it’s all too often not possible to analyze.
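
A back-of-envelope sketch makes the point (the 5 microsecond switch cost is an assumed figure, not a measurement): if every task switch burns a fixed overhead, the fraction of the machine left for real work falls as the switch rate rises, until it does nothing but switch.

```python
# Back-of-envelope sketch (the switch cost is assumed, not measured):
# useful capacity falls linearly with the task-switch rate, until the
# machine does nothing but switch between tasks.
SWITCH_COST_US = 5.0  # assumed cost of one task switch, in microseconds

for switches_per_sec in (100, 1_000, 10_000, 100_000, 200_000):
    overhead = switches_per_sec * SWITCH_COST_US / 1_000_000
    useful = max(0.0, 1.0 - overhead)
    print(f"{switches_per_sec:>7} switches/s -> {useful:6.1%} left for real work")
```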

But what of,

“The human experience”

What do we get from it and what do we gain, or more importantly, what else will humanity lose to not just “ever faster” but “AI as a short cut”?

Have a watch of this (warning one or two NSFW words),

https://m.youtube.com/watch?v=Bc9jFbxrkMk

The cost to humanity of AI could be worse than the “Existential Fears” being pushed by those claiming the sky will fall as the drones wipe out every human alive.

Humans have to strive to grow and develop; if we don’t, humanity ends. As I’ve noted in the past, aside from political control of the masses, the whole underlying essence of religion with deities is something to strive towards becoming, to improve not just in body but mind.

AI could easily rob us of the forward progress of striving, thus developing, turning it into a form of “slow death by rushing ever faster”.

Clive Robinson September 21, 2024 6:25 PM

@ Bruce,

You and I are, according to the Internet, of a similar age. Hopefully you are aging better than I.

As you are possibly aware, my health has been an issue for many years, and I’ve had to “grasp the nettle” of the medical profession with a distinct firmness. And whilst battling doctors onto suitably sane paths of action has been possible in the past, I note that as the age disparity increases (as the doctors in effect get younger with respect to myself) I find myself cast as being “aged and thus unreasonable”. With reason, and even sanity, in the doctors diminished in preference to “the computer says”…

That bolsters the “do as you are told” mentality an inexperienced adult might try to inflict on a pet or child.

It’s got to the point where I find myself wondering about the future of medicine. As you might be aware, long ago back in the 1980’s and even 90’s I had involvement with “Robotics and Artificial Intelligence”, and did some work on trying to combine “Expert Systems” and “Fuzzy Logic” to try to soften if not remove the issues of both (without much success).

Expert Systems, once “skinned as human” by the “Turing Test Illusion”, were especially problematic, because they were in effect rule-following decision trees that could not deal with any kind of ambiguity, amongst other major failings. Thus to humans they were dogmatic and inflexible; worse, they did not change with increasing data, and were highly sensitive to the order in which data became available…
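
For those who never met one, a toy sketch of the problem (mine, not any real 1980’s product): a first-match rule list answers confidently, cannot say “it depends”, and gives a different diagnosis when the same facts arrive in a different order.

```python
# Toy sketch (not any real 1980s product) of an expert system as a
# first-match rule list. It is confidently wrong, cannot express
# ambiguity, and its conclusion depends on the order the facts arrive.
RULES = [
    ({"fever", "cough"}, "influenza"),
    ({"fever"}, "infection of unknown type"),
    ({"cough"}, "common cold"),
]

def diagnose(facts_in_order):
    known = set()
    for fact in facts_in_order:        # facts arrive one at a time
        known.add(fact)
        for conditions, conclusion in RULES:
            if conditions <= known:
                return conclusion      # fires on first match, never hedges
    return "no rule fired"

print(diagnose(["fever", "cough"]))    # "infection of unknown type"
print(diagnose(["cough", "fever"]))    # "common cold" -- same facts!
```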

All of which came to mind again with reading,

https://www.nature.com/articles/s41599-024-03282-0

“Paternalistic AI: the case of aged care”

As noted in the Abstract,

“In this paper, we argue that AI systems for aged care can be paternalistic towards older adults. We start by showing how implicit age biases get embedded in AI technologies, either through designers’ ideologies and beliefs or in the data processed by AI systems. Thereafter, we argue that ageism oftentimes leads to paternalism towards older adults. We introduce the concept of technological paternalism and illustrate how it works in practice, by looking at AI for aged care. We end by analyzing the justifications for paternalism in the care of older adults to show that the imposition of paternalistic AI technologies to promote the overall good of older adults is not justified.”

Now consider other forms of,

“Technological Paternalism”

We see paternalism of a similar form in “cults” and certain types of religion we call “cargo cults”, which have their analogues in most societal domains, including regulation, legislation, politics, ethics, and mores.

AI is in essence going to behave in an unacceptable way and at high speed, with the ability of the bulk of society to address and rebalance it, as once was possible, being rapidly diminished.

Part of which is that AI ML will just “eat its own crap”, thus re-enforcing its deviancy away from society, as “crap begets crap”.

Clive Robinson September 28, 2024 7:52 AM

Is “Generative AI”(GenAI) the next snake oil for the failing “eat your face off” SaaS industry to rip you off with?

As some know, reports show that AI is actually making people less productive, and quite often loss making as well as ineffective.

Microsoft is desperately playing a “shell game” to hide the bad news that CoPilot will head you down the bankruptcy path, as its costs to a business are, some say, four times what it is worth, and that is only going to get worse.

One viewpoint is the bad news that SaaS is now not just morally bankrupt but heading down the path of diminishing returns since the end of lockdown.

Thus the plan is to “polish the turd” with “Snake Oil 4.0”, or just slap non-working mythical AGI on top at a cost to an organisation of $100/user/period.

You can read fairly compelling information to this effect at,

https://www.wheresyoured.at/saaspocalypse-now/

The article asks the bottom line question,

“People want AI and they’ll pay for it, right?”

Well, there are figures available where people are not playing shell games to hide what they don’t want you to know… which is why the author says,

“Nope! Other than Microsoft — which has barely been able to sell it as it is — I can find no public SaaS company that appears to have attributed any meaningful revenue growth to generative AI. In fact, I think they’re doing their best to hide the fact that it isn’t selling at all, and costs more money than it makes when it does.”

Sounds like time to keep the SaaS sellers outside the moat with rather more than a 10ft pole.

Which raises an important question:

If all these AI companies pushing LLMs and ML that technically “Bullshit”[1] are failing to deliver “productivity” or “cost reduction”, and now not even making it as “silly entertainment”…

“Where is the value?”

That has inflated a bubble beyond trillions…

Which leads to the question of, when the inevitable happens, will

“Sam Altman become the next Sam Bankman-Fried?”

Set up for a 25-year or more stretch at the taxpayer’s displeasure.

[1] Yes it is genuinely now a word as a “term of art” in a recognised “knowledge domain”… See Above for links,

https://www.schneier.com/blog/archives/2024/07/upcoming-book-on-ai-and-democracy.html/#comment-439237

Clive Robinson September 29, 2024 10:45 PM

Is AGI intractable?

This paper,

https://link.springer.com/article/10.1007/s42113-024-00217-5

Whilst it is not the easiest to read, it makes the statement:

“This practice has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems, and it is framing this realisation as a short-term inevitability. In this paper, we undercut these views and claims by presenting a mathematical proof of inherent intractability (formally, NP-hardness) of the task that these AI engineers set themselves. This intractability implies that any factual AI system created in the short-run (say, within the next few decades or so) is so astronomically unlikely to be anything like a human mind, or even a coherent capacity that is part of that mind, that claims of ‘inevitability’ of AGI within the foreseeable future are revealed to be false and misleading.”

Clive Robinson September 30, 2024 9:56 PM

Do AI companies work, now and in the future?

It’s a sensible question to ask.

So far people are blindly dumping billions upon billions into AI companies. For what?

As far as current AI LLM and ML go, very little of substance and even less in income.

Large AI companies are spending all the investment on their “next model”, which will be history within a year.

Think about that for a moment: they spend up to $10 billion getting their latest model, nearly all of it on “leasing equipment”, and the model lasts maybe a year before someone else has “built a better mousetrap” and the world heads for that new door…

Can you earn $10 billion/year off an LLM?

The figures are out there, and the answer appears to be no, not even with hidden income such as IP and PII theft from users.

The money goes to those who lease out the chips, and they funnel a lot of that into buying chips, which is why Nvidia is, unsurprisingly, a trillion dollar company.

But a lot of those chips are just sitting in storage… That is, they are being bought up to create an artificial shortage, thus a price hike.

One “Venture Capital”(VC) firm leases out its chips not for money, but equity… Have a think on what that actually means.

But I’m far from alone in considering what is going on,

“What, then, is an LLM vendor’s moat? Brand? Inertia? A better set of applications built on top of their core models? An ever-growing bonfire of cash that keeps its models a nose ahead of a hundred competitors?

I honestly don’t know. But AI companies seem to be an extreme example of the market misclassifying software development costs as upfront investments rather than necessary ongoing expenses. An LLM vendor that doesn’t spend tens of millions of dollars a year—and maybe billions, for the leaders—improving their models is a year or two from being out of business.”

https://benn.substack.com/p/do-ai-companies-work

In the long run,

“You can not burn more than you earn”

Bills have to be paid, even if it’s just the wages of some coal miner digging the next bucket of green house gas emitting carbon fossil fuel to throw on a vanity bonfire.

None of these massive spends is in reality anything other than excessive “running costs”, with next to no “value added” in the process.

What could be fairly said to these companies is,

“Profit is what you ain’t got!”

So it would be fair to ask,

“Is this ever going to change?”

And a look at the research curves strongly suggests the answer is “No”.

It would be interesting to hear from the likes of Nicholas Weaver on this; I know he has some thoughts on AI, Web3, NFTs and Crypto, with a thought on at least one that “it should burn” 😉

Winter October 1, 2024 4:19 AM

@Clive

Is AGI intractable?

Probably, and more likely so after seeing this paper. I never believed in it anyway. AGI looks like the AI predictions of the fifties.

The authors also write:

The situation could be remediated by releasing the grip of the currently dominant view on AI and by returning to the idea of AI as a theoretical tool for cognitive science.

That is not even a prediction; this is happening now. I have already seen a study [1] that replicates human experiments on speech with a neural speech model (wav2vec) to disentangle the way speech data affect speech perception.

[1] ‘https://arxiv.org/abs/2407.03005

Clive Robinson October 1, 2024 6:34 PM

@ Winter,

“AGI looks like the AI predictions of the fifties”

Back in the fifties the idea of AI was new, and kind of started with Alan Turing getting into theological debates with certain types of “Churchmen” on the national broadcaster’s “Home Service”. As a hypothesis AI had not been tested in any formal way, and the reasoning was little more than linguistic; the mathematics and logic did not come until later, when the field split and went down two quite different paths.

AGI can be said to be a product of the “mechanistic path” that in effect says intelligence is something that will happen when you stack enough deterministic complexity up that it crosses some assumed barrier…

Whilst I would not rule out intelligence being a product of complexity, I’m far from certain it can be done purely deterministically in the way we see state machines and Turing Engines. Even Alan Turing felt that was not the case, and always argued –quite rightly– that people had to be able to bring randomness, or a reasonable approximation of it, into consideration. Others now believe that chaotic systems go beyond deterministic systems in ways that are more than just “determinism piled high”. Others, as you know, have argued that perhaps some form of quantum effects are needed, and interestingly, where it was once assumed that quantum effects had no part in biological systems, it’s now increasingly found that they are both necessary and fundamental. I can not say where such thinking will go, but I’ve reason to think that the efficiency required for biological brains to work at the power levels they do needs certain quantum effects.

But as I said, that is but one of the two paths; the paper authors are looking more at the other path, and it’s not a knowledge domain I’m comfortable with, as my knowledge is not sufficient.

In some respects you can see one path as “bottom up thinking” and the other “top down thinking”. That is the two paths are almost coming from opposite directions.

One I’m reasonably comfortable with, the other I’m not, and I could almost write a thick book on why I prefer one to the other.

I suspect the same will be true for others if they dig deep enough. For all the effort of the past half century, much of which I’ve had involvement in, in what have been argued to be “artificial intelligence systems” I’ve seen nothing but two things,

1, Basic determinism.
2, Lack of any environmental agency.

Humans however like most biological systems are very much products of,

1, Basic laws of nature,
2, Applied in environments,
3, In which the systems can respond,
4, In ways that are seen as “free will”.

In all honesty it’s only in “robotics” that I’ve experienced some of these being investigated. And only recently have systems gained the sophistication for non-directed agency to be possible.

So I suspect the next decade or two will see great strides in what will become non-directed learning, beyond what is in current reality just statistical filtering of data, built by semi-random processes into multidimensional weighted spectrums of increasing complexity, but little else.

Clive Robinson October 6, 2024 6:53 AM

As real data arrives, AGI hype dies.

There has been much hype about GenAI and the claim that it will be a 10x programmer or better, giving over a 1000% increase in productivity…

Well, the real world data coming in says not just ‘It ain’t necessarily so’, but that it’s actually worse than a 0.5x programmer, one who leaves technical debt like dung in the Augean Stables, so necessitating a Herculean task to clean up.

https://garymarcus.substack.com/p/sorry-genai-is-not-going-to-10x-computer

If you think about it this is to be expected.

There is no actual intelligence in the current AI LLM and ML systems, and never will be. They are based on statistics of “the most common available” input, perturbed by a degree of noise (hence ‘stochastic parrot’). In effect they are the opposite of an “Expert System” from the 1980’s.

So given the sources, is it really unexpected that low quality, inefficient, and error ridden code predominates in the input to these LLM systems, and thus gets statistically ranked higher?
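
That “statistics of the most common input, perturbed by noise” point is easy to caricature in a few lines (a caricature only, with invented counts, not a real LLM): rank continuations by training frequency and sample with a temperature. The most common pattern, not the best one, dominates the output.

```python
# Caricature of "stochastic parrot" sampling (invented counts, not a
# real LLM): score each continuation by how often it appeared in the
# training data, then sample with a temperature that adds noise.
import math, random

random.seed(7)
continuation_counts = {         # hypothetical training-set frequencies
    "the quick fix": 900,       # common, sloppy pattern
    "the careful fix": 80,      # rarer, better pattern
    "the correct fix": 20,      # rarest of all
}

def sample(counts, temperature=1.0):
    # Softmax over log-counts: frequency rules, temperature perturbs.
    logits = {k: math.log(v) / temperature for k, v in counts.items()}
    z = sum(math.exp(l) for l in logits.values())
    r, acc = random.random(), 0.0
    for k, l in logits.items():
        acc += math.exp(l) / z
        if r <= acc:
            return k
    return k

picks = [sample(continuation_counts) for _ in range(1000)]
print({k: picks.count(k) for k in continuation_counts})
# The most common (not the best) pattern dominates the output.
```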

Clive Robinson October 7, 2024 8:10 AM

The US Copyright Office is wrong on AI.

Consider the case that you use a PC to craft in a CAD package a design you then send to a CNC tool or 3D printer.

The US Copyright Office will accept you as the “author” and credit you with copyright on your “work”, even though you in no way “made it physically”.

However what if you use a tool that has some kind of AI in it?

Well, it appears the US Copyright Office is claiming, quite falsely in comparison, that the AI was the “creative” and thus the “author” is unentitled…

Very much a “dual standard”: it lacks impartiality and is discriminatory.

Such is the effective claim being made in,

https://arstechnica.com/tech-policy/2024/10/artist-appeals-copyright-denial-for-prize-winning-ai-generated-work/

I concur with the artist that AI in the LLM and ML form has no sentience at all; it is no more than a tool.

In reality it’s at best a so-so statistical tool that actually requires significant direction from a sentient individual to get not just worthwhile but actually creative output from it.

It will be interesting to see where the legal case goes; personally I hope it goes against the US Copyright Office, because the alternative will be very very harmful, not just to the use of AI as a useful tool but to society in general.

Clive Robinson October 8, 2024 1:57 PM

@ Bruce,

This “AI in Politics” story is one to put in the research file.

Titled,

“Virginia congressional candidate creates AI chatbot as debate stand-in for incumbent”

It actually sounds not too dissimilar to what an independent candidate in the UK tried to do a few months back…

But this Virginian is way more “in your face” about it,

“Bentley Hensel, a software engineer for good government group CivicActions, who is running as an independent, said he was frustrated by what he said was Beyer’s refusal to appear for additional debates since September. So he hatched a unique plan that will test the bounds of both propriety and technology: a debate with Beyer’s artificial intelligence likeness.”

giampiero obiso October 15, 2024 8:34 AM

My personal choice is a medley between options 2 and 3: “A Better Run: How AI Can Improve Democracy”.

The idea is that the optimistic view of the book is shown from the start, and that no one here is thinking about AI deciding (by itself) what democracy should be. The only verb I would use about democracy is “improve”, not “rethink”, “rewrite”, or “rebuild”, and so on.

Rick Hunter October 16, 2024 4:22 PM

Book title: “AI, Democracy, and We the People”.
It combines most pieces and keeps it short and sweet. A little U.S.-centric, but
not entirely, as “we the people” includes everyone; it does ring loud for the U.S., though.
Anyway, you did some security presentations to us at Tandem Computers when I was an active security consultant in the late 80s or 90s.

Regards, Rick Hunter

