Defending against AI Lobbyists

When is it time to start worrying about artificial intelligence interfering in our democracy? Maybe when an AI writes a letter to The New York Times opposing the regulation of its own technology.

That happened last month. And because the letter was responding to an essay we wrote, we’re starting to get worried. While the technology can be regulated, the real solution lies in recognizing that the problem is human actors, and those we can do something about.

Our essay argued that the much-heralded launch of the AI chatbot ChatGPT, a system that can generate text realistic enough to appear to be written by a human, poses significant threats to democratic processes. The ability to produce high-quality political messaging quickly and at scale, if combined with AI-assisted capabilities to strategically target those messages to policymakers and the public, could become a powerful accelerant of an already sprawling and poorly constrained force in modern democratic life: lobbying.

We speculated that AI-assisted lobbyists could use generative models to write op-eds and regulatory comments supporting a position, identify members of Congress who wield the most influence over pending legislation, use network pattern identification to discover undisclosed or illegal political coordination, or use supervised machine learning to calibrate the optimal contribution needed to sway the vote of a legislative committee member.

These are all examples of what we call AI hacking. Hacks are strategies that follow the rules of a system but subvert its intent. Hacking is currently a human creative process, but future AIs could discover, develop, and execute these same strategies.

While some of these activities are the longtime domain of human lobbyists, AI tools applied to the same tasks would have unfair advantages. They can scale their activity effortlessly across every state in the country (human lobbyists tend to focus on a single state), they may uncover patterns and approaches that are unintuitive and unrecognizable to human experts, and they can do so nearly instantaneously, with little chance for human decision makers to keep up.

These factors could make AI hacking of the democratic process fundamentally ungovernable. Any policy response to limit the impact of AI hacking on political systems would be critically vulnerable to subversion or control by an AI hacker. If AI hackers achieve unchecked influence over legislative processes, they could dictate the rules of our society: including the rules that govern AI.

We admit that this seemed far-fetched when we first wrote about it in 2021. But now that the emanations and policy prescriptions of ChatGPT have been given an audience in The New York Times and innumerable other outlets in recent weeks, it’s getting harder to dismiss.

At least one group of researchers is already testing AI techniques to automatically find and advocate for bills that benefit a particular interest. And one Massachusetts representative used ChatGPT to draft legislation regulating AI.

The AI technology of two years ago seems quaint by the standards of ChatGPT. What will the technology of 2025 seem like if we could glimpse it today? To us there is no question that now is the time to act.

First, let’s dispense with the concepts that won’t work. We cannot solely rely on explicit regulation of AI technology development, distribution, or use. Regulation is essential, but it would be vastly insufficient. The rate of AI technology development, and the speed at which AI hackers might discover damaging strategies, already outpaces policy development, enactment, and enforcement.

Moreover, we cannot rely on detection of AI actors. The latest research suggests that AI models trying to classify text samples as human- or AI-generated have limited precision and are ill-equipped to handle real-world scenarios. These reactive, defensive techniques will fail because the rate of advancement of the “offensive” generative AI is so astounding.
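One reason limited precision is fatal in real-world settings is the base-rate problem: if only a small fraction of constituent messages are AI-generated, even a seemingly accurate classifier will flag mostly humans. A minimal sketch, with entirely hypothetical detection rates:

```python
# Illustrative only: why an AI-text detector with "good" accuracy still
# produces mostly false alarms when genuine human messages vastly
# outnumber AI-generated ones. All rates below are hypothetical.

def precision_at_base_rate(tpr: float, fpr: float, ai_fraction: float) -> float:
    """Probability that a flagged message is actually AI-generated,
    given the detector's true-positive rate (tpr), false-positive
    rate (fpr), and the fraction of messages that are AI-generated."""
    true_flags = tpr * ai_fraction            # AI messages correctly flagged
    false_flags = fpr * (1.0 - ai_fraction)   # human messages wrongly flagged
    return true_flags / (true_flags + false_flags)

# A detector that catches 90% of AI text with a 5% false-positive rate,
# applied to a stream where 1% of messages are AI-generated:
p = precision_at_base_rate(tpr=0.90, fpr=0.05, ai_fraction=0.01)
print(f"{p:.1%} of flagged messages are actually AI-generated")
```

With these hypothetical numbers, only about 15% of flagged messages would actually be AI-generated; a flag-and-filter regime would mostly punish real constituents.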

Additionally, we risk a dragnet that will exclude masses of human constituents who use AI to help them express their thoughts, or machine translation tools to help them communicate. If a written opinion or strategy conforms to the intent of a real person, it should not matter whether they enlisted the help of an AI (or a human assistant) to write it.

Most importantly, we should avoid the classic trap of societies wrenched by the rapid pace of change: privileging the status quo. Slowing down may seem like the natural response to a threat whose primary attribute is speed. Ideas like increasing requirements for human identity verification, aggressive detection regimes for AI-generated messages, and elongation of the legislative or regulatory process would all play into this fallacy. While each of these solutions may have some value independently, they do nothing to make the already powerful actors less powerful.

Finally, it won’t work to try to starve the beast. Large language models like ChatGPT have a voracious appetite for data. They are trained on past examples of the kinds of content that they will be asked to generate in the future. Similarly, an AI system built to hack political systems will rely on data that documents the workings of those systems, such as messages between constituents and legislators, floor speeches, chamber and committee voting results, contribution records, lobbying relationship disclosures, and drafts of and amendments to legislative text. The steady advancement towards the digitization and publication of this information that many jurisdictions have made is positive. The threat of AI hacking should not dampen or slow progress on transparency in public policymaking.

Okay, so what will help?

First, recognize that the true threats here are malicious human actors. Systems like ChatGPT and our still-hypothetical political-strategy AI are still far from artificial general intelligences. They do not think. They do not have free will. They are just tools directed by people, much like lobbyists for hire. And, like lobbyists, they will be available primarily to the richest individuals, groups, and their interests.

However, we can use the same tools that would be effective in controlling human political influence to curb AI hackers. These tools will be familiar to any follower of the last few decades of U.S. political history.

Campaign finance reforms such as contribution limits, particularly when applied to political action committees of all types as well as to candidate-operated campaigns, can reduce the dependence of politicians on contributions from private interests. The unfair advantage of a malicious actor using AI lobbying tools is at least somewhat mitigated if a political target’s entire career is not already focused on cultivating a concentrated set of major donors.

Transparency also helps. We can expand mandatory disclosure of contributions and lobbying relationships, with provisions to prevent the obfuscation of the funding source. Self-interested advocacy should be transparently reported whether or not it was AI-assisted. Meanwhile, we should increase penalties for organizations that benefit from AI-assisted impersonation of constituents in political processes, and set a greater expectation of responsibility to avoid “unknowing” use of these tools on their behalf.

Our most important recommendation is less legal and more cultural. Rather than trying to make it harder for AI to participate in the political process, make it easier for humans to do so.

The best way to fight an AI that can lobby for moneyed interests is to help the little guy lobby for theirs. Promote inclusion and engagement in the political process so that organic constituent communications grow alongside the potential growth of AI-directed communications. Encourage direct contact that generates more-than-digital relationships between constituents and their representatives, which will be an enduring way to privilege human stakeholders. Provide paid leave to allow people to vote as well as to testify before their legislature and participate in local town meetings and other civic functions. Provide childcare and accessible facilities at civic functions so that more community members can participate.

The threat of AI hacking our democracy is legitimate and concerning, but its solutions are consistent with our democratic values. Many of the ideas above are good governance reforms already being pushed and fought over at the federal and state level.

We don’t need to reinvent our democracy to save it from AI. We just need to continue the work of building a just and equitable political system. Hopefully ChatGPT will give us all some impetus to do that work faster.

This essay was written with Nathan Sanders, and appeared on the Belfer Center blog.

Posted on February 17, 2023 at 7:33 AM • 40 Comments

Comments

Robin February 17, 2023 9:10 AM

I’d like to tentatively add another helping strategy but I can’t avoid feeling naïve in proposing it.

Even those actors who will relish using AI to undermine democracy (and they are not thin on the ground) must be convinced that, in the long run (and not so long at that) they too will be at risk. Revolutions are well known for eating their children: Robespierre’s story is a sobering one that didn’t end well, for him.

Of course they will probably believe, like most adolescents, that they will live forever, immune to risk and that they will be able to make shedloads of cash before the sky falls on them. But their “cash” will all be virtual, stored in the cloud like every other aspect of their existence and therefore exquisitely vulnerable to hostile acts by competing AI machines.

Jordan February 17, 2023 9:19 AM

We could make policy based on evidence and merit rather than who speaks the loudest/longest/most-frequently. That would remove the AI lobbying threat. But it’s a very big change for us. We’ll need to develop whole new paradigms and processes.

Steve February 17, 2023 10:15 AM

10,000 ChatGPT comments or 10,000 Tucker Carlson viewer comments[1], what’s the difference?

[1] Or, to be fair, Rachel Maddow viewers.

Winter February 17, 2023 10:43 AM

@Steve

10,000 ChatGPT comments or 10,000 Tucker Carlson viewer comments[1], what’s the difference?

Swamping any discussion with a firehose of falsehoods is a very effective strategy to silence opposition:

The Russian “Firehose of Falsehood” Propaganda Model
Why It Might Work and Options to Counter It
‘https://www.rand.org/pubs/perspectives/PE198.html

We characterize the contemporary Russian model for propaganda as “the firehose of falsehood” because of two of its distinctive features: high numbers of channels and messages and a shameless willingness to disseminate partial truths or outright fictions. In the words of one observer, “[N]ew Russian propaganda entertains, confuses and overwhelms the audience.”2

Contemporary Russian propaganda has at least two other distinctive features. It is also rapid, continuous, and repetitive, and it lacks commitment to consistency.

Clive Robinson February 17, 2023 11:06 AM

@ Bruce, ALL,

Re : What next after democracy?

It is becoming very clear in all manner of different ways that technology has reached a tipping point.

Democracy as we have known it can no longer exist, as people with money tie up with those who have technology to “adjust the political space” to their liking.

Can anyone who reads here discount the effect technology had as an enabler of the events that occurred at the US Capitol?

How about the effect of a certain hedge-fund billionaire who, having failed to get control of the GOP, used his resources to interfere with UK Brexit unlawfully, and would probably have been found to have acted illegally if the UK Met Police investigation had not been stopped by political intervention.

There are very many more claiming to be able to deliver elections via the use of technology. Too many to be just written off as nutters, charlatans, or con artists.

The fact that they get paid large sums suggests that those trying to buy influence feel their money is well spent.

What we see as AI is just the next step on the journey to Orwellian-style “total influence” via technology.

We’d all like to believe we would not be taken in by this, but history and the press barons have shown repeatedly how easy it is to manipulate the basic information people make their judgments by. Worse, they have shown how quickly people can build up such a cognitive dissonance that they would actively pick up pitchforks and burning brands, in a more modern sense (though violence and death have happened).

Legislation can not fix this, metaphorically the genie is out the bottle and Pandora’s box lies broken on the ground. Democracy as we have known it is now irretrievably broken.

We have three basic choices,

1, Accept the failure.
2, Try to fix the failure.
3, Move on to something new.

So far we’ve tried to fix the failure repeatedly, and technology has stayed ahead of those efforts so effectively that we’ve been left accepting the failure.

I don’t think we can, by legislation or the other tools society currently has, get ahead and stay ahead of technology being used against the political process.

So realistically we need to start considering not changing the ideals of democracy, but the methods we go about achieving them.

As I’ve seen this coming for quite some time, in the past I’ve made some suggestions to “take the money out,” to “limit the rate and type of the legislative process,” and to add sunset clauses to all legislation. I’ve even suggested we consider getting rid of the “representatives,” who are, when all is said and done, an easy target for the crooks and con artists we call lobbyists.

All such changes have downsides, but unless we start discussing them, very soon, within two or three election cycles, it may well be too late to make the necessary changes. The changes made then will be ones we will regret, or our children will.

JonKnowsNothing February 17, 2023 11:32 AM

@Clive, All

Currently, one of the MSM outlets has a report about a private company doing nation-grade digital hacking and good old-fashioned astroturfing for profit. It’s not specifically AI per the reports, just a massive alt-persona-creation program hitting many social media systems, pushing whatever agenda the buyer wants. It’s not even all that exceptional, because nation states do it all the time. The interesting part is the implication that the setup was run by former employees of nation-state hacker teams.

AI will just add to the propaganda stream. Even now, in its infancy, reading AI chat is a snooze fest of words: so many words making so little sense that it solves all problems, because there’s no point in reading.

We don’t do cursive writing and we don’t need to read either. Computers do maths better than humans and do most of the analysis of experimental outcomes, as well as controlling most of the experimental protocols, rendering human inputs to science low on the Ah Ha! list.

===

Search Terms

  • Team Jorge

Anonymous February 17, 2023 12:06 PM

@ Clive Robinson, All

It is becoming very clear in all manner of different ways that technology has reached a tipping point.

I think the tipping point has been reached when smartphones and social networks became one.

I don’t know if I’m too negative, but I see the world falling apart around me. The craziest behaviors, the most undignified postures, disturbingly normalized. The systematic disbelief in science, the uncontrollable narcissism and the abandonment of productive activities.

It is clear to me that human beings cannot function in networks that extend so far beyond Dunbar’s number. For what they are doing to civilization, social networks are one step away from explaining the Fermi Paradox. The apparent passivity of people who have a voice, like the regulars of this forum, in the face of what is unfolding exasperates me.

Do something[0].

AI is besides the point now.

[0] One first step would be to regulate the presence of figures with public mandate on the networks, the Trumps and Bolsonaros of the world.

fib February 17, 2023 12:49 PM

Re: @ Anonymous

>I don’t know if I’m too negative, but I see the world falling apart around me.

Somehow I skipped my handler there

fib

Moshe Yudkowsky February 17, 2023 12:52 PM

Essentially, AI will soon allow private citizens to wield the same power to target legislators that is currently exercised only by, e.g., lobbying groups. Will that truly be bad?

Winter February 17, 2023 1:12 PM

@Anonymous

The craziest behaviors, the most undignified postures, disturbingly normalized.

I am pretty old by now, and I have seen this behaviour all before. Religious anti-science opinions are as old as science. People have run after clowns in every era, and in every time those in charge could get away with crime. Just look at what me-too showed women had to cope with. That type of lawlessness was what everyone who was not rich had to cope with.

Just look at the devious campaigns of Barry Goldwater and George Wallace, or the lawlessness of the prohibition. Evil stupidity is of all times.

What changed is the visibility. Before, things in another city were far, far away. Now I saw the Brazilian uprising of sore losers against Lula almost live. I probably now know more about US politics, which happens on another continent, than my grandparents knew about the municipal politics of their home town.

Anon February 17, 2023 1:40 PM

Regulation is not the answer. Any AI in compliance will serve established interests. The best solution is to have many AIs that compete and drown each other out. People will pick the ones they want to interact with. Some of them will learn by their interaction with real people rather than government or corporations. Those of you that are offended by someone somewhere “thinking wrong” will have to get thicker skins.

Winter February 17, 2023 2:17 PM

@Anon

The best solution is to have many AIs that compete and drown each other out.

You mean like, say newspapers, magazines, or talk radio in the last century?

Did that work out like you expected?

Keller February 17, 2023 2:30 PM

What about using LLMs like ChatGPT to play devil’s advocate? The problem with our information landscape, including social media and lobbying (human and AI), is that it tends to provide only the most favourable points for whatever argument or outcome it wants, while having studied and prepared ready counterarguments for the most likely objections you could readily think of on your own. When all else fails it can resort to logical fallacies, rhetoric, or diversions, which are often hard for people to spot and ignore.

A positive use of LLMs I could imagine would be a system that takes whatever you’re reading and automatically provides the best possible counterargument for you to consider: create an article that refutes this and provides the best defence of its contrapositives. Let it fight itself. Diversify people’s information diets. It’s not perfect, but it’s better than amplifying ignorance. I can’t really blame you for making a bad call if you’re not really considering all the alternatives.

Steve February 17, 2023 2:49 PM

@winter:

Swamping any discussion with a firehose of falsehoods is a very effective strategy to silence opposition

I rest my case.

Solomon Beard February 17, 2023 3:25 PM

First, recognize that the true threats here are malicious human actors.

No, they aren’t. The number of people motivated by malice is likely significantly smaller than the number of people motivated by self-interest, particularly relating to financial benefit. Sure, there are some people who just want to fuck things up, but they’ve never really been an organized group. I’m pretty sure almost all current and past lobbyists were trying to make things better—even if only in a financial sense for themselves and their employers, and even if they were apathetic about the effects on the rest of us. That’s quite different from a group of people whose goal is to harm the rest of us.

Never attribute to malice that which is adequately explained by greed.

Of course, this may change in the future: with ChatGPT effectively making the cost of (remote) lobbying zero, one may have legitimate concerns about trolls waging “asymmetric warfare” on government. But regulating “Self-interested advocacy” can’t do anything about actual malice, which isn’t related to self-interest in any useful sense. (It might make a person happy, but better-plowed sidewalks would make me happy, so what’s the distinction?)

Clive Robinson February 17, 2023 5:06 PM

@ Moshe Yudkowsky,

“Essentially, AI will soon allow private citizens to wield the same power to target legislators that is currently exercised only by, e.g., lobbying groups.”

Sorry no it won’t.

The history of the Internet is starry-eyed individuals saying it will give people equality, privacy, freedom, lack of censorship, etc, etc.

Has it actually delivered any of those?

Nope. The reason why is that the starry-eyed individuals do not actually understand the nature of power and how it is wielded.

Whilst AI might give you the verbiage or even eloquence of lobbying entities, it does not give you the power or resources lobbyists currently have backstopping them.

All that will happen is that the lobbyists, and the all-too-willing recipients of their largesse, will change their game.

Clive Robinson February 17, 2023 5:29 PM

@ Keller,

“The problem with our information landscape, including social media and lobbying (human and AI), is it tends to only provide the most favourable points involved…”

You’ve not grasped how the game is played.

1) The backer of the lobbyist wants something that needs a legislator or regulator to put in place.

2) The legislator or regulator is not going to do squat without something in return.

3) The lobbyist and legislator/regulator do the horse trading dance to agree terms.

4) What is said publicly is actually nothing more than a smokescreen to hide the shady deal. It gives an excuse the legislator can hold up in public to hide their real motives behind.

That is the nature of the game: the backer, lobbyist, and legislator know what they want and what it is worth to the other parties. You, the voting citizen, are not supposed to know. Thus the smokescreen is there to hide the realpolitik of the deal, should you start paying closer attention.

Burke February 17, 2023 5:48 PM


So, “The ability to produce high quality political messaging quickly and at scale, …” threatens the very fabric of our society?

The same basic phony horror tale was widely told of the original Gutenberg press technology: such mass-communication changes would permit malicious people to manipulate society and ruin its hallowed institutions (like the Church and monarchies). The same nonsense arose with the development of radio and internet communications.

Our current “democratic processes” already abound with sophisticated, mass, targeted misinformation and manipulation, most of it from the government, politicians, and corporate media.

Chill, Luddites. Fear not.

lurker February 17, 2023 11:52 PM

@Jordan
“We could make policy based on evidence and merit …”

Good idea. Where are the policy makers who recognize and can evaluate evidence and merit?

ResearcherZero February 18, 2023 1:24 AM

It is not just a problem of ‘messaging’. Connected technology affects everyone. Critical infrastructure and the supply chain are dependent upon connected systems. Even the most powerful and influential entities are dependent upon these systems.

A just and equitable political system helps to protect everyone, not just the poor.

There are real world examples of this.

“the data that is available suggests that in cities where there were reductions in low-level arrests, there were also reductions in police shootings.”
https://fivethirtyeight.com/features/police-arresting-fewer-people-for-minor-offenses-can-help-reduce-police-shootings/

The police constitute both a critical infrastructure and symbol of political power. Around the world, police and politics are intertwined in deeper ways than may seem evident.
https://theconversation.com/police-and-politics-have-been-dangerously-intertwined-during-the-2020-u-s-presidential-election-149420

“We tend to get justice in this country based on whether you have access to money. Rural areas suffer from a lot of the significant problems that the rest of the country does.”
https://www.themarshallproject.org/2021/08/13/shooting-first-and-asking-questions-later

When speech is stifled or when dissenters are shut out of public discourse, a society also loses its ability to resolve conflict, and it faces the risk of political violence.

https://knightfoundation.org/reports/free-expression-in-america-post-2020/

There has been a dark cyber crime market in existence for a long time, including hack-for-hire outfits.

Percepto International advertises itself as “the masters of perception” and offers intelligence and cyber services, among others.
https://forbiddenstories.org/story-killers/percepto-icrc-burkina/

“It edited Wikipedia pages, created avatar accounts to promote a client or discredit an opponent, and placed articles in reputable media–all unofficially.”
https://forbiddenstories.org/story-killers/insider/

“Chilling effect”

“Eliminalia clients in 50 countries across five continents.”
https://forbiddenstories.org/story-killers/the-gravediggers-eliminalia/

People’s fears and frustrations have been used in the past to exploit them…
https://millercenter.org/the-presidency/educational-resources/age-of-eisenhower/mcarthyism-red-scare

and the present…

Fear and uncertainty in late modern society
https://www.tandfonline.com/doi/full/10.1080/23254823.2022.2033461

Good governance is the basis for a just, equitable and functioning society.

Clive Robinson February 18, 2023 1:59 AM

@ ResearcherZero,

“A just and equitable political system helps to protect everyone, not just the poor.”

I used to use infectious disease pathogens and “basic healthcare” as an example of a “rising tide lifts all boats” benefit system.

What shocks me is after the past three years of having it proved people still deny it…

To misquote a Yoda Star Wars line,

“The cognitive bias is strong with this one, yes it is.”

ResearcherZero February 18, 2023 2:15 AM

@Clive Robinson

Maybe even a good dose of bird flu won’t do it.

When the Australian government received a report on the increasing risk of infectious-pathogen spread due to cheap air travel, it mothballed all the quarantine centers, which had just been refurbished and refitted with all-new medical equipment. Instead it created a new organisation, Border Force, with no biological mandate. Then it leveled those quarantine centers, which were in very handy, accessible, centralized locations across the country, with a bulldozer.

Recently they had to build new facilities, in the middle of nowhere, because they popped in a bunch of new government offices where the old facilities were. Genius.

A network of fake news sites is one part of a complex apparatus the Spain-based firm Eliminalia uses to manipulate online information on behalf of a global roster of clients.
https://www.washingtonpost.com/investigations/interactive/2023/eliminalia-fake-news-misinformation/

Dear chatGPT does this fella have signal? What is his number plez?

https://twitter.com/DaveLeeFT/status/1626288109339176962

ResearcherZero February 18, 2023 2:36 AM

Records invariably tell a story. As the records become easier to access, so does the story they tell. The application of that information increasingly becomes more complex.

Winter February 18, 2023 3:46 AM

@Burke

same basic phony horror tale was widely told of the original Gutenberg Press technology– such mass communication changes would permit malicious people to manipulate society and ruin its hallowed institutions

Not the best example. The printed press was instrumental in starting the reformation and then the religious wars that killed some 10 million Germans (~50% of the population).

Winter February 18, 2023 3:58 AM

@anon

What does the last century have to do with this?

Those who forget history are condemned to repeat it

Your suggestions were tried out extensively in the 20th century, and it did not work out then as you seem to expect.

modem phonemes February 18, 2023 12:19 PM

Blade Runner has it right. The main use of any advanced technology is to more pervasively sling ads.

Vote Social Dystopian !

lurker February 18, 2023 2:17 PM

@ResearcherZero

Bulldozers are ignored by Anopheles spp. Amongst the bitey things on the West Island, we are now warned that the southernmost mainland state can welcome us with the tropical Japanese encephalitis, as well as the local Murray Valley variety, and Ross River virus, and …

sarnian February 19, 2023 5:42 AM

The problem is one of scale.
If your democratic process is fine-grained enough that individuals and their representatives can realistically meet at will, then you can have high-quality conversations. As soon as you insert electronic communications (email, social media, etc.), the representatives are in contact with computers, not people. And then AI can intervene, or other bad actors. If representatives make themselves available face to face, the problem does not arise.
I grew up somewhere this is true. The electorate in total is small, around 60k voters. How this scales is one problem.
The other issue is party politics, where realistically a representative listens to one voice: their party. Voters and public opinion may have some sway, but little and rarely. Interestingly, political parties are also illegal in the democracy I grew up in.
Ironically, it is now gradually changing itself to be a ‘democracy’ in the way the US and UK claim to be, and this seems primarily so that elected representatives can shirk representative responsibilities and ensure re-election after failures. But I no longer live there; I live in a country where my vote is totally inconsequential, neutered by layers of status-quo-preserving legislation.

Petre Peter February 19, 2023 8:12 AM

Unfortunately, Blade Runner didn’t predict cellphones, and flying cars are still not around. It’s impossible to predict the future.

LewisJ February 19, 2023 10:33 AM

@ sarnian

===

… you almost have the true Big Picture here.

The real issue here is the popular mythology of sacred ‘democratic processes’ and noble ‘representative’ democracy.
At scale, that stuff does not and cannot exist.

Individual citizens have no influence upon their government representatives’ daily actions/decisions, via any communications method.
Even if a ‘representative’ constantly knew the genuine viewpoints of each constituent, there would be no way for that representative to objectively translate that broad info into specific actions that honestly ‘represent’ his constituency (nor even a slight majority of it).

So our noble democratic representatives generally just follow their Party leaders, and perhaps their big financial donors.

Constituent-Communications don’t matter, whether real or phony, or AI generated.

Winter February 19, 2023 10:59 AM

@LewisJ

Individual citizens have no influence upon their government representatives’ daily actions/decisions, via any communications method.

That is not “generally true”.

Not all democracies are like the USA, based on a district system where elected officials represent individual localities, with only two parties to choose from. So, e.g., I do not have a “representative” in parliament.

When I want to contact an MP, I contact the relevant expert in the national party I voted for. But I can also choose to address local or regional offices of the party.

My experience with this system is that party representatives are very responsive to such communications. Mostly because only few people bother to do so.

The other difference from the “normal” (= USA) situation is that most democracies have (much) more than two parties to choose from. I, for one, can choose from 20 parties that cover everything from left to right. The largest party has only 1/5 of the seats, so there really is choice. But even Germany has 6 parties and several independents to choose from. None has a majority.

In real multiparty (>3 parties) systems without an obvious majority party, politicians tend to be more responsive to voter sentiment.

Also, not all democracies legalize corruption like the USA does. That too shifts the power balance somewhat to voters.

Anon February 20, 2023 10:18 AM

@Winter

I think people like you are repeating the mistakes of history. Regardless, I’m not interested in your rabbit holes.

Winter February 20, 2023 10:37 AM

@Anon

I think people like you are repeating the mistakes of history.

Other people always misunderstand history, or so we all think (including me).

Regardless, I’m not interested in your rabbit holes.

Goodbye, have fun!

ResearcherZero February 21, 2023 12:03 AM

Paul Menzies-McVey, the chief legal counsel at the Department of Social Services (DSS), revealed he did not share with the department the legal advice that the Robodebt scheme was illegal.
https://www.abc.net.au/news/2023-02-21/robodebt-scheme-government-royal-commission-former-lawyer/102001616

“Robodebt is a very public example, but it’s just one of many automation-assisted decision-making processes that have blown up as illegal, unfair or biased, causing reputational damage to the organisations that deployed them.”
https://ia.acs.org.au/article/2021/robodebt-was-an-ai-ethics-disaster.html

Winter February 21, 2023 3:36 AM

AI are not really “better” than humans; they can do some things better, under some circumstances.

Humans strike back at Go-playing AI systems
Amateur fleshbag defeats synthetic in 14 of 15 games
‘https://www.theregister.com/2023/02/20/human_go_ai_defeat/

Dekker said any highly trained AI is likely to have these blind spots, and that adding more and more complexity to cover them is partly why it is so hard to get AI working well, and why it might take longer than anticipated to get driverless cars on our roads.

Clive Robinson February 21, 2023 6:12 AM

@ Winter, ALL,

“AI are not really “better” than humans”

As a very senior UK judge put it, computer systems are the creations of man, thus their failings are the failings of man, so man is ultimately responsible.

Whilst he was right, he missed an important point: “resources cost money”, and nearly everything is “for profit”, thus costs, and thus resources, get minimized.

So expecting any system that is not some form of “advertising” to be even close to on par with broad human abilities is fairly pointless.

It’s why we are getting all this “Robo-Debt” and similar nonsense that causes so much harm to innocent people, whilst the guilty designers and suppliers make a quick profit and take the “exercise” option of “grab it and run”, or blame others, or both, whichever keeps the 30 pieces of silver flowing their way.

Winter February 21, 2023 7:11 AM

@Clive

Whilst he was right, he missed an important point: “resources cost money”, and nearly everything is “for profit”, thus costs, and thus resources, get minimized.

I think you missed the point too. Even when resources do not get minimized, AI will have glaring weak spots. And they will have those weak spots for the same reason humans have them:
learning is a procedure of lossy data compression combined with interpolation and extrapolation.

Every learning system has to decide what data are useful and have to be kept, and what data are noise and have to be dropped. That is the whole point of data compression, interpolation, and extrapolation, i.e., learning.

But what is noise in one circumstance, is data in another [1]. Therefore, all AI trained on existing data will have glaring holes in their application.
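A toy numeric sketch of this point (my own illustration, not from the thread): a model fit on a narrow slice of the world can interpolate well inside its training range yet fail badly just outside it, with no signal from the training data that anything is wrong.

```python
# A straight line fit to y = x^2 on [0, 1] looks fine there,
# but its extrapolation error explodes outside the training range.
def fit_line(xs, ys):
    # ordinary least squares for y = a*x + b
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

true = lambda x: x * x             # the real relationship
xs = [i / 10 for i in range(11)]   # training data only covers [0, 1]
a, b = fit_line(xs, [true(x) for x in xs])
pred = lambda x: a * x + b

in_range_err = abs(pred(0.5) - true(0.5))     # small: interpolation
out_range_err = abs(pred(10.0) - true(10.0))  # huge: the "blind spot"
```

The curvature the line throws away is exactly the "noise" that becomes decisive data in the new circumstance.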

The application area the article points to, autonomous driving, is a good example. Roads and traffic are so diverse, if not chaotic, that driving AI always runs into “corner cases” that end in catastrophes.

Humans (and animals) have very efficient algorithms for learning, and they can adapt very fast to new situations. GPT-3 needs ~400B words, whereas humans learn to converse with some 100-200 million words. In addition, humans will learn to use a new word after 1-5 presentations; 1 is often enough. Having seen a single Duck Tours bus/boat on the road, you will immediately recognize what it is and how to handle it. For an AI, not so easy.

I am pretty sure work is being done at this very moment to make AI adaptable, but I am not holding my breath. That could easily take a year, maybe even more.

[1] Read any Agatha Christie or Sherlock Holmes story, or any whodunit for that matter.

Clive Robinson February 21, 2023 11:37 AM

@ Winter,

“I think you missed the point too. Even when resources do not get minimized, AI will have glaring weak spots. And they will have those weak spots for the same reason humans have them”

No, I didn’t; the judge’s observations more than sufficiently covered that.

“Therefore, all AI trained on existing data will have glaring holes in their application.”

This has been covered before by the

1. Known knowns
2. Unknown knowns
3. Unknown unknowns

of individual instances in a class of attacks.

AI should manage the first on the list: all “known instances” in all “known classes”. It may even be able to identify holes in the coverage spectrum of the known classes of attack, and thus infer places for a limited number of new instances of attack in some classes.

But can it infer the spaces where new classes of attack should be and identify their characteristics?

I suspect not except at the fringes of known classes.

“I am pretty sure work is done at this very moment to make AI adaptable”

Yes, but to make its inference wider, or to actually make it truly inventive?

I suspect all the work over the next few years, if not decades, will be on the former, not the latter.
