AI as Sensemaking for Public Comments

It’s become fashionable to think of artificial intelligence as an inherently dehumanizing technology, a ruthless force of automation that has unleashed legions of virtual skilled laborers in faceless form. But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most?

You’d be forgiven if you’re distraught about society’s ability to grapple with this new technology. So far, there’s no lack of prognostications about the democratic doom that AI may wreak on the US system of government. There are legitimate reasons to be concerned that AI could spread misinformation, break public comment processes on regulations, inundate legislators with artificial constituent outreach, help to automate corporate lobbying, or even generate laws in a way tailored to benefit narrow interests.

But there are reasons to feel more sanguine as well. Many groups have started demonstrating the potential beneficial uses of AI for governance. A key constructive-use case for AI in democratic processes is to serve as discussion moderator and consensus builder.

To help democracy scale better in the face of growing, increasingly interconnected populations—as well as the wide availability of AI language tools that can generate reams of text at the click of a button—the US will need to leverage AI’s capability to rapidly digest, interpret and summarize this content.

There are two different ways to approach the use of generative AI to improve civic participation and governance. Each is likely to lead to a drastically different experience for public policy advocates and other people trying to have their voices heard in a future system where AI chatbots are both the dominant readers and writers of public comment.

For example, consider individual letters to a representative, or comments as part of a regulatory rulemaking process. In both cases, we the people are telling the government what we think and want.

For more than half a century, agencies have been using human power to read through all the comments received, and to generate summaries of, and responses to, their major themes. To be sure, digital technology has helped.

In 2021, the Council of Federal Chief Data Officers recommended modernizing the comment review process by implementing natural language processing tools for removing duplicates and clustering similar comments in processes governmentwide. These tools are simplistic by the standards of 2023 AI. They work by assessing the semantic similarity of comments based on metrics like word frequency (How often did you say “personhood”?), then clustering similar comments to give reviewers a sense of the topics they address.
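As a rough illustration of how such pre-2023 tools operate, here is a minimal, self-contained sketch of duplicate removal followed by word-frequency clustering. The similarity threshold and the sample comments are invented for illustration; this is a toy, not any agency's actual pipeline.

```python
# Sketch: de-duplicate comments, then group them by bag-of-words
# cosine similarity (a stand-in for the simple word-frequency
# clustering described above). Illustrative only.
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words vector: lowercased word -> count."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(comments, threshold=0.5):
    """Greedy single-pass clustering: attach each comment to the
    first cluster whose representative is similar enough."""
    unique = list(dict.fromkeys(comments))   # drop exact duplicates
    clusters = []                            # list of (rep_vector, members)
    for text in unique:
        vec = vectorize(text)
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(text)
                break
        else:
            clusters.append((vec, [text]))
    return [members for _, members in clusters]

comments = [
    "Corporate personhood harms democracy.",
    "Corporate personhood harms democracy.",   # exact duplicate
    "I worry corporate personhood harms our democracy.",
    "My small farm depends on clean water rules.",
]
groups = cluster(comments)   # two clusters: personhood vs. water rules
```

A reviewer then reads one representative per cluster instead of every submission, which is exactly the "collapsing" trade-off discussed below.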

Think of this approach as collapsing public opinion. These tools take a big, hairy mass of comments from thousands of people and condense them into a tidy set of essential reading that generally suffices to represent the broad themes of community feedback. This is far easier for a small agency staff or legislative office to handle than actually reading through that many individual perspectives.

But what’s lost in this collapsing is individuality, personality, and relationships. The reviewer of the condensed comments may miss the personal circumstances that led so many commenters to write in with a common point of view, and may overlook the arguments and anecdotes that might be the most persuasive content of the testimony.

Most importantly, the reviewers may miss out on the opportunity to recognize committed and knowledgeable advocates, whether interest groups or individuals, who could have long-term, productive relationships with the agency.

These drawbacks have real ramifications for the potential efficacy of those thousands of individual messages, undermining the very thing all those people wrote in for. Still, practicality tips the balance toward some kind of summarization approach. A passionate letter of advocacy doesn’t hold any value if regulators or legislators simply don’t have time to read it.

There is another approach. In addition to collapsing testimony through summarization, government staff can use modern AI techniques to explode it. They can automatically recover and recognize a distinctive argument from one piece of testimony that does not exist in the thousands of other testimonies received. They can discover the kinds of constituent stories and experiences that legislators love to repeat at hearings, town halls and campaign events. This approach can sustain the potential impact of individual public comment to shape legislation even as the volumes of testimony may rise exponentially.

In computing, this type of automation task has a rich history under the name outlier detection. Traditional methods generally involve finding a simple model that explains most of the data in question, like a set of topics that describe the vast majority of submitted comments well. But then they go a step further by isolating those data points that fall outside the mold: comments whose arguments don’t fit into the neat little clusters.
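A toy version of that outlier-detection step might look like the following sketch: a comment whose nearest neighbor is still dissimilar fits none of the clusters, so it is flagged for a human reader. The threshold value and the example comments are invented for illustration.

```python
# Sketch of outlier detection over comment text: flag any comment
# whose most-similar peer is still below a similarity threshold.
# Illustrative only; real systems would use richer representations.
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words vector: lowercased word -> count."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def outliers(comments, threshold=0.3):
    """Return comments whose nearest neighbor is still dissimilar."""
    vecs = [vectorize(c) for c in comments]
    flagged = []
    for i, v in enumerate(vecs):
        nearest = max(
            (cosine(v, w) for j, w in enumerate(vecs) if j != i),
            default=0.0,
        )
        if nearest < threshold:   # fits no cluster of similar comments
            flagged.append(comments[i])
    return flagged

comments = [
    "Please keep the wetland protections in place.",
    "Keep the wetland protections, please.",
    "Do not weaken wetland protections.",
    "As a beekeeper, pesticide drift ruined my hives last spring.",
]
novel = outliers(comments)   # the beekeeper's story stands alone
```

The beekeeper's comment is exactly the kind of distinctive constituent story this approach is meant to surface.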

State-of-the-art AI language models aren’t necessary for identifying outliers in text document data sets, but using them could bring a greater degree of sophistication and flexibility to this procedure. AI language models can be tasked to identify novel perspectives within a large body of text through prompting alone. You simply need to tell the AI to find them.
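What "prompting alone" might look like in practice can be sketched as a hypothetical prompt builder. The wording, the function name, and the final send-to-model step are all assumptions; no particular model, API, or agency workflow is implied.

```python
# Sketch: assemble an instruction asking a language model to surface
# novel perspectives in a batch of comments. Hypothetical wording;
# the resulting string would be sent to whatever model is in use.
def build_novelty_prompt(comments):
    """Number the comments and wrap them in a novelty-finding task."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
    return (
        "You are reviewing public comments on a proposed rule.\n"
        "Identify any comment that makes an argument, or shares a personal\n"
        "experience, that no other comment in the list contains. For each,\n"
        "give the comment number and one sentence on why it is distinctive.\n\n"
        f"Comments:\n{numbered}"
    )

prompt = build_novelty_prompt([
    "Keep the wetland protections.",
    "As a beekeeper, pesticide drift ruined my hives last spring.",
])
# `prompt` would then go to the agency's chosen model for a response.
```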

In the absence of that ability to extract distinctive comments, lawmakers and regulators have no choice but to prioritize based on other factors. If there is nothing better, “who donated the most to our campaign” or “which company employs the most of my former staffers” become reasonable metrics for prioritizing public comments. AI can help elected representatives do much better.

If Americans want AI to help revitalize the country’s ailing democracy, they need to think about how to align the incentives of elected leaders with those of individuals. Right now, as much as 90% of constituent communications are mass emails organized by advocacy groups, and they go largely ignored by staffers. People are channeling their passions into vast digital warehouses where algorithms box up their expressions so they don’t have to be read. As a result, the incentive for citizens and advocacy groups is to fill that box up to the brim, so someone will notice it’s overflowing.

A talented, knowledgeable, engaged citizen should be able to articulate their ideas and share their personal experiences and distinctive points of view in a way that lets them both be included with everyone else’s comments, where they contribute to summarization, and be recognized individually among the other comments. An effective comment summarization process would extricate those unique points of view from the pile and put them into lawmakers’ hands.

This essay was written with Nathan Sanders, and previously appeared in The Conversation.

Posted on June 22, 2023 at 11:43 AM • 22 Comments

Comments

JonKnowsNothing June 22, 2023 12:25 PM

@All

re: If Americans want AI to help revitalize the country’s ailing democracy…

Loaded statement. Must have been HAIL.

re: Word Counting / Reading Level

Word processing systems have been able to count words for a long time. It’s not the province of AI/ML. They have also been able to determine “Reading Level” for the document. Again, not the province of AI/ML.

Academic levels used by such dictionary counts are determined by the general ability of the population to read; the number of words in a vocabulary at different ages. In the USA, this level is pretty dismal, running from Grade 3 to Grade 6. Anything written beyond that vocabulary level will be “lost on the population”.

So the sieve only gathers “low value words”.

The main aspect of HAIL is that it collects the query, and from the query it extracts “high value words”; meaning if someone with a Reading Level Grade 6+ puts in a request, it gains a potential new word for the sieve.

It’s like review ratings on an Amz item: how many stars, how many reviews. A large number are bogus reviews, a very small number are real reviews. A bell cliff on the front and a tiny whip tail on the back.

William Buckley had an extraordinary vocabulary. His main tactic in debate or conversation was to use his extensive knowledge of English (Spanish, French) to outwit, out-think and out-speak his opponent. Essentially, no one knew what he was talking about and everyone felt pretty darn stupid when facing off against him. He was not only knowledgeable about a topic but also able to express himself in a way that highlighted that everyone else had a Grade 6 Vocabulary Level (at best).

In the sieve count for a Buckley statement, you would collect 1 point for each word. Those words would rarely reappear outside of commentary about Buckley.

AI/ML isn’t about “saving democracy”, it’s all about constraining the population to a Grade 3 Vocabulary Level.

Elevate your diction…

===

https://en.wikipedia.org/wiki/William_F._Buckley_Jr.

Galois June 22, 2023 1:07 PM

There’s a trend with a certain brand of socially minded technologist to hyperfocus on tiny solutions and improvements. The incentive structure of our government isn’t an information issue. We could have a more representative government using tech from 1900. The issues are the specific policies that allow the current incentive structure – money as speech, corporations as people, public matching of private funds is illegal, voting rights bill gradually weakened. If we fix those, then it would be lovely to talk about the slightly different strategies of collating public comments, but until then it’s just a distraction.

nobody June 22, 2023 1:22 PM

This is very much another case of inappropriately seeking to use technology to solve an inherently social problem.

American democracy is on life support because 700 billionaires have gutted democratic rule in order to avoid making concessions that would make society survivable for everyone else. The billionaires have decided that they will end democracy before they will pay their share of taxes or pay their employees living wages. Everything else that is wrong with American politics stems from this.

There is no technological fix for this problem; technology can’t solve abject greed.

Luke June 22, 2023 1:27 PM

Democracy does not scale.

It is impossible to discern a supposed “Will of the People” by any means, in large groups.

Large scale representative-democracy is an irrational concept, but is blithely accepted by most people in theory and daily practice.

Effective political representation decreases sharply as the ratio of elected representatives to citizens decreases.

For example, there are 435 members in US House of Representatives in a country of 330 million people. Thus each member “represents” an average of 760,000 people — how could any such representative determine what all those constituents want … with even the most advanced AI imaginable ?

Mark Gent June 22, 2023 2:09 PM

This is indeed an interesting and potentially positive way to have useful “AI”. Personally, I believe that we need to improve the common understanding of what the overall space is first, i.e. machine learning is just a set of advanced inference techniques. Once we have addressed the hokum, then we can make genuine progress on using these technologies across society in a free and fair way.

JonKnowsNothing June 22, 2023 2:51 PM

@Mark Gent

re: improve the common understanding

Improving the “common understanding” of anything, is seriously problematic. We might reach a consensus or agreement about an issue or definition but even that can get slippery PDQ.

  • Consider all the words you know that are no longer viable to use.
  • Consider all the words you know that previously were not acceptable to use.

Did we lose some common understanding between the lists?

Then we hit the tricky parts of what words mean, their idioms, jargon, current urban uses.

  • When I play poorly in PVP I get called “a scrub”. I can assure you it does not mean I have a medical diploma.

If we layer on other languages just in the form of imported words, we get a whole other aspect of understanding. “Taco Tuesday” did not exist in my vocabulary until recently, but I certainly know what it means now. Does it mean the same in other languages? I have no idea if the idiom translates.

If the outcome of AI/ML depends on our understanding of a highly technical process by people with a Graduate School Vocabulary Level, then the outcome will be decided by this group and not those with a Grade 3 Vocabulary Level.

You may restate this in other ways but it comes down to

  • Techies & Oligarchs making the choices for the Global Population

and then

  • Passing the Buck on the outcome, because “Tech is Agnostic to Use”

We all know how this ends up too.

He was among those who observed the Trinity test on July 16, 1945, in which the first atomic bomb was successfully detonated. He later remarked that the explosion brought to his mind words from the Hindu scripture Bhagavad Gita:

“Now I am become Death, the destroyer of worlds.”

J. Robert Oppenheimer

===

https://en.wikipedia.org/wiki/J._Robert_Oppenheimer

Clive Robinson June 22, 2023 4:44 PM

@ Bruce,

“But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most?”

Two things,

1, It won’t be “the one tool”; mankind generally moves forward with technology as a whole.

2, If it genuinely could, that would actually scare me way, way more than the numpties trying to put AI in weapons to “kill the world”.

Why the second?

Think about who gets to see the output…

Do you really want your future “pre-selected” by people only interested in turning you into “as cheap a form of labour as possible”, who will exploit you either on low wages or high rent, because they will stop you acquiring the assets to escape the treadmill they want to leverage you onto?

I suspect many people have no idea of the true meaning of “wage slave” and how it kills you early with stress, poor health care and poor living conditions and crippling interest on loans you will be forced to take out.

As a business plan it’s very easy to see; the only problem is “streaming people” into the treadmills they will be most easily exploited in.

That is what that sort of AI would enable.

Thankfully we are not even close currently and, to be honest, I cannot see the current joke that AI is, and has been for fifty years, doing it any time soon.

Peter Gerdes June 22, 2023 10:21 PM

I think an even more important role for AI is in its ability to abstract away partisan concerns or to uncover their influence. You can’t really determine whether someone would agree with an argument or support a policy if the partisan valence were the other way. You can, however, hide that information from an AI and see how it reacts. Alternatively, you can demonstrate bias by showing that the accuracy of a model is drastically improved by letting it consider partisan valence.

For instance, if we train an AI to judge how illegal, or how warranting of prosecution, some kind of behavior might be, and then feed it the facts relating to various scandals (Benghazi, Trump documents, Trump NYC, etc.), it could be pretty impactful to demonstrate what happens when we mask data on partisan affiliation. Same with training an AI on constitutional arguments.

Winter June 23, 2023 7:35 AM

If Americans want AI to help revitalize the country’s ailing democracy, they need to think about how to align the incentives of elected leaders with those of individuals.

I think Americans do know what is threatening their democracy: Groups that are against voting (by others). This practice is both age old and widespread [1].

What is new is that candidates claim unreservedly that they will not accept the vote if they lose. And worse, that their voters accept that.

AI is no help against a coup-d’etat.

[1] https://people.howstuffworks.com/10-ways-us-has-kept-citizens-from-voting.htm

denton scratch June 23, 2023 9:54 AM

I’m less and less enchanted by the word “democracy”. I’m not always sure what I mean when I use it; far less what others mean by it. Often it seems to just mean “electing representatives”, which is quite a weak gloss.

I am starting to prefer “collective decision-making”.

I’ve long deplored the word “terrorism”. Peaceful protestors are accused of terrorism to justify shooting them, because terrorists are always in the wrong. “Democracy” is like the converse; people introduce the word to deflect any possible criticism. Everyone knows democracy is the best way, so you don’t have to define it.

Anonymous June 23, 2023 10:47 AM

Just about when I start to lose faith in how modern Security Professionals think… I’m glad for the context in which you dropped the comment “But what’s lost in this collapsing is individuality, personality, and relationships”

I’m glad to see this theme getting pronounced here on this blog… and unless you’re running AI on this blog feedback Bruce (maybe you’ve already got your themes Amalgamated for the next predictive behavioral algorithmic extrapolation)..

But most of all, with the influences we all share here as Security Professionals and the influence we have on significant decision making, both in government policy and private sector Managed IT services… it goes with my opinion that finally I’m glad to see the arrival at the consequences on personality,
For the past 3 years I have been on the soapbox about the removal of human personality at the service to technology which could then be in service to a state to a government or to a big Tech data company that may be commandeer and mutinized and infiltrated successfully by stakeholders of the original big Tech behind it etc etc so even in respect to the social engineering Dynamics my past three years of themes has been when you degrade human personality there is only one thing that will happen: man will kill neighbor and kill himself..

Bruce, as you may know, that other gentleman with whom you used to serve on the Electronic Frontier Foundation’s board… who used to say “point and click totalitarianism” is just one click away… it’s no longer a click away, it’s already here; it’s now “what are we going to do about it”, so I believe we’re in the web.3 era now…

as i digress , imho…Human personality, in my estimations.. was meant specifically for the individual free will consent to a choice of either being created in service to an image and likeness of God, a non-artificial intelligence…

The issue is really about “freedom” and defining it.. which is almost as argumentative as trying to define God.

imho, I hold that any value expression is a human right, a private choice I believe is necessary… AI, Zuckerberg, Brin, Page, Bezos… tend to take that away via AI influence marketing… in other words I’m not sitting here pontificating go find Jesus Christ as your lord and savior or go join your synagogue or enjoy state atheism, agnosticism, or Allah, or Vishnu, Zen etc… no…

In my digression here, it is about the human personality being gutted and rooted out and replaced by a technocratic, communistic-era type of artificial intelligence… it’s only a matter of time before a nuclear bomb gets dropped, or people’s human personalities are distorted by tech, infiltration, mutiny, commandeering…

And most type of Security Professionals , sounding like me, would find themselves killed by the “castor bean” program like I said my last post, “suicided”,etc.. May Daniel Ellsberg look down upon us Godspeed

GHK2 June 23, 2023 11:28 AM

well, the starting idea in this blog topic is that democratic “governance” could actually benefit from AI.
the details are pretty fuzzy, but it assumes that those people in government would make better decisions if they more clearly knew what the people being governed wanted them to do.

apparently most all the necessary democratic input information is already available somehow in public “comments” & “feedback” channels, but that mass of important governance data is just too jumbled and noisy to make sense of.

AI information processing could rapidly filter/sort all that public data into actionable info by governing people, thus strengthening democracy (?)

lotsa huge assumptions in that theory

vas pup June 23, 2023 3:52 PM

I guess as the first move we should let AI read and digest Orwell’s ‘1984’ and not less important ‘Animal Farm’. So it could proactively detect attempts of the government to screw up citizens.

Clive Robinson June 23, 2023 9:15 PM

@ Anonymous, ALL,

Re : Evolution and deities

“The issue is really about “freedom” and defining it.. which is almost as argumentative as trying to define God.”

You have to understand why Darwin did not publish his theories for many years.

It was at a time when the “King Game” had lost power, because the traditional “First Estate” was “King as Godhead” and what we would now call the “executive”, with the Church basically filling in the other roles of state, even that of controlling the internal guard labour via the “Barons and other lords of land”.

With time the Barons broke away from the court and crown and eventually became a third independent power base. It’s why in the UK the remnants are called “The House of Lords”.

But with the rise of skilled trades, freemen and hence guilds, a strong middle class was born, and they in turn gained power. Eventually the Church started losing power as well; its role had significantly diminished, and the institution of Science brought trades out of the Guilds, and hence technology brought power to the masses.

The reason Darwin was not happy about publishing was twofold.

The first was that he was going up against another entrenched doctrine that quite harshly enforced a hierarchical structure with power reserved for a very few at the top.

The second was a little more subtle, in that “Survival of the fittest” is not what many think –and have been taught– about “the individual alone”. Evolution is actually not even about survival of groups or species; it’s about the survival of what we choose to call “life”.

From the human perspective that means three things have to be considered,

1, Humans from individuals upwards.
2, The environment we live in.
3, The technologies we use as force multipliers.

I doubt Darwin gave “technology” much of a thought originally as back then what would become science was still very much in the hands of “learned Gentlemen of independent means” or as “Natural Philosophers” who could only get their university positions by being “clergy” (which is where “tenure” comes from). As for “force multipliers” they were wind, water and horses and other parts of the environment “harnessed by man”.

Technology brought the downfall of the Dominion of the Church and lords in the same way the Barons had reduced the power of the king.

The Victorian era started with powerful guilds and merchant venturers bringing in “wealth”, but they were limited by “Force Multipliers” and limits on “productivity” due to lack of “skills”.

The advent of steam engines, and the fact that they killed people, brought about the end of the Guild Artisan, who was replaced by what became science, and thus the birth of the engineer. This was forced to happen by “public outrage” over deaths from boiler explosions, forcing safety legislation by Parliament, the first of its type anywhere in the world.

In all these processes you can see Evolution causing the benefits to the individual becoming benefits to increasingly larger groups and thus to society in general.

There are two problems…

Firstly, those at the top of hierarchies do not wish to lose “personal power”…

Secondly, the misunderstanding of what evolution actually is by the majority of individuals means that we are not keeping an eye on the environment and technology domains.

Which is why the second has allowed the first to abuse the environment directly in the past, and increasingly to use technology to abuse the environment.

Unfortunately, as people are finally starting to realise, technology can be, and now is being, used to abuse society.

It’s most obvious in the “Strong Man” nonsense we’ve seen and still are seeing with “Conservative types” who, unable or unwilling to live within contemporary society, want to drag it backwards into a “myth of the past”; this has slowly been growing in society since the liberating effects of the late fifties through sixties.

But it actually started long, long before, with “skilled workers” seeing their skills automated by machines invented by “Gentlemen of Leisure”, whose main aim in history was to accumulate “real wealth” via assets and suchlike that earn income from others’ labours. They realised that if they could get rid of skilled labour they could make more profit…

What neither the skilled workers nor the gentlemen of leisure realised, due to their own self-interests, was that in effect they removed a bottleneck and created a tide that lifted all boats, and society made an evolutionary series of jumps due to technology.

But… The bottleneck that did not get removed, and has now become a noose around society’s neck, is that technology has advanced to where it can be controlled by very few. Thus some see it as a way of rapidly increasing their rentable asset inventory at society’s expense.

Thus the pendulum of Evolution has swung in favour of the very few individuals, now at the expense of the evolutionary advancement of society.

Technology is an enabler; who it enables, it cares not. It is society that should decide how, and do it by legislation and regulation.

But as increasing numbers are aware, we do not live in a democracy; society has no say in what legislators get to do. Any introduction of technology into that side of the political process will without doubt turn into a disaster. We have seen the beginnings of it, but it’s getting worse, as can be seen with the old “don’t let them vote” tactics that are on the rise. Funded by the self-selected few through “dark money” organisations like the Heritage Foundation and similar. As well as pulling back on regulation of Internet companies “in favour” of certain persons who blow their own trumpet that way…

lurker June 24, 2023 12:32 AM

It must be something I ate, but a number of comments in this thread smell like they were written by an AI. Consensus building seems a pretty trivial application when there is a lack of trust in both directions. The problems with US democracy won’t be fixed just by plugging in a robot.

Robin June 24, 2023 5:12 AM

@vas pup:
“I guess as the first move we should let AI read and digest Orwell’s ‘1984’ and not less important ‘Animal Farm’. So it could proactively detect attempts of the government to screw up citizens.”

Perhaps yours is a tongue-in-cheek throwaway comment, but it points at a central issue:

An AI machine can “read” those books and modify its statistical knowledge about sequences of words, but as things stand would learn nothing about the ethics of government. It’s more likely to get that from literary criticism of the books, by way of the same statistics.

How to embed a sense of ethics into AI?

Clive Robinson June 24, 2023 7:03 AM

@ Robin, vas pup,

Re : Orwell’s words.

“but as things stand would learn nothing about the ethics of government. It’s more likely to get that from literary criticism of the books, by way of the same statistics.”

My thoughts traveled a different “time related” way, which might “throw a different spanner in the works” as it were.

I’ve noticed of late that DuckDuckGo really has no notion of time any more. Whether that is down to Microsoft or not I don’t know, but with their search engine things are definitely “rotting away”, which is something our host @Bruce and others might want to cogitate on[1].

But if the AI is fed information out of time sequence what will it actually learn?

So many of George Orwell’s ideas have been taken by “entertainment shows”, so much so that some younger people on reading his work for the first time dismiss him as a “plagiarist” or worse a “kill joy”.

Because they like AI have been fed information in reverse time order.

@Bruce once observed,

“Amateurs hack systems, professionals hack people.”

Is not the whole story. Whilst amateurs still hack systems and pros still hack people, we now also have to consider who hacks the truth, and thus the minds that individuals and societies are built with?

Gives rise to the observation,

“Why use lies when telling the truth in an out of time way induces cognitive bias.”

Not just in AI systems but humans.

[1] @Bruce noted some years back that,

“attacks DO NOT get worse with time”…

But it would appear that “systems DO get worse with time” and it’s not “bit rot” that’s to blame[2]. With it, loss of privacy to unlawful behaviour is much more easily achieved.

[2] This degradation of systems utility is a deliberate choice by those in management over the development of such systems, making them less useful, as has also been seen with Google systems. Some think, probably incorrectly, that the reason is the rush to useless features. However, some believe it is to keep you using their systems for more time so they can collect more data from you, and thus make more paper profit to spin into assets by which you can be made not a slave but worse, a serf (the illusion of freedom saves the lord and master the cost of having to provide food and shelter).

modem phonemes June 24, 2023 5:27 PM

Re: More Compartments in Pandora’s Box Discovered

Calling to mind the well known and ever present vulnerabilities arising from the use of computers in voting systems, e.g. research by [1], one would be concerned that AI sensemaking might just be an opportunity for AI propagandizing. The Big Lie, now with rich Data and the creamy goodness of AI. As with the Bomb, which when it was dropped was dropped everywhere, there is no going back and the only option is to try to politically manage the use. The track record with computing has been worse than that with nuclear weapons. Interesting links at [2].

  1. https://en.m.wikipedia.org/wiki/Andrew_Appel
  2. https://citp.princeton.edu/

Petre Peter June 26, 2023 8:35 AM

There is also the issue of AIs becoming hackers. With automation and speed on their side, future AIs could search for vulnerabilities and exploit them over and over again with unprecedented efficiency. Most of our system of laws is written in natural language, where AIs are starting to excel. I am not an alarmist, but the potential for AI hacking systems of law is a real threat, especially when we think about lobbying and flooding the system with comments and letters that can change legislation.

modem phonemes June 26, 2023 5:09 PM

“Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin.”

From “Various techniques used in connection with random digits” by John von Neumann in Monte Carlo Method (1951) edited by A.S. Householder, G.E. Forsythe, and H.H. Germond

And likewise for anyone considering statistics-based AI as producing knowledge.

One has to respect the limits of one’s tools.

MarkH June 26, 2023 6:36 PM

@modem phonemes:

Word! … and nice analogy.

The reactions I’m reading in the press to so-called “AI”, are as comical as a kitten first confronting a mirror: “how’d that other cat get there?!?!?!”

Humanity is not doing well at distinguishing seeming from being.

modem phonemes June 26, 2023 7:48 PM

@MarkH

distinguish seeming from being

You might enjoy Andrew Appel’s comments on computers in voting, especially internet voting in [1]. An extract:

The representation of the thing is not the thing—….

The assesseurs of a normal French election see a physical ballot-box with their own eyes. They can touch it with their own hands to make sure it’s not a mirage. They can see and hear each voter approach the ballot box and deposit one envelope. The picture of a French urne that I have displayed is, I am told, what the ballot-box really looks like. But the picture is not the thing.

When the assesseurs of the Election des Conseillers à l’Assemblée des Français de l’Étranger see a computer screen in Paris saying “0 votes in the ballot-box”, they are not seeing a ballot-box. They are seeing a representation,

The clear consensus of computer-science experts around the world who have studied these issues is that Internet elections cannot be trusted,

This issue is exponentially expanded in AI. AI is intrinsically untrustworthy. It is inappropriate technology.

https://www.cs.princeton.edu/~appel/papers/urne.pdf
