Ten Ways AI Will Change Democracy

Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI-generated disinformation” trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.

When I survey how artificial intelligence might upend different aspects of modern society, democracy included, I look at four different dimensions of change: speed, scale, scope, and sophistication. Look for places where changes in degree result in changes of kind. Those are where the societal upheavals will happen.

Some items on my list are still speculative, but none require science-fictional levels of technological advance. And we can see the first stages of many of them today. When reading about the successes and failures of AI systems, it’s important to differentiate between the fundamental limitations of AI as a technology, and the practical limitations of AI systems in the fall of 2023. Advances are happening quickly, and the impossible is becoming the routine. We don’t know how long this will continue, but my bet is on continued major technological advances in the coming years. Which means it’s going to be a wild ride.

So, here’s my list:

  1. AI as educator. We are already seeing AI serving the role of teacher. It’s much more effective for a student to learn a topic from an interactive AI chatbot than from a textbook. This has applications for democracy. We can imagine chatbots teaching citizens about different issues, such as climate change or tax policy. We can imagine candidates deploying chatbots of themselves, allowing voters to directly engage with them on various issues. A more general chatbot could know the positions of all the candidates, and help voters decide which best represents their position. There are a lot of possibilities here.
  2. AI as sense maker. There are many areas of society where accurate summarization is important. Today, when constituents write to their legislator, those letters get put into two piles—one for and another against—and someone compares the height of those piles. AI can do much better. It can provide a rich summary of the comments. It can help figure out which are unique and which are form letters. It can highlight unique perspectives. This same system can also work for comments to different government agencies on rulemaking processes—and on documents generated during the discovery process in lawsuits.
  3. AI as moderator, mediator, and consensus builder. Imagine online conversations in which AIs serve the role of moderator. This could ensure that all voices are heard. It could block hateful—or even just off-topic—comments. It could highlight areas of agreement and disagreement. It could help the group reach a decision. This is nothing that a human moderator can’t do, but there aren’t enough human moderators to go around. AI can give this capability to every decision-making group. At the extreme, an AI could be an arbiter—a judge—weighing evidence and making a decision. These capabilities don’t exist yet, but they are not far off.
  4. AI as lawmaker. We have already seen proposed legislation written by AI, albeit more as a stunt than anything else. But in the future AIs will help craft legislation, dealing with the complex ways laws interact with each other. More importantly, AIs will eventually be able to craft loopholes in legislation, ones potentially too complicated for people to easily notice. On the other side of that, AIs could be used to find loopholes in legislation—for both existing and pending laws. And more generally, AIs could be used to help develop policy positions.
  5. AI as political strategist. Right now, you can ask your favorite chatbot questions about political strategy: what legislation would further your political goals, what positions to publicly take, what campaign slogans to use. The answers you get won’t be very good, but that’ll improve with time. In the future we should expect politicians to make use of this AI expertise: not to follow blindly, but as another source of ideas. And as AIs become more capable at using tools, they can automatically conduct polls and focus groups to test out political ideas. There are a lot of possibilities here. AIs could also engage in fundraising campaigns, directly soliciting contributions from people.
  6. AI as lawyer. We don’t yet know which aspects of the legal profession can be done by AIs, but many routine tasks now handled by attorneys will soon be performed by AI. Early attempts at having AIs write legal briefs haven’t worked, but this will change as the systems get better at accuracy. Additionally, AIs can help people navigate government systems: filling out forms, applying for services, contesting bureaucratic actions. And future AIs will be much better at writing legalese, reducing the cost of legal counsel.
  7. AI as cheap reasoning generator. More generally, AI chatbots are really good at generating persuasive arguments. Today, writing out a persuasive argument takes time and effort, and our systems reflect that. We can easily imagine AIs conducting lobbying campaigns, generating and submitting comments on legislation and rulemaking. This also has applications for the legal system. For example: if it is suddenly easy to file thousands of court cases, this will overwhelm the courts. Solutions for this are hard. We could increase the cost of filing a court case, but that becomes a burden on the poor. The only solution might be another AI working for the court, dealing with the deluge of AI-filed cases—which doesn’t sound like a great idea.
  8. AI as law enforcer. Automated systems already act as law enforcement in some areas: speed trap cameras are an obvious example. AI can take this kind of thing much further, automatically identifying people who cheat on tax returns or when applying for government services. This has the obvious problem of false positives, which could be hard to contest if the courts believe that “the computer is always right.” Separately, future laws might be so complicated that only AIs are able to decide whether or not they are being broken. And, like breathalyzers, defendants might not be allowed to know how they work.
  9. AI as propagandist. AIs can produce and distribute propaganda faster than humans can. This is an obvious risk, but we don’t know how effective any of it will be. It makes disinformation campaigns easier, which means that more people will take advantage of them. But people will be more inured against the risks. More importantly, AI’s ability to summarize and understand text can enable much more effective censorship.
  10. AI as political proxy. Finally, we can imagine an AI voting on behalf of individuals. A voter could feed an AI their social, economic, and political preferences; or it can infer them by listening to them talk and watching their actions. And then it could be empowered to vote on their behalf, either for others who would represent them, or directly on ballot initiatives. On the one hand, this would greatly increase voter participation. On the other hand, it would further disengage people from the act of understanding politics and engaging in democracy.
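The triage of constituent mail in item 2 has a core that is easy to make concrete: separating form-letter campaigns from individually written letters. A minimal sketch in Python, assuming letters arrive as plain strings; the `triage` helper and the sample letters are hypothetical illustrations, not any real office’s tooling (a production system would need fuzzier matching, since campaigns often encourage small personalizations):

```python
from collections import defaultdict
import re

def normalize(text: str) -> str:
    # Collapse case and whitespace so trivially edited copies still match.
    return re.sub(r"\s+", " ", text.strip().lower())

def triage(letters: list[str]) -> tuple[dict[str, list[int]], list[int]]:
    # Group letters by normalized text. Groups of size one are individually
    # written voices; larger groups are likely form-letter campaigns.
    groups: dict[str, list[int]] = defaultdict(list)
    for i, letter in enumerate(letters):
        groups[normalize(letter)].append(i)
    unique = [members[0] for members in groups.values() if len(members) == 1]
    return dict(groups), unique

letters = [
    "Please oppose HB 101.",
    "Please  oppose HB 101. ",
    "I farm 200 acres and HB 101 would end my irrigation rights.",
]
groups, unique = triage(letters)
print(len(groups))  # 2 distinct messages among 3 letters
print(unique)       # [2] -- the individually written letter
```

An AI summarizer would then spend its effort on the unique pile, and report the campaign piles by count.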
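The false-positive worry in item 8 is, at bottom, base-rate arithmetic: when few people actually cheat, even a fairly accurate detector flags mostly innocent people. A quick sketch, with made-up numbers purely for illustration:

```python
def expected_flags(population: int, cheat_rate: float,
                   sensitivity: float, false_positive_rate: float):
    # How many true and false flags a screening system produces.
    cheats = population * cheat_rate
    honest = population - cheats
    true_flags = cheats * sensitivity           # cheats correctly flagged
    false_flags = honest * false_positive_rate  # honest filers wrongly flagged
    return true_flags, false_flags

# 10 million filers, 1% cheat, the detector catches 90% of cheats
# and wrongly flags only 2% of honest filers.
tp, fp = expected_flags(10_000_000, 0.01, 0.90, 0.02)
print(round(tp), round(fp))      # 90000 true flags vs 198000 false flags
print(f"{fp / (tp + fp):.2f}")   # 0.69 -- most flagged people are innocent
```

If contesting a flag is hard because “the computer is always right,” those 198,000 people bear the cost.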

When I teach AI policy at HKS, I stress the importance of separating the specific AI chatbot technologies in November of 2023 from AI’s technological possibilities in general. Some of the items on my list will soon be possible; others will remain fiction for many years. Similarly, our acceptance of these technologies will change. Items on that list that we would never accept today might feel routine in a few years. A judgeless courtroom seems crazy today, but so did a driverless car a few years ago. Don’t underestimate our ability to normalize new technologies. My bet is that we’re in for a wild ride.

This essay previously appeared on the Harvard Kennedy School Ash Center’s website.

Posted on November 13, 2023 at 7:09 AM • 32 Comments


Doug November 13, 2023 7:57 AM

Items 1, 2, and 3 sound great until you factor in number 9. I think it reasonable to presume that disruptive actors will purposely train AI systems using false information and conspiracy data with the intent to reinforce a false narrative. People will tend to choose AI systems that confirm their beliefs. I don’t see how AI helps at all. I hope I am wrong but history tells me otherwise.

Frankly November 13, 2023 8:39 AM

I think most of these items are the over-hype that typically accompanies any new technology. Will robots replace humans in many jobs? No. The person-work-hours needed to make and maintain a human-like robot are more than the pwh you get from the robot. Same for AI. It is very expensive to do top-notch AI: too many person-work-hours of highly skilled persons to use it to replace people in the jobs described in the list. You have the cost of the hardware plus software and training. Also, AI can make ridiculous mistakes because it is not sentient.

Erdem Memisyazici November 13, 2023 9:27 AM

Naw, most of this is not true. A.I. can’t choose for you and it would be pretty bad at unexpected situations.

As an interactive hologram, maybe it can be made as a sort of fun app to learn from as an educator. Though it will likely be hacked, and a class will require a person to oversee it anyway.

The suggestions here are mostly going to get you into jail and/or drive the subject matter off topic.

Wannabe techguy November 13, 2023 9:40 AM

“More importantly, AI’s ability to summarize and understand text can enable much more effective censorship.”

That is very concerning.

schuller4 November 13, 2023 10:01 AM

… there just ain’t enough armchair speculation & pompous opinion on the societal impact of AI — MORE please !

Dwayne Monroe November 13, 2023 10:02 AM

Projections should be based on what is presently known.

In the case of what is called “artificial intelligence” (a marketing term and not a description of existing capabilities if words have meaning) we see an ability to ingest vast quantities of data and manipulate it according to probabilistic rules.

Your projections assume some level of contextual knowledge or even true cognition. And yet, there is not only no indication of this existing in present systems; there is, beyond an expectation of improvement, no clearly defined path to achieving what you’re describing.

Also missing is a consideration of the computational power and corresponding resources for power and cooling (not to mention labor) required to run these systems at scale. We hear endlessly about “artificial intelligence” but rarely about resource extraction and data centers.

If any of this happens in any form it will be short lived as systems reach their growth limits.

jackson November 13, 2023 10:32 AM

Good essay. I’m also reading 21 Lessons for the 21st Century, and a wide range of other books on AI’s impact on society. One fear is that the cost of bigger, better systems is so high that fewer and fewer organizations can build and control them; a single system will soon cost multiple billions of dollars. Even countries unable to keep up on their own will be swept into their influence. Not good.

wiredog November 13, 2023 10:47 AM

Regarding #2. Hill staffers I’ve known tell me there’s a third pile best described as “What the heck?”

My understanding is that phone calls are often ignored, and email is almost always ignored, but that actual letters get at least a glance.

Mendoza November 13, 2023 12:30 PM


Will robots replace humans in many jobs? No.

Robots already have replaced humans in many jobs. Factories, for instance, have been using more and more of them since the 1970s. I’m told that a group of foreign engineers were once amazed to learn that the local sewage treatment plant was staffed by a single operator overnight. It’s looking increasingly doubtful that self-checkout machines will completely replace human cashiers, but they keep spreading. Earlier this year I rode an automated train to a Wal-Mart where I saw a floor-cleaning robot (waiting for someone to come and reset it after a person stood in front of it for 30 seconds… but they’ll improve); and of course lots of people use robot vacuum cleaners. McDonald’s has been using order-taking “robots” for years, is looking to do voice recognition at drive-throughs, has experimented with burger-flippers, conveyor belts, …

There’s no shortage of jobs, and perhaps never will be, because somehow the total amount of work we want done always increases. But the robots continue to make inroads.

Clive Robinson November 13, 2023 12:37 PM

@ Bruce, ALL,

“I stress the importance of separating the specific AI chatbot technologies in November of 2023 with AI’s technological possibilities in general.”

Firstly, current AI LLMs are little more than DSP “Matched filters” driven by a shaped noise signal. The ML is little more than DSP “adaptive filtering”, which all too often turns out to be a multiband integration of the noise spectrum.

There is no sign of “I”, artificial or not; you can write down very basic equations by which both LLMs and the ML additives can be adequately described. Even the nonlinearity and rounding errors that come up due to limitations can be reproduced as equations.

But for all the brouhaha of marketing hype and nonsense, it does not take long to find out that these so-called artificial neural networks do not in any real way behave like biological neural networks, and as such fail, at massive scale and energy input, to do what biological networks do simply and efficiently.

I hope people by now realise that whilst LLMs can pattern match by correlation, they cannot even do simple reasoning reliably or effectively. Thus they are going to fail miserably as “teachers” for learning, much as the old “smack it in by rote” teaching methods that were thoroughly debunked something like forty years before the start of this millennium.

If neither LLMs nor their ML augments can reliably reason, just pattern match to existing data sets plus noise, what are they really other than overhyped, overpriced, near-useless toys running around a historical track?

In short they are stuck in a faux past and very poor present, and can not move forward…

If you must anthropomorphize LLMs and ML, they are effectively as daft as those who look back a century or a century and a half and think it was all wonderful, because their cognitive blindness makes them think that they would have been at the top of things… Somewhat worse, in fact, than the stories of the Walter Mitty character, whose fantasies were just about being a manly hero.

Winter November 13, 2023 12:52 PM


Firstly current AI, LLM’s are little more than DSP “Matched filters” driven by a shaped noise signal.

Even though it is “literally” true, it still is utter nonsense.

It is like claiming viruses are dead heaps of polymerized amino acids and nucleotides that can never imitate real life. That is true, taken literally, but they are nevertheless one of the most potent entities shaping life on earth, regularly remodeling human demographics on continental scales.

An LLM produces output that condenses orders of magnitude more text and speech than any human has seen in their life. The results are very useful, as anyone who writes texts can attest.

Maybe you are expecting the TRUTH. You will not get that from a machine. But humans can do well with less than the TRUTH. And LLMs are delivering useful results even now.

Jeremy November 13, 2023 2:33 PM

Here is one more way AI will change US-style ”democracy”

The Infrastructure bill currently in Congress, Section 24220.

This bill mandates installation of equipment to listen to in-car noises and conversation, monitor eyes, and “kill switches” to automatically turn off your car. It states that all new cars in the United States will be required to install these kill switches by 2026.

The monitoring and determination to use the kill switch needs to be done by AI due to the sheer number of vehicles on US roads.

For more info:

Clive Robinson November 13, 2023 5:11 PM

@ Winter, ALL,

Re : LLMs and ML are DSP, not intelligence

“Maybe you are expecting the TRUTH. You will not get that from a machine”

Nor will you get REASONING or INTELLIGENCE from LLMs even if augmented with ML.

They are based on quantities of input to create statistical models, or if you prefer, distilled probabilities of the input. They have not created new information.

As such they can use the work of others, rearrange it, swap one set of words for another, but they cannot create original independent work that is new and moves the corpus forwards.

There was an argument that the LLM-ML conjunction could somehow distil from the original corpus and feed the distillate back, and therefore improve the corpus… Unfortunately that is actually a bad idea, because it does not really change the base of the corpus, just adds fuzzing / noise, which whilst it may look like new knowledge is not, and actually makes the corpus of less use, not more.

Which brings us to,

“An LLM produces output that condenses orders of magnitude more text and speech than any human has seen in their life. The results are very useful, as anyone who writes texts can attest.”

The results really are not that useful, just superficial. To see why, consider,

1, The cat sat on the mat
2, The mat had a cat sat on it
3, The sitting cat was on not under the mat
4, The mat was under the sitting cat.

And several other sentence versions you can come up with, all different, so lots of text… But essentially with exactly the same information content, “Cat sit on mat”, so no information increase, just a pile more garbage text saying the same thing less efficiently, thus effectively less than worthless from a knowledge basis…

But if your job is copy typing where you are paid per word, then yes, initially the volume of potential words from the same atom of information could be seen as valuable… Till everyone does it, then it’s debased back to as close to no worth as you want to get.

You have to increase not the verbiage but the information if you want to really increase value, and LLMs, even augmented by ML, are not going to do that.

Look at it another way: I have a dictionary and a random generator, much like the XKCD passphrase generator. It can produce every combination of, say, a 20-word sentence. How many of those sentences would read correctly? Of those, how many would have real information content, and of those, be original new information?

Then ask: how would you tell, if you have neither intelligence nor reasoning ability?

There is an old joke about lots of pigeons nesting on a ledge in New York, crapping over the side. One day the crap by chance gave the secret to immortality in Venusian, but as Venusians have not left Venus, there was nobody who could read the secret…

LLMs are the pigeons…

Ralph Haygood November 13, 2023 6:51 PM

“There are a lot of possibilities here.”: And many of them are nasty.

To be clear, the problem isn’t going to be fearsomely smart machines with their own agendas – the “singularity” silliness – but fearsomely stupid machines with their owners’ agendas, however obfuscated with technocratic patter. Educating, “sense making”, etc. are far from purely objective, values-free activities. I expect a general consequence of AI will be to make it easier than ever for definitely-not-disinterested actors to promulgate propaganda from behind a facade of neutrality.

“It could block hateful – or even just off-topic – comments.”: This is personal for me. I’m contemplating creating an online service that would host some user-generated content, which would, of course, have to be moderated. It’s tempting to imagine the moderation could be facilitated – cut down to manageable size – with machine learning. (I’ve developed and deployed machine-learning systems for both scientific and commercial purposes.) However, the biases typical of current machine-learning systems for content moderation give me pause aplenty. And I doubt they’re going to get much better very soon, because such problems are very messy, even for human moderators.

“A judgeless courtroom seems crazy today, but so did a driverless car a few years ago.”: Driverless cars still are crazy. I’m currently sitting a short walk from a place where, a couple of years ago, a Tesla was caught driving itself on the wrong side of the road. Of course, King-in-his-own-mind Elon has repeatedly proclaimed that “reliability in excess of human” is just around the corner. Uh huh. Any year now … (Yes, I expect it will happen eventually, but as with AI more broadly, human-equivalent performance is much harder to attain than many people seem to suppose.)

vas pup November 13, 2023 7:04 PM

#8 “This has the obvious problem of false positives, which could be hard to contest if the courts believe that “the computer is always right.”
[confirmed by the video https://www.dw.com/en/algorithms-an-invisible-danger/video-67364376 on tax practice in the Netherlands]

Separately, future laws might be so complicated that only AIs are able to decide whether or not they are being broken. And, like breathalyzers, defendants might not be allowed to know how they work.

Not all laws should have the same level of complication; it should be based on the targeted segment of the population. Criminal law for general crime (e.g. murder, theft, assault) should be at the comprehension level of a high-school graduate. RICO, corporate fraud, and aggravated tax evasion should be at a higher level of comprehension, for specially trained defense lawyers and prosecutors. AI should assign the required level of comprehension [spectrum thing] based on the targeted segment of the population.

AI should assign the statute of limitations based on the ability of LEAs to collect incriminating evidence many years after an allegedly committed crime, and not based on the confusing memory of an alleged victim and/or a confession only, which are very unreliable.

@wiredog has a good point. They need your opinion just to confirm an already-made decision, not to contradict it. AI could have a more balanced approach if not trained by politically biased programmers who selected politically biased testing input.

JonKnowsNothing November 13, 2023 10:26 PM

@Clive, @ Winter, ALL,

Re : LLMs and ML are DSP not intelligence

While the list of probable uses for AI is hardly inclusive of all possible uses, the problem is that AI is hardly past the starting gate for

  • moving “beyond the “AI-generated disinformation” trope”

When you go down the list, and create an example for each area, then ask the rhetorical AI question:

  • prove it

AI will happily provide all the proofs you want. It will cite chapter, page, verse and line. None of that will exist in the Matrix or Real World. It will never have happened.

But it sure looks good for advertising and propaganda.

The biggest effect current, ongoing and future for AI Democracy is:

  • There is No Spoon.

But AI Democracy will make one for you.

Himanshu Karnatak November 14, 2023 6:10 AM

I am not so sure about AI being a great educator in schools. Educating a child involves more sentience than NLP. It can be a powerful information extractor and disseminator, though. Not to mention, a potent propaganda machine.

Their role in defining and interpreting laws seems pragmatic. Complex laws evolved by AI may even need AI to arbitrate or judge cases. This sounds alarming. Given the present growth rate of scaling infrastructure, any solution backed by the power centers would spread fast. It may seem like an equalizer from one standpoint, but from another, it’s creating a strongly skewed divide in another dimension.

postscript November 14, 2023 8:48 AM

I don’t think I would like learning from a textbook that presented random gibberish as truth. All texts are biased but there are whole armies of people involved in generating, writing, selecting, editing and validating the contents of, say, a biology or chemistry textbook. Such texts tend to be based in verifiable reality. Where are the guardians when an AI is the teacher?

Anonymous November 14, 2023 10:15 AM

Yes, we don’t need to reinvent our democracy to save it from AI if we remember that behind every major AI decision, we should have a human who makes the call.

Brodie November 14, 2023 12:02 PM

I couldn’t get past point one. If screens are so superior to teachers and books, why do Silicon Valley executives send their kids to schools where electronics are banned? And why didn’t Steve Jobs let his kids use the very devices he made — was he just waiting for chatbots to come along?

JonKnowsNothing November 14, 2023 12:41 PM


re: behind every major AI decision, we should have a human who makes the call

  • Loophole #1 major AI decision

For AI there is no such thing as “major or minor”. These are qualitative human values. They do not exist in the giant scrabble bag of AI datasets.

  • Loophole #2 human makes the call

So, which human gets to do this? Which committee? Which government?

On what basis will a human make the call? Based on what the AI barfs up on the scrabble board of the dataset content?

How will the human know that the AI is not hallucinating?

AI is vastly different than other metering systems. Older metering systems are deterministic: they give the same results every time.

  • Drop a coin (now a credit card) into the parking meter which gives N-minutes of ticket-free parking

AI gives different answers every time. The data set mutates. There is no fixed base. There is no measurable accuracy.

  • Drop a coin (now a credit card) into the parking meter and AI generates Instant Ticket. Funds auto-deducted from your credit card. No refunds. No receipt. No redress. No proof.

It’s laudable that a human should vet the AI but there is no longer any verifiable authority on any topic. Ours is the last generation to know Before AI. We can know about the hallucinations but those behind us will never know.

  • Elvis has left the building.

They do not know who Elvis was or why he left the building; neither does AI.

lurker November 14, 2023 12:49 PM

@Clive Robinson
“thus effectively less than worthless from a knowledge basis…”

Ah, but think of the value to literature. The Critics will fawn over it, the Publishers will laugh all the way to the bank, Christmas stockings will be stuffed with it …

bl5q sw5N November 15, 2023 11:02 AM

Human understanding proceeds by deduction from already known truths. The first known truths are obtained by induction, the discerning of the universal in the clearly grasped particular.

Something like induction operates in practical arts such as politics, law, and medicine. No theoretical rules can be given for answering questions. Every individual case is new and cannot be reduced to combinations of other cases. Prudent judgement and discernment have to be exercised to reach a correct answer amidst incessant change and differences in material details.

No algorithm is capable of this. Even where AI offers an answer that would agree with prudent judgement, this is entirely accidental and would need review by a human mind.

Clive Robinson November 15, 2023 11:49 AM

@ bl5q sw5N,

Re : The real “AI existential threat” is authoritarian followers.

“Human understanding proceeds by deduction from already known truths.”

Sadly not for all; they fail at reason and logic, and in some cases do not have the ability to think even consistently, thus appear to be a mess of contradictions driven by second-hand mantras shouted at them over and over by something less entertaining than a clown in a ring.

Have a read of,


Such followers are actually very dangerous: as C19 showed, they are quite happy to behave completely irrationally to the point of self-harm. Worse, as was seen in the Capitol attack, they are a very real and ever-present danger to life and limb.

Now think on how an LLM “AI device” especially with ML could be used to prompt such people.

There is an old saying,

“Poison goes where poison is welcome”

And there are plenty out there who would welcome it because of their own cognitive limitations.

Mark Cottle November 16, 2023 7:49 AM

#8 is pretty scary given the existing record of “dumb” IT systems being taken as indisputable experts in legal settings. For example, the massive scandal in the UK in which the Post Office’s Horizon system was used to falsely convict hundreds of innocent people. Lives destroyed because courts accepted “the computer is always right”. Now the subject of a public inquiry (https://www.postofficehorizoninquiry.org.uk)

I think Australia’s “Robodebt” scandal possibly presents another example (https://robodebt.royalcommission.gov.au)

Those systems, and the flaws within them, at least seem to have been unpicked to a degree once the evidence of a widespread problem finally emerged. How much more difficult is that going to be if the system in question is an LLM?

pattimichelle November 17, 2023 2:02 PM

It’s hard to remember that AIs require a lot of (electrical) power overall, so it’s hard to imagine how an AI could represent the poor (most Americans live on a maximum of something around minimum wage). The Affluent can afford the electricity.

Clive Robinson November 17, 2023 4:39 PM

@ pattimichelle, ALL,

Re : Power pollutes badly.

“The Affluent can afford the electricity.”

Actually they can not…

As I’ve mentioned before the LLM AI-Rigs, are little different to Crypto-Coin “Mining-Rigs”.

The power consumption is ludicrous, and as it has to be reliable, it’s going to be chewing through kilotonnes of hydrocarbons, generating enormous amounts of pollutants…

And those hydrocarbons are at the moment “non-renewable” resources… As for the “carbon capture” etc, that’s another inefficient “work process” chewing up yet more energy as burning hydrocarbons…

So those bonfires of AI vanity today are going to cost successive generations, rich and poor, dearly…

And for what?

A load of tosh / re-boiled cabbage.

JonKnowsNothing November 17, 2023 5:42 PM

Well one guy isn’t going to be around in his former position: CEO Sam Altman just got fired.

atm no reason was given but the shakeout isn’t over…

from MSM History

Altman helped found the company in 2015, initially as a non-profit with a $1bn endowment from high-profile backers including Elon Musk, Peter Thiel and LinkedIn co-founder Reid Hoffman. Altman and Musk served as co-chairs with a goal “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. In 2019, however, OpenAI reshaped itself around a “capped profit” model with Altman as the CEO.

With Elon and Thiel why would anyone THINK this was a non-profit endeavor?


https://arstechnica.com/ai/2023/11/openai-fires-ceo-sam-altman-citing-less-than-candid-communications/

  • OpenAI fires CEO Sam Altman, citing less than “candid” communications
  • “The board no longer has confidence in his ability to continue leading OpenAI.”
  • OpenAI also announced that Chairman of the Board Greg Brockman will be stepping down from that role but staying with the company.

https://www.theguardian.com/technology/2023/nov/17/openai-ceo-sam-altman-fired

  • OpenAI fires co-founder and CEO Sam Altman for allegedly lying to company board
  • AI firm’s board said Altman was ‘not consistently candid in his communications with the board’ and had lost its confidence

Robert November 17, 2023 6:00 PM

Enjoyed this re the magic lantern from the Wikipedia Phantasmagoria page:

Athanasius Kircher warned in his 1646 edition of Ars Magna Lucis et Umbrae that impious people could abuse his steganographic mirror projection system by painting a picture of the devil on the mirror and projecting it into a dark place to force people to carry out wicked deeds. His pupil Gaspar Schott later turned this into the idea that it could be easily used to keep godless people from committing many sins, if a picture of the devil was painted on the mirror and thrown onto a dark place.

Didn’t check if it is true because it seemed very plausible. Back to rolling news and Love Island!

emily’s post November 17, 2023 6:41 PM

@ JonKnowsNothing

Well one guy isn’t going to be around

Probably the board asked are we in or not, said yes, then put its money where its mouth was, and asked AI what should we do next.

Anonymous November 17, 2023 9:29 PM

“We are already seeing AI serving the role of teacher. It’s much more effective for a student to learn a topic from an interactive AI chatbot than from a textbook.”

Uh, big giant COLOSSAL [citation needed].

