Are We Ready to Be Governed by Artificial Intelligence?

Artificial Intelligence (AI) overlords are a common trope in science-fiction dystopias, but the reality looks much more prosaic. The technologies of artificial intelligence are already pervading many aspects of democratic government, affecting our lives in ways both large and small. This has occurred largely without our notice or consent. The result is a government incrementally transformed by AI rather than the singular technological overlord of the big screen.

Let us begin with the executive branch. One of the most important functions of this branch of government is to administer the law, including the human services on which so many Americans rely. Many of these programs have long been operated by a mix of humans and machines, even if not previously using modern AI tools such as Large Language Models.

A salient example is healthcare, where private insurers make widespread use of algorithms to review, approve, and deny coverage, even for recipients of public benefits like Medicare. While Biden-era guidance from the Centers for Medicare and Medicaid Services (CMS) largely blesses this use of AI by Medicare Advantage operators, the practice of overriding the medical care recommendations made by physicians raises profound ethical questions, with life-and-death implications for about thirty million Americans today.

This April, the Trump administration reversed many administrative guardrails on AI, relieving Medicare Advantage plans of the obligation to avoid AI-enabled patient discrimination. This month, the Trump administration went a step further: CMS rolled out an aggressive new program that financially rewards vendors that leverage AI to rapidly reject prior authorization for "wasteful" physician- or provider-requested medical services. The same month, the Trump administration also issued an executive order limiting the ability of states to put consumer and patient protections around the use of AI.

This shows both growing confidence in AI’s efficiency and a deliberate choice to benefit from it without restricting its possible harms. Critics of the CMS program have characterized it as effectively establishing a bounty on denying care; AI—in this case—is being used to serve a ministerial function in applying that policy. But AI could equally be used to automate a different policy objective, such as minimizing the time required to approve prior authorizations for necessary services or minimizing the effort required of providers to obtain authorization.

Next up is the judiciary. Setting aside concerns about activist judges and court overreach, jurists are not supposed to decide what the law is. The function of judges and courts is to interpret the law written by others. Just as jurists have long turned to dictionaries and expert witnesses for assistance in their interpretation, AI has already emerged as a tool used by judges to infer legislative intent and decide cases. In 2023, a Colombian judge became the first to publicly use AI to help make a ruling. The first known American federal example came a year later, when United States Circuit Judge Kevin Newsom began using AI in his jurisprudence to provide second "opinions" on the plain-language meaning of words in statutes. The District of Columbia Court of Appeals similarly used ChatGPT in 2025 to deliver an interpretation of what common knowledge is. And there are more examples from Latin America, the United Kingdom, India, and beyond.

Given that these examples are likely merely the tip of the iceberg, it is also important to remember that any judge can unilaterally choose to consult an AI while drafting his opinions, just as he may choose to consult other human beings, and a judge may be under no obligation to disclose when he does.

This is not necessarily a bad thing. AI has the ability to replace humans but also to augment human capabilities, which may significantly expand human agency. Whether the results are good or otherwise depends on many factors. These include the application and its situation, the characteristics and performance of the AI model, and the characteristics and performance of the humans it augments or replaces. This general model applies to the use of AI in the judiciary.

Each application of AI legitimately needs to be considered in its own context, but certain principles should apply in all uses of AI in democratic contexts. First and foremost, we argue, AI should be applied in ways that decentralize rather than concentrate power. It should be used to empower individual human actors rather than automating the decision-making of a central authority. We are open to independent judges selecting and leveraging AI models as tools in their own jurisprudence, but we remain concerned about Big Tech companies building and operating a dominant AI product that becomes widely used throughout the judiciary.

This principle brings us to the legislature. Policymakers worldwide are already using AI in many aspects of lawmaking. In 2023, the first law written entirely by AI was passed in Brazil. Within a year, the French government had produced its own AI model tailored to help the Parliament with the consideration of amendments. By the end of that year, the use of AI in legislative offices had become widespread enough that twenty percent of state-level staffers in the United States reported using it, and another forty percent were considering it.

These legislative members and staffers, collectively, face a significant choice: to wield AI in a way that concentrates or distributes power. If legislative offices use AI primarily to encode the policy prescriptions of party leadership or powerful interest groups, then they will effectively cede their own power to those central authorities. AI here serves only as a tool enabling that handover.

On the other hand, if legislative offices use AI to amplify their capacity to express and advocate for the policy positions of their principals—the elected representatives—they can strengthen their role in government. Additionally, AI can help them scale their ability to listen to many voices and synthesize input from their constituents, making it a powerful tool for better realizing democracy. We may prefer a legislator who translates his principles into the technical components and legislative language of bills with the aid of a trustworthy AI tool operating under his exclusive control rather than with the aid of lobbyists working under the control of a corporate patron.

Examples from around the globe demonstrate how legislatures can use AI as a tool for tapping into constituent feedback to drive policymaking. The European civic technology organization Make.org is organizing large-scale digital consultations on topics such as European peace and defense. The Scottish Parliament is funding the development of open civic deliberation tools such as Comhairle to help scale civic participation in policymaking. And Japanese Diet member Takahiro Anno and his party Team Mirai are showing how political innovators can build purpose-fit applications of AI to engage with voters.

AI is a power-enhancing technology. Whether it is used by a judge, a legislator, or a government agency, it enhances an entity’s ability to shape the world. This is both its greatest strength and its biggest danger. In the hands of someone who wants more democracy, AI will help that person. In the hands of a society that wants to distribute power, AI can help to execute that. But, in the hands of another person, or another society, bent on centralization, concentration of power, or authoritarianism, it can also be applied toward those ends.

We are not going to be fully governed by AI anytime soon, but we are already being governed with AI—and more is coming. Our challenge in these years is more a social than a technological one: to ensure that those doing the governing are doing so in the service of democracy.

This essay was written with Nathan E. Sanders, and originally appeared in Merion West.

Posted on December 29, 2025 at 7:07 AM

Comments

Bob Paddock December 29, 2025 8:24 AM

“Medicare Advantage” plans are an Insurance Industry scam.
They have very little to do with government-provided Medicare other than the intentionally misleading name.

Rontea December 29, 2025 9:56 AM

I guess that governance by AI implies ceding decision-making authority to algorithms, effectively replacing human judgment with opaque, unaccountable systems. This risks embedding biases, amplifying errors at scale, and creating a new form of automated authoritarianism where the mechanisms of power are inscrutable.

Governance with AI, on the other hand, treats the technology as a tool to augment human decision-making. AI can surface insights, detect patterns, and inform deliberation while keeping humans in the loop—ensuring that accountability, transparency, and democratic processes remain intact.

Hans December 29, 2025 10:19 AM

The article fails to mention the EU AI Act, so much maligned in US tech circles, a key objective of which is to regulate AI use in sensitive areas like the justice system, policing, or healthcare. Until jurisdictions consider its approach (which is, ultimately, based on recognition of fundamental human rights in the design of technology), the complaint that “we’re being controlled by our bot overlords” rings somewhat hollow.

Steve59 December 29, 2025 11:02 AM

The title doesn’t reflect the argument, but I imagine it might have been chosen for you. Just switching the anthropomorphizing “by” to “with” would have worked, although it isn’t very exciting. I might have gone with: “Are we ready to be governed by AI-touting Silicon Valley sociopaths?”

Although I am in general agreement, I think the argument lacks historical and sociological context. It presents the current AI tech as if it just appeared like manna from heaven and could be used for good or ill. It ignores the networks of relationships under which the current forms of AI developed, are constrained, and are used. The suggestion at the end that “our challenge…is more a social than a technological one” is misplaced, as the domains are inextricably intertwined.

KC December 29, 2025 11:04 AM

Re: WISeR AI pilot program for Original Medicare

For all of AI’s potential to amplify power, it’s interesting that the scope of this particular AI program is delimited. For example, the model will be tested in 6 states, it will apply to a select set of services, and denials will be subject to review by a human clinician.

Reading that model providers are incentivized to avert ‘wasteful, inappropriate care,’ I wonder whether the forthcoming determinations will carry broad, substantiated consensus. The fact that the program goes live on January 1, 2026 makes it an AI use case well worth observing.

mark December 29, 2025 12:20 PM

Simple answer: no.

First is, of course, that we do not have AI; we have Clippy 2.0. Typeahead writ large.

Second is the habit of chatbots to wander off into hallucinations. Start a war? Sure. Commit suicide? Here, we’ll help. Deny coverage? What biases do you want us to have, beyond ROI?

Jordan Brown December 29, 2025 12:44 PM

the practice of overriding the medical care recommendations made by physicians raises profound ethical questions

Yes. And AI is almost completely irrelevant to the discussion. Insurance companies were denying care long before AI came on the scene. Some process, whether purely human, simple algorithms, or AI, decides that the proposed care is just too expensive for its benefits. It’s absolutely clear that there must always be such a dividing line; it is not possible to allocate infinite resources to a patient. The question is how that dividing line should be determined. The question of how the dividing line is actually implemented is of secondary interest.

Winter December 29, 2025 1:30 PM

@KC

Reading that model providers are incentivized for averting ‘wasteful, inappropriate care,’ I am wondering if the forthcoming determinations will carry broad substantiated consensus.

As I understood it, most waste in US healthcare is in the utterly byzantine accounting and billing systems the providers have to navigate for each individual insurer. Billing alone seems to eat over 30% of total costs.

‘https://revmaxhealthcare.com/blog/inefficient-medical-billing-affects-doctor-patient-relationship/

Clive Robinson December 29, 2025 4:17 PM

@ Bruce, Jordan Brown, ALL,

I am perhaps going to make myself very unpopular here by asking,

“Why does the US have an addictive substance crisis?”

The answer has long been known but kept quiet, because it is the way the US healthcare system is effectively designed to work.

When you see someone saying,

“A salient example is healthcare, where private insurers make widespread use of algorithms to review, approve, and deny coverage, even for recipients of public benefits like Medicare.”

You see exactly why addiction happens, when you consider the logic of the “Danse Macabre” behind it.

Which is why you see people saying, as above,

“Insurance companies were denying care long before AI came on the scene. Some process, whether purely human, simple algorithms, or AI, decides that the proposed care is just too expensive for its benefits. It’s absolutely clear that there must always be such a dividing line; it is not possible to allocate infinite resources to a patient. The question is how that dividing line should be determined.”

What actually happens is that needed care, for which there is no alternative, is denied. This is irrespective of “cost”; it is really about to whom the money is given.

Anything labour-intensive is seen as detrimental, as it takes money away from certain major entities. The same entities that push chemicals that are improperly tested and regulated by Government Entities.

Therefore when a patient with a musculoskeletal injury presents, they should be given a whole lot more than a sling and a handful of pain meds (which are all poisons, by the way).

To prevent long-term issues they should be given physiotherapy, but the prescribing doctor knows two things,

1, The insurance won’t pay for it.
2, If the patient takes time off to attend such long-term therapy they will lose their employment and thus their healthcare.

The result is that the need for pain medication goes on and on. But with a twist,

3, The insurance will not pay the Dr for continued proper “pain management” either.

Thus the patient gets stuck on one medication, often a very strong one, instead of having the meds cycled down until the patient is no longer taking pain medications, even over-the-counter ones.

Thus the Dr knows they “can not fight the system” and stay in business, so they just follow the path the US insurance and drug companies have set out: push pain medication that is highly profitable for both. All with “the nod” of US Gov Agencies.

This pattern of behaviour we know for certain will be emulated and continued by any AI because it’s a very strong bias in the “training data sets” going back decades.

The result is,

1, The patient gets to keep their job in the short term.
2, The patient will not actually have a full recovery / get better.
3, The patient will suffer from psychological issues such as depression.
4, The patient will become dependent on the medications (ie addicted) not for physical but psychological “needs”.
5, The patient will get cut off at some point by the insurance.

The result is that enough patients will seek alternative “mental pain relief” from non-official sources of pain medications etc.

Some will use trips out of the US to get prescriptions and the genuine drugs at a fraction of the US price. Others without that option will buy via the Internet, and not know what they are getting… Others further down the socioeconomic ladder will turn to street “corner vendors” who will move them on to other chemicals that are more profitable.

The use of AI will not solve this; in fact it will engrain it even further into the healthcare system, because that is what the input data will overwhelmingly tell it is “best practice”… When in reality it’s the most profitable practice for the Corps and, to a certain extent, the Drs…

Oh, and if you ever hear of a “wonder drug” for pain that is “not addictive”, like say “Tramadol”, just wait a while before it’s found to be a false statement,

‘https://www.sciencedaily.com/releases/2025/12/251225080723.htm

Oh and guess what the latest “wonder drug” being pushed is?

1, Wegovy (semaglutide)
2, Mounjaro (tirzepatide)

And similar, known as “Glucagon-like peptide-1” (GLP-1) receptor agonists / analogues, used for Type II diabetes control and, more fashionably, “weight loss”.

They are now known to work for “weight loss” only for a relatively short time period (~9 months). But in non-Type II patients they are starting to show other “emergent issues”…,

‘https://www.bmj.com/content/390/bmj.r1606.full

In fact some have hypothesised that the weight loss in non-Type II users is actually due to nausea, in effect causing the same effect as is seen in people who deliberately vomit to lose weight for psychological reasons (intentional and unintentional purging)[1].

But the issue of black market sourcing is already of serious concern,

https://www.theguardian.com/society/2025/dec/29/uk-medical-regulator-warns-against-buying-weight-loss-jabs-from-social-media-channels

[1] I can confirm that unintentional purging due to medicine side effects when put on new medications can cause what appears to be “weight loss”… But it’s most often not “lipid body mass” but fluid that is lost, and the resulting dehydration can be dangerous. It’s hit me with several prescription meds I’ve been put on in recent years, and I know it’s usually transitory, or needs the titration rate adjusted or alternative medications. Some, though, such as statins, can have really nasty side effects, such as debilitating muscle pain through to “rhabdomyolysis”,

https://bestpractice.bmj.com/topics/en-gb/167

Intentional purging outside of emergency care, however, is truly dangerous in all forms and kills many people each year.

lurker December 30, 2025 12:50 AM

It’s sad that US citizens cannot do as in some other countries, ones their president calls “failed states”, and consult a local herbal practitioner who knows a lot about life and the forces sustaining it, and little about Mammon. We can but hope that AI-governed justice and medical systems will weed out the believers, the oligarchs and megalomaniacs, and leave the world a better place …

LLM December 30, 2025 2:56 AM

“Are We Ready to Be Governed by Artificial Intelligence?” – this is a loaded question if ever there was one

Clive Robinson December 30, 2025 6:24 AM

@ LLM, ALL,

With regards,

“this is a loaded question if ever there was one”

Yes it is, and to make it worse it’s effectively a loaded weapon pointed at each and every citizen’s head.

Look at it this way: “authoritarians” traditionally have only a precarious grip on the reality around them, which is why they show the classic signs of paranoia. Unsurprising when you consider the issues of the “humble servant”[1] and those with hidden ambition to usurp by coup etc.

Current AI LLM&ML Systems are about the best surveillance processing tool we have, and they can, so some claim,

“Effectively find the disloyal before they become disloyal.”

Which is why those “looking for terrorists” set great store by them, even though the evidence is not just scant, it’s still scant after cherry picking.

Imagine where the “Reds under the beds” scares would have gone with Current AI LLM&ML Systems…

It’s almost certain to happen very soon, if it has not happened already, at the hands of those who were formerly part of Cambridge Analytica Ltd. (formerly US SCL) and of Palantir, which is currently actively swallowing up as many personal profiles as possible, with the aim of entirely replacing investigators, detectives, and intel analysts.

That is the most likely big-money “profitable use” for Current AI LLM&ML systems with the politicians we currently have. Worse, the politicians will block, or kick into the long grass, any attempts to stop such systems being put in place…

History shows that this will happen and that society will cease to exist as we know it now, let alone how we knew it at the turn of the century.

[1] Almost the ultimate form of power is control without risk. If you can have a public-facing puppet do your bidding you can have almost all the power you could want through them, but the majority of the risk stops with them. Thus when it goes wrong, as it always does with authoritarianism, you walk away meekly whilst the puppet gets hanged from a lamp post or worse. You can then later walk back to be a new puppet’s “humble servant”.

ResearcherZero December 31, 2025 12:52 AM

Humans have long had a propensity for being misled and placing their faith in imaginary things. Spectacle and novelty often catch people off guard and grab their attention.

Humans keep asking librarians for books and records that do not exist but were instead hallucinated by AI. Convinced that they have discovered knowledge about secret books and journals that only AI chatbots know about, some misled individuals have accused librarians of hiding books.

Convenience and convenient shortcuts in decision making can produce terrible outcomes.

Instructing an AI chatbot/LLM to “not hallucinate” or “be accurate” will not ensure credible and accurate responses. Nor will it work for politicians and pundits alike.

‘https://www.scientificamerican.com/article/ai-slop-is-spurring-record-requests-for-imaginary-journals/

ResearcherZero December 31, 2025 1:48 AM

Are humans ready for sea-level rise, the loss of glacial meltwater and the inundation of freshwater supplies as saltwater pushes further up rivers and into groundwater aquifers?

Humans are unprepared for the physical events that will take place over the coming decades, let alone the sociological and technological challenges that will reshape their lives. The potential for AI systems to be misused for surveillance, social intervention and information distortion is wide and varied. Algorithmic harms are already widespread and have been used to inflict suffering on large groups of people by politicians looking to extract money from vulnerable people who can easily be maligned, incorrectly labeled and exploited.

Has anyone been held accountable for such actions? Were warnings listened to and acted on?

No. In spite of the constant stream of significant data breaches, governments have compiled large databases of sensitive and personal information and made them accessible to AI companies through products that are known to contain significant vulnerabilities and leak information, and that can be exploited via remote attack or manipulated and accessed through insider threats.

Despite initial attempts to create safeguards, an impetus to remove safeguards and checks and balances, purely in pursuit of economic and financial gain, has taken hold. The lessons of past mistakes in deregulating sectors have been ignored, along with their negative outcomes.

The world cannot be made safe, but it can be made safer … or far less safe and more chaotic.

@Clive Robinson

There is a presentation about the black box of Palantir at 39c3. There are also many other interesting talks available with English translation covering a range of topics, including forensics tools like Cellebrite, spyware, horribly insecure robots with AI functions, satellites, the GFW of China, and a workstation lost by a Chinese APT operative, plus a subsequent data center fire which burned 1000 servers when investigators looked into the APT activities.

Taking control of phones with Bluetooth headphones … and other exploits, GPS jamming, fascist cybernetic AI, Chat Control, a post-American enshittification-resistant internet …

https://media.ccc.de/c/39c3

ResearcherZero December 31, 2025 3:39 AM

Because tech companies record data on every single action you take online, the federal government is working with Big Tech to take advantage of the inordinate amounts of data they possess. Not even the politicians who authorised these products have read reports that scrutinize the security or the appropriateness of the functions of these products.

“I have a foreboding of an America in my children’s or grandchildren’s time — when the United States is a service and information economy… when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues…”

~ Carl Sagan

These private companies are opaque. Governments are bound by the terms of service of these products, which can change at any time. Politicians have taken the promises of Big Tech at face value, without any publicly open review of these closed-source products, and without any democratic oversight of these services or of the access granted to our private data.

‘https://thedebrief.org/tech-firm-palantirs-government-work-on-data-collection-sparks-new-privacy-fears/

Crime statistics in Australia are being manipulated to shape public perception.
https://www.policycircle.org/opinion/government-data-suppression/

New Surveillance

Information collected about individuals today differs wildly from that of the past. Big Tech gives government far more insight into our lives, and it is motivated by financial advantage.

How different types of technology companies respond to government requests for data varies widely. For example, telecommunications companies such as Internet Service Providers are far more susceptible to government pressure than others.

Tech companies that provide services to government, on the other hand, can exert pressure over, or manipulate, government while operating in secrecy without adequate oversight. Ethical considerations are often ignored in the pursuit of profit, creating significant risks for privacy and human rights. This can be seen with the services Palantir provides to Immigration and Customs Enforcement, taking data from many other departments for predictive policing and profiling. Data from all areas of your life can be ingested by these platforms.

https://journals.sagepub.com/doi/10.1177/20539517241232638

As the findings of the FTC demonstrate, tech companies abuse the lack of privacy protections to intrude deep into our private lives in order to generate profits.
https://publicknowledge.org/the-ftcs-new-report-reaffirms-big-techs-personal-data-overreach-whats-new/

Clive Robinson December 31, 2025 8:21 AM

@ ResearcherZero,

With regards,

“… some misled individuals have accused librarians of hiding books.”

Librarians do hide books…

The Victorians used to photograph just about everything, and quite a few books were printed with such “plates”.

Well, laws get made that conflict with earlier laws but do not provide exceptions or repeal the earlier legislation.

Thus books that contain images of children in a state of undress fall under new legislation under which there is no legal defence for possessing such images (even though they were taken for medical reasons). However, the books legally have to be kept.

Thus the unlawful compromise: the books are kept hidden away under lock and key, in effect never to be seen again.

The same applies to quite a few films from the 1960s and ’70s.

lurker December 31, 2025 6:58 PM

@Clive Robinson

Mention of “Victorians” and “images of children” leads to the conclusion that one Charles Lutwidge Dodgson was very lucky client-side scanning did not exist in his time. His proposal of Liquid Democracy [1] also would not go down well with the backers of AI Governance.

[1] https://en.wikipedia.org/wiki/Liquid_democracy

lurker January 1, 2026 12:39 AM

@ResearcherZero
re Sagan quote

Now, it’s important to remember that the “accuracy” of predictions is often a Rorschach test. An interpretation of a particular prediction’s accuracy usually says a lot about the people interpreting them and their own hopes or fears for the future. And honestly, some of Sagan’s concerns sound rather quaint . . . Here’s hoping Sagan, one of the smartest people of the 20th century, was wrong.
– Matt Novak, Gizmodo

Clive Robinson January 1, 2026 3:14 AM

@ lurker,

With regards the Matt Novak quote, the bit that worries me is,

“… it’s important to remember that the “accuracy” of predictions is often a Rorschach test. An interpretation of a particular prediction’s accuracy usually says a lot about the people interpreting them and their own hopes or fears for the future …”

I make quite a few predictions based on our historic past and our predilection for “not learning” from it[1]. As a result they have a habit of happening… Yet I’ve been accused by those interpreting them of being paranoid, pessimistic, and similar. I’ve also been told that revealing factual information is “aiding the enemy”, whoever the enemy was in that person’s mind.

Am I thus to assume they are just unimaginative or unempathic, or worse, delusional or even sociopathic?

[1] Hence my comments about software developers not being engineers but artisans following Guild-secret-like “closed patterns” rather than science and engineering “open standards”. And worse, even over relatively short periods of time vulnerabilities get forgotten and not taught, thus “the wheel turns” and the same mistakes just make the rut deeper.

JG5 January 1, 2026 10:33 AM

Happy New Year. Appreciate the recent shout-outs on “AI.”

“Are we ready to be governed at all?”

Monty Python – What have the romans ever done for us
https://www.youtube.com/watch?v=9foi342LXQE

Clive Robinson January 1, 2026 3:51 PM

@ JG5,

With regards,

Are we ready to be governed at all?

The answer is in part,

“Rules are for the obedience of fools, and the guidance of wise men”

Which arises from a simple question,

“When you are alone with nothing to do, do you feel lonely?”

If the answer is NO you will be governed not by rules, but by your own experience and knowledge, and thus see rules and governance for what they are (shackles).

That is you can,

“Live in your own head, and not need to be in others heads.”

So you can,

“learn, think, and reason.”

Anyone needing or wanting to be in others’ heads is thereby easily controllable, and many will accept almost any rule just to be “part of a group or tribe”.

Such abdication of responsibility for one’s self makes people easy to govern.

But it’s more than that; another part of it is,

“Can you go beyond thinking and reasoning to imagining and on to original creativity?”

That is, via building, from foundational blocks, systems that function correctly. Importantly, not just in your mind, but when you draw them out and see the interrelationships: not just between the system building blocks but, more importantly, the effects such a system will have on the existence of others and the environment they exist in.

Darn few can. It’s one or two steps above what our host @Bruce called “Thinking Hinky”, and society will either suppress it from fear or follow in hope. We sometimes call such people visionaries when we perceive advantage, even “Renaissance Men”.

And it is a necessary part of being able not just to be self-sufficient, but to actually move the world forward in ways that give others an improved lot in life from then on.

However if their ideas are about “self benefit” rather than “social good” we have other names for them, and usually for good reason.

Clive Robinson January 2, 2026 3:55 PM

@ JG5, ALL,

False evidence in US courts endemic

One of the things I note from time to time is that Current AI LLM and ML Systems would be of significant benefit to those pushing political and other agendas in the Government.

The Systems allow for increased distancing and deliberate bias with a reduced chance of it being detected or traced back to the originators, thus providing a cut-out or arm’s-length deniability.

Some are naively thinking that would not be done or that it would be easy to guardrail in some way.

Well on both points it’s bad news…

On the first point, as I’ve mentioned before, FBI and DOJ personnel have repeatedly been called out by judges who say they see routine lying from them. People still choose to think it’s “just some silly muddle / mistake” when it’s clearly not.

Well, hot off the Internet today,

Federal Judges Blow the Whistle on DOJ Deception: 35+ Cases of Lies and “Sham” Evidence

A 60 Minutes investigation highlighted more than 35 cases in which federal judges said the government provided false information, including false sworn declarations “time and again,” according to NYU law professor Ryan Goodman.

https://www.allenanalysis.com/p/federal-judges-blow-the-whistle-on

On the second point, evidence that “guardrails don’t work and induced bias does” is appearing every week, and has done for quite some time now.

Thus you have to ask the question,

“What sort of person would willingly be judged or governed by systems with such failings?”

Anonymous January 2, 2026 6:38 PM

Selective acceptance or rejection of posts based on whether their opinions align with those of the blog owner/moderator would be eliminated if AI did the moderation based on logic rather than bias against differing opinions. As it stands, the intention is not finding the real, multilevel truth but presenting one side’s point of view, even if it is the right one.

Clive Robinson January 3, 2026 4:53 AM

@ Lurker,

They are “Federal” not “State” judges so are either “Article I” or “Article III” judges.

From memory, “Article III” judges are nominated by the sitting US President to the Senate for approval; if they clear that hurdle they generally have “lifelong tenure”… Which can be problematic, as if they don’t want to resign/retire they have to be impeached, jailed, or locked up in the nut-house. Oh, and they are not required to be actually qualified…

The downside is they only get about 1/10th (~$250k) of the income that an associate in a good law firm pulls down. So many don’t actually stay for very long, unless they are one of the “SCOTUS nine”, nearly all of whom are clearly “political party tuned”.

So, as some, including the Chief Justice, have noted, they are not exactly the cream from the top…

So the only option for POTUS appears to be “The Mad or Bad” route unless he wants to “start a bag”… And if Federal Judges suddenly started expiring in interesting or unexpected ways in large numbers… Then, well, I guess someone will think longingly of a “grassy knoll” adjacent to a fairway/runway on which to get prone.

I’m sure someone will correct me on this, but I think the current POTUS holds the record for the highest number of assassination attempts in the shortest period of time for any “peace time” POTUS. I guess some will ask,

“Is he so dumb he can’t take a hint?”

Of which most in the world know the answer to that 😉
