AI in Government

Just a few months after Elon Musk’s retreat from his unofficial role leading the Department of Government Efficiency (DOGE), we have a clearer picture of his vision of government powered by artificial intelligence, and it has a lot more to do with consolidating power than benefitting the public. Even so, we must not lose sight of the fact that a different administration could wield the same technology to advance a more positive future for AI in government.

To most on the American left, the DOGE end game is a dystopic vision of a government run by machines that benefits an elite few at the expense of the people. It includes AI rewriting government rules on a massive scale, salary-free bots replacing human functions and a nonpartisan civil service forced to adopt an alarmingly racist and antisemitic Grok AI chatbot built by Musk in his own image. And yet, despite Musk’s proclamations about driving efficiency, few cost savings have materialized and few successful examples of automation have been realized.

From the beginning of the second Trump administration, DOGE was a replacement for the US Digital Service. That organization, founded during the Obama administration to provide agencies across the executive branch with technical support, was replaced by one reportedly charged with traumatizing agency staff and slashing agency resources. The problem in this particular dystopia is not the machines and their superhuman capabilities (or lack thereof) but rather the aims of the people behind them.

One of the biggest impacts of the Trump administration and DOGE’s efforts has been to politically polarize the discourse around AI. Despite the administration railing against “woke AI” and the supposed liberal bias of Big Tech, some surveys suggest the American left is now measurably more resistant to developing the technology, and more pessimistic about its likely impacts on their future, than their right-leaning counterparts. This follows a familiar pattern of US politics, of course, and yet it points to a potential political realignment with massive consequences.

People are morally and strategically justified in pushing the Democratic Party to reduce its dependency on funding from billionaires and corporations, particularly in the tech sector. But this movement should decouple the technologies championed by Big Tech from those corporate interests. Optimism about the potential beneficial uses of AI need not imply support for the Big Tech companies that currently dominate AI development. To view the technology as inseparable from the corporations is to risk unilateral disarmament as AI shifts power balances throughout democracy. AI can be a legitimate tool for building the power of workers, operating government and advancing the public interest, and it can be that even while it is exploited as a mechanism for oligarchs to enrich themselves and advance their interests.

A constructive version of DOGE could have redirected the Digital Service to coordinate and advance the thousands of AI use cases already being explored across the US government. Following the example of countries like Canada, each instance could have been required to make a detailed public disclosure of how it would follow a unified set of principles for responsible use that preserves civil rights while advancing government efficiency.

Applied to different ends, AI could have produced celebrated success stories rather than national embarrassments.

A different administration might have made AI translation services widely available in government services to eliminate language barriers for US citizens, residents and visitors, instead of revoking some of the modest translation requirements previously in place. AI could have been used to accelerate eligibility decisions for Social Security disability benefits by performing preliminary document reviews, significantly reducing the infamous backlog that leaves 30,000 Americans dying each year while awaiting review. Instead, the deaths of people awaiting benefits may now double due to cuts by DOGE. The technology could have helped speed up the ministerial work of federal immigration judges, helping them whittle down a backlog of millions of waiting cases. Rather, the courts must face that backlog amid the firing of immigration judges.

To reach these constructive outcomes, much needs to change. Electing leaders committed to leveraging AI more responsibly in government would help, but the solution has much more to do with principles and values than it does technology. As historian Melvin Kranzberg said, technology is never neutral: its effects depend on the contexts it is used in and the aims it is applied towards. In other words, the positive or negative valence of technology depends on the choices of the people who wield it.

The Trump administration’s plan to use AI to advance its regulatory rollback is a case in point. DOGE has introduced an “AI Deregulation Decision Tool” that it intends to use through automated decision-making to eliminate about half of a catalog of nearly 200,000 federal rules. This follows similar proposals to use AI for large-scale revisions of the administrative code in Ohio, Virginia and the US Congress.

This kind of legal revision could be pursued in a nonpartisan and nonideological way, at least in theory. It could be tasked with removing outdated rules from centuries past, streamlining redundant provisions and modernizing and aligning legal language. Such a nonpartisan, nonideological statutory revision has been performed in Ireland—by people, not AI—and other jurisdictions. AI is well suited to that kind of linguistic analysis at a massive scale and at a furious pace.

But we should never rest on assurances that AI will be deployed in this kind of objective fashion. The proponents of the Ohio, Virginia, congressional and DOGE efforts are explicitly ideological in their aims. They see “AI as a force for deregulation,” as one US senator who is a proponent put it, unleashing corporations from rules that they say constrain economic growth. In this setting, AI has no hope of being an objective analyst independently performing a functional role; it is an agent of human proponents with a partisan agenda.

The moral of this story is that we can achieve positive outcomes for workers and the public interest as AI transforms governance, but it requires two things: electing leaders who legitimately represent and act on behalf of the public interest and increasing transparency in how the government deploys technology.

Agencies need to implement these technologies under ethical frameworks, enforced by independent inspectors and backed by law. Public scrutiny helps bind present and future governments to applying them in the public interest and guards against corruption.

These are not new ideas and are the very guardrails that Trump, Musk and DOGE have steamrolled over the past six months. Transparency and privacy requirements were avoided or ignored, independent agency inspectors general were fired and the budget dictates of Congress were disrupted. For months, it has not even been clear who is in charge of and accountable for DOGE’s actions. Under these conditions, the public should be similarly distrustful of any executive’s use of AI.

We think everyone should be skeptical of today’s AI ecosystem and the influential elites that are steering it towards their own interests. But we should also recognize that technology is separable from the humans who develop it, wield it and profit from it, and that positive uses of AI are both possible and achievable.

This essay was written with Nathan E. Sanders, and originally appeared in Tech Policy Press.

Posted on September 8, 2025 at 7:05 AM • 31 Comments

Comments

post script September 8, 2025 8:27 AM

“But we should also recognize that technology is separable from the humans who develop it, wield it and profit from it…”

I’m curious – how, exactly, do you plan to separate the technology from the people who own it?

KC September 8, 2025 9:05 AM

… but it requires two things: electing leaders who legitimately represent and act on behalf of the public interest and increasing transparency in how the government deploys technology.

I agree transparency is key. Do you see many impediments to this imperative?

Bryson and Schmitz also make the case that transparency is “definitionally” what enables the granular attribution of responsibility.

And that AI within this type of framework has the potential to improve the legitimacy of a government.

Clive Robinson September 8, 2025 2:54 PM

@ Bruce,

With regards,

“we have a clearer picture of his vision of government powered by artificial intelligence, and it has a lot more to do with consolidating power than benefitting the public.”

Seriously?

You were told this would be the result of the use of AI by anyone with any kind of ability to inflict it on others to their self advantage.

It’s basic behaviour of the “self entitled” and you are not going to stop them doing it.

If you think otherwise you need to go and study history some more.

“Even so, we must not lose sight of the fact that a different administration could wield the same technology to advance a more positive future for AI in government.”

Only when the sky is pink, unicorns fly free with striped horns of gold and silver, and there are rainbows enough in the sky for everyone.

In short “Not Going to Happen”

Because the US is not a democracy or any other kind of free system.

At the very least you are only allowed to vote for a group of self selected “special people” who get a level of funding that can only happen by what is in effect “criminal behaviour”. That is, they make what are in effect illegal promises to “major funders”.

There is a famous quote:

“The only difference between the Republican and Democratic parties is the velocities with which their knees hit the floor when corporations knock on their door. That’s the only difference.”

OK…

The self entitled own the system and will “send in the troops” if any incumbent tries to change it.

If you want to change it you need another “revolution”.

The only future for AI in government is to favour the “self entitled”; to think otherwise suggests a level of optimism that will never be fulfilled.

Have a look at “RoboDebt” and other increasingly numerous “Political Mantra” legislation designed to discriminate against the working and middle classes.

AI is in effect “custom made” to support such “self entitled” behaviours and to punish severely those who dare criticize or demand what they once thought were their “equitable rights”.

At the very least AI will act as a “cut out” for “plausible deniability” / “blame others”.

It will also be ideal for generating “deceptive, discriminatory” legislation / regulations that will appear at best to be “equitable” but in reality have loopholes and back doors designed to discriminate and force rent seeking, as well as ways to “rights strip” such that you cannot defend yourself.

And that will be just the start of it, as history shows it will only get worse as power is accumulated by the “self entitled”.

lurker September 8, 2025 4:25 PM

Sure, AI might be able to improve the machinery of democracy. But to achieve that you first need to have a democracy working at some level of efficiency for the AI to improve upon. It could be claimed that the USA does not at present meet that criterion. Examples of dysfunction include, but are not limited to:

Gerrymandering;
The system(s) of voting and counting votes;
Appointment of judges;
Campaign funding.

Then of course, when you have an efficiently running democracy, with or without AI it is still possible for an artful autocrat to be elected to a position of power.

@Clive Robinson posted [humans have the ability of] “Deliberate deception to gain advantage”, which is the human condition the Trisolarians could not understand or deal with. And it is the condition we have baked into most of our current AI. Be careful what you wish for.

Chris Becke September 9, 2025 3:00 AM

@lurker
I’d say that any two-party system is definitionally not a democracy. With only two parties, one party will always hold a strict majority and have the power to solidify single-party rule.
While (historically) not an absolute defence against fascism, three parties represent the minimum a stable democracy requires.

ResearcherZero September 9, 2025 3:12 AM

LLMs can act as backdoors for covert data collection.

PRC laws allow broad access by state authorities to data without independent oversight.

‘https://nukib.gov.cz/en/infoservis-en/news/2295-nukib-warns-against-the-transfer-of-the-data-to-and-remote-administration-from-people-s-republic-of-china/

China is using large language models to improve targeting and control of public discourse. The technology is being used to improve the power of the Great Firewall and products used for censorship and communications control that China is exporting to other authoritarian regimes alongside the Belt and Road Initiative.
https://techcrunch.com/2025/03/26/leaked-data-exposes-a-chinese-ai-censorship-machine/

References within the leaked dataset suggest the training set came from Baidu’s “Ernie Bot”. The model appears to be regularly updated and capable of recursive training.
https://netaskari.substack.com/p/llms-and-china-rules

ResearcherZero September 9, 2025 3:20 AM

@ALL

Such systems work well in China to control and target the population. Block communications when needed, monitor what citizens are saying and shape the narrative where required.

Geedge Networks and the state-owned China National Electronics Import and Export Corporation (CEIEC) have been collaborating with authoritarian governments to create censorship systems like China’s Great Firewall. CEIEC is also assisting the Myanmar Military regime in “a proposed location tracking system for the junta controlled communications ministry.”

Information within leaked documents from the Myanmar Junta’s censorship project states that information obtained from telecommunications will be used to arrest and imprison people.

‘https://www.justiceformyanmar.org/press-releases/report-reveals-how-chinas-geedge-networks-and-myanmar-telecoms-companies-are-enabling-the-illegal-juntas-digital-terror-campaign

Pakistan hired Geedge Networks for its own “Great Firewall” designed to monitor citizens.
https://www.aljazeera.com/news/2024/11/26/pakistan-tests-china-like-digital-firewall-to-tighten-online-surveillance

The system can perform DPI in order to identify and block anti-censorship tools.
https://www.justiceformyanmar.org/stories/the-myanmar-juntas-partners-in-digital-surveillance-and-censorship

Warrick September 9, 2025 4:35 AM

Politics seems to have moved from arguing about policies to attempting to control the data, systems and tools that inform those policies. The current way that AI is being deployed exemplifies this: the inherent complexity of LLMs is ignored as long as they give the answers that the deployers want.

It does not have to be this way. If we use AI correctly, we can manage the inherent biases and limitations, and have a level of confidence that it is doing what is intended. The problem is that this requires the self-confidence to accept it should the AI tell us that our personal beliefs or opinions are not supported by evidence… and our current crop of leaders seem to lack the ability to admit when they are wrong.

AI in government needs us to build systems and processes that we can trust, and then trust in the outcomes of those systems even when we may disagree with the result. This has always been the case for democracies, which is why attacks on the electoral, judicial and legislative processes are so concerning.

lurker September 9, 2025 1:52 PM

@Chris Becke

According to my dictionary a one-party state can also be a democracy. But vote buying, gerrymandering and other forms of corruption are well known in one-party states. These are human failings that are not likely to be fixed by AI, no matter how many parties.

lurker September 9, 2025 2:06 PM

@Warrick
“AI in government needs us to build systems and processes that we can trust”

Trust involves reliability, truth and ability. Trust is usually earned through experience. When current AI systems are trained on human experience there seems little hope of change from our existing situation. Removing bias and agenda from the build and training processes will be an interesting exercise.

ResearcherZero September 11, 2025 12:19 AM

@Warrick, lurker

Sovereignty is now shaped by infrastructure and is no longer defined by national borders.

It should be fairly easy to predict where confrontational approaches lead without moderation, where each side strives only to outdo the other: a collapse in useful and productive discourse.

With social media now subjected to less moderation, confrontational exchange becomes more likely. Tensions are unlikely to decrease if the talking points between ideologically opposed sides are deliberately designed to prove each other wrong. The situation can easily degrade into a standoff, much like a confrontation between opposing gangs. Such a situation does not promote the civil, authentic and constructive conversation that peacefully produces helpful solutions or resolutions to outstanding local or international issues.

Who or what exactly is going to be in control of – or – influencing debate?

‘https://www.lawfaremedia.org/article/algorithmic-foreign-influence–rethinking-sovereignty-in-the-age-of-ai

A lack of moderation allows easy inauthentic infiltration into debate.
https://theconversation.com/how-foreign-operations-are-manipulating-social-media-to-influence-your-views-240089

Rethinking design to avoid simplistic or easily manipulated debate that goes nowhere or only produces more outrage.
https://www.noemamag.com/how-online-mobs-act-like-flocks-of-birds/

ResearcherZero September 11, 2025 1:08 AM

Getting the politicians to stop and think about their dick and flag waving competitions is the hard part. Dick and flag wavers seem to make up a good portion of the candidates.

The way people carry on you could imagine they live in rubbish dumps, fighting over who gets to eat the last chip before it is consumed by seagulls. Instead they fight over whose ideology and beliefs are less informed and ill considered, and who will win at word soccer.

A progressive vs conservative Scrabble game for the intellectually challenged. We could all just shoot each other in the throats and then those who survive can march around playing jingoistic anthems, while making hand gestures at one another like a bunch of chumps.

Do they all want a medal that proves they are d–ckheads? Have mine.

ResearcherZero September 11, 2025 1:55 AM

@Bruce

The companies developing the technology are not so pessimistic either. Without ethical frameworks, such technology will be used for authoritarian and repressive ends. Older tech can be rolled out in new initiatives regardless of when or how it was developed.

Western companies including IBM and NVIDIA assisted in the development of Chinese state surveillance technologies. The collaboration, alongside the Chinese secret police, the Ministry of State Security and the Chinese military, included sophisticated analysis systems, fingerprint and DNA recognition, AI-assisted surveillance cameras and other technology used during crackdowns on perceived dissidents and ethnic groups targeted with repression.

Philips and Motorola assisted Chinese police with tactical and communications equipment.

Dell, VMWare, Microsoft, Oracle, Cisco, Seagate, Toshiba, Western Digital, Hitachi and others all helped with storage solutions and cloud-based technologies to assist with managing the large amounts of information and databases required to manage the systems.

‘https://apnews.com/article/chinese-surveillance-silicon-valley-uyghurs-tech-xinjiang-a80904158b771a14d5a734947f28d71b

lurker September 11, 2025 5:31 AM

@ResearcherZero

All those tech companies said the money looked clean, smelled clean. It wasn’t their business to know how the customers had obtained it …

And thus the nation state is being torn down by techno-hegemons who don’t understand the concept, never mind believe in it.

ResearcherZero September 11, 2025 6:47 AM

@lurker

It is totally clean public money obtained via government contracts.

You cannot become a billionaire without a little startup money. China’s surveillance state runs on Oracle, generating significant returns for its largest shareholder Larry Ellison.

‘https://nationalinterest.org/blog/techland/oracle-is-powering-chinas-surveillance-state

Oracle was formed after the company was contracted to supply database software for the CIA.
https://gizmodo.com/larry-ellisons-oracle-started-as-a-cia-project-1636592238

ResearcherZero September 11, 2025 6:55 AM

@lurker

Another way to earn investment capital is via off-the-book operations and assassinations.

The use of drones in warfare is leading to more indiscriminate attacks.

‘https://www.tandfonline.com/doi/full/10.1080/14751798.2025.2546712

Drones are already being used in attacks against civilians outside of war zones.
https://www.nbcnews.com/news/latino/deadly-drones-colombia-militants-terrifies-residents-rcna228009

Drone warfare exacerbated tensions and led to greater civilian harm without accountability.
https://diplomatmagazine.eu/2025/03/13/silent-casualties-accountability-gaps-in-us-drone-warfare/

ResearcherZero September 11, 2025 7:32 AM

The AI arms race, acting as an accelerator to political and international trends, has strong parallels with the nuclear arms race.

The combination of confrontational politics, adversarial AI and automated warfare presents the potential for a dangerous environment in which decision making can be delegated to autonomous systems. Scenarios have emerged in which moral considerations are hollowed out when humans interact with machines, rather than in environments where other humans would normally have the chance to challenge decisions or offer independent viewpoints based on professional experience. Decision making which is not informed by credible independent scientific analysis or professional expert advice can often lead to harmful outcomes. The risk of mistakes increases further under the fog of war.

The use of automated warfare does not reduce the psychological harm of killing civilians and combatants for operators. Most are still at risk of psychological conditions like PTSD.

Artificial Intelligence can reinforce poor decision making and lead people to make decisions that result in civilian harm. Because the operator is far removed from the situation, and part of the decision making may be performed by the automated system itself, the individual controlling the system cannot easily distinguish between who is and who is not a combatant. To make matters worse, vulnerable groups are at greater risk from drone warfare. The shift to automation in lethal decision making also reduces transparency and weakens accountability.

‘https://thebulletin.org/premium/2025-09/introduction-how-the-trump-administration-has-upended-international-relations-and-increased-existential-risk/

Jacquelyn Schneider observed that, “The AI is always playing Curtis LeMay.”
https://www.politico.com/news/magazine/2025/09/02/pentagon-ai-nuclear-war-00496884

The automated “OODA loop” supplies lethal decision making inputs directly to the headset.
https://observatory.wiki/The_Rise_of_AI_Warfare:_How_Autonomous_Weapons_and_Cognitive_Warfare_Are_Reshaping_Global_Military_Strategy

Automated weapons did not make the US safer, even though it was the first nation to deploy them. Neither did nuclear weapons, although deterrence was a reason given for their development.

(I am actually Curtis LeMay in real life, or at least pretending to be in this moment)

“We need more drones/nuclear weapons/AI enhanced combat systems!!” 🤬 (shouty man face)

https://www.politico.com/news/magazine/2025/08/27/pentagon-drone-technology-deficiency-00525058

KC September 11, 2025 9:25 AM

@ResearcherZero, all

re: Jacquelyn Schneider observed that, “The AI is always playing Curtis LeMay.”

It’s noted there aren’t as many case studies on why war didn’t explode. Which, pardon my French, seems lax at worst and maybe distracted at best. The Cuban Missile Crisis is mentioned as one of the few examples.

In a rare win for deescalation, George Mason U initiated a wargame simulation between the US and China where ChatGPT didn’t ‘go crazy.’ Unfortunately, the Chinese “red” team ‘interpreted this as weakness and attacked Taiwan anyway.’

Some specialists believe that an AI could eventually draw the rational conclusion that a failure to cooperate would be the greater peril.

“This would take into consideration the mutual benefits to each nation of participation in global markets, stopping the climate crisis and future pandemics, and stabilizing regions each country wants to exploit commercially.”

From the sounds of it, this may be a long way down the road.

david redel September 11, 2025 1:13 PM

Interesting breakdown. I think the real risk isn’t the AI itself but the lack of accountability in how it’s deployed. We’ve already seen with data breaches how weak oversight creates long-term damage. Without clear transparency and guardrails, AI in government could go the same way.

ResearcherZero September 12, 2025 3:08 AM

@KC

The following research into human bias and the effect of automation on that bias reveals a lot about how we learn, how we make decisions over time and what influences our attention.

It remains a complex area of study, as the emotional and psychological influence of AI is only beginning to be understood, just as our own cognition still requires further research.

AI as moral cover: How algorithmic bias exploits psychological mechanisms to perpetuate social inequality

‘https://spssi.onlinelibrary.wiley.com/doi/full/10.1111/asap.70031

The Automation Bias Phenomenon – The tendency to over-rely on automated recommendations.
https://www.nature.com/articles/s41562-024-02077-2

Identifying and mitigating bias in AI systems is critical to avoid bad outcomes.
https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1614105/full

Psychological and cultural impacts of AI integration in education.
https://journals.sagepub.com/doi/full/10.1177/1354067X251340190

ResearcherZero September 12, 2025 3:31 AM

@KC

I’m rather fond of deescalation and I would hope that many decision makers are too. The Curtis LeMays were very uncommon (at least in my experience) in those positions, and fortunately well outnumbered by far calmer, more restrained and measured individuals.

Despite the bravado, sabre rattling and rhetoric, I suspect it presently remains so. There is a proposal that instead of one person (the president) being given the authority for the use of nuclear weapons, two nut bags be required to authorize and initiate such madness.

In reality there would be a whole bunch of people milling about offering up opinions, like my old friend ChatGPT who I always defer to when making difficult and weighty decisions.

The effect of large datasets on the decision making of Robert S. McNamara

“I don’t want to put into your mind that it was my idea.” (low altitude firebombing)

‘https://www.youtube.com/watch?v=SfPwR00HXM0

Clive Robinson September 12, 2025 4:54 AM

@ ResearcherZero,

With regards,

“I’m rather fond of deescalation and I would hope that many decision makers are too. The Curtis LeMay’s were very uncommon (at least in my experience) in those positions and fortunately well outnumbered by far calmer, more restrained and measured individuals.”

I suspect your experience is about to get negatively impacted…

This was said at the UN by the UK,

https://www.gov.uk/government/speeches/israels-strikes-on-doha-are-a-flagrant-violation-of-the-sovereignty-and-territorial-integrity-of-qatar-uk-statement-at-the-un-security-council

However UK military aircraft were in the air close to Qatar, flying in an area that indicated to many that they were not just aware of what Israel was going to do, but were also giving the Israeli military assistance. This is why stories have appeared in various global media claiming that the UK RAF were refueling the Israeli aircraft that committed the illegal atrocities, which are easily “Primary Acts of War”.

However it appears this “story view” may not be true.

Later, apparently independent reports indicate that UK RAF aircraft cannot refuel the Israeli aircraft (I’ve no idea if this is true or not, but as the UK and Israel have done joint military exercises in the past it would appear a little odd for various reasons; see https://www.forcesnews.com/middle-east/healey-confirms-uk-forces-played-their-part-attempt-preventing-further-middle-east ).

Other reports point out that the aircraft were flying under “civilian flight rules” (which is why they showed up on the ADS-B receivers that many observed).

Yet other “stories” say they were “flying interference” or “intelligence” for the Israeli military.

That was apparently not the case: the UK RAF was flying a scheduled and well advertised training exercise.

Which has been reported in several places including,

https://www.middleeasteye.net/news/raf-plane-seen-flying-over-qatar-during-israeli-attack-part-annual-exercise

Is it a case of the truth being slow to get its boots on whilst falsities go halfway around the world due to an information vacuum?

Or something worse?

Thus the truth “may be” that the Israeli military used it for “cover”, and the nut jobs who sanctioned the attacks in the Israeli Government decided that putting out stories to falsely embroil another nation was “beneficial to their cause” for various reasons.

Thus your observation of,

“Despite the bravado, sabre rattling and rhetoric, I suspect it presently remains so.”

Is apparently not true in this case…

So far all we can do is “say past behaviour establishes an MO”.

Verified historical records show that this sort of behaviour from Israeli Government seniors is effectively normal, carried out by various Israeli Government entities and personalities both before and after Israel came into existence by such tactics and what were and are terrorist activities…

Note I take care to separate out those who abuse their position and hide behind the rest of the Israeli Government, the people of Israel and those of Jewish heritage, who in the main will be shocked by this sort of behaviour that endangers them all and will further perpetuate war, devastation, death and worse for everyone.

Peace cannot happen without negotiation; history shows this over and over. Likewise the idea that you can wipe out the other side is never true either. Thus past actions always come back to haunt belligerents who see non-rational behaviour as OK.

ResearcherZero September 12, 2025 7:16 PM

@Clive Robinson

I did have a few people in mind when I wrote “nut jobs”, to better qualify the exceptions to what we would hope would be a best case scenario. With people such as Bibi, who want to see things as they wish, comes the danger that they only listen to the intelligence they want to hear. Intelligence is uncertain. It is not what is known.

Intelligence requires trust, integrity and credibility. If we only listen to what we want to hear then we tend to make mistakes. If leaders create an environment in which they are told what they want to hear, then they tend to make mistakes. For instance, Putin might invade Ukraine because he is only getting the intelligence he wants to hear. Bush might invade Iraq because the White House has become so caught up in its own story that it is gospel.

Automation bias follows on from confirmation bias. Plato basically said when he authored The Republic, “There will be no end to the troubles of states, or of humanity itself, till philosophers become kings in this world, or till those we now call kings and rulers really and truly become philosophers, and political power and philosophy thus come into the same hands.”

If we spent our time auditing our own behaviour, all of us, then we might have leaders who would audit their own. Perhaps they would consider the consequences before planning their actions.

What might be the actions of those providing intelligence assessments to the White House in an environment where the leadership attacks those very institutions and fires those who make unwanted statements? How might the people in those agencies behave in an environment where the leadership cannot be trusted? The same results will play out in any similar setting.

The scene had already been set by the 1990s, when hubris and arrogance began to reign, despite mistakes and dangerous precedent being the lessons of the preceding decades. No sooner was the INF treaty ratified than all parties set about unwinding the détente. All of the consequences of their own security assessments were ignored, and exactly those consequences have played out, just as the intelligence warned.

Ignoring long-standing issues and social problems eventually leads to aggressive confrontation. John Howard pledged to halve homelessness and hunger, through bombing runs one assumes. Aggressive confrontation eventually leads to violence, and violence begets violence. They certainly did not spend their time building public housing, hospitals or flood mitigation. Lack of political honesty and special interests slowly eroded trust.

“The difference between the two classes is often a trivial concern; but in a state, and when affecting really important matters, becomes of all disorders the most hateful,” Plato later remarked in his work Statesman.

Russia, America and others might want to tell a different story where they blame each other. Is the lesson of the casualties of the Vietnam war the result of a naming convention? That Vietnam, Afghanistan and Iraq were “politically correct” wars?

ResearcherZero September 12, 2025 7:36 PM

At the end of the Cold War, reductions in defense spending did not go to public housing, education and health, but to quid pro quo agreements with special interests and policy makers.

‘Greed is good’ has been the mantra ever since. It keeps everyone easily distracted, arguing and fighting amongst themselves no matter the belief system, ideology or politic.

See the Aryan bozo with the red guitar
Parachute on the White House lawn
Gonna bomb the commies with his air guitar
So dumb he can’t drive 55

Dance your problems away, GOVERNMENT MUSIC
Cheap escape to that mind control beat, GOVERNMENT MUSIC
Mellow out, life’s too hard
You won’t even want to think, no

‘https://www.youtube.com/watch?v=68ekngSyuLM

Rontea September 15, 2025 9:45 AM

Responsible AI use in government is vital for maintaining public trust and ensuring that technological advancements genuinely serve the people. Leaders must prioritize transparency and ethical applications of AI to streamline processes, enhance public services, and prevent the consolidation of power. Electing officials committed to these principles is key to leveraging AI’s potential for positive societal impact.

lurker September 15, 2025 2:07 PM

@Rontea

“Electing officials committed to these principles” of transparency and ethical behaviour is key to maintaining healthy democracy, before we even start to think about AI.

Rontea September 17, 2025 12:36 PM

@lurker
Absolutely agree! When we choose leaders who value transparency and ethics, it fosters trust and positivity within our communities. Here’s to building a brighter, more honest future together! 🌟
