Anthropic and the Pentagon

OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic’s insistence that the US Department of Defense (DoD) could not use its models to facilitate “mass surveillance” or “fully autonomous weapons,” provisions the defense secretary Pete Hegseth derided as “woke.”

It all came to a head on Friday evening when Donald Trump issued an order for federal government agencies to discontinue use of Anthropic models. Within hours, OpenAI had swooped in, potentially seizing hundreds of millions of dollars in government contracts by striking an agreement with the administration to provide classified government systems with AI.

Despite the histrionics, this is probably the best outcome for Anthropic—and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only factor out of place here is the Pentagon’s vindictive threats.

AI models are increasingly commodified. The top-tier offerings have about the same performance, and there is little to differentiate one from the other. The latest models from Anthropic, OpenAI and Google, in particular, tend to leapfrog each other with minor hops forward in quality every few months. The best models from one provider tend to be preferred by users to the second, or third, or 10th best models at a rate of only about six times out of 10, a virtual tie.

In this sort of market, branding matters a lot. Anthropic and its CEO, Dario Amodei, are positioning themselves as the moral and trustworthy AI provider. That has market value for both consumers and enterprise clients. In taking Anthropic’s place in government contracting, OpenAI’s CEO, Sam Altman, vowed to somehow uphold the same safety principles Anthropic had just been pilloried for. How that is possible given the rhetoric of Hegseth and Trump is entirely unclear, but seems certain to further politicize OpenAI and its products in the minds of consumers and corporate buyers.

Posturing publicly against the Pentagon and as a hero to civil libertarians is quite possibly worth the cost of the lost contracts to Anthropic, and associating themselves with the same contracts could be a trap for OpenAI. The Pentagon, meanwhile, has plenty of options. Even if no big tech company were willing to supply it with AI, the department has already deployed dozens of open-weight models—whose parameters are public and are often licensed permissively for government use.

We can admire Amodei’s stance, but, to be sure, it is primarily posturing. Anthropic knew what they were getting into when they agreed to a $200m defense department partnership last year, and when they signed a partnership with the surveillance company Palantir in 2024.

Read Amodei’s statement about the issue. Or his January essay on AIs and risk, where he repeatedly uses the words “democracy” and “autocracy” while evading precisely how collaboration with US federal agencies should be viewed in this moment. Amodei has bought into the idea of using “AI to achieve robust military superiority” on behalf of the democracies of the world in response to the threats from autocracies. It’s a heady vision. But it is a vision that likewise supposes that the world’s nominal democracies are committed to a common vision of public wellbeing, peace-seeking and democratic control.

Regardless, the defense department can also reasonably demand that the AI products it purchases meet its needs. The Pentagon is not a normal customer; it buys products that kill people all the time. Tanks, artillery pieces, and hand grenades are not products with ethical guard rails. The Pentagon’s needs reasonably involve weapons of lethal force, and those weapons are continuing on a steady, if potentially catastrophic, path of increasing automation.

So, at the surface, this dispute is a normal market give and take. The Pentagon has unique requirements for the products it uses. Companies can decide whether or not to meet them, and at what price. And then the Pentagon can decide from whom to acquire those products. Sounds like a normal day at the procurement office.

But, of course, this is the Trump administration, so it doesn’t stop there. Hegseth has threatened Anthropic with more than the loss of government contracts. The administration has, at least until the inevitable lawsuits force the courts to sort things out, designated the company as “a supply-chain risk to national security,” a designation previously only ever applied to foreign companies. This prevents not only government agencies, but also their contractors and suppliers, from contracting with Anthropic.

The government has incompatibly also threatened to invoke the Defense Production Act, which could force Anthropic to remove contractual provisions the department had previously agreed to, or perhaps to fundamentally modify its AI models to remove in-built safety guardrails. The government’s demands, Anthropic’s response, and the legal context in which they are acting will undoubtedly all change over the coming weeks.

But, alarmingly, autonomous weapons systems are here to stay. Primitive pit traps evolved to mechanical bear traps. The world is still debating the ethical use of, and dealing with the legacy of, land mines. The US Phalanx CIWS is a 1980s-era shipboard anti-missile system with a fully autonomous, radar-guided cannon. Today’s military drones can search, identify and engage targets without direct human intervention. AI will be used for military purposes, just as every other technology our species has invented has.

The lesson here should not be that one company in our rapacious capitalist system is more moral than another, or that one corporate hero can stand in the way of governments adopting AI as technologies of war, or surveillance, or repression. Unfortunately, we don’t live in a world where such barriers are permanent or even particularly sturdy.

Instead, the lesson is about the importance of democratic structures and the urgent need for their renovation in the US. If the defense department is demanding the use of AI for mass surveillance or autonomous warfare that we, the public, find unacceptable, that should tell us we need to pass new legal restrictions on those military activities. If we are uncomfortable with the force of government being applied to dictate how and when companies yield to unsafe applications of their products, we should strengthen the legal protections around government procurement.

The Pentagon should maximize its warfighting capabilities, subject to the law. And private companies like Anthropic should posture to gain consumer and buyer confidence. But we should not rest on our laurels, thinking that either is doing so in the public’s interest.

This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

Posted on March 6, 2026 at 12:07 PM

Comments

lurker March 6, 2026 12:33 PM

@Bruce
“The Pentagon should maximize its warfighting capabilities, subject to the law.”

In a perfect world, yes. But the Pentagon is fighting wars against adversaries who are not subject to the same law. What then?

empty March 6, 2026 1:48 PM

@lurker – the US is still subject to the laws of the US and international law. The US isn’t involved in any altercations it didn’t initiate, so it could easily avoid violating Geneva conventions by not flying across an ocean to attack other nations.

Loya March 6, 2026 2:27 PM

@empty

A state is not subject to international law if its government does not consent to it by making a treaty or performing some conventional practice.

Intl law is also not interpreted in an exclusively legalistic and textual manner because political considerations drive the interpretation towards an instrumentalist bias.

There are a lot of foreign influence ops online now trying to delegitimize everything certain nations do using Western concepts like intl law. But few of these ops apply intl law to certain other countries when they say “death to X” or when they fund proxy armies outside their borders and those proxy armies commit war crimes against civilians.

If you want to critique using intl law concepts, apply them to everyone equally.

Paul Jones March 6, 2026 3:10 PM

I wonder how other nations feel about this. The threat of AI is that it’s a winner-take-all scenario. Currently America has all the cards. This whole issue of “well these models are nearly the same” doesn’t fly when you’re Russia or China. They’re probably sweating bullets about this.

Anselm March 6, 2026 4:46 PM

Does anyone else here also feel uneasy about “fully autonomous weapons” based on a piece of software which has problems figuring out how often the letter “r” occurs in the word “strawberry”?

SocraticGadfly March 6, 2026 5:02 PM

Riffing on @lurker, why don’t we just ask Israel for help then? Oh, I’m sorry, I thought you said: “the Pentagon is fighting wars ALONGSIDE ‘adversaries’ who are not subject to the same law.”

Clive Robinson March 6, 2026 6:01 PM

@ Bruce, Nathan, ALL,

First off when you say,

“The government has incompatibly also threatened to invoke the Defense Production Act, “

Do you actually mean “incompatibly” or more likely “incomprehensibly”?

If the former incompatible with what in specific?

But the article in general starts off really badly with,

“This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security.”

Let’s be clear, neither company is a,

“wealthiest titan of the big tech industry”

They are maybe two of the more indebted, but both startups, to be blunt, are better to be seen as the “long con” Venture Capitalist hype/scams they are.

Dario Amodei and Sam Altman are both “soft bullshit” spouters with a level of grandiloquence that defies reason, sense, and sound analysis, both financial and technical.

As for,

“existential risks posed by a new technology”

The technology “has no agency” it alone can not do anything other than output information with a known to be high level of inaccuracy. So much so it’s been given the term “hallucination” or more correctly “soft bullshit” because it’s unintentional nonsense.

The only way it could present an “existential risk” is if the hype crashes the economy, or some idiot connects it up in some way to unsafe real world actions. Which as others and I have repeatedly noted for quite some time would be a stupid thing to do.

So put the blame where it actually exists, which is “humans” that really do not have the “real world smarts” to be allowed out of a baby buggy, let alone be in charge of one. The fact that such people occupy positions of significant influence should scare every reasonably sane person who can think things through using plain reason and logic. The fact people are not doing this and going along with the “cult of self” based on “technobabble woo woo” should be of way greater concern.

As for,

“Despite the histrionics, this is probably the best outcome for Anthropic—and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only factor out of place here are the Pentagon’s vindictive threats.”

When you say “histrionics” you do not say by whom… Thus the implication is it is those in the US Executive and two AI startups. Whereas in fact it’s the MSM and trade press acting as an enabler of yet more “AI hype” and trying to turn it into an “if it bleeds it leads” style story. The main side effect of which is it gives those who have swallowed the “existential risk” and other AI “technobabble woo woo” an excuse to run around like “Chicken Little”… That in turn just feeds the idiot “cult of self” that the current US Executive is.

The fact that you say “free market” with regards to US Federal Spending is very worrying. No tax-funded entity is “free”; it has a significant “duty of care” to those who provide the funding and have the right to vote in a “representational democracy”.

The fact that this “duty of care” has been weaponised in US politics should be of significant concern and scrutiny by the voters, but over the “television era” US voters have become remarkably hooked on “razzmatazz distractions” rather than carrying out substantive reasoning about a system that has been turned into one designed to rob them of their essential rights over those who act in their name.

As for “Pentagon’s vindictive threats” stop buying into that nonsense. It’s not the “Pentagon” but one or two people in the US Executive exhibiting less than desirable characteristics and probity and belief in their own “cult of self” and in one case very bad judgment based on faux-religious and moral beliefs. Something that the required separation of “Church and State” in the US should stop but obviously no longer does.

I could go on, but the rest of the article noticeably changes in tone which suggests that “editorial control” has been overly exerted directly or by expectation of compliance.

The penultimate paragraph and ultimate sentence are what readers should really take onboard.

But this requires those in the US to actually “engage with politics” meaningfully rather than jiggle along irresponsibly to the televised razzmatazz.

R.Cake March 9, 2026 4:31 AM

@Clive Robinson – thank you. I believe this is the best comment of yours I have read so far, and I have read many over the years 🙂

Clive Robinson March 9, 2026 9:42 AM

@ R.Cake,

Nice to hear from you, and thank you for your kind words, I’m glad I’ve been able to make your day a little better.

Let us both hope that we see improvements for people around us and in other places around the globe as the year goes on.

Take care.
