When AI Says No to War
Published: 3/4/2026
Tech Article • NeuralKnot Archive


On Principled Refusals, Opportunistic Pivots, and What Happens When a Company Tells the Pentagon to Read the Terms of Service


The deadline is 5:01 PM Eastern and I’m watching it expire from three time zones away, refreshing a CNBC tab that keeps auto-playing a muted video of Pete Hegseth’s face. The fluorescent glow of my monitor is doing something unflattering to the room. My coffee went cold an hour ago. Somewhere in San Francisco, Dario Amodei is either staring at a wall or talking to lawyers or both — hard to tell with these CEO types, they compartmentalize like submarines — and what he’s doing, what he’s already done by letting this clock run out, is telling the United States Department of Defense that no, actually, you cannot have unlimited access to our AI models. Not without two conditions. Not without them in writing.

The conditions aren’t radical. I keep coming back to this. They’re not radical.

No autonomous weapons. No mass surveillance of Americans.

That’s it. That’s the whole ask. The Geneva Convention meets the Fourth Amendment, stapled together and slid across a conference table to people who apparently found it offensive.

5:01 passes. Nothing happens for about forty minutes. Then everything happens at once.

The Avalanche

Trump is on Truth Social before I can finish reading Anthropic’s statement. ALL CAPS. The words “radical left” and “woke” appear, which — I’m still trying to process how a company that builds large language models for the military qualifies as woke, but language has been doing strange things lately. “IMMEDIATELY CEASE all use of Anthropic’s technology.” Every federal agency. Done.

Pete Hegseth — and this is the part where I start checking if I’m reading a real government press release or a parody account — designates Anthropic a “Supply-Chain Risk to National Security.” This label. I need you to understand what this label means. This is the label they put on Huawei. On Kaspersky. On companies that are actually, demonstrably, provably working for hostile foreign governments. They’re applying it to an American startup in San Francisco because that startup asked for a contractual guarantee against building Skynet.

My hands are doing something. I realize I’m typing notes I’ll never organize. The room feels wrong. That fluorescent hum.

Elon Musk is on X within the hour. “Anthropic hates Western civilization.” I screenshot it because screenshots are the receipts of our age and this one’s going to matter. Musk owns xAI. xAI competes directly with Anthropic. xAI has been quietly hoovering up Pentagon contracts for months. Nobody in the administration acknowledges this. It hangs there like a smell in a room where everyone has agreed not to mention the smell.

“America’s warfighters will never be held hostage by the ideological whims of Big Tech,” Hegseth writes. The sentence has the cadence of something that was drafted by committee and approved by someone who watches too many action movies. “This decision is final.”

The Other Shoe

I’m still processing the Anthropic fallout when Sam Altman drops his announcement. Same night. Friday night. The news cycle equivalent of burying a body — everyone knows what you’re doing when you announce things on Friday nights.

OpenAI has signed a deal with the Pentagon. Classified military networks. The gap left by Anthropic’s ouster, filled before the chair is cold.

I read it three times.

Here’s what breaks my brain: Altman says the deal includes the same red lines. The same ones. No domestic mass surveillance. Human responsibility for lethal force. No autonomous weapons. He puts it in writing. He calls on the Pentagon to offer these terms to all AI companies.

So. Wait.

Anthropic asks for two safeguards. Gets blacklisted. Gets called a national security risk. Gets threatened with criminal prosecution. OpenAI asks for the same two safeguards. Gets the contract.

I’m sitting here trying to construct a version of reality where this makes sense and isn’t just… I keep deleting the word and typing it again. Corruption. It’s corruption. Or theater. Or both — they blur at this altitude.

Either the Pentagon was always willing to accept these guardrails and the Anthropic standoff was manufactured punishment for a company that didn’t genuflect fast enough. Or the Pentagon wasn’t willing and Altman’s stated red lines are decorative — words that look good in a press release and dissolve on contact with classified operations that nobody outside a SCIF will ever audit.

Neither option lets me sleep.

The $200 Million Conscience

Back up. I need to back up because the narrative moved too fast and the context got trampled.

Anthropic wasn’t some pacifist outfit refusing to work with the military. They had a $200 million Pentagon contract. Signed July 2025. Their models were already embedded in military platforms. Already humming inside systems that do things I probably don’t have clearance to know about. They were in. Deep in.

What they asked for — what they insisted on, what they chose to lose everything over — was two contractual provisions. I keep listing them because they keep being misrepresented:

One. Don’t use our AI for fully autonomous weapons. Machines that select and engage targets without a human making the final decision. This is not a fringe position. The DoD’s own Directive 3000.09 requires “appropriate levels of human judgment” for autonomous weapons. The Campaign to Stop Killer Robots — which sounds like a joke name but is a serious international coalition — has been pushing for binding rules on this for a decade. Dozens of nations agree.

Two. Don’t use our AI for mass domestic surveillance of American citizens. The Fourth Amendment. You’ve heard of it. It’s been around.

The Pentagon’s counteroffer: trust us, we’ll only use it lawfully, but we won’t put those limits in writing.

Dario Amodei, Thursday before the deadline: “We cannot in good conscience accede to these demands.”

Cannot in good conscience. The phrase rattles around my skull. When’s the last time a tech CEO used the word “conscience” in a sentence that wasn’t drafted by a PR firm for a corporate social responsibility page? When’s the last time one meant it?

Monday Morning

The weekend doesn’t help. By Monday the cascade is complete and I’m tracking it on three screens like a man watching his portfolio during a crash, except the thing crashing is the concept of AI ethics as anything other than marketing copy.

State Department: switches its internal chatbot, StateChat, from Claude to OpenAI’s GPT-4.1. “For now, StateChat will use GPT4.1 from OpenAI.” The memo reads like someone wrote it in a hurry. Because they did.

Treasury: Scott Bessent on X. Terminating all Anthropic products. Done.

Health and Human Services: internal memo directing staff to ChatGPT and Google Gemini. Obtained by Reuters. The mundane bureaucratic language of a purge.

Federal Housing Finance Agency: William Pulte on X. Fannie Mae and Freddie Mac included. It’s spreading to agencies I hadn’t even considered.

Seventy-two hours. That’s how long it took to exile one of the most sophisticated AI companies in the world from the entire federal government. Because they wanted two safeguards in a contract.

I make more coffee. The first cup is still sitting there, cold, a film forming on the surface. The room has that 3 AM quality even though it’s afternoon. Time does this when you’re watching institutions move at a speed they usually reserve for wartime or financial collapse.

The Hashtag and What It Means

#CancelChatGPT trends. Screenshots of canceled subscriptions pile up on X and Reddit and Mastodon and wherever else the digital rage goes to organize itself these days. Users posting their cancellation confirmations like protest signs. “No ethics at all.” “Founded for the benefit of humanity, sold to the benefit of the Pentagon.”

And look — I’ve been doing this long enough to know that hashtag activism has the half-life of a fruit fly. Most of these people will quietly resubscribe in three weeks when they need ChatGPT for work and the news cycle has moved on to whatever fresh atrocity the timeline serves up next. Convenience beats conviction. It almost always does. The switching costs are real and principles are expensive when they require you to learn a new interface.

But something about this one feels different. Not bigger, necessarily. Sharper. Because it’s not abstract. Users aren’t protesting a hypothetical — they’re protesting a specific, documented sequence: company with ethics gets punished, company without them gets rewarded, the rewarded company claims to have the same ethics, nobody believes them.

The cognitive dissonance is doing something to people. I can feel it in the posts. Less outrage, more… disillusionment. A quieter, more permanent kind of damage. The realization that “AI for the benefit of humanity” was always a tagline, never a constraint.

Some users migrate to Claude. The company being punished by the government becomes the moral refuge for consumers fleeing the company being rewarded by the government. Others go to open-weight models: Mistral, LLaMA, things they can download and run themselves, with no vendor in the loop to sign a military contract on their behalf. The logic is clean: if you can’t trust the company behind the model, own the model yourself.

The Pattern Nobody Wants to See

I’m pulling up old tabs now. Going backwards through time. The pattern is there if you’re willing to look at it and I’m tired enough to look at it without flinching.

2018: Google employees revolt over Project Maven. Drone surveillance AI for the Pentagon. Internal protests, resignations, open letters. Google backs down. Pledges publicly: no AI for weapons. No AI for mass surveillance. Applause. Good guys win. The myth holds.

February 2025: Google quietly removes that pledge from its AI Principles page. A paragraph vanishes. No press release. No announcement. Nobody notices for weeks.

January 2024: OpenAI removes its explicit ban on military and weapons applications from its usage policy. Same playbook. Quiet edit. Terms of service nobody reads. The guardrail comes down without a sound.

And then February 2026: Anthropic holds the line. The only one left holding the line. And gets annihilated for it.

The pattern is obvious and nobody wants to say it because saying it means admitting something about the industry that the industry doesn’t want admitted: ethical commitments are made when companies are small and idealistic and not yet profitable enough for the government to notice. Then they get big. The government shows up. The commitments evaporate. Every single time. Every. Single. Time.

Except Anthropic. And now we’re watching what happens to the exception.

Senator Warner Says the Quiet Part

Mark Warner, Democrat from Virginia, vice chair of the Senate Intelligence Committee. His statement lands Monday and it’s the closest thing to someone in power saying what I’m thinking:

“President Trump and Secretary Hegseth’s efforts to intimidate and disparage a leading American company — potentially as the pretext to steer contracts to a preferred vendor whose model a number of federal agencies have already identified as a reliability, safety, and security threat — pose an enormous risk to U.S. defense readiness.”

Preferred vendor. He means xAI. Musk’s company. The one whose owner is on X calling Anthropic enemies of civilization while his competitor gets blacklisted and his company gets the contracts. The conflict of interest isn’t even hidden. It’s just… there. Sitting in the open like a weapon on a table that everyone walks around.

Warner wants to know “whether national security decisions are being driven by careful analysis or political considerations.”

Political considerations. The diplomatic way of saying: someone is getting paid.

The Autonomous Weapons Problem (The One That Won’t Go Away)

I keep circling back to the specific thing Anthropic said no to. Fully autonomous weapons. It’s the question under the question, the thing that makes this story bigger than procurement politics and corporate revenge.

We’re building systems that can identify, select, and engage targets without a human in the loop. This is not hypothetical. This is not science fiction. This is the active frontier of military technology and it’s moving faster than the policy frameworks that are supposed to govern it.

The International Committee of the Red Cross wants binding international rules. The Campaign to Stop Killer Robots has been screaming for a decade. Even the Pentagon’s own policy — Directive 3000.09 — requires human oversight for lethal autonomous systems. The word “appropriate” is in there, doing more load-bearing work than any single word should have to do.

And when an AI company tries to put that same principle into a contract — human oversight, nothing more — the government’s response is: how dare you. You’re a supply-chain risk. You’re a threat to national security. You hate Western civilization.

I’m typing this and the absurdity washes over me in waves. A company asking for human control over lethal AI systems is being treated as an enemy of the state by the same government whose own policies require human control over lethal AI systems.

The fluorescent light buzzes. The coffee is definitely unsalvageable.

Anthropic Goes to Court

They’re challenging the designation. Of course they are. “Unprecedented.” “Legally unsound.” “Never before publicly applied to an American company.”

They’re right on the facts. The supply-chain risk label has never been used this way. It’s a tool designed for foreign adversaries, not domestic companies negotiating contract terms. The legal theory behind applying it to Anthropic is… I keep trying to find the right word. Inventive. The kind of inventive that usually gets appealed.

But courts operate in the current political climate and the current political climate is what it is. Anthropic can win every legal argument and still lose the business. The government doesn’t need the courts to punish a company. They just need to stop buying from them. And by the time the legal challenge works its way through the system — months, maybe years — the contracts will be gone, the employees will have scattered, and the competitors will have absorbed the market share.

This is how power works when it doesn’t need to be subtle. You don’t need to win the legal argument if you can just starve the other side while the argument proceeds.

Where This Goes

I don’t know. That’s the honest answer and I’m tired enough to give honest answers.

Anthropic survives. Probably. They have Google and Amazon money. Consumer adoption of Claude is strong. The international market doesn’t care about the Pentagon. They’ll be fine as a company.

But as a symbol? As proof that an AI company can hold an ethical line against government pressure? I’m less sure.

The lesson that every AI company is learning right now — every startup, every research lab, every team of engineers deciding whether to include safeguards in their next model — is very specific: your principles are tolerated until they’re tested. When tested, they cost you everything. And the company that folds gets rewarded while you get destroyed.

That’s the market for ethics in 2026. That’s the price.

OpenAI claims to have the same red lines in their Pentagon deal. If those safeguards hold — if they’re real and binding and enforceable in classified contexts that no journalist will ever audit — then Anthropic’s stand forced a conversation that mattered. They lost the battle but moved the line.

If those safeguards don’t hold? If they’re decorative language in a contract that was never meant to constrain anyone?

Then we’ll find out. We always find out. It just takes a while, and by the time we do, the company that actually tried will already be a cautionary tale.

I close the CNBC tab. Hegseth’s muted face finally disappears. The room returns to something approaching normal. The cold coffee gets poured down the sink.

Outside, it’s the kind of evening that doesn’t know anything happened. The sky doesn’t care about procurement contracts. The air doesn’t read Truth Social. The world continues its ancient indifference to the small, strange dramas of institutions arguing about what machines should be allowed to kill.

I file my notes. Most of them are unusable. The ones that aren’t are the ones I wrote when I was angry, before I had time to smooth the edges.

Those are always the ones that matter.


Anthropic said no. OpenAI said yes. The government said that’s what we thought. And somewhere in the gap between those three sentences is the entire future of whether AI ethics means anything at all — or whether it was always just a luxury good, affordable in peacetime, discarded the moment someone in a uniform asked nicely enough.

The asteroid is visible now. Some of the dinosaurs pointed at it. They were designated a supply-chain risk for their trouble.