---
title: "How Claude went to war"
date: "2026-03-16"
category: "TECH & HUMAN"
readTime: "15 min"
excerpt: "How an AI technology escaped the control of the company that created it, the government that banned it, and the military that uses it."
tldr: "January 2026: Claude (Anthropic) deployed for the first time in active combat operations - Venezuela (Operation Absolute Resolve, 83 dead), Iran (February 2026, 1000+ targets in 24h). Claude via Palantir synthesized intelligence, generated bombing targets. Irony: Anthropic positions itself as 'responsible alternative', founder Amodei compares AI to atomic bomb. AI technology escaped control of the company, government, and military."
---

*How an AI technology escaped the control of the company that created it, the government that banned it, and the military that uses it.*

---

At the beginning of 2026, the US military set a precedent that will be talked about for decades. For the first time in history, a large language model - the same type of AI you chat with daily - was deployed in active combat operations. Not as an experiment. Not as a simulation. As a key component of the system that chooses what to bomb.

That model is called Claude. It was created by Anthropic - a company that has positioned itself from its founding as the responsible alternative in the race for artificial intelligence. A company whose founder, Dario Amodei, publicly compares AI development to the creation of the atomic bomb and whose favorite book is *The Making of the Atomic Bomb* by Richard Rhodes.

The irony is cruel. And the story that followed is more important than most things being discussed in the AI community.

---

## What happened

### Venezuela, January 2026

On January 3, the US military conducted Operation Absolute Resolve - a complex military strike involving over 150 aircraft and drones, Delta Force special forces, FBI agents, and CIA intelligence officers. The goal was to capture Venezuelan President Nicolás Maduro in Caracas. The operation left 83 dead, among them 47 Venezuelan soldiers and 32 Cuban military advisers. Maduro was transported to the amphibious assault ship USS Iwo Jima and subsequently to New York, where he was charged with narco-terrorism on January 5.

On February 13, the Wall Street Journal revealed that **Claude was used directly during the operation** - not just in planning, but in real time, during the combat action itself. At that time, Claude was the only frontier AI model deployed on classified military networks, integrated through the Anthropic-Palantir-AWS partnership.

### Iran, February-March 2026

On February 28, Operation Epic Fury began *(author's note: seriously, do seven-year-olds name these operations?)* - a massive US attack on Iran, coordinated with the Israeli Operation Roaring Lion. In the first 12 hours, the US military struck over 900 targets. Within 24 hours, it was more than 1,000.

Claude played a confirmed, central role. The Washington Post and CBS News independently confirmed that Claude, through Palantir:

- **Synthesized intelligence information** - analyzed massive volumes of data for CENTCOM commanders
- **Identified and prioritized targets** - helped select those 1,000+ targets on the first day
- **Simulated scenarios** - modeled strike outcomes and war games
- **Optimized logistics** - planned supply chains

Admiral Brad Cooper stated that AI technology **"effectively doubled the speed and intensity of military strikes."**

---

## How Claude got there

### Partnership with Palantir (2024)

In November 2024, Anthropic, Palantir, and Amazon Web Services announced a partnership to deploy Claude on classified US defense and intelligence networks. Claude was integrated into Palantir's AI platform at Impact Level 6 (Secret) classification.

Even then, the deal sparked internal resistance. Anthropic employees held "many big discussions on Slack," which became a source of ongoing tension within the company.

### $200 million contract (2025)

In July 2025, the Pentagon awarded Anthropic a contract worth up to $200 million. Claude became the military's preferred AI model - officials called it "superior" to the competition. It was deployed for intelligence analysis, operational planning, cyber operations, and document review.

### What is Maven

Maven Smart System is Palantir's platform for military operations. Imagine it as a central brain that connects satellite imagery, drone footage, communications intercepts, and over 150 other data sources in real time. Claude is its linguistic and analytical core - reading data, finding patterns, suggesting targets, calculating coordinates.

Key detail: Georgetown University found that Maven enabled **20 people to do the work of 2,000**. That sounds like a triumph of efficiency. In reality, it means that a decision cycle that previously involved hundreds of people with time to think is now compressed into a small group processing proposals faster than they can properly evaluate them.

---

## Minab: When efficiency kills

The darkest chapter of the entire story is the strike on a girls' elementary school in the town of Minab, which killed over 175 civilians - mostly children.

According to analysis by Marko Milanovic in the European Journal of International Law, this **was not collateral damage**. The school was struck individually, with precision munitions - just like every other building in the adjacent Islamic Revolutionary Guard Corps (IRGC) complex. Someone - or something - specifically selected it as a target.

The most likely explanation: the building was once part of the IRGC compound before being separated, walled off, and converted into a school. Whoever worked with outdated maps or satellite images wouldn't know the difference.

The Pentagon refused to say whether AI played a role in selecting this particular target. But we know that Maven suggested over 1,000 targets in the first 24 hours. And we know that the "human in the loop" - the person who was supposed to approve each target - had to process dozens of proposals per hour.
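The arithmetic behind that sentence is worth making explicit. Below is a minimal back-of-envelope sketch: the 1,000-targets-in-24-hours figure is reported above, but the reviewer counts are purely hypothetical assumptions.

```python
# Back-of-envelope arithmetic from the figures reported above.
# targets_per_day (1,000 in 24 hours) is the reported number;
# the reviewer counts are illustrative assumptions, not reporting.

targets_per_day = 1000
hours_of_review = 24  # assumes round-the-clock approvals, no breaks

for reviewers in (1, 2, 5):  # hypothetical sizes of the approval cell
    per_hour_each = targets_per_day / hours_of_review / reviewers
    minutes_each = 60 / per_hour_each
    print(f"{reviewers} reviewer(s): {per_hour_each:.0f} targets/hour each, "
          f"~{minutes_each:.1f} min of scrutiny per target")
```

Even under the generous five-reviewer assumption, each "human in the loop" gets roughly seven minutes per life-and-death decision; with one or two, it drops to one to three minutes.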

Legal analysis shows that the problem isn't with the principle of proportionality (the commander didn't expect civilian casualties because he didn't know it was a school). The problem is with the principle of **feasible precautions** - the obligation to do everything reasonably possible to verify that a target is actually military. Journalists from the New York Times established from open sources, relatively quickly, that the building was a school. The military, with all its tools - including AI - did not.

---

## Double Black Box: Why nobody knows what's happening

Law professor Ashley Deeks from the University of Virginia described a concept that perfectly captures the situation: **double black box**.

On one side: tech companies **cannot see** how their product is used in classified environments. Anthropic has no access to the classified servers where Claude runs. They don't know what data is being sent to it, what instructions it receives, what it generates.

On the other side: the military **doesn't understand** how the model works inside. Claude isn't a deterministic program where input A always gives output B. It's a probabilistic system with billions of parameters whose decision-making process isn't fully explainable even to its creators.
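A toy sketch makes that distinction concrete. The candidate tokens and scores below are invented for illustration - this is not Claude's architecture, only the general sampling principle behind language-model generation:

```python
# Minimal sketch of why a language model is not "input A -> output B".
# The tokens and scores are hypothetical; the point is only that
# generation samples from a probability distribution.
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["alpha", "bravo", "charlie", "delta"]  # hypothetical next tokens
logits = [2.1, 1.9, 1.7, 0.4]                    # hypothetical model scores

probs = softmax(logits)
for _ in range(5):
    # Sampling proportionally to the probabilities: identical input,
    # identical weights - yet the chosen token varies run to run.
    print(random.choices(tokens, weights=probs)[0])
```

Run it twice and you get two different sequences from the same input. Scale that up to billions of parameters and thousands of tokens, and "explain why the model picked this target" stops being a well-defined question.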

Result: no one has the complete picture. The company can't control usage. The military can't fully control outputs. And Congress - by their own admission - doesn't know what's happening at all. Democrat Adam Smith, the ranking member of the Armed Services Committee: *"I think we should pay more attention to this."* Republican Mike Rogers, chairman of the committee: *"I don't have that kind of insight into it."*

---

## Anthropic vs Pentagon: Five phases of breakdown

The relationship between Anthropic and the Pentagon broke down in five clearly readable phases.

**1. Partnership (2024):** Public announcement of collaboration with Palantir and AWS. Internal employee resistance.

**2. Integration (2025):** $200M contract. Claude becomes "indispensable" for military operations.

**3. Tension (January 2026):** After the revelation of Claude's role in Venezuela, Defense Secretary Pete Hegseth issues a memo demanding that AI models have no restrictions for "lawful military applications." He publicly declares: *"We will not deploy AI models that won't let you wage wars."*

**4. Rupture (February 2026):** Hegseth gives Anthropic CEO Dario Amodei a deadline - February 27, 5:01 PM. Condition: unrestricted military use of Claude. Anthropic holds two red lines: **no mass surveillance of Americans** and **no fully autonomous weapons**. When the deadline expires, Trump designates Anthropic as a "supply chain risk to national security" - a category normally reserved for companies like Huawei - and orders all federal agencies to stop using its products.

**5. Lawsuits (March 2026):** On March 9, Anthropic files two federal lawsuits. They argue that the designation is "unprecedented and unlawful" and violates the First Amendment. CFO Krishna Rao states that government actions could reduce 2026 revenues by "several billion dollars."

---

## OpenAI jumped into the hole - and found it was just as deep

Within hours of the deadline expiring, OpenAI announced its own deal with the Pentagon. Sam Altman later admitted it was "sloppy and opportunistic."

The reaction came quickly and from an unexpected direction. Over 900 Google and OpenAI employees signed an open letter supporting Anthropic's red lines. The "No Tech for Apartheid" coalition, reportedly representing ~700,000 workers at Amazon, Google, Microsoft, and others, issued a joint statement. Caitlin Kalinowski, OpenAI's head of robotics, resigned in protest.

And then came the punchline of the whole story: on Saturday, OpenAI issued a statement that its Pentagon deal contains **"more guardrails than any previous classified AI deployment agreement, including Anthropic's."** Ban on mass surveillance. Ban on autonomous weapons. Ban on automated high-stakes decisions.

In other words: the Pentagon fired Anthropic for holding red lines, then signed a deal with OpenAI with **the same red lines**.

---

## Palantir: The one profiting

While Anthropic sues and OpenAI manages its reputation, one actor gets talked about far less: Palantir.

Byline Times published an investigative article on what it called a **structural conflict of interest**. Palantir co-founders - Peter Thiel and Joe Lonsdale - had spent years publicly advocating for military confrontation with Iran. Lonsdale talked about looking forward to "investments in Iran" after regime change. Thiel argued that every historical case of an adversary acquiring nuclear weapons led to regional war.

Meanwhile, their company was providing the intelligence analysis that helped justify the war. And then profiting from it by selling real-time targeting services.

In the first week of conflict, Palantir stock rose ~15%. Thiel sold $280 million worth of Palantir stock during the same period.

CEO Alex Karp openly stated that war with Iran would prove the value of autonomous weapons systems.

This isn't an AI problem. This is the military-industrial complex with better technology and better PR.

---

## Lavender: We've seen this before

For those who followed the war in Gaza, this story is terrifyingly familiar.

In 2024, the Israeli military deployed an AI system called Lavender that profiled over 37,000 Palestinian men as potential targets connected to Hamas. The system had a known error rate of around 10%. Kill approvals took approximately 20 seconds. +972 Magazine documented how the military knowingly accepted this error rate.
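Taken at face value, the reported figures imply the following - straightforward multiplication, no new reporting:

```python
# Arithmetic implied by the reported Lavender figures (+972 Magazine):
# ~37,000 flagged, ~10% known error rate, ~20 seconds per kill approval.

flagged = 37_000
error_rate = 0.10
seconds_per_approval = 20

misidentified = flagged * error_rate                    # ~3,700 people
total_review_hours = flagged * seconds_per_approval / 3600

print(f"~{misidentified:,.0f} people wrongly flagged at the stated error rate")
print(f"~{total_review_hours:,.0f} person-hours of 'review' for all 37,000")
```

Roughly 3,700 people misidentified by the system's own accepted error rate, and about 206 person-hours of human review for the entire list.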

Now the same logic is running over Iran - just with an American model, American servers, and American generals approving target lists generated faster than they can read them.

Peter Asaro from The New School formulated the key question: *"You can rapidly create long lists of targets much faster than humans can. The ethical and legal question is: to what extent are those humans actually reviewing particular targets, verifying their legality and military value, before they approve them?"*

Brianna Rosen from Oxford answers succinctly: *"Even with a human fully involved in the process, there is significant civilian harm, because human review of machine decisions is essentially perfunctory."*

---

## What it means

As of March 10, 2026, three realities exist side by side that don't fit together:

**First:** Frontier AI has become operationally indispensable for the US military. So much so that banning its manufacturer couldn't remove it from active combat systems. Full replacement of Claude will take an estimated 3-6 months.

**Second:** There is no adequate framework for governing commercial AI in warfare. Congress has no oversight. Rules of engagement are contractual, not statutory. And the line between "supporting human decisions" and "autonomous targeting" blurs at the speed of 1,000 targets in 24 hours.

**Third:** The market rewarded resistance. Claude surpassed ChatGPT on the App Store for the first time - downloads increased 240% month-over-month. ChatGPT saw a 295% increase in uninstalls. Anthropic reports over a million new registrations daily.

None of these realities has a clear solution. The next key dates are March 17 (Maduro's court hearing in New York) and federal court decisions on Anthropic's lawsuits. Negotiations between Amodei and the Pentagon have reportedly resumed, but Bloomberg cites "little chance" of an agreement after the lawsuits were filed.

And meanwhile, Claude continues processing targeting data for the Iran campaign. The AI that its own creator wanted to restrict is waging the war that the government wanted to ban it from - and no one has full control over either outcome.

---

## Why you should care

This isn't a story about a bad company or bad AI. It's a story of systemic failure - a moment when technology outpaced all the institutions meant to govern it. The company can't control how its product is used. The government can't disconnect what it depends on. The military doesn't understand what it's using. Lawmakers don't know what's happening.

And somewhere in Iran lies a destroyed school where precision munitions hit a building that AI probably identified as a military target. Not because AI is evil. But because it worked with bad data, in a system where no one had the time or reason to verify whether that building was full of children.

Margaret Mitchell from Hugging Face summarized it best: *"They don't want to not kill people. They want to kill the right people. And who the right people are is decided by the government."*

The question for all of us: who decides when the machine decides?

---

## What happened next

On March 12, Pentagon CTO Emil Michael definitively ruled out resuming negotiations, telling CNBC there was "no chance." Anthropic, meanwhile, is pressing the lawsuits it filed on March 9 - arguing that blacklisting it as a "supply chain risk" (a category normally reserved for Huawei) violates the First Amendment. The Guardian reports that Anthropic is banned from ALL government networks, not just military ones. And starting March 16, much of the tech industry began standing behind Anthropic. The reason is clear - if the government can blacklist an AI company for setting ethical limits, it can do it to anyone.

Meanwhile, OpenAI claims their Pentagon deal contains "more guardrails than any previous agreement, including Anthropic's." Sounds nice. But the reality is different - Altman's deal references "existing US law and Pentagon policy." In other words: no new standards, just prettier packaging of the existing status quo. Anthropic argued that current law isn't enough: collecting "publicly available" data is legal on its own, but combined with frontier AI it becomes de facto mass surveillance. They wanted protection BEYOND the law. OpenAI accepted the status quo. A more detailed analysis of why these aren't the same thing is [here](https://www.jroh.cz/blog/chatgpt-je-oficialnim-nastrojem-pentagonu).

And meanwhile, Claude still runs on classified servers. Full replacement will take an estimated 3-6 months. The AI that its creator wanted to restrict and the government wanted to ban continues waging the war that neither can stop.

---

*Jakub Roh | jroh.cz | March 2026*

**Sources:** Washington Post, CBS News, Wall Street Journal, Axios, Bloomberg, The Guardian, NPR, CNBC, TechCrunch, Responsible Statecraft, EJIL Talk, Al Jazeera, France 24, Byline Times, Nature, Futurism, Business Insider, BBC, Common Dreams, Asia Times, Seznam Zprávy, Novinky.cz, iROZHLAS, ČT24
