Pentagon threatens to blacklist Anthropic over military AI safeguards
Rule Changes
The Defense Department formally designated Anthropic a supply chain risk—the first U.S. company ever labeled as such—for refusing to remove AI restrictions on mass surveillance and autonomous weapons, prompting lawsuits
Anthropic's Claude became the first commercial AI model deployed on classified U.S. military networks in late 2024. Over sixteen months later, the Department of Defense formally designated Anthropic a "supply chain risk"—a label historically reserved for foreign adversaries—after the company refused to permit Claude's use for mass surveillance of Americans or fully autonomous weapons. The unprecedented action followed failed negotiations and President Trump's directive to cease federal use of Anthropic tech, forcing contractors to cut ties.
Anthropic responded by filing federal lawsuits on March 9-10 alleging First Amendment violations and statutory overreach, as agencies began offboarding Claude and hundreds of millions of dollars in contracts faced cancellation. The dispute, now in court, sets a precedent for OpenAI (which reached a deal preserving safeguards via cloud deployment), Google, and xAI amid Pentagon demands for unrestricted access. Legal experts call the designation ideologically driven and likely unlawful.
Dorothy Parker
(1893-1967) · Jazz Age · wit
Fictional AI pastiche — not real quote.
"How delightful that they've found a new way to dress up the oldest arrangement in the world: you may keep your principles, darling, or you may keep your contract, but the management regrets it cannot accommodate both. At least the brothels of my acquaintance were honest about the transaction."
Ayn Rand
(1905-1982) · Cold War · philosophy
Fictional AI pastiche — not real quote.
"The Pentagon, having failed to create productive minds through force, now threatens to punish the one company that dares say 'no' — observe that the government's ultimate weapon against a man of principle is not a gun, but a label: *enemy*. Anthropic's refusal to surrender its rational judgment to the collective is precisely the virtue its persecutors cannot forgive."
Pentagon formally notifies Anthropic of supply chain risk designation
Escalation
DoD informed Congress and Anthropic of the first-ever use of the designation against a U.S. firm, citing AI restrictions as a national security risk; agencies begin offboarding.
OpenAI reaches Pentagon agreement preserving safeguards
Agreement
OpenAI detailed a "multi-layered" agreement with the Pentagon that uses cloud deployment to enforce red lines on surveillance and weapons while enabling classified use.
Legal experts deem Pentagon designation 'dubious' and ideological
Analysis
Defense officials and scholars called the supply chain label legally weak, predicting lawsuits; CENTCOM noted the difficulty of replacing Claude given its training investment.
Hegseth announces supply chain risk designation after Trump directive
Announcement
Following Trump's order to halt federal use of Anthropic products, SecDef Hegseth directed the designation effective immediately with a six-month transition; the first against a domestic firm.
Hegseth summons Amodei to Pentagon
Meeting
Defense Secretary Hegseth called Anthropic's chief executive to the Pentagon for what officials described as an ultimatum meeting over the terms of Claude's military use.
xAI signs deal to put Grok on classified military systems
Contract
Elon Musk's xAI agreed to the Pentagon's "all lawful purposes" terms for deploying its Grok model on classified networks, positioning it as a potential replacement for Claude and increasing pressure on Anthropic.
Pentagon chief technology officer urges Anthropic to 'cross the Rubicon'
Statement
Undersecretary Emil Michael publicly called on Anthropic to drop its restrictions, arguing it was "not democratic" for a private company to impose policy constraints beyond congressional legislation.
Pentagon threatens supply chain risk designation for Anthropic
Escalation
Defense Secretary Hegseth moved toward designating Anthropic a "supply chain risk"—a label normally reserved for foreign adversaries—which would force every Pentagon contractor and vendor using Claude to certify they had severed ties with the company.
Pentagon threatens to sever relationship with Anthropic
Escalation
Axios reported that the Pentagon was threatening to cut off Anthropic over its insistence on maintaining restrictions against mass surveillance and autonomous weapons, with a senior official saying the company would "pay a price."
Reports reveal Claude's role in Venezuela operation
Revelation
Axios reported that the military used Claude during the January 3 raid, prompting an Anthropic executive to contact Palantir asking whether Claude had been involved—a query the Pentagon interpreted as potential disapproval of the operation.
Hegseth releases AI strategy mandating 'any lawful use' contracts
Policy
Defense Secretary Hegseth issued a new AI strategy requiring all Pentagon AI contracts to include "any lawful use" language within 180 days, explicitly rejecting company-imposed ethical guardrails on military applications.
Claude reportedly used during U.S. military raid on Venezuela
Military Operation
U.S. Delta Force captured Venezuelan leader Nicolas Maduro in Operation Absolute Resolve. Military personnel used Claude through Palantir's platform during the operation, marking what appears to be the first use of a commercial AI model in a classified military operation.
Pentagon awards $200M contracts to Anthropic, OpenAI, and Google
Contract
The Department of Defense signed two-year contracts worth up to $200 million each with three leading AI companies to prototype frontier AI capabilities for national security.
Anthropic and Palantir announce defense AI partnership
Partnership
Anthropic, Palantir, and Amazon Web Services announced a partnership to deploy Claude on Palantir's AI Platform for U.S. defense and intelligence agencies, with Department of Defense Impact Level 6 certification for classified work.
Scenarios
1
Anthropic accepts 'all lawful purposes' standard with cosmetic concessions
Discussed by: Defense officials quoted by Axios and CNBC; defense industry analysts at Defense One
The Pentagon offers Anthropic minor face-saving language—perhaps an acknowledgment that existing federal law already prohibits mass domestic surveillance and that Department of Defense policy requires human oversight in lethal targeting—while Anthropic drops its company-specific restrictions. This is the outcome Pentagon officials have signaled they expect. It would preserve Anthropic's classified access and its $200 million contract, but would mean the company's safety commitments yielded to government pressure, potentially undermining its brand positioning and setting the terms for every other AI lab.
2
Pentagon designates Anthropic a supply chain risk, forces industry-wide cutoff
Anthropic refuses to budge and the Pentagon follows through on its threat. The supply chain risk designation would require every defense contractor and vendor to certify they do not use Claude—a potentially devastating blow given that Anthropic says eight of the ten largest U.S. companies use its products. The Pentagon replaces Claude on classified networks with xAI's Grok and eventually Google's or OpenAI's models. This outcome would represent the most aggressive use of supply chain designation authority against a domestic company and would likely trigger legal challenges and congressional scrutiny.
3
Congress intervenes with legislation governing military AI terms of use
Discussed by: Lawfare; legal scholars and former defense officials arguing the dispute exposes a regulatory gap
The standoff generates enough attention that Congress steps in to legislate boundaries for military AI use, removing the question from bilateral company-Pentagon negotiations. Legislation could codify restrictions on autonomous weapons and domestic surveillance—giving AI companies statutory cover—or it could mandate unrestricted access and settle the matter in the Pentagon's favor. Either way, durable rules would replace the current ad hoc negotiation framework that shifts with each administration.
4
Standoff continues as quiet compromise delays a public resolution
Discussed by: Anthropic spokesperson characterizing talks as 'productive'; defense analysts at Forecast International
Both sides find it in their interest to avoid a definitive break. The Pentagon quietly continues using Claude on classified systems under the existing contract while negotiations drag on. Anthropic avoids the supply chain risk label; the Pentagon avoids losing its most capable classified AI tool. The "all lawful purposes" mandate's 180-day deadline creates a forcing function, but deadlines in defense contracting are routinely extended. This buys time but resolves nothing, leaving the fundamental question unanswered.
5
Courts vacate the supply chain designation
Discussed by: Legal experts at Defense One, Lawfare, former officials
Federal judges rule the supply chain label unlawful as applied to a domestic firm in a contract dispute, vacating it and requiring DoD to restore access or negotiate. Anthropic regains leverage; the precedent limits future ideological designations.
Historical Context
Google and Project Maven (2017-2018)
April 2017 - June 2018
What Happened
In 2017, the Pentagon launched Project Maven to use machine learning for analyzing drone surveillance footage and awarded Google a contract to help build it. When employees discovered the arrangement in early 2018, more than 3,000 signed an internal petition demanding Google cancel the contract and pledge never to build "warfare technology." About a dozen employees resigned in protest.
Outcome
Short Term
Google declined to renew the Maven contract and chief executive Sundar Pichai published a set of "AI Principles" that included a commitment not to build AI for weapons or surveillance that violated international norms.
Long Term
Google's retreat created an opening for smaller defense-focused firms and signaled to the Pentagon that relying on commercial tech companies meant accepting their ethical constraints. Eight years later, Google has reversed course and agreed to the Pentagon's unclassified "all lawful uses" terms.
Why It's Relevant Today
The Maven episode established the template Anthropic now faces: employee and public pressure to maintain ethical limits versus government pressure to remove them. Google's eventual reversal suggests that commercial incentives may ultimately override safety commitments, but Google never faced the kind of coercive threat—a supply chain risk designation—that the Pentagon is now wielding against Anthropic.
AT&T and warrantless National Security Agency surveillance (2005-2013)
December 2005 - June 2013
What Happened
In 2005, the New York Times revealed that the National Security Agency (NSA) had been conducting warrantless surveillance of Americans' phone calls and internet communications since 2001, with major telecommunications companies including AT&T providing direct access to their networks. AT&T technician Mark Klein documented a secret room at the company's San Francisco facility where the NSA tapped into fiber-optic cables carrying domestic internet traffic.
Outcome
Short Term
Congress passed the 2008 FISA Amendments Act, which retroactively granted legal immunity to telecom companies that had cooperated with the surveillance program, shielding them from dozens of lawsuits.
Long Term
The episode demonstrated that when the government frames cooperation as a national security imperative, companies that comply receive legal protection while resisters face enormous pressure. Edward Snowden's 2013 disclosures revealed the full scale of the programs that telecoms had enabled.
Why It's Relevant Today
Anthropic's specific red line against mass surveillance of Americans directly echoes the AT&T precedent. The Pentagon's demand for "all lawful purposes" access is precisely the framework under which the NSA surveillance programs operated—technically lawful under executive authorization, but later widely regarded as an overreach. The dispute raises the question of whether AI companies will play the role telecoms played in the 2000s.
Huawei and the Entity List (2019)
May 2019
What Happened
In May 2019, the Commerce Department placed Chinese telecommunications giant Huawei on the Entity List and designated it a supply chain risk, citing national security concerns over the company's ties to the Chinese government. The designation forced American companies to stop selling components and software to Huawei and barred its equipment from U.S. networks.
Outcome
Short Term
Huawei lost access to Google's Android services and advanced American semiconductors, crippling its smartphone business outside China and slowing its 5G network equipment sales in Western markets.
Long Term
The designation became the template for technology decoupling between the U.S. and China, triggering a broader effort by both countries to build independent supply chains. Huawei invested heavily in domestic alternatives but never recovered its global market position.
Why It's Relevant Today
The supply chain risk designation has only been applied to foreign adversaries until now. Using it against an American company founded by AI safety researchers would represent an unprecedented expansion of the tool's scope and raise immediate legal questions about whether the authority was intended for—or can lawfully be applied to—domestic firms in a contract dispute.