© 2026 NervNow™. All rights reserved.

Anthropic Refuses Pentagon Pressure to Remove AI Safeguards, Vows to Stay Until the Door Closes
The AI company stands firm against Department of War demands to enable mass domestic surveillance and fully autonomous weapons, even as threats of expulsion and a never-before-used national security designation loom.

A high-stakes confrontation between Silicon Valley and the Pentagon has broken into the open. Anthropic, the San Francisco-based artificial intelligence company behind the Claude AI models, confirmed this week that the Department of War has threatened to eject it from classified military networks unless it agrees to remove two long-standing ethical guardrails: restrictions on enabling mass domestic surveillance and on fully autonomous weapons systems.
Dario Amodei, Anthropic’s chief executive and co-founder, made the standoff public in a rare statement posted to the company’s website on Wednesday, laying out in stark detail the demands made and the reasoning for refusal. The statement marks one of the most significant public conflicts between a frontier AI developer and the United States military establishment since large language models became embedded in national security infrastructure.
THE TWO SAFEGUARDS AT ISSUE
- Mass Domestic Surveillance: Anthropic refuses to enable AI-driven aggregation of Americans’ location data, web browsing history, and associations at scale, even from publicly available sources, calling it incompatible with democratic values and a novel threat to civil liberties that existing law has not yet addressed.
- Fully Autonomous Weapons: The company declines to power weapons systems that remove humans entirely from targeting and engagement decisions, citing the unreliability of today’s frontier AI and the absence of adequate oversight guardrails. It has offered to collaborate on R&D to reach that reliability threshold, an offer the Department has not accepted.
The two exceptions, Amodei stressed, have never been written into Anthropic’s contracts with the Department of War. They are not new restrictions imposed in response to political pressure; they are boundaries that have existed from the outset of the government relationship. And yet, the Department has now signaled that maintaining those limits is incompatible with continued partnership.
Amodei opened his statement by underlining Anthropic’s deep commitment to American defense. The company was, by his account, the first frontier AI developer to deploy models on classified government networks, the first to work with the National Laboratories, and the first to build custom AI models for national security customers. Claude is described as being in extensive use across the Department and other intelligence agencies for intelligence analysis, cyber operations, operational planning, and simulation, all without the two disputed capabilities.
Anthropic has also reportedly absorbed significant financial costs to advance national security priorities, including forgoing several hundred million dollars in potential revenue by cutting off access to Claude for firms linked to the Chinese Communist Party, some of which the Department of War itself has designated as Chinese Military Companies.
“But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters, with our two requested safeguards in place.”
— Dario Amodei, CEO, Anthropic · February 26, 2026
The pressure campaign from the Department, according to Amodei’s statement, has escalated beyond contract negotiations. The Pentagon has threatened to designate Anthropic a “supply chain risk,” a classification historically reserved for foreign adversaries, while simultaneously invoking the Defense Production Act to compel the removal of the safeguards. Amodei described these two threats as inherently contradictory: one brands Anthropic a security danger; the other frames Claude as indispensable to national security.
Despite the escalating pressure, Anthropic’s position has not shifted. The company said it cannot “in good conscience” comply with the Department’s request, even as it acknowledged the government’s sovereign right to choose its own contractors. Should the Department proceed with an expulsion, Amodei said Anthropic would facilitate a smooth transition to another provider to avoid disruption to active military operations, and would keep its models available under the terms it has proposed for as long as needed.
The confrontation arrives at a moment when the United States military is accelerating its adoption of AI, and when definitions of acceptable AI use in conflict zones and at home remain deeply unsettled. Congress has raised bipartisan concern about warrantless AI-driven mass surveillance, while legal frameworks governing autonomous lethal systems lag years behind the technology’s capabilities.
Analysts watching the standoff noted that Anthropic’s public disclosure of the Pentagon’s demands was itself a calculated move, an attempt to shift the dispute from a contract renegotiation to a public debate about the role of AI companies in defining ethical boundaries for state power. Whether it succeeds in prompting the Department to reconsider, or instead accelerates Anthropic’s removal from military systems, may shape the precedent for how other AI companies navigate similar demands in the years ahead.