Federal Judge Blocks Trump’s Pentagon Ban on Anthropic’s AI Models

A federal judge granted Anthropic a preliminary injunction Thursday, temporarily blocking the Trump administration's Pentagon blacklist of the AI company.

A federal judge granted artificial intelligence company Anthropic a preliminary injunction Thursday, temporarily blocking the Trump administration from enforcing a Pentagon directive that labeled the company a national security threat, a ruling the court called likely illegal retaliation for protected speech.

U.S. District Judge Rita F. Lin of the Northern District of California ordered the administration to pause its designation of Anthropic as a “supply chain risk” and halted President Donald Trump’s directive ordering all federal agencies to immediately cease use of the company’s AI technology.

In her 43-page ruling, Lin wrote that nothing in the governing statute supports “the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”

She further described the Pentagon’s actions as classic illegal First Amendment retaliation, finding that the government appeared to be punishing Anthropic for publicly criticizing its contracting position rather than acting on genuine national security grounds. The order carries a seven-day pause to allow the government time to file an appeal.

The conflict traces back to negotiations between Anthropic and the Defense Department over the deployment of Claude, the company’s flagship AI model, on the Pentagon’s GenAI.mil platform. Talks broke down in late 2025 after the two sides could not agree on terms of use.

Anthropic had signed a $200 million contract with the Pentagon in July 2025 and was the first AI lab to deploy its models across the agency’s classified networks. But the company’s CEO, Dario Amodei, drew a line: Claude would not be used for fully autonomous lethal weapons or for mass surveillance of American citizens.

The Defense Department pushed back, insisting on unrestricted access to the technology for all lawful purposes. Defense Secretary Pete Hegseth publicly declared in February 2026 that the military would cut ties with Anthropic, and Trump amplified the move in a Truth Social post ordering federal agencies to “immediately cease” all use of the company’s products.

In early March, the Pentagon formally designated Anthropic a supply chain risk, a label typically reserved for foreign adversaries and intelligence threats, not U.S. companies. The designation had far-reaching consequences: it required defense contractors including Amazon, Microsoft and Palantir to certify they do not use Claude in any Pentagon-related work.

Anthropic filed two lawsuits on March 9 in the U.S. District Court for the Northern District of California, alleging the administration violated the company’s First Amendment rights and exceeded the legal scope of supply chain risk law. Lawyers for the company said the designation appeared to be the first ever applied to an American firm.

At a hearing Tuesday, Lin pressed government attorneys on why the action went beyond simply ending the contractual relationship. “It looks like an attempt to cripple Anthropic,” she said, questioning why, if the Pentagon’s concern was over how Claude could be used in military operations, it did not simply stop using the product rather than effectively barring the company from the entire federal marketplace.

A government lawyer argued the actions were not retaliatory and were based on Anthropic’s insistence on limiting its technology’s use, not on the company’s decision to go public. Counsel for the Defense Department also argued that Anthropic posed a theoretical future risk if it were to update Claude in a way that undermined national security systems.

Legal analysts say the ruling carries significance beyond the immediate dispute. Jennifer Huddleston, a senior technology policy fellow at the Cato Institute, said the preliminary injunction signals that the court believes Anthropic is likely to succeed on the merits, a high bar for an early-stage ruling. A final verdict on the merits of the case could still be months away.

Following the ruling, Anthropic issued a statement saying it was “grateful to the court for moving swiftly” and that the company remains focused on productive engagement with the federal government.
“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” the company said. The Pentagon did not immediately respond to requests for comment and has previously stated it does not comment on ongoing litigation.


NN Desk
