Anthropic Secures Preliminary Injunction Against Trump DoD

A federal judge has granted AI startup Anthropic a preliminary injunction against the Trump administration’s blacklisting, pausing the Pentagon’s designation of Anthropic as a supply chain risk. The ruling effectively allows Anthropic to continue its work with federal agencies while litigation proceeds. The judge criticized the administration’s actions as illegal First Amendment retaliation, stating the government’s stance was an “Orwellian notion.” Anthropic expressed gratitude and a commitment to working productively with the government.

Dario Amodei, CEO and co-founder of Anthropic, speaks onstage during the 2025 New York Times Dealbook Summit in New York City.

Michael M. Santiago | Getty Images

A federal judge in San Francisco has granted a significant victory to artificial intelligence startup Anthropic, issuing a preliminary injunction against the Trump administration’s controversial blacklisting of the company. The ruling, delivered by Judge Rita Lin, effectively pauses the Pentagon’s designation of Anthropic as a supply chain risk and President Trump’s directive to federal agencies prohibiting the use of Anthropic’s Claude AI models.

The injunction comes as Anthropic navigates a high-stakes legal battle aimed at reversing the administration’s actions, which the company argues have inflicted substantial monetary and reputational damage. The court’s decision offers a crucial reprieve as the litigation proceeds, with a final verdict potentially months away.

In a statement, Anthropic expressed gratitude for the court’s swift action, noting, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

Judge Lin’s order did not mince words, directly addressing the administration’s rationale. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she wrote. She further characterized the government’s stance as an “Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”

Anthropic’s legal challenge follows a tumultuous period in Washington, D.C., marked by escalating tensions between the Department of Defense (DOD) and the AI firm. In late February, Defense Secretary Pete Hegseth publicly declared Anthropic a “supply chain risk,” a designation typically reserved for foreign adversaries and implying that the company’s technology poses a threat to U.S. national security. This designation was formally communicated to Anthropic via a letter from the DOD earlier this month.

The implications of this designation are far-reaching. It requires that defense contractors, including major players like Amazon, Microsoft, and Palantir, certify that they do not employ Anthropic’s Claude models in their work with the military. This has created significant ripple effects across the defense industrial base, forcing companies to re-evaluate their AI dependencies and procurement strategies.

The Trump administration invoked two distinct legal statutes – 10 U.S.C. § 3252 and 41 U.S.C. § 4713 – to underpin its actions. To effectively challenge these, Anthropic has initiated separate legal proceedings, including a formal review of the DOD’s determination at the U.S. Court of Appeals in Washington.

This legal entanglement gained further momentum shortly after Hegseth’s declaration, when President Trump issued a directive on Truth Social ordering federal agencies to “immediately cease” all use of Anthropic’s technology, citing a six-month phase-out period. Trump’s post declared, “WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”

The administration’s aggressive stance surprised many in Washington who had previously lauded Anthropic’s technological contributions. The company had been notably the first to deploy its advanced AI models across the DOD’s classified networks and was recognized for its seamless integration capabilities with established defense contractors like Palantir. This collaboration was underscored by a significant $200 million contract inked with the Pentagon in July.

However, negotiations regarding Claude’s deployment on the DOD’s GenAI.mil platform, which commenced in September, encountered a significant roadblock. The core of the disagreement centered on data access and ethical considerations. The DOD sought unfettered access to Anthropic’s models for all lawful purposes, while Anthropic insisted on assurances that its technology would not be repurposed for fully autonomous weapons systems or domestic mass surveillance. The inability to bridge this gap led directly to the current legal confrontation.

“Everyone, including Anthropic, agrees that the Department of [Defense] is free to stop using Claude and look for a more permissive AI vendor,” Judge Lin articulated during the hearing. “I don’t see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law.” This framing highlights the central legal question: not whether the DOD can choose its vendors, but whether its actions against Anthropic were legally sound and free from retaliatory motives.

Judge says Pentagon actions appear aimed at crippling Anthropic

Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/20179.html

