The Pentagon’s move to effectively ban Anthropic’s artificial intelligence models “looks like an attempt to cripple” the company, a federal judge stated during a court hearing. Lawyers for Anthropic appeared before Judge Rita Lin in San Francisco federal court, seeking a temporary halt both to the Pentagon’s blacklisting and to an earlier Trump administration directive prohibiting federal agencies from using the AI company’s technology.
Anthropic’s legal team emphasized that a court-ordered injunction would not compel the U.S. government to continue using their models, nor would it prevent them from exploring alternative AI solutions.
During the proceedings, Judge Lin probed both Anthropic and government legal representatives on the specifics of the case. A central concern she raised was whether Anthropic was facing adverse actions due to its public critiques of the government’s contracting positions. “Everyone, including Anthropic, agrees that the Department of Defense is free to cease using Claude and seek a more accommodating AI vendor,” Judge Lin remarked. “I don’t perceive this case as being about that. The core question here, in my view, is whether the government acted unlawfully.”
Judge Lin indicated that she anticipates issuing a ruling on Anthropic’s motion within the coming days. If a preliminary injunction is granted, the AI startup would be permitted to continue its business relationships with government contractors and federal agencies while its lawsuit against the Trump administration proceeds. Without such relief, the company has warned in its filings that it could face billions of dollars in lost business and significant reputational damage.
Earlier in March, the Department of Defense classified Anthropic as a “supply chain risk,” citing purported threats to U.S. national security posed by the company’s technology. This designation, if upheld, would mandate that defense contractors, including major players like Amazon, Microsoft, and Palantir, verify that they are not utilizing Anthropic’s Claude AI in their work with the military.
Representing the U.S. government, attorney Eric Hamilton argued that the Department of Defense’s concerns stemmed from a potential future risk of Anthropic “taking action to sabotage or subvert IT systems.” He elaborated, stating, “What happens if Anthropic installs a kill switch or functionality that alters how it operates? That is an unacceptable risk.”
Judge Lin pressed Hamilton on the criteria for such a designation. “What I’m hearing from you, though, is that it’s sufficient if an IT vendor is unyielding and insists on certain terms and asks inconvenient questions, then it can be designated as a supply chain risk because they might not be trustworthy. That appears to be a rather low threshold.”
Anthropic maintains that there is no legitimate basis for classifying the company as a supply chain risk. Furthermore, the company alleges that it is being subjected to retaliatory measures because it requested that the Department of Defense refrain from using Claude for fully autonomous weapons systems or for mass surveillance of American citizens. The Pentagon, however, denies using the AI models for such purposes.
“This is an unprecedented action taken against an American company,” stated Michael Mongan, Anthropic’s counsel, during the hearing. “It involves a very specific and limited authority that is not applicable in this instance, nor is it a standard response to the concerns articulated by the opposing side.”
Prior to this dispute, Anthropic was among the pioneering AI firms to establish partnerships with numerous federal agencies, as the government sought to rapidly enhance its systems and capabilities with advanced AI technologies. In July, Anthropic secured a $200 million contract with the Pentagon and was the first AI laboratory to deploy its technology across the department’s classified networks.
However, negotiations surrounding Claude’s integration into the Department of Defense’s GenAI.mil platform, which commenced in September, encountered difficulties concerning the military’s intended use of the models. The department has maintained its demand for unrestricted access to the company’s technology for all lawful applications. Hamilton asserted that Anthropic’s actions extended beyond the typical scope of a contractor’s obligations. “Anthropic is not merely being obstinate. It’s not simply refusing contractual terms. Instead, it is raising objections to the DOD regarding its use of its technology in military operations.”
Following the impasse in February, President Donald Trump issued a directive via Truth Social, ordering federal agencies to “immediately cease” all use of Anthropic’s technology. Trump stated, “WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”