Anthropic Faces Pentagon Scrutiny Over AI Model Usage Restrictions
The Department of Defense (DOD) is reportedly reviewing its relationship with AI startup Anthropic, signaling potential friction over the terms of use for its advanced artificial intelligence models. A Pentagon spokesperson confirmed to CNBC that Anthropic’s work with the agency is “under review,” citing disagreements regarding the deployment of its AI technologies within defense operations.
Last year, the five-year-old AI firm secured a $200 million contract with the DOD. As of February, Anthropic was the only AI company to have integrated its models into the agency’s classified networks and to provide tailored AI solutions to national security customers. Talks over the terms of a future contract, however, have run into obstacles.
Emil Michael, the undersecretary of defense for research and engineering, indicated at a recent summit that negotiations have stalled. According to reports, Anthropic is seeking assurances that its AI models will not be utilized for autonomous weapons systems or for widespread surveillance of American citizens. In contrast, the DOD’s position appears to favor the unrestricted application of Anthropic’s models across “all lawful use cases.”
“If any one company doesn’t want to accommodate that, that’s a problem for us,” Michael stated. “It could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we’re prevented from using it.” The dispute is the latest turn in Anthropic’s complicated relationship with the government, which has also drawn public criticism from administration officials.
David Sacks, a prominent venture capitalist and the administration’s AI and crypto czar, has previously voiced concerns, accusing Anthropic of promoting “woke AI” due to its advocacy for AI regulation.
An Anthropic spokesperson expressed optimism, stating that the company is engaged in “productive conversations, in good faith” with the DOD to navigate these intricate issues. The spokesperson added, “Anthropic is committed to using frontier AI in support of U.S. national security.”
Anthropic’s competitors, including OpenAI, Google, and xAI, also received DOD contract awards of up to $200 million last year. These companies have reportedly agreed to allow the DOD to employ their models for all lawful purposes within the military’s unclassified systems, with at least one entity consenting to usage across “all systems,” according to a senior DOD official who spoke on condition of anonymity due to the confidential nature of the negotiations.
Should Anthropic ultimately decline to accept the DOD’s terms, the agency could designate the company a “supply chain risk.” That designation would require the DOD’s vendors and contractors to formally certify that they do not use Anthropic’s models. The classification, typically reserved for foreign adversaries, would be a serious setback for Anthropic, which was founded in 2021 by former OpenAI researchers and executives and is known for its Claude family of AI models.
Earlier this month, Anthropic announced it had closed a $30 billion funding round at a $380 billion valuation, more than double the figure from its previous raise in September. The backing underscores the company’s perceived value in a rapidly evolving AI landscape, even as it navigates high-stakes negotiations with a key government partner.
Original article by Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/18929.html