Altman: OpenAI’s Defense Deal Was Opportunistic and Sloppy

OpenAI CEO Sam Altman admitted the company rushed its AI deal with the U.S. Department of Defense and announced revisions barring domestic surveillance of U.S. persons. The amended agreement explicitly states that OpenAI’s AI will not be used for that purpose, nor by intelligence agencies such as the NSA. Altman acknowledged the technology’s current limitations and the need for safety safeguards, conceding that the deal appeared rushed. The admission follows controversy over the use of Anthropic’s AI in military operations and broader concerns about AI’s role in national security, underscoring the fraught relationship between AI development, government, and public trust.

OpenAI CEO Sam Altman acknowledged that the company “shouldn’t have rushed” its recent agreement with the U.S. Department of Defense, announcing revisions to the contract amidst growing concerns about AI’s role in national security and domestic surveillance.

In a post on X, formerly Twitter, Altman shared what he described as a repost of an internal memo detailing amendments to the Defense Department agreement. Key among these changes is the explicit stipulation that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” This clarification comes on the heels of a Friday announcement regarding the new deal, which occurred shortly after President Donald Trump directed federal agencies to halt the use of AI tools from rival company Anthropic, and just hours before U.S. military actions in Iran.

Altman further stated that the Defense Department has affirmed that OpenAI’s technologies, including those powering ChatGPT, would not be utilized by intelligence agencies like the NSA. He candidly admitted, “There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,” adding that OpenAI would collaborate with the Pentagon on implementing robust technical safeguards.

The OpenAI chief executive conceded that rushing the deal was a misstep. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” he remarked.

This acknowledgment follows a public dispute between Anthropic and Washington regarding safeguards for its Claude AI systems, which concluded without a resolution. Earlier, Defense Secretary Pete Hegseth had indicated that Anthropic would be classified as a supply-chain threat. Last year, Anthropic became the first AI firm to deploy its models within the Defense Department’s classified network. Subsequently, the company sought assurances that its tools would not be employed for domestic surveillance or in the development of autonomous weapons systems without human oversight.

The controversy gained traction after reports emerged that Anthropic’s Claude AI had been involved in a U.S. military operation in Venezuela in January, a use case that Anthropic did not publicly object to at the time.

The timing of OpenAI’s agreement with the Pentagon, struck directly after talks with Anthropic faltered, has raised questions. While Altman had previously told OpenAI employees that the company shared the same “red lines” as Anthropic, he later posted that the Defense Department had agreed to OpenAI’s restrictions. Why the two companies reached different outcomes remains unclear, though government officials have reportedly criticized Anthropic in recent months for what they perceive as an excessive focus on AI safety.

The rapid progression of OpenAI’s deal triggered considerable online backlash, with reports indicating a significant surge in ChatGPT uninstalls from app stores shortly after the Department of Defense agreement was announced, and a corresponding rise in downloads of Claude.

In his X post, Altman also addressed the wider controversy, stating, “In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we’ve agreed to.”

Anthropic, founded in 2021 by former OpenAI researchers including Dario Amodei, has positioned itself as a “safety-first” alternative in the rapidly evolving AI landscape. This latest development highlights the complex and often fraught interplay between cutting-edge AI development, government interests, and public perception.

Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/19604.html
