Dario Amodei, co-founder and chief executive officer of Anthropic, at the World Economic Forum in 2025.
Stefan Wermuth | Bloomberg | Getty Images
Artificial intelligence startup Anthropic finds itself navigating a complex landscape, striving to maintain its competitive edge against industry behemoth OpenAI, which is fueled by substantial backing from Microsoft and Nvidia. However, Anthropic is currently facing a potentially stiffer challenge: mounting scrutiny from the U.S. government.
David Sacks, the venture capitalist now serving as President Donald Trump’s AI and crypto czar, has openly criticized Anthropic, alleging that the company is orchestrating a campaign to promote a regulatory framework for AI that aligns with “the Left’s vision.”
Sacks’s recent broadside came after Anthropic co-founder Jack Clark, the AI startup’s head of policy, authored an essay titled “Technological Optimism and Appropriate Fear.” Sacks took to X to voice his concerns.
“Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering,” Sacks wrote.
By contrast, OpenAI has cultivated a close rapport with the White House since the start of Trump’s second term. A day after the inauguration, on January 21, President Trump unveiled Stargate, a joint venture with OpenAI, Oracle, and SoftBank committing billions of dollars to fortify U.S. AI infrastructure. The collaboration signals a strong endorsement of OpenAI’s vision and capabilities at the highest levels of government.
Sacks’s criticisms strike at the heart of Anthropic’s core mission and raison d’être. Siblings Dario and Daniela Amodei departed OpenAI in late 2020 to establish Anthropic with a singular focus: building safer AI systems. This departure stemmed from OpenAI’s shift towards commercialization, driven by significant funding from Microsoft, a move that the Amodeis believed risked compromising the emphasis on safety.
Today, OpenAI and Anthropic stand as the two most valuable private AI entities in the United States. OpenAI commands a valuation of approximately $500 billion, while Anthropic is valued at around $183 billion. OpenAI dominates the consumer AI space with its widespread ChatGPT and Sora applications, whereas Anthropic’s Claude models are gaining traction within the enterprise sector, lauded for their robustness and dependability in complex data analysis.
However, the two companies diverge significantly on the subject of AI regulation. OpenAI has advocated for a lighter regulatory touch, arguing that excessive restrictions could stifle innovation. Anthropic, in contrast, has opposed aspects of the Trump administration’s efforts to curtail protections.
Specifically, Anthropic has resisted federal attempts to preempt state-level AI regulation, particularly a Trump-backed measure that would have imposed a 10-year moratorium on such rules. This provision was ultimately dropped from the “Big Beautiful Bill.” Subsequently, Anthropic endorsed California’s SB 53, a bill mandating transparency and safety disclosures from AI companies – a clear divergence from the administration’s stance.
“SB 53’s transparency requirements will have an important impact on frontier AI safety,” Anthropic stated in a blog post on September 8th. “Without it, labs with increasingly powerful models could face growing incentives to dial back their own safety and disclosure programs in order to compete.”
Neither Anthropic nor Sacks responded to requests for comment.
U.S. President Donald Trump sits next to crypto czar David Sacks at the White House Crypto Summit in Washington, D.C., March 7, 2025.
Evelyn Hockstein | Reuters
For Sacks, the imperative is to accelerate AI innovation to ensure U.S. supremacy in the global AI race, particularly against China. He sees the rapid development of AI as a matter of national security and economic competitiveness.
“The U.S. is currently in an AI race, and our chief global competition is China,” Sacks emphasized at Salesforce’s Dreamforce conference in San Francisco. “They’re the only other country that has the talent, the resources, and the technology expertise to basically beat us in AI.” He has repeatedly stressed the need for policies that foster innovation and prevent regulatory hurdles from slowing down progress.
However, Sacks has vehemently denied any intention to undermine Anthropic in his pursuit of advancing U.S. AI capabilities.
In a post on X, Sacks refuted a Bloomberg story suggesting his comments were linked to increased federal scrutiny of Anthropic.
“Nothing could be further from the truth,” he asserted. “Just a couple of months ago, the White House approved Anthropic’s Claude app to be offered to all branches of government through the GSA App Store.” This approval, he argues, demonstrates that Anthropic is not being unfairly targeted.
Instead, Sacks contends that Anthropic is strategically portraying itself as a political underdog, positioning its leadership as principled defenders of public safety while framing any form of criticism as partisan attacks.
“It has been Anthropic’s government affairs and media strategy to position itself consistently as a foe of the Trump administration,” Sacks said. “But don’t whine to the media that you’re being ‘targeted’ when all we’ve done is articulate a policy disagreement.”
Sacks cites Dario Amodei’s past comparison of Trump to a “feudal warlord” during the 2024 election and Amodei’s public support for Kamala Harris’s presidential campaign. He also points to Anthropic’s op-eds opposing key elements of the Trump administration’s AI policy agenda, including the proposed state-level regulation moratorium and aspects of the administration’s Middle East and chip export strategies. The hiring of former Biden-era officials to lead Anthropic’s government relations team further fuels Sacks’s suspicion.
Clark’s essay, in particular, drew Sacks’s ire, specifically the warnings about the potentially transformative and destabilizing power of AI.
“My own experience is that as these AI systems get smarter and smarter, they develop more and more complicated goals. When these goals aren’t absolutely aligned with both our preferences and the right context, the AI systems will behave strangely,” Clark wrote. “Another reason for my fear is I can see a path to these systems starting to design their successors, albeit in a very early form.”
Sacks views such concerns as “fear-mongering” that stifles vital innovation and enables regulatory capture by rent-seeking firms and lobbyists.
“It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem,” Sacks argued on X.
Anthropic has also notably refrained from the gestures many other tech giants have made to appease the Trump administration.
Executives from Meta, OpenAI, and Nvidia have actively courted Trump and his allies, participating in White House dinners, pledging vast sums to U.S. infrastructure projects, and softening their public stances. Amodei, however, was conspicuously absent from a recent White House gathering attended by numerous industry leaders.
Despite the apparent tensions, Anthropic maintains significant federal contracts, including a $200 million agreement with the Department of Defense, and access to federal agencies through the General Services Administration. Furthermore, it recently established a national security advisory council to align its work with U.S. interests and has made a version of its Claude model available to government customers for $1 per year.
Adding another layer to the narrative, Keith Rabois, a prominent Republican tech investor whose husband serves in the Trump administration, recently voiced his own criticism of Anthropic.
“If Anthropic actually believed their rhetoric about safety, they can always shut down the company,” Rabois wrote on X. “And lobby then.” This statement underscores the intensity of the debate surrounding AI safety and regulation within the tech community and the broader political landscape.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/11200.html