Anthropic Alleges Chinese AI Firms Engaged in Distillation Campaigns

U.S. AI firm Anthropic accuses Chinese companies DeepSeek, Moonshot AI, and MiniMax of coordinated “distillation attacks.” The firms allegedly used tens of thousands of fake accounts to extract proprietary information from Anthropic’s Claude model, aiming to quickly replicate its capabilities. This follows similar accusations against DeepSeek by OpenAI, raising national security concerns over potential misuse of advanced AI by authoritarian regimes.

## AI Race Heats Up: Anthropic Accuses Chinese Firms of Coordinated “Distillation Attacks”

Anthropic, a leading U.S. artificial intelligence firm, has leveled accusations against three Chinese AI companies, alleging coordinated campaigns to extract proprietary information from its advanced Claude language model. This move places Anthropic in a growing cohort of American tech giants sounding the alarm over what they describe as sophisticated knowledge-transfer tactics by Chinese competitors.

In a detailed statement, Anthropic identified DeepSeek, Moonshot AI, and MiniMax as the entities engaged in these “distillation attack” efforts. The companies are accused of inundating Claude with a massive volume of specially crafted prompts, designed to systematically siphon its capabilities. This technique, known as distillation, allows smaller, less resource-intensive AI models to effectively learn from and mimic the performance of larger, more advanced models. While distillation is a common practice within the AI development community for creating more efficient versions of existing models, Anthropic argues that its use by these Chinese firms constitutes an unfair competitive advantage, enabling rivals to acquire powerful functionalities in a fraction of the time and cost of independent development.
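To make the technique concrete: distillation trains a smaller “student” model to match a larger “teacher” model’s output distributions rather than hard ground-truth labels. The sketch below is a minimal, hypothetical illustration in pure Python of the core distillation loss; it is not Anthropic’s or any accused firm’s method. Real pipelines use ML frameworks and, in the campaigns alleged here, would work from API responses rather than direct access to a teacher’s logits.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing the teacher's relative preferences
    # among classes (the "dark knowledge" a student learns from).
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and
    # the student's: minimizing this trains the student to mimic the
    # teacher's outputs instead of hard labels.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# A student whose logits track the teacher's incurs a lower loss
# than one whose preferences are reversed.
teacher = [4.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [4.0, 1.0, 0.2])
misaligned = distillation_loss(teacher, [0.2, 1.0, 4.0])
assert aligned < misaligned
```

In the scenario Anthropic describes, the “teacher signal” would be Claude’s text responses to crafted prompts, used either as supervised fine-tuning data or as feedback for reinforcement learning, rather than the logit-level matching shown above.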

Despite Anthropic’s service restrictions prohibiting commercial access to Claude within China, the accused firms allegedly circumvented these limitations, reportedly using commercial proxy services to operate tens of thousands of Claude accounts simultaneously. Once access was secured, the firms systematically generated millions of carefully designed prompts to extract specific capabilities from Claude, then used the gathered responses either to directly train their own models or to fuel reinforcement learning processes. Anthropic estimates that these efforts amounted to more than 16 million exchanges with Claude across approximately 24,000 fraudulently created accounts, with MiniMax the most prolific contributor at over 13 million exchanges.

This is not an isolated incident. Earlier this month, OpenAI, another prominent U.S. AI firm, formally alerted U.S. lawmakers to similar observed activities by DeepSeek, suggesting attempts to distill frontier models using novel and obfuscated methods. Evidence of such practices by Chinese firms has been surfacing since early last year, with initial reports in January 2025 highlighting the striking similarities between China’s DeepSeek model and OpenAI’s ChatGPT, according to internal sources cited by the Financial Times.

The implications of these alleged distillation campaigns extend beyond competitive concerns. Both Anthropic and OpenAI have framed these activities as potential national security threats. The potential for authoritarian governments to leverage advanced AI for offensive cyber operations, disinformation campaigns, and mass surveillance is a significant worry. This narrative gains further traction with recent reports indicating that DeepSeek may have acquired and utilized Nvidia’s high-end Blackwell chip for training its AI models, potentially in violation of U.S. export controls.

These developments occur amid increasing anxiety within the U.S. administration regarding China’s rapid advancements in artificial intelligence, particularly when those gains appear to be fueled by American-developed systems. In response to these geopolitical and technological shifts, the White House recently announced the establishment of a “Tech Corps” within the Peace Corps, an initiative aimed at promoting American AI interests globally and assisting partner nations in adopting advanced AI systems.

The situation raises complex questions about the balance between fostering AI innovation, maintaining competitive advantages, and mitigating potential national security risks. While the U.S. firms highlight the security implications and the potential for misuse of advanced AI capabilities, some observers note the inherent irony, given that these same U.S. companies also employ distillation techniques to enhance their own product offerings. Furthermore, Anthropic’s consistent advocacy for stricter export controls on advanced AI chips to China, framed as a national security priority, suggests that these accusations could also serve to bolster arguments for enhanced regulatory measures. The ongoing technological arms race in AI continues to unfold, with profound implications for global economic and security landscapes.

Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/19253.html
