
Anthropic claims that three Chinese AI firms set up more than 24,000 fraudulent accounts to access its Claude AI models and use them to improve their own systems.
The companies — DeepSeek, Moonshot AI, and MiniMax — reportedly ran more than 16 million interactions with Claude through those accounts, using a technique known as “distillation.” Anthropic accuses the labs of targeting Claude’s most distinctive capabilities: agentic reasoning, tool use, and coding.
The allegations come amid an ongoing debate over the enforcement of export controls on advanced AI chips, a policy intended to slow China’s AI progress.
Distillation is a standard training technique that AI labs apply to their own models to produce smaller, cheaper versions, but rivals can exploit it to effectively copy another lab’s work. Earlier this month, OpenAI sent a letter to House members alleging that DeepSeek used distillation to imitate its models.
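For readers unfamiliar with the mechanics, below is a minimal sketch of the classic distillation loss (in the style of Hinton et al.’s 2015 formulation), offered purely as an illustration of the general technique rather than as Anthropic’s account of the alleged attacks. A smaller “student” model is trained to match a larger “teacher” model’s output distribution. In the API-based scenario Anthropic describes, a rival would instead fine-tune on the teacher’s sampled text outputs, since raw logits are never exposed; all names and values below are hypothetical.

```python
# Illustrative only: the classic logit-based distillation loss. API-based
# distillation of a chat model would instead fine-tune a student on the
# teacher's sampled text, since commercial APIs don't expose logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft teacher targets with the ordinary hard-label loss."""
    # Soften both distributions with a temperature, then match them via KL.
    soft_targets = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_preds = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_preds, soft_targets, log_target=True,
                  reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy example: a batch of 4 examples over a 10-class output.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)  # the teacher is frozen, no gradients
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

The key design point is the temperature: softening the teacher’s distribution exposes its relative preferences across wrong answers, which carries far more signal per example than the hard label alone. That efficiency is what makes distillation attractive, both as a legitimate compression tool and as a shortcut for copying a rival’s model.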
DeepSeek gained attention last year when it released its open source R1 reasoning model, which nearly matched the performance of leading American labs’ models at a fraction of the cost. The company is expected to release its newest model, DeepSeek V4, soon; it reportedly can outperform both Anthropic’s Claude and OpenAI’s ChatGPT at coding.
The scale of the alleged misuse varied by company. Anthropic identified more than 150,000 interactions from DeepSeek aimed at improving basic reasoning and alignment, particularly around censorship-resistant responses to sensitive policy questions.
Moonshot AI logged more than 3.4 million interactions targeting agentic reasoning and tool use, coding and data analysis, the development of computer-use agents, and computer vision. Last month, the company released a new open source model, Kimi K2.5, along with a coding agent.
MiniMax’s 13 million interactions focused on agentic coding and on tool use and orchestration. Anthropic noted that it watched MiniMax redirect nearly half of its traffic to extract capabilities from a newly released Claude model.
Anthropic says it will continue building defenses that make distillation attacks harder to carry out and easier to detect, while calling for “a collaborative approach across the AI industry, cloud service providers, and legislators.”
The distillation attacks come amid an ongoing debate over American chip exports to China. Last month, the Trump administration formally allowed U.S. firms like Nvidia to export advanced AI chips, such as the H200, to China. Critics argue that loosening export restrictions boosts China’s AI computing power at a pivotal moment in the global race for AI supremacy.
Anthropic asserts that the level of extraction undertaken by DeepSeek, MiniMax, and Moonshot “necessitates access to advanced chips.”
“Distillation attacks thereby strengthen the justification for export controls: restricted chip access limits both direct model training and the scale of unlawful distillation,” according to Anthropic’s blog.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch he was not surprised by the attacks.
“It has been evident for a while that part of the reason for the swift advancement of Chinese AI models has been the pilfering via distillation of U.S. leading models. Now we have confirmation,” Alperovitch stated. “This should provide even stronger justifications for refusing to sell any AI chips to any of these [companies], which would only give them further advantages.”
Anthropic added that distillation not only threatens to undermine American AI leadership but could also create national security risks.
“Anthropic and other U.S. companies create systems that deter state and non-state actors from leveraging AI to, for instance, produce bioweapons or engage in harmful cyber activities,” Anthropic’s blog post states. “Models developed through illicit distillation are unlikely to maintain those safeguards, meaning perilous capabilities could spread with many protections completely removed.”
Anthropic also pointed to the risk of authoritarian regimes using advanced AI for “offensive cyber operations, disinformation efforts, and mass surveillance,” a danger that grows if such models are open sourced.
TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.

