Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports


Anthropic alleges that three Chinese AI companies created more than 24,000 fake accounts to use its Claude AI model to improve their own models.

The labs — DeepSeek, Moonshot AI, and MiniMax — allegedly conducted more than 16 million exchanges with Claude through those accounts, using a technique called “distillation.” Anthropic said the labs “targeted Claude’s most differentiated abilities: active thinking, tool use, and programming.”

The accusations come amid debate over how stringent export controls on advanced AI chips should be — a policy aimed at curbing AI development in China.

Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions — but competitors can also use it to essentially copy another lab’s homework. OpenAI sent a memo to House lawmakers earlier this month accusing DeepSeek of using distillation to mimic its products.
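To make the concept concrete, here is a minimal sketch of the core idea behind distillation, using the classic soft-target formulation: a “student” model is trained to match the “teacher” model’s output probability distribution rather than hard labels. This is an illustrative toy example only — the function names and logit values are invented, and none of this reflects Anthropic’s systems or the labs’ actual pipelines.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution, softened by temperature."""
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened outputs to the student's.

    Minimizing this pushes the student's output distribution toward the
    teacher's, transferring behavior without access to the teacher's weights —
    only its outputs (e.g., API responses) are needed.
    """
    p = softmax(teacher_logits, temperature)   # teacher "soft targets"
    q = softmax(student_logits, temperature)
    return float(np.sum(p * np.log(p / q)))

# Hypothetical logits for a single prediction:
teacher      = np.array([4.0, 1.0, 0.5])
good_student = np.array([3.8, 1.1, 0.4])   # closely mimics the teacher
bad_student  = np.array([0.5, 4.0, 1.0])   # disagrees with the teacher

# The student that mimics the teacher incurs a much lower loss,
# which is exactly the gradient signal distillation training exploits.
assert distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student)
```

In a distillation “attack,” the teacher’s outputs come from millions of API queries to a rival’s model — which is why Anthropic counts exchanges, not downloads.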

DeepSeek first made waves a year ago when it released its open-source R1 reasoning model, which nearly matched the performance of US frontier labs at a fraction of the cost. DeepSeek is expected to soon release its latest model, DeepSeek V4, which is said to outperform Anthropic’s Claude and OpenAI’s ChatGPT at coding.

The attacks varied in scope. Anthropic tracked more than 150,000 DeepSeek exchanges that appeared aimed at improving the model’s underlying reasoning and alignment, specifically around censored, safe responses to policy-sensitive queries.

Moonshot AI logged over 3.4 million exchanges targeting logical reasoning and tool use, coding and data analysis, computer-agent development, and computer vision. The company last month released its new open-source Kimi K2.5 model and coding agent.


MiniMax’s 13 million exchanges targeted agentic coding, tool use, and orchestration. Anthropic said it was able to observe MiniMax in action as it redirected nearly half of its traffic to pull capabilities from the latest Claude model when it launched.

Anthropic says it will continue to invest in defenses that make distillation attacks harder to carry out and easier to identify, but it is calling for a “coordinated response across the AI industry, cloud providers, and policymakers.”

The distillation attacks come at a time when US chip exports to China remain hotly debated. Last month, the Trump administration officially allowed US companies such as Nvidia to export advanced AI chips (such as the H200) to China. Critics argue that this easing of export controls increases China’s AI computing capacity at a critical moment in the global race for AI dominance.

Anthropic says the scale of extraction performed by DeepSeek, MiniMax and Moonshot “requires access to advanced chips.”

“Distillation attacks thus reinforce the rationale for export controls: restricting access to chips limits live model training and the volume of illicit distillation,” according to the Anthropic blog.

Dmitri Alperovitch, head of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch he’s not surprised to see these attacks.

“It has been clear for some time that part of the reason for the rapid progress of Chinese AI models was theft by distillation of US frontier models,” Alperovitch said. “Now we know this as a fact. This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would benefit them even more.”

Anthropic also said that distillation not only threatens to undermine US AI dominance, but may also create national security risks.

“Anthropic and other U.S. companies are building systems that prevent state and non-state actors from using artificial intelligence, for example, to develop biological weapons or carry out malicious cyber activities,” Anthropic’s blog post said. “Models created through illicit distillation are unlikely to retain those safeguards, meaning dangerous capabilities can proliferate with many protections completely removed.”

Anthropic noted that authoritarian governments are deploying frontier AI for things like “offensive cyber operations, disinformation campaigns, and mass surveillance” — a risk that is compounded if these models are open source.

TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.
