Anthropic claims that DeepSeek and two other Chinese AI companies misused its Claude AI models in an attempt to improve their own products. In an announcement on Monday, Anthropic says the "campaigns on an industrial scale" included the creation of about 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as reported earlier by The Wall Street Journal.
The three companies – DeepSeek, MiniMax, and Moonshot – are accused of illicitly "distilling" Claude, or training a smaller AI model based on a more advanced one. Although Anthropic says distillation is a "legitimate training method," it adds that it "can also be used for illicit purposes," including "to acquire powerful capabilities from other laboratories in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently."
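Distillation itself is a standard machine-learning technique. As a rough illustration only – a toy sketch with made-up models, sizes, and data, not a depiction of any company's actual pipeline, which Anthropic alleges ran through large volumes of API exchanges rather than direct access to model internals – the classic form trains a small "student" network to match a larger "teacher" network's softened output distribution:

    # Toy sketch of classic knowledge distillation (all models, sizes, and
    # data here are hypothetical stand-ins, not any company's actual setup).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    VOCAB, T = 100, 2.0  # assumed toy output size and softening temperature

    # A larger "teacher" and a smaller "student"; both map inputs to logits.
    teacher = nn.Sequential(nn.Linear(VOCAB, 256), nn.ReLU(),
                            nn.Linear(256, VOCAB)).eval()
    student = nn.Sequential(nn.Linear(VOCAB, 64), nn.ReLU(),
                            nn.Linear(64, VOCAB))

    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x = torch.randn(32, VOCAB)  # stand-in for a batch of real inputs

    for step in range(200):
        with torch.no_grad():
            teacher_logits = teacher(x)  # student never sees true labels here
        student_logits = student(x)
        # KL divergence between temperature-softened distributions: the core
        # distillation objective (Hinton et al., 2015).
        loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        opt.zero_grad()
        loss.backward()
        opt.step()

The temperature T softens both distributions so the student learns from the teacher's full spread of predictions rather than only its top answer, which is part of why a distilled model can pick up capabilities far more cheaply than training from scratch.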
Anthropic adds that illicitly distilled models are "unlikely" to preserve existing safeguards. "Foreign labs that mine American models could feed these unprotected capabilities into military, intelligence, and surveillance systems — enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," Anthropic wrote.
DeepSeek, which created a buzz in the AI industry with its powerful but more efficient models, held more than 150,000 exchange sessions with Claude and targeted its reasoning capabilities, according to Anthropic. It is also accused of using Claude to generate "safe alternatives to censorship on politically sensitive questions about dissidents, party leaders or authoritarianism." In a letter to lawmakers last week, OpenAI similarly accused DeepSeek of "continued efforts to make free use of capabilities developed by OpenAI and other U.S. frontier laboratories."
Moonshot and MiniMax had more than 3.4 million and 13 million exchanges with Claude, respectively. Anthropic calls on other members of the AI industry, cloud providers, and regulators to address distillation, adding that "restricted access to chips" could limit model training and "the scale of illicit distillation."