After OpenAI, Anthropic Flags Chinese AI Firms Over Distillation

After OpenAI raised similar concerns, Anthropic said it detected and blocked attempts by Chinese AI firms to extract knowledge from its Claude models through distillation, as U.S. debates over AI chip exports intensify.

Anthropic said it identified patterns of automated queries designed to replicate aspects of Claude’s capabilities. The company described these efforts as “distillation attacks,” a technique where developers use outputs from a more advanced AI system to train another model.
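In broad strokes, distillation works by pairing prompts with a stronger "teacher" model's outputs and using those pairs as fine-tuning data for a smaller "student" model. The sketch below illustrates only the data-collection step; the `query_teacher` function and the chat-style record format are illustrative assumptions, not any lab's actual pipeline.

```python
import json

# Hypothetical stand-in for a call to a stronger "teacher" model's API.
# In a real distillation pipeline this would hit a remote endpoint; here
# it is stubbed so the sketch stays self-contained and runnable.
def query_teacher(prompt: str) -> str:
    return f"Teacher answer to: {prompt}"

def build_distillation_dataset(prompts):
    """Pair prompts with teacher outputs in a chat-style fine-tuning format."""
    records = []
    for prompt in prompts:
        records.append({
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": query_teacher(prompt)},
            ]
        })
    return records

prompts = ["Explain binary search.", "Write a quicksort in Python."]
dataset = build_distillation_dataset(prompts)
print(json.dumps(dataset[0], indent=2))
```

Run at scale over millions of prompts, this kind of harvesting is what distinguishes a "distillation attack" from ordinary API use: the goal is a training corpus, not individual answers.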

In a blog post, Anthropic said it has strengthened its detection systems to monitor suspicious usage patterns and prevent large-scale extraction of model outputs. The company did not publicly release detailed technical evidence but said it has taken steps to block activity it believes was aimed at copying its models.
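Anthropic has not published how its detection works, but one simple signal for large-scale extraction is sheer query volume per account. Purely as an illustration, and not a description of Anthropic's system, a crude volume-based flag might look like this; the threshold, window, and log fields are all assumptions.

```python
from collections import defaultdict

def flag_high_volume_accounts(request_log, window_seconds=3600, threshold=1000):
    """Flag accounts whose request count in any time window exceeds a threshold.

    request_log: iterable of (account_id, unix_timestamp) pairs.
    Returns a sorted list of flagged account IDs. Real systems would combine
    many signals (prompt similarity, metadata, traffic shape), not volume alone.
    """
    buckets = defaultdict(int)  # (account, window index) -> request count
    for account, timestamp in request_log:
        buckets[(account, int(timestamp // window_seconds))] += 1
    return sorted({acct for (acct, _), count in buckets.items() if count > threshold})

# acct_a sends 1,500 requests in one hour; acct_b sends 50.
log = [("acct_a", t) for t in range(1500)] + [("acct_b", t) for t in range(50)]
print(flag_high_volume_accounts(log))  # ['acct_a']
```

A volume threshold alone would catch the 13 million-exchange campaigns described below, but distinguishing extraction from heavy legitimate use is what makes production detection hard.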

Anthropic named several Chinese AI labs in its disclosure, including DeepSeek, Moonshot AI and MiniMax. According to the company, MiniMax accounted for more than 13 million exchanges targeting agentic coding and tool orchestration. Moonshot AI was linked to more than 3.4 million exchanges focused on reasoning, coding and computer-use agents. DeepSeek, Anthropic said, conducted over 150,000 exchanges, including prompts designed to extract reasoning traces and generate step-by-step explanations.

Anthropic claimed some prompts asked Claude to articulate its internal reasoning behind completed responses, which the company described as an attempt to generate chain-of-thought training data at scale. It also said it observed attempts to create censorship-safe alternatives to politically sensitive queries. The company stated that it attributed the campaigns through request metadata and traffic analysis.

The issue comes as U.S. policymakers debate tighter export controls on advanced AI chips to China. According to reporting by CNBC and the Financial Times, both Anthropic and OpenAI have expressed concerns that some Chinese AI firms may be attempting to use Western-developed models to accelerate their own progress.

The broader concern is that model distillation could allow competitors to narrow capability gaps without having the same level of access to advanced computing hardware. As AI systems become more capable and commercially valuable, companies are increasing investments in model security and monitoring.

Anthropic said it will continue improving safeguards to protect its systems, while emphasizing that responsible AI development requires both innovation and strong protections against misuse.

The developments highlight growing geopolitical tensions around artificial intelligence, particularly as governments weigh national security considerations alongside commercial competition.

NN Desk