Artificial intelligence firm Anthropic has revealed that three China-based AI companies allegedly conducted large-scale extraction campaigns targeting its Claude language model. According to the company, the activity involved millions of automated interactions designed to replicate Claude's advanced capabilities.
The organizations named in the disclosure are DeepSeek, Moonshot AI, and MiniMax. Anthropic claims the coordinated campaigns violated its terms of service and regional access restrictions.
16 Million Queries Through Fraudulent Accounts
Anthropic stated that more than 16 million query exchanges were generated against its large language model using approximately 24,000 fraudulent accounts. The campaigns reportedly relied on commercial proxy networks to bypass safeguards and distribute traffic across a wide infrastructure base.
The company emphasized that its services are restricted in China due to legal, regulatory, and security concerns. Despite these restrictions, the activity allegedly persisted over an extended period using sophisticated traffic masking techniques.
What Is Model Distillation and Why It Matters
Model distillation is a recognized AI development method in which a smaller or less capable system is trained using outputs from a more advanced model. When conducted internally, it enables companies to create efficient and cost-effective versions of their own frontier models.
However, Anthropic argues that competitors extracting outputs at scale to replicate proprietary capabilities represents an abuse of the technique. According to the company, such practices allow rivals to shortcut research timelines and reduce development costs significantly.
Anthropic further warned that improperly distilled systems may lose built-in safeguards. This, it said, could increase risks related to national security, cyber operations, surveillance technologies, and misinformation campaigns.
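At its core, distillation trains a "student" model to reproduce a "teacher" model's full output distribution rather than just its final answers. The toy sketch below illustrates the idea with made-up logits; it is not Anthropic's pipeline or any lab's actual training code.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, optionally softened."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened outputs.

    Minimizing this pushes the student to imitate the teacher's entire
    output distribution -- including its relative confidence across
    wrong answers -- not merely its top prediction.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

teacher = [4.0, 1.5, 0.5]      # hypothetical teacher logits over 3 classes
aligned = [4.1, 1.4, 0.6]      # a student that already mimics the teacher
divergent = [0.5, 4.0, 1.5]    # a student that disagrees with the teacher

# Imitating the teacher yields a lower distillation loss.
assert distillation_loss(teacher, aligned) < distillation_loss(teacher, divergent)
```

The temperature parameter softens the teacher's distribution so the student also learns from the teacher's secondary preferences, which is why outputs harvested at scale are valuable training data.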
Capabilities Targeted by Each AI Lab
Anthropic detailed the focus areas of each alleged campaign:
- DeepSeek reportedly concentrated on Claude's reasoning functions, rubric-based evaluation tasks, and generation of censorship-safe alternatives to politically sensitive topics. The campaign involved more than 150,000 exchanges.
- Moonshot AI allegedly targeted agentic reasoning, tool integration, advanced coding features, computer use agents, and computer vision capabilities across more than 3.4 million interactions.
- MiniMax was said to have focused on agentic coding and tool usage functions, generating over 13 million exchanges.
Anthropic noted that the structure and repetition of prompts differed from standard user behavior, suggesting systematic capability extraction rather than routine experimentation.
Proxy Networks and Hydra Architecture
The campaigns reportedly relied on commercial proxy services that resell API access to leading AI models. These networks use what Anthropic described as hydra-style cluster infrastructures: thousands of fraudulent accounts are managed simultaneously, enabling traffic to be rotated and masked.
In one instance, a single proxy system was said to operate more than 20,000 fraudulent accounts at the same time. By blending extraction traffic with unrelated customer activity, detection became more difficult.
When accounts were banned, replacements were quickly deployed, ensuring operational continuity.
Defensive Measures and Industry Response
To counter these threats, Anthropic announced it has implemented behavioral fingerprinting systems and detection classifiers designed to identify suspicious traffic patterns. The company also strengthened identity verification for educational accounts, startup programs, and security research access.
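Anthropic has not published how its behavioral fingerprinting works. As one deliberately simplified illustration of the general idea, scripted extraction clients often exhibit machine-regular request timing, while human traffic is bursty. The sketch below flags accounts whose inter-request intervals show implausibly low jitter; a real system would combine many such signals, and the threshold here is an arbitrary assumption.

```python
from statistics import mean, pstdev

def looks_automated(intervals_s, cv_threshold=0.1):
    """Flag a stream of inter-request intervals (in seconds) whose timing
    is suspiciously regular -- one crude signal of scripted traffic.

    cv_threshold is the coefficient of variation (stdev / mean) below
    which human-like jitter is implausible; the value is illustrative.
    """
    if len(intervals_s) < 2 or mean(intervals_s) == 0:
        return False  # not enough data to judge
    cv = pstdev(intervals_s) / mean(intervals_s)
    return cv < cv_threshold

scripted = [2.0, 2.01, 1.99, 2.0, 2.02]   # metronomic: likely a script
human = [1.2, 7.5, 0.8, 30.1, 3.3]        # bursty and irregular

assert looks_automated(scripted)
assert not looks_automated(human)
```

A single heuristic like this is trivially evaded by adding random delays, which is why the article describes classifiers over broader traffic patterns rather than any one metric.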
The disclosure follows a similar report from Google Threat Intelligence Group (GTIG), which stated it had disrupted model extraction attempts against Gemini involving more than 100,000 prompts.
Google previously clarified that model extraction attacks generally do not pose direct risks to average users. Instead, the impact is concentrated among AI developers and service providers whose proprietary models are targeted.