AI Finder Africa

Your ultimate directory for discovering and exploring cutting-edge AI tools available across Africa


The AI Copycat War? Anthropic Says Chinese Labs Used 16 Million Prompts to Train Their Own Models

AI News | Mar 05, 2026 01:32 PM | tech writer


Imagine spending billions of dollars building one of the smartest AI systems in the world… only to discover that someone might be quietly studying it, question by question, until they can recreate something similar. That’s essentially the story unfolding right now in the global AI race.


AI company Anthropic, developer of the Claude chatbot, has accused three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) of running a massive operation to harvest Claude's answers and use them to train their own AI systems.

And the scale?
Over 16 million prompts.

Yes. Million.

The Alleged Plan: Learn From Claude Instead of Starting From Scratch

According to Anthropic, the companies created around 24,000 fake accounts to interact with Claude at scale. These accounts continuously sent prompts to the AI and recorded its responses.

Why would they do that?

Because of a technique known as AI distillation.

In simple terms, distillation means training a smaller or newer AI model by learning from the outputs of a stronger one. Instead of training entirely from raw data (which is expensive and slow), developers can feed a model the answers from a powerful AI and teach it to mimic that reasoning.

It’s a common technique inside companies.
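The core idea can be sketched in a few lines. The example below is a toy illustration only, not any lab's actual pipeline: a "teacher" model answers a batch of "prompts" (here, random numeric inputs), and a "student" with no access to the teacher's internals learns to reproduce its soft outputs via gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for a powerful model: a fixed rule that returns
    # confident probabilities (its "answers" to our prompts).
    logits = x @ np.array([2.0, -3.0]) + 0.5
    return 1.0 / (1.0 + np.exp(-logits))

# "Prompts": inputs sent to the teacher, responses recorded.
X = rng.normal(size=(500, 2))
soft_targets = teacher(X)

# Student: trained ONLY on the teacher's recorded outputs,
# never on the original raw training data.
w, b = np.zeros(2), 0.0
lr = 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - soft_targets            # cross-entropy gradient vs soft targets
    w -= lr * (X.T @ grad) / len(X)
    b -= lr * grad.mean()

# The student now closely mimics the teacher's behaviour.
p_student = 1.0 / (1.0 + np.exp(-(X @ w + b)))
match = np.mean(np.abs(teacher(X) - p_student) < 0.05)
print(f"student matches teacher (within 0.05) on {match:.0%} of inputs")
```

The point of the sketch: the student never sees the expensive raw data or the teacher's weights, only its answers, which is exactly why large-scale output harvesting is attractive and why providers treat it as a terms-of-service violation.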

But Anthropic says using a competitor’s AI outputs without permission crosses the line.

What Exactly Were They Trying to Learn?

Anthropic says the campaigns targeted some of Claude’s most advanced abilities, including:

  • Complex reasoning
  • Coding assistance
  • Tool usage and automation
  • Data analysis and agent-like tasks

The company claims MiniMax generated the majority of the traffic, accounting for more than 13 million interactions, while Moonshot AI and DeepSeek made millions more combined.

Even more surprising?

Anthropic says that when it released a new version of Claude, one of the campaigns redirected its traffic to probe the new model within 24 hours.

That suggests a highly organised and automated effort.

Why This Matters for the Global AI Race

This story isn’t just about one company accusing another.

It highlights a bigger tension in the AI industry:

Building powerful AI models costs tens of billions of dollars in computing power, research, and data. But if competitors can replicate capabilities simply by studying the outputs of those systems, the cost advantage disappears.

Anthropic says this kind of large-scale distillation could allow companies to replicate advanced AI capabilities much faster and cheaper than developing them independently.

The Safety Concern

Anthropic also raised another worry.

Models trained through unauthorised distillation might lose important safety protections built into the original systems. Those safeguards often prevent AI from helping with harmful activities like cyberattacks or dangerous research.

Without them, powerful AI tools could spread without guardrails.

And that’s where the conversation moves from business competition to global security.

The Bigger Picture: An AI Cold War?

The AI industry is increasingly becoming a global technological arms race.

American companies like Anthropic, OpenAI, and Google are investing heavily in frontier AI. At the same time, Chinese AI labs are rapidly advancing their own systems.

This incident shows something important:

The competition isn’t just about who builds the smartest AI first.

It’s about who can learn the fastest from everyone else.

And in the AI world, sometimes the best teacher…is your competitor’s chatbot.

