Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI

Anthropic claims DeepSeek and two other Chinese AI companies misused its Claude AI model in an attempt to improve their own products. In an announcement on Monday, the company says the “industrial-scale campaigns” involved the creation of around 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as reported earlier by The Wall Street Journal.

The three companies – DeepSeek, MiniMax, and Moonshot – are accused of “distilling” Claude, or training a smaller AI model based on a more advanced one. Though Anthropic says that distillation is a “legitimate training method,” it adds that it can “also be used for illicit purpose …

Read the full story at The Verge.
