Feb 24 β€’ 06:36 UTC πŸ‡ΆπŸ‡¦ Qatar Al Jazeera

Anthropic accuses Chinese companies of exploiting 'Claude' in training their AI models

Anthropic has accused several Chinese companies of using illegal methods to train their AI models by exploiting its tool, Claude.

Anthropic, an artificial intelligence company, has made serious allegations against several Chinese firms, including DeepSeek, Moonshot, and MiniMax, accusing them of illegally training their AI models through misuse of Anthropic's proprietary tool, Claude. According to a report by The New York Times, the company claims these firms used a well-known technique called 'distillation' to tap into Claude's unique capabilities. The allegation comes at a time when the integrity of AI development practices is under scrutiny globally.

The report indicates that the accused companies created over 24,000 fake accounts and generated more than 16 million conversations with Claude as part of their distillation process, which Anthropic asserts is a blatant violation of its usage terms. The alleged manipulation not only breaches Anthropic's terms of service but also raises questions about how foreign companies gain access to sensitive AI resources developed in the United States.

In response, Anthropic has urged the U.S. government and other American AI companies to collaborate on new methods to prevent Chinese firms from using distillation techniques to exploit American AI models. The company argues that Chinese firms' access to its models poses a risk to U.S. national security, underscoring the need for protective measures in the rapidly evolving AI landscape.