The post Anthropic alleges industrial-scale Claude attacks by DeepSeek and other Chinese AI rivals appeared on BitcoinEthereumNews.com.

Anthropic alleges industrial-scale Claude attacks by DeepSeek and other Chinese AI rivals

2026/02/24 03:54
Reading time: 2 min

Anthropic said it has identified large-scale campaigns by DeepSeek, Moonshot AI and MiniMax to extract capabilities from its Claude models illicitly.

The company said the three labs generated more than 16 million exchanges with Claude through roughly 24,000 fraudulent accounts, violating terms of service and regional access restrictions. Anthropic attributed the campaigns using IP correlations, metadata, infrastructure indicators and corroboration from industry partners.

According to Anthropic, the labs used “distillation,” a method that trains a smaller model on the outputs of a more capable one. While widely used internally by frontier labs to create lighter versions of their own systems, Anthropic said the technique was deployed here to replicate Claude’s reasoning, coding and tool use capabilities at scale.
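The distillation loop described above can be sketched in a few lines. Everything here is a hypothetical illustration: `teacher_answer` stands in for an expensive frontier-model API such as Claude's, and the "student" simply memorizes teacher outputs rather than training with gradient descent, which is how the data-collection half of distillation works before any real training happens.

```python
# Toy sketch of the distillation data-collection loop.
# All names (teacher_answer, StudentModel) are hypothetical stand-ins;
# a real pipeline would train a neural student on the collected pairs.

def teacher_answer(prompt: str) -> str:
    # Stand-in for a call to a large, capable model's API.
    canned = {
        "2+2": "4",
        "capital of France": "Paris",
    }
    return canned.get(prompt, "unknown")

def collect_distillation_data(prompts):
    # Query the teacher at scale and record (prompt, output) pairs
    # to use as training data for a smaller model.
    return [(p, teacher_answer(p)) for p in prompts]

class StudentModel:
    """Toy student: memorizes teacher outputs instead of gradient training."""
    def __init__(self):
        self.table = {}

    def train(self, pairs):
        for prompt, output in pairs:
            self.table[prompt] = output

    def answer(self, prompt: str) -> str:
        return self.table.get(prompt, "unknown")

data = collect_distillation_data(["2+2", "capital of France"])
student = StudentModel()
student.train(data)
print(student.answer("2+2"))  # → 4
```

The scale alleged in the article (millions of exchanges) matters because the quality of a distilled student depends on how much of the teacher's behavior is captured in the collected pairs.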

DeepSeek reportedly ran more than 150,000 exchanges focused on reasoning tasks, eliciting detailed step-by-step explanations to generate training data. Moonshot conducted over 3.4 million exchanges targeting agentic reasoning, coding and computer use.

MiniMax accounted for more than 13 million exchanges, with Anthropic detecting the activity while it was ongoing and observing traffic shifts following new model releases.

Anthropic warned that models built through illicit distillation may lack safety guardrails designed to prevent misuse in areas such as cyber operations or biological threats. The company argued that such activity could undermine US export controls by allowing foreign labs to replicate capabilities intended to be restricted.

To counter the campaigns, Anthropic said it has deployed new behavioral detection systems, strengthened account verification, shared intelligence with industry peers and authorities, and is developing product- and API-level safeguards to reduce the effectiveness of distillation without degrading service for legitimate users.

The company said addressing large-scale distillation will require coordinated action across AI labs, cloud providers and policymakers.

Source: https://cryptobriefing.com/anthropic-alleges-ai-safety-breach/
