72 Hours, Triple Crisis: Anthropic's Soul is Being Auctioned Off

2026/02/27 19:04
Reading time: 10 min

Written by: Ada, Deep Tide TechFlow

Tuesday, February 24. Washington, D.C., Pentagon.

Anthropic CEO Dario Amodei sat opposite Defense Secretary Pete Hegseth. According to multiple outlets, including NPR and CNN, citing sources familiar with the matter, the atmosphere of the meeting was "polite," but the content was anything but.

Hegseth gave him an ultimatum: lift the military-use restrictions on Claude by 5:01 p.m. Friday, allowing the Pentagon to use it for "all legitimate purposes," including autonomous weapons targeting and domestic mass surveillance.

Otherwise: cancel the $200 million contract, invoke the Defense Production Act to forcibly requisition the company's equipment, and designate Anthropic a "supply chain risk," which is tantamount to blacklisting it alongside hostile entities from Russia and China.

On the same day, Anthropic released the third version of its Responsible Scaling Policy (RSP 3.0), quietly removing one of the company's core commitments: that it would not train more powerful models if it could not guarantee safety measures were in place.

On the same day, Elon Musk posted on X, "Anthropic stole training data on a massive scale, that's a fact." Meanwhile, X's community notes added reports that Anthropic paid a $1.5 billion settlement for using pirated books to train Claude.

Within 72 hours, this self-styled safety-first AI company played three roles simultaneously: security martyr, intellectual-property thief, and traitor to the Pentagon.

Which one is true?

Perhaps they are all of them.

The Pentagon's "Obey or Get Out" Strategy

The first layer of the story is very simple.

Anthropic was the first AI company to receive classified access from the U.S. Department of Defense. The contract it received last summer was capped at $200 million. OpenAI, Google, and xAI subsequently each secured contracts of similar size.

According to Al Jazeera, Claude was used in a U.S. military operation in January of this year, reportedly related to the kidnapping of Venezuelan President Maduro.

However, Anthropic drew two red lines: it does not support fully autonomous weapon targeting, and it does not support mass surveillance of U.S. citizens. Anthropic believes that artificial intelligence is not reliable enough to control weapons, and there are currently no laws or regulations governing the application of artificial intelligence in mass surveillance.

The Pentagon is not buying it.

White House AI advisor David Sacks publicly accused Anthropic on X last October of “using fear as a weapon to engage in regulatory capture.”

The rest of the field has already caved. OpenAI, Google, and xAI have all agreed to let the military use their AI in "all legal scenarios." Musk's Grok was approved for access to classified systems just this week.

Anthropic was the last one standing.

As of press time, Anthropic stated in its latest press release that they have no intention of backing down. However, the 5:01 PM deadline on Friday is fast approaching.

An anonymous former Justice Department and Defense liaison officer expressed bewilderment to CNN: "How can you simultaneously declare a company a 'supply chain risk' and force that company to work for your military?"

Good question, but it's not on the Pentagon's radar. Their position is simple: compromise, or face coercive measures and become a Washington outcast.

"Distillation Attack": An Accusation That Backfired

On February 23, Anthropic published a strongly worded blog post accusing three Chinese AI companies of launching an "industrial-grade distillation attack" against Claude.

The defendants are DeepSeek, Moonshot AI, and MiniMax.

Anthropic alleges that they used 24,000 fake accounts to initiate more than 16 million interactions with Claude, specifically targeting and extracting Claude's core capabilities in agent reasoning, tool invocation, and programming.

Anthropic characterized this as a national security threat, claiming that the distilled models are "unlikely to retain safety guardrails" and could be used by authoritarian governments for cyberattacks, disinformation campaigns, and mass surveillance.

The narrative is perfect, and the timing is perfect.

This comes just after the Trump administration eased chip export controls to China, and just when Anthropic needed to find ammunition for its lobbying efforts on chip export controls.

But Musk fired back: "Anthropic stole training data on a massive scale and paid billions of dollars to settle for it. That's a fact."

Tory Green, co-founder of AI infrastructure company IO.Net, said: "You train your own model with data from the entire internet, and then others use your public API to learn from you, and that's called a 'distillation attack'?"

Anthropic calls distillation an "attack," but it is commonplace in the AI industry: OpenAI uses it to compress GPT-4, Google uses it to optimize Gemini, and Anthropic itself does it too. The only difference is that this time, Anthropic is the one being distilled.

According to Erik Cambria, an AI professor at Nanyang Technological University in Singapore, who spoke to CNBC, "The line between legitimate use and malicious exploitation is often blurred."

Ironically, Anthropic just paid a $1.5 billion settlement for using pirated books to train Claude. It trained its model with data from the entire internet, then accused others of using its public API to learn from it. This isn't just double standards; it's triple standards.

Anthropic intended to play the victim, but ended up being exposed as the defendant.

Removal of safety commitments: RSP 3.0

On the same day that Anthropic confronted the Pentagon and clashed with Silicon Valley, it released the third version of its Responsible Scaling Policy.

In a media interview, Anthropic Chief Scientist Jared Kaplan stated, "We feel that stopping the training of AI models does no good for anyone. In the context of rapid AI development, making unilateral commitments... while competitors are moving at full speed, makes no sense."

In other words: if the others won't play by the rules, we won't keep pretending either.

At the heart of RSP 1.0 and 2.0 was a hard commitment: if a model's capabilities outgrew its safety measures, training would be paused. That commitment earned Anthropic a unique reputation in the AI safety community.

But it was removed in version 3.0.

In its place is a more "flexible" framework that separates safety measures Anthropic can implement on its own from safety recommendations requiring industry-wide collaboration, with a risk report issued every 3-6 months for review by external experts.

Sounds responsible, doesn't it?

Chris Painter, an independent reviewer from the nonprofit organization METR, commented after reviewing an early draft of the policy: "This shows that Anthropic believes it needs to move into 'triage mode' because the methods for assessing and mitigating risks are not keeping pace with the growth in capabilities. This further demonstrates that society is not prepared for the potentially catastrophic risks of AI."

According to TIME, Anthropic spent nearly a year internally discussing the rewrite, which was unanimously approved by CEO Amodei and the board. The official explanation: the original policy was designed to promote industry consensus, but the industry simply didn't follow. The Trump administration has taken a laissez-faire approach to AI development, even attempting to repeal relevant state regulations, and federal AI legislation remains a distant prospect. In 2023, establishing a global governance framework still seemed possible; three years later, that door has clearly closed.

An anonymous researcher who has long followed AI governance put it more bluntly: "RSP is Anthropic's most valuable brand asset. Removing the promise to pause training is like an organic food company quietly tearing the word 'organic' off the packaging and then telling you that their testing is now more transparent."

Identity Disconnect Under a $380 Billion Valuation

In early February, Anthropic closed a $30 billion funding round at a $380 billion valuation, with Amazon as the anchor investor. Annualized revenue has reached $14 billion, having grown more than tenfold over the past three years.

Meanwhile, the Pentagon threatened to blacklist it. Musk publicly accused it of data theft. Its core safety commitment was removed. And Anthropic's head of AI safety, Mrinank Sharma, resigned, writing on X: "The world is in danger."

Contradiction?

Perhaps contradiction is in Anthropic's DNA.

The company was founded by former OpenAI executives who worried that OpenAI was moving too fast and taking safety too lightly. They then started their own company to build more powerful models at an even faster pace, while telling the world how dangerous those models were.

The business model can be summarized in one sentence: We are more afraid of AI than anyone else, so you should pay us to build AI.

This narrative worked perfectly in 2023-2024. AI security was a hot topic in Washington, and Anthropic was the most popular lobbyist.

In 2026, the winds changed.

"Woke AI" has become a hashtag for attacks, state-level AI regulatory bills have been blocked by the White House, and although California SB 53, supported by Anthropic, has been signed into law, there is a complete lack of federal legislation.

Anthropic's safe-haven strategy is sliding from a "differentiation advantage" into a "political liability."

Anthropic is playing a complex balancing act, needing to be "safe" enough to maintain its brand, yet "flexible" enough to avoid being abandoned by the market and the government. The problem is, the tolerance space on both ends is shrinking.

How much is a security narrative still worth?

When you look at the three things together, the picture becomes clear.

Accusing Chinese companies of distilling Claude is part of a lobbying narrative aimed at tightening chip export controls. The commitment to pause training for safety was removed to avoid falling behind in the arms race. Rejecting the Pentagon's demands on autonomous weapons preserves a last shred of moral standing.

Each step has its own logic, but they also contradict each other.

You can't claim that Chinese companies "distilling" your models will endanger national security while simultaneously removing promises to prevent your own models from spiraling out of control. If the models are truly that dangerous, you should be more cautious, not more aggressive.

Unless you are Anthropic.

In the AI industry, identity isn't defined by your statements; it's defined by your balance sheet. Anthropic's "safety" narrative is essentially a brand premium.

In the early stages of the AI arms race, this premium is valuable. Investors are willing to pay higher valuations for "responsible AI," governments are willing to give the green light to "trustworthy AI," and customers are willing to pay for "safer AI."

But by 2026, this premium is evaporating.

Anthropic's choice is no longer whether to compromise, only whom to compromise with first. Compromising with the Pentagon damages its brand. Compromising with competitors undermines its safety promises. Compromising with investors means conceding to both sides.

Anthropic will release its answer at 5:01 PM on Friday.

But whatever the answer may be, one thing is certain: Anthropic, which once stood on the premise that "we are different from OpenAI," is becoming just like everyone else.

The end of an identity crisis is often the disappearance of one's identity.
