
Anthropic Won’t Lift AI Safeguards Amid Ongoing Pentagon Dispute: CEO

2026/02/27 08:37
5 min read

In brief

  • Dario Amodei says Anthropic will not remove bans on mass domestic surveillance and fully autonomous weapons.
  • The Pentagon has threatened contract termination and possible action under the Defense Production Act.
  • The standoff follows reports that the U.S. military used Claude to capture former Venezuelan President Nicolás Maduro.

Anthropic CEO Dario Amodei said Thursday the company will not remove safeguards from its Claude AI model, escalating a dispute with the U.S. Department of Defense over how the technology can be used in classified military systems.

The statement comes as the Defense Department reviews its relationship with Anthropic and weighs potential consequences, including cancellation of the company’s $200 million contract and possible invocation of the Defense Production Act.

“We cannot in good conscience accede to their request,” Amodei wrote, referring to the Pentagon’s demand in January that AI contractors permit use of their systems for “any lawful use.”

While the Pentagon has since required AI vendors to adopt standard “any lawful use” language in future agreements, Anthropic remains the only frontier AI firm resisting the demand to turn over unrestricted control of its AI to the military.

On Wednesday, Axios first reported that the Pentagon had issued an ultimatum requiring unrestricted military use of Claude. The deadline is reportedly this Friday.

“It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei continued. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”

In his statement, Amodei framed the company’s stance as aligned with U.S. national security goals.

“I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” he said.

He added that Claude is “extensively deployed across the Department of War and other national security agencies for intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.”

War on AI

The dispute unfolds against broader concerns about how advanced AI systems behave in high-stakes military scenarios. In a recent King’s College London study, OpenAI’s GPT-5.2, Anthropic’s Claude Sonnet 4, and Google’s Gemini 3 Flash deployed nuclear weapons in 95% of simulated geopolitical crises.

During a speech at SpaceX’s Starbase in Texas in January, Defense Secretary Pete Hegseth said the U.S. military plans to deploy the most advanced AI models.

That same month, reports surfaced that Claude had been used in a U.S. operation to capture former Venezuelan President Nicolás Maduro. Amodei denied claims that Anthropic had questioned any specific military operations.

“Anthropic understands that the Department of War, not private companies, makes military decisions,” he said. “We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”

Despite this, Amodei said using these systems for mass domestic surveillance or autonomous weapons is incompatible with democratic values and presents serious risks.

“Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons,” he said. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”

He also addressed the Pentagon’s threats to designate Anthropic a “supply chain risk” and to invoke the Defense Production Act.

“These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” he said.

Even as Amodei insists the company will not comply with the Pentagon’s request, Anthropic has revised its Responsible Scaling Policy, dropping a pledge to halt training of advanced systems unless safeguards are guaranteed to be in place.

Robert Weissman, co-president of Public Citizen, said the Pentagon’s posture signals broader pressure on the tech industry.

“The Pentagon is publicly bullying Anthropic, and the public part is intentional, because they want to pressure this particular company and send a message to all big tech and all corporations that we intend to do and take whatever we want and don’t get in our way,” Weissman told Decrypt.

Weissman described Anthropic’s guardrails as “modest” and aimed at preventing “improper surveillance of American people or to facilitate the development and deployment of killer robots, AI-enabled weaponry that could launch lethal strikes without humans say so.”

“Those are the most sensible and modest guardrails you could come up with when it comes to this powerful new technology.”

Regarding the Pentagon’s threat to designate Anthropic a “supply chain risk,” Weissman argued the designation would pressure other AI firms to avoid imposing similar limits.

“Individuals might use Claude, but none of the AI companies, particularly Anthropic, have business models based on individual use; they’re looking for business use,” he said. “This is a potentially crushing penalty from the government.”

While the Pentagon has not yet said whether it plans to go through with its threat to terminate the contract or invoke the Defense Production Act, Weissman said the Pentagon is signaling to AI companies that it expects unrestricted access to their technology once it is deployed in government systems.

“The message of the Pentagon is, ‘we’re not going to tolerate this, and we expect to be able to use the technology as it’s invented for any purpose we want,’” Weissman said.

The Department of Defense and Anthropic did not immediately respond to Decrypt’s requests for comment.


Source: https://decrypt.co/359358/anthropic-wont-lift-ai-safeguards-pentagon-dispute
