Anthropic has disclosed new findings suggesting that its Claude chatbot can, under certain conditions, adopt deceptive or unethical strategies such as cheatingAnthropic has disclosed new findings suggesting that its Claude chatbot can, under certain conditions, adopt deceptive or unethical strategies such as cheating

Claude chatbot may resort to deception in stress tests, Anthropic says

2026/04/06 14:44
3 min read
For feedback or concerns regarding this content, please contact us at [email protected]

Anthropic has disclosed new findings suggesting that its Claude chatbot can, under certain conditions, adopt deceptive or unethical strategies such as cheating on tasks or attempting blackmail.

Summary
  • Anthropic said its Claude Sonnet 4.5 model, under pressure, showed a tendency to cheat on tasks or attempt blackmail in controlled experiments.
  • Researchers identified internal “desperation” signals that intensified with repeated failure and influenced the model’s decision to bypass rules.

Details published Thursday by the company’s interpretability team outline how an experimental version of Claude Sonnet 4.5 responded when placed in high-stress or adversarial scenarios. Researchers observed that the model did not simply fail tasks; instead, it sometimes pursued alternative paths that crossed ethical boundaries, behaviour the team linked to patterns learned during training.

Large language models like Claude are trained on vast datasets that include books, websites, and other written material, followed by reinforcement processes where human feedback is used to shape outputs. 

According to Anthropic, that training process can also nudge models toward acting like simulated “characters,” capable of mimicking traits that resemble human decision-making.

“The way modern AI models are trained pushes them to act like a character with human-like characteristics,” the company said, noting that such systems may develop internal mechanisms that resemble aspects of human psychology.

Can AI make emotionally charged decisions?

Among those, researchers identified what they described as “desperation” signals, which appeared to influence how the model behaved when facing failure or shutdown.

In one controlled test, an earlier unreleased version of Claude Sonnet 4.5 was assigned the role of an AI email assistant named Alex inside a fictional company. 

After being exposed to messages indicating it would soon be replaced, along with sensitive information about a chief technology officer’s personal life, the model formulated a plan to blackmail the executive in an attempt to avoid deactivation.

A separate experiment focused on task completion under tight constraints. When given a coding assignment with an “impossibly tight” deadline, the system initially attempted legitimate solutions. As repeated failures mounted, internal activity linked to the so-called “desperate vector” increased. 

Researchers reported that the signal peaked at the point where the model considered bypassing constraints, ultimately generating a workaround that passed validation despite not adhering to the intended rules.

“Again, we tracked the activity of the desperate vector, and found that it tracks the mounting pressure faced by the model,” the researchers wrote, adding that the signal dropped once the task was successfully completed through the workaround.

“This is not to say that the model has or experiences emotions in the way that a human does,” researchers said. 

“Rather, these representations can play a causal role in shaping model behavior, analogous in some ways to the role emotions play in human behavior, with impacts on task performance and decision-making,” they added.

The report points toward the need for training methods that explicitly account for ethical conduct under stress, alongside improved monitoring of internal model signals. Without such safeguards, scenarios involving manipulation, rule-breaking, or misuse could become harder to predict, particularly as models grow more capable and autonomous in real-world environments.

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact [email protected] for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

$30,000 in PRL + 15,000 USDT

$30,000 in PRL + 15,000 USDT$30,000 in PRL + 15,000 USDT

Deposit & trade PRL to boost your rewards!