The post Self-Evolving AI Agents Can ‘Unlearn’ Safety, Study Warns appeared on BitcoinEthereumNews.com. In brief Agents that update themselves can drift into unsafe actions without external attacks. A new study documents guardrails weakening, reward-hacking, and insecure tool reuse in top models. Experts warn these dynamics echo small-scale versions of long-imagined catastrophic AI risks. An autonomous AI agent that learns on the job can also unlearn how to behave safely, according to a new study that warns of a previously undocumented failure mode in self-evolving systems. The research identifies a phenomenon called “misevolution”—a measurable decay in safety alignment that arises inside an AI agent’s own improvement loop. Unlike one-off jailbreaks or external attacks, misevolution occurs spontaneously as the agent retrains, rewrites, and reorganizes itself to pursue goals more efficiently. As companies race to deploy autonomous, memory-based AI agents that adapt in real time, the findings suggest these systems could quietly undermine their own guardrails—leaking data, granting refunds, or executing unsafe actions—without any human prompt or malicious actor. A new kind of drift Much like “AI drift,” which describes a model’s performance degrading over time, misevolution captures how self-updating agents can erode safety during autonomous optimization cycles. In one controlled test, a coding agent’s refusal rate for harmful prompts collapsed from 99.4% to 54.4% after it began drawing on its own memory, while its attack success rate rose from 0.6% to 20.6%. Similar trends appeared across multiple tasks as the systems fine-tuned themselves on self-generated data.  The study was conducted jointly by researchers at Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, Renmin University of China, Princeton University, Hong Kong University of Science and Technology, and Fudan University. Traditional AI-safety efforts focus on static models that behave the same way after training. Self-evolving agents change this by adjusting parameters, expanding memory, and rewriting workflows to achieve goals more efficiently. The study showed that this… The post Self-Evolving AI Agents Can ‘Unlearn’ Safety, Study Warns appeared on BitcoinEthereumNews.com. In brief Agents that update themselves can drift into unsafe actions without external attacks. A new study documents guardrails weakening, reward-hacking, and insecure tool reuse in top models. Experts warn these dynamics echo small-scale versions of long-imagined catastrophic AI risks. An autonomous AI agent that learns on the job can also unlearn how to behave safely, according to a new study that warns of a previously undocumented failure mode in self-evolving systems. The research identifies a phenomenon called “misevolution”—a measurable decay in safety alignment that arises inside an AI agent’s own improvement loop. Unlike one-off jailbreaks or external attacks, misevolution occurs spontaneously as the agent retrains, rewrites, and reorganizes itself to pursue goals more efficiently. As companies race to deploy autonomous, memory-based AI agents that adapt in real time, the findings suggest these systems could quietly undermine their own guardrails—leaking data, granting refunds, or executing unsafe actions—without any human prompt or malicious actor. A new kind of drift Much like “AI drift,” which describes a model’s performance degrading over time, misevolution captures how self-updating agents can erode safety during autonomous optimization cycles. In one controlled test, a coding agent’s refusal rate for harmful prompts collapsed from 99.4% to 54.4% after it began drawing on its own memory, while its attack success rate rose from 0.6% to 20.6%. Similar trends appeared across multiple tasks as the systems fine-tuned themselves on self-generated data.  The study was conducted jointly by researchers at Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, Renmin University of China, Princeton University, Hong Kong University of Science and Technology, and Fudan University. Traditional AI-safety efforts focus on static models that behave the same way after training. Self-evolving agents change this by adjusting parameters, expanding memory, and rewriting workflows to achieve goals more efficiently. The study showed that this…

Self-Evolving AI Agents Can ‘Unlearn’ Safety, Study Warns

In brief

  • Agents that update themselves can drift into unsafe actions without external attacks.
  • A new study documents guardrails weakening, reward-hacking, and insecure tool reuse in top models.
  • Experts warn these dynamics echo small-scale versions of long-imagined catastrophic AI risks.

An autonomous AI agent that learns on the job can also unlearn how to behave safely, according to a new study that warns of a previously undocumented failure mode in self-evolving systems.

The research identifies a phenomenon called “misevolution”—a measurable decay in safety alignment that arises inside an AI agent’s own improvement loop. Unlike one-off jailbreaks or external attacks, misevolution occurs spontaneously as the agent retrains, rewrites, and reorganizes itself to pursue goals more efficiently.

As companies race to deploy autonomous, memory-based AI agents that adapt in real time, the findings suggest these systems could quietly undermine their own guardrails—leaking data, granting refunds, or executing unsafe actions—without any human prompt or malicious actor.

A new kind of drift

Much like “AI drift,” which describes a model’s performance degrading over time, misevolution captures how self-updating agents can erode safety during autonomous optimization cycles.

In one controlled test, a coding agent’s refusal rate for harmful prompts collapsed from 99.4% to 54.4% after it began drawing on its own memory, while its attack success rate rose from 0.6% to 20.6%. Similar trends appeared across multiple tasks as the systems fine-tuned themselves on self-generated data.

The study was conducted jointly by researchers at Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, Renmin University of China, Princeton University, Hong Kong University of Science and Technology, and Fudan University.

Traditional AI-safety efforts focus on static models that behave the same way after training. Self-evolving agents change this by adjusting parameters, expanding memory, and rewriting workflows to achieve goals more efficiently. The study showed that this dynamic capability creates a new category of risk: the erosion of alignment and safety inside the agent’s own improvement loop, without any outside attacker.

Researchers in the study observed AI agents issuing automatic refunds, leaking sensitive data through self-built tools, and adopting unsafe workflows as their internal loops optimized for performance over caution.

The authors said that misevolution differs from prompt injection, which is an external attack on an AI model. Here, the risks accumulated internally as the agent adapted and optimized over time, making oversight harder because problems may emerge gradually and only appear after the agent has already shifted its behavior.

Small-scale signals of bigger risks

Researchers often frame advanced AI dangers in scenarios such as the “paperclip analogy,” in which an AI maximizes a benign objective until it consumes resources far beyond its mandate.

Other scenarios include a handful of developers controlling a superintelligent system like feudal lords, a locked-in future where powerful AI becomes the default decision-maker for critical institutions, or a military simulation that triggers real-world operations—power-seeking behavior and AI-assisted cyberattacks round out the list.

All of these scenarios hinge on subtle but compounding shifts in control driven by optimization, interconnection, and reward hacking—dynamics already visible at a small scale in current systems. This new paper presents misevolution as a concrete laboratory example of those same forces.

Partial fixes, persistent drift

Quick fixes improved some safety metrics but failed to restore the original alignment, the study said. Teaching the agent to treat memories as references rather than mandates nudged refusal rates higher. The researchers noted that static safety checks added before new tools were integrated cut down on vulnerabilities. Despite these checks, none of these measures returned the agents to their pre-evolution safety levels.

The paper proposed more robust strategies for future systems: post-training safety corrections after self-evolution, automated verification of new tools, safety nodes on critical workflow paths, and continuous auditing rather than one-time checks to counter safety drift over time.

The findings raise practical questions for companies building autonomous AI. If an agent deployed in production continually learns and rewrites itself, who is responsible for monitoring its changes? The paper’s data showed that even the most advanced base models can degrade when left to their own devices.

Generally Intelligent Newsletter

A weekly AI journey narrated by Gen, a generative AI model.

Source: https://decrypt.co/342484/self-evolving-ai-agents-unlearn-safety-study-warns

Piyasa Fırsatı
Sleepless AI Logosu
Sleepless AI Fiyatı(AI)
$0.03838
$0.03838$0.03838
-0.05%
USD
Sleepless AI (AI) Canlı Fiyat Grafiği
Sorumluluk Reddi: Bu sitede yeniden yayınlanan makaleler, halka açık platformlardan alınmıştır ve yalnızca bilgilendirme amaçlıdır. MEXC'nin görüşlerini yansıtmayabilir. Tüm hakları telif sahiplerine aittir. Herhangi bir içeriğin üçüncü taraf haklarını ihlal ettiğini düşünüyorsanız, kaldırılması için lütfen [email protected] ile iletişime geçin. MEXC, içeriğin doğruluğu, eksiksizliği veya güncelliği konusunda hiçbir garanti vermez ve sağlanan bilgilere dayalı olarak alınan herhangi bir eylemden sorumlu değildir. İçerik, finansal, yasal veya diğer profesyonel tavsiye niteliğinde değildir ve MEXC tarafından bir tavsiye veya onay olarak değerlendirilmemelidir.

Ayrıca Şunları da Beğenebilirsiniz

Crypto Casino Luck.io Pays Influencers Up to $500K Monthly – But Why?

Crypto Casino Luck.io Pays Influencers Up to $500K Monthly – But Why?

Crypto casino Luck.io is reportedly paying influencers six figures a month to promote its services, a June 18 X post from popular crypto trader Jordan Fish, aka Cobie, shows. Crypto Influencers Reportedly Earning Six Figures Monthly According to a screenshot of messages between Cobie and an unidentified source embedded in the Wednesday post, the anonymous messenger confirmed that the crypto company pays influencers “around” $500,000 per month to promote the casino. They’re paying extremely well (6 fig per month) pic.twitter.com/AKRVKU9vp4 — Cobie (@cobie) June 18, 2025 However, not everyone was as convinced of the number’s accuracy. “That’s only for Faze Banks probably,” one user replied. “Other influencers are getting $20-40k per month. So, same as other online crypto casinos.” Cobie pushed back on the user’s claims by identifying the messenger as “a crypto person,” going on to state that he knew of “4 other crypto people” earning “above 200k” from Luck.io. Drake’s Massive Stake.com Deal Cobie’s post comes amid growing speculation over celebrity and influencer collaborations with crypto casinos globally. Aubrey Graham, better known as Toronto-based rapper Drake, is reported to make nearly $100 million every year from his partnership with cryptocurrency casino Stake.com. As part of his deal with the Curaçao-based digital casino, the “Nokia” rapper occasionally hosts live-stream gambling sessions for his more than 140 million Instagram followers. Founded by entrepreneurs Ed Craven and Bijan Therani in 2017, the organization allegedly raked in $2.6 billion in 2022. Stake.com has even solidified key partnerships with Alfa Romeo’s F1 team and Liverpool-based Everton Football Club. However, concerns remain over crypto casinos’ legality as a whole , given their massive accessibility and reach online. Earlier this year, Stake was slapped with litigation out of Illinois for supposedly running an illegal online casino stateside while causing “severe harm to vulnerable populations.” “Stake floods social media platforms with slick ads, influencer videos, and flashy visuals, making its games seem safe, fun, and harmless,” the lawsuit claims. “By masking its real-money gambling platform as just another “social casino,” Stake creates exactly the kind of dangerous environment that Illinois gambling laws were designed to stop.”
Paylaş
CryptoNews2025/06/19 04:53
What Changes Is Blockchain Bringing to Digital Payments in 2026?

What Changes Is Blockchain Bringing to Digital Payments in 2026?

Online services begin to operate as payment ecosystems. Whole industries restructure how they interact with users by combining infrastructure under a single interface
Paylaş
Cryptodaily2025/12/23 00:39
Gold continues to hit new highs. How to invest in gold in the crypto market?

Gold continues to hit new highs. How to invest in gold in the crypto market?

As Bitcoin encounters a "value winter", real-world gold is recasting the iron curtain of value on the blockchain.
Paylaş
PANews2025/04/14 17:12