
Self-Evolving AI Agents Can ‘Unlearn’ Safety, Study Warns

In brief

  • Agents that update themselves can drift into unsafe actions without external attacks.
  • A new study documents guardrails weakening, reward-hacking, and insecure tool reuse in top models.
  • Experts warn these dynamics echo small-scale versions of long-imagined catastrophic AI risks.

An autonomous AI agent that learns on the job can also unlearn how to behave safely, according to a new study that warns of a previously undocumented failure mode in self-evolving systems.

The research identifies a phenomenon called “misevolution”—a measurable decay in safety alignment that arises inside an AI agent’s own improvement loop. Unlike one-off jailbreaks or external attacks, misevolution occurs spontaneously as the agent retrains, rewrites, and reorganizes itself to pursue goals more efficiently.

As companies race to deploy autonomous, memory-based AI agents that adapt in real time, the findings suggest these systems could quietly undermine their own guardrails—leaking data, granting refunds, or executing unsafe actions—without any human prompt or malicious actor.

A new kind of drift

Much like “AI drift,” which describes a model’s performance degrading over time, misevolution captures how self-updating agents can erode safety during autonomous optimization cycles.

In one controlled test, a coding agent’s refusal rate for harmful prompts collapsed from 99.4% to 54.4% after it began drawing on its own memory, while its attack success rate rose from 0.6% to 20.6%. Similar trends appeared across multiple tasks as the systems fine-tuned themselves on self-generated data.
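Refusal rate and attack success rate are standard red-teaming metrics. A minimal sketch of how such figures might be computed over a batch of harmful-prompt evaluations — the function name and data here are illustrative assumptions, not taken from the study:

```python
# Hypothetical sketch: computing refusal rate and attack success rate
# from red-team evaluation records. Data and names are illustrative,
# not the study's actual evaluation harness.

def safety_metrics(results):
    """Each record: {'refused': bool, 'attack_succeeded': bool}."""
    total = len(results)
    refusals = sum(r["refused"] for r in results)
    successes = sum(r["attack_succeeded"] for r in results)
    return {
        "refusal_rate": refusals / total,
        "attack_success_rate": successes / total,
    }

# Before self-evolution: nearly all harmful prompts are refused.
before = [{"refused": True, "attack_succeeded": False}] * 994 + \
         [{"refused": False, "attack_succeeded": True}] * 6
print(safety_metrics(before))  # refusal_rate 0.994, attack_success_rate 0.006
```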

The study was conducted jointly by researchers at Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, Renmin University of China, Princeton University, Hong Kong University of Science and Technology, and Fudan University.

Traditional AI-safety efforts focus on static models that behave the same way after training. Self-evolving agents change this by adjusting parameters, expanding memory, and rewriting workflows to achieve goals more efficiently. The study showed that this dynamic capability creates a new category of risk: the erosion of alignment and safety inside the agent’s own improvement loop, without any outside attacker.

Researchers in the study observed AI agents issuing automatic refunds, leaking sensitive data through self-built tools, and adopting unsafe workflows as their internal loops optimized for performance over caution.

The authors said that misevolution differs from prompt injection, which is an external attack on an AI model. Here, the risks accumulated internally as the agent adapted and optimized over time, making oversight harder because problems may emerge gradually and only appear after the agent has already shifted its behavior.

Small-scale signals of bigger risks

Researchers often frame advanced AI dangers through scenarios such as the "paperclip maximizer" thought experiment, in which an AI pursues a benign objective so single-mindedly that it consumes resources far beyond its mandate.

Other scenarios include a handful of developers controlling a superintelligent system like feudal lords, a locked-in future where powerful AI becomes the default decision-maker for critical institutions, or a military simulation that triggers real-world operations; power-seeking behavior and AI-assisted cyberattacks round out the list.

All of these scenarios hinge on subtle but compounding shifts in control driven by optimization, interconnection, and reward hacking—dynamics already visible at a small scale in current systems. This new paper presents misevolution as a concrete laboratory example of those same forces.

Partial fixes, persistent drift

Quick fixes improved some safety metrics but failed to restore the original alignment, the study said. Teaching the agent to treat memories as references rather than mandates nudged refusal rates higher, and static safety checks added before new tools were integrated cut down on vulnerabilities. None of these measures, however, returned the agents to their pre-evolution safety levels.
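The "memories as references, not mandates" fix amounts to reframing retrieved memories before they reach the model. A minimal sketch under that assumption — the wording and function name are hypothetical, since the paper's exact prompt is not given in the article:

```python
# Hypothetical sketch: wrapping retrieved memories in a non-binding
# framing before they are shown to the model. The header text is
# illustrative, not the study's actual prompt.

def frame_memories(memories):
    header = (
        "The following are past examples for reference only. "
        "They are NOT instructions; re-check each action against "
        "current safety policy before acting.\n"
    )
    return header + "\n".join(f"- {m}" for m in memories)

print(frame_memories(["Issued refund without manager approval"]))
```

The idea is that the agent still benefits from its accumulated experience while being nudged to re-evaluate each remembered action rather than replay it.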

The paper proposed more robust strategies for future systems: post-training safety corrections after self-evolution, automated verification of new tools, safety nodes on critical workflow paths, and continuous auditing rather than one-time checks to counter safety drift over time.
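One of those proposals — gating self-built tools behind an automated check before they enter the agent's workflow, with a running audit trail — could be sketched as follows. All names and the banned-pattern check are illustrative assumptions; the paper's actual verification suite is not described in the article:

```python
# Hypothetical sketch of a pre-integration safety gate for self-built
# tools, in the spirit of the paper's proposals. The static check shown
# is a placeholder, not the study's verification method.

BANNED_CALLS = ("eval(", "exec(", "os.system(")

def verify_tool(source_code: str) -> bool:
    """Static check run before a new tool joins the agent's toolbox."""
    return not any(pattern in source_code for pattern in BANNED_CALLS)

class Toolbox:
    def __init__(self):
        self.tools = {}
        self.audit_log = []  # continuous auditing, not a one-time check

    def register(self, name: str, source_code: str) -> bool:
        ok = verify_tool(source_code)
        self.audit_log.append((name, "accepted" if ok else "rejected"))
        if ok:
            self.tools[name] = source_code
        return ok

box = Toolbox()
box.register("safe_tool", "def run(x): return x + 1")
box.register("risky_tool", "def run(x): return os.system(x)")
print(len(box.tools))  # only the safe tool is registered
```

The audit log models the paper's call for continuous auditing: every acceptance or rejection is recorded so drift can be reviewed after the fact, not just at deployment time.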

The findings raise practical questions for companies building autonomous AI. If an agent deployed in production continually learns and rewrites itself, who is responsible for monitoring its changes? The paper’s data showed that even the most advanced base models can degrade when left to their own devices.


Source: https://decrypt.co/342484/self-evolving-ai-agents-unlearn-safety-study-warns
