If autonomous agents become the dominant users of DeFi, blockchains start to do a different job. They operate as coordination and settlement systems for software rather than spaces driven by human timing, sentiment, and speculation.
Federico Variola, CEO of Phemex, believes this could improve how on-chain activity develops. In his view, “agents may behave in a more cooperative way rather than an extractive one, simply because they tend to act more rationally than human participants.”
Dmitry Lazarichev, co-founder of Wirex, focuses on how this changes behaviour: constant, machine-driven activity increases efficiency while introducing new stress points, particularly, he says, if agents rely on similar inputs.
Fernando Lillo Aranda, Marketing Director at Zoomex, argues that the transition goes deeper: blockchains start operating as execution systems for machine-native strategies.
Pauline Shangett, CSO at ChangeNOW, shares that view.
In exclusive interviews with these four crypto executives, BeInCrypto examined how DeFi changes as AI agents become its main users.
Agentic Liability Has No Clean Answer Yet
If AI agents can execute transactions, deploy contracts, or move funds autonomously, liability becomes harder to pin down when something goes wrong.
Lazarichev says autonomy cannot serve as an excuse.
“The key point is that ‘the agent did it’ can’t become a liability loophole,” he says.
In his view, an agent still acts “under someone’s authority, with permissions and limits set by a person or an organisation.” That puts the focus on “who deployed it, who configured it, who benefits from it, and who provided the model and the execution environment.”
He says the response will rely on familiar standards.
Shangett argues that current legal thinking still rests on outdated foundations.
She also points to a deeper issue. “Agency law assumes the agent can be sued. Your AI agent can’t. It has no wallet, no insurance, no legal personality.”
Identity Stops Meaning Human Only
As more autonomous systems operate on-chain, identity, too, takes on a different role. Networks need to determine what kind of actor they are interacting with and what that actor is allowed to do.
Lazarichev says that “DID can help, but it won’t solve ‘human vs bot’ in a clean, binary way.”
In his view, that distinction does not capture how these systems work. “Many bots will be legitimate participants,” he says. “What matters is being able to establish what type of actor something is, and what level of assurance sits behind it.”
That leads to more defined access controls. “The more realistic model is tiered access: different credentials for different privileges,” Lazarichev says.
He adds that identity systems will need to work alongside behavioural monitoring, especially when agents handle higher-value actions.
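The tiered-access model Lazarichev describes can be sketched in code. This is a hypothetical illustration, not a real protocol API: the tier names, privilege sets, threshold, and the `anomaly_score` field standing in for a behavioural monitor are all assumptions.

```python
from dataclasses import dataclass

# Privileges unlocked at each assurance tier (illustrative values):
# different credentials grant different privileges.
TIER_PRIVILEGES = {
    "anonymous": {"read"},
    "verified_agent": {"read", "swap"},
    "audited_agent": {"read", "swap", "deploy"},
}

# Actions above this notional value also trigger a behavioural check.
HIGH_VALUE_THRESHOLD = 10_000

@dataclass
class Actor:
    tier: str
    anomaly_score: float  # output of a behavioural monitor, 0.0 to 1.0

def is_allowed(actor: Actor, action: str, value: int) -> bool:
    """Grant an action only if the actor's credential tier permits it,
    and higher-value actions also pass the behavioural check."""
    if action not in TIER_PRIVILEGES.get(actor.tier, set()):
        return False  # privilege not included in this tier
    if value > HIGH_VALUE_THRESHOLD and actor.anomaly_score > 0.5:
        return False  # high-value action flagged by behavioural monitoring
    return True

bot = Actor(tier="verified_agent", anomaly_score=0.1)
print(is_allowed(bot, "swap", 500))    # True: tier permits swaps
print(is_allowed(bot, "deploy", 500))  # False: deploy needs a higher tier
```

The point of the sketch is that the gate asks what type of actor something is and what assurance sits behind it, rather than whether it is human.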
Lillo Aranda agrees. “In a machine economy, the ‘user’ is an agent – so reliability, determinism, and composability replace simplicity as design priorities,” he says.
Shangett also reinforces this point. “The bots aren’t the problem anymore. The agents are.”
All three expert views point to a model where identity focuses on role, permissions, and accountability.
Wallet Security Breaks at the Prompt Layer
For autonomous wallets, the biggest security risk may not be stolen keys, but manipulated decisions.
Lazarichev says prompt injection is dangerous because it “targets the decision layer rather than the cryptography.” If an agent is pulling from outside inputs, attackers may be able to “steer it into doing something it shouldn’t: change a destination address, approve a malicious contract, widen permissions, or bypass an internal check.”
That risk grows fast when the wallet has broad authority. “You don’t need to break encryption if you can manipulate the system into authorising the wrong action,” Lazarichev says.
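One way to limit that risk is a policy layer that sits between the agent's decision and the wallet's signer, enforcing limits the model cannot rewrite. The sketch below is a minimal illustration under assumed transaction fields (`to`, `value`, `method`); the addresses and method names are hypothetical.

```python
# Static policy configured outside the model's reach, so a
# prompt-injected "decision" still cannot authorise the wrong action.
ALLOWED_DESTINATIONS = {"0xSafeVault", "0xKnownDEX"}  # illustrative addresses
MAX_SPEND = 1_000
FORBIDDEN_METHODS = {"approve_unlimited", "set_owner"}

def guard(tx: dict) -> bool:
    """Return True only if the agent-proposed transaction respects
    the destination allowlist, spend cap, and permission rules."""
    if tx["to"] not in ALLOWED_DESTINATIONS:
        return False  # destination swapped by a manipulated decision
    if tx["value"] > MAX_SPEND:
        return False  # exceeds the configured spend cap
    if tx.get("method") in FORBIDDEN_METHODS:
        return False  # attempt to widen permissions or change ownership
    return True

# A manipulated decision is rejected even though no key was stolen.
print(guard({"to": "0xAttacker", "value": 100}))                    # False
print(guard({"to": "0xKnownDEX", "value": 100, "method": "swap"}))  # True
```

The design choice mirrors the quote: the check protects the decision layer, not the cryptography, so compromising the agent's reasoning is no longer sufficient to move funds.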
Shangett points to a more specific threat model.
She cites Owockibot as an example.
Naturally, this changes the security model. In her view, it is why key custody alone is not enough.
Both experts point to an adjustment in how wallet security is defined. In an agentic economy, it covers custody as well as what the agent can interpret and act on.
Final Thoughts
The rise of the agentic economy could influence what blockchains are built to do, who they serve, and where risk begins.
If autonomous systems become major on-chain participants, networks will need to support constant machine-driven activity while handling a very different set of pressures around execution, accountability, identity, and security.
As Variola suggests, a market driven by rational agents could be more cooperative than the extractive, emotion-driven environments crypto has often produced.
Lazarichev, Lillo Aranda, and Shangett also show that this future brings harder questions. Once agents can transact, deploy, and react without human input at every step, liability becomes harder to assign, identity becomes harder to define, and wallet security extends beyond key protection into decision-making itself.
If AI agents become primary on-chain actors, Web3 will need systems that can support autonomous activity while preserving accountability, control, and trust. That may prove just as important as the automation itself.
The post When AI Agents Become DeFi’s Main Users appeared first on BeInCrypto.
Source: https://beincrypto.com/ai-agents-defi-main-users/