In the report “State of AI 2025”, Messari dedicates an entire chapter to Decentralized AI (deAI), defining it not as an ideological alternative to traditional AI, but as a necessary complement to ensure transparency, security, and global participation.
In a world where models become black boxes and the power of private labs grows, the role of deAI is not theoretical: it is a structural response to the challenges of the new order of intelligence.
Artificial intelligence is becoming the most strategic digital infrastructure on the planet. However, as tech giants consolidate their dominance, a parallel movement is emerging that aims to build a radically different AI: open, verifiable, permissionless, and distributed.
What is Decentralized AI (deAI)?
deAI refers to AI systems built on distributed networks, where:
- data can be collected, labeled, and exchanged in a permissionless manner;
- the computation is performed on global networks of independent GPUs;
- the models can be trained and used in a coordinated manner, without a single controlling authority;
- privacy, verifiability, and reputation are ensured through blockchain, cryptography, and attestation systems;
- AI agents can transact, identify themselves, and collaborate in a trustless environment.
In other words:
DeAI is the infrastructure that enables the creation of an open AI “for anyone and by anyone,” without having to rely on a private giant.
Why is deAI becoming necessary?
Messari divides the reasons into two categories: philosophical and practical.
Philosophical reasons
- Concentration of Power
Centralized AI grants enormous control to a few companies (OpenAI, Google, Anthropic). This influences narratives, data access, technological standards, and even social processes.
- Opacity
We do not know how the models were trained, what data they use, or what biases they incorporate.
- Limited trust
There are no verifiable guarantees that the model provided is as claimed or that it processes data correctly.
Practical reasons
- Global Coordination
Blockchains enable the coordination of millions of devices and contributors without the need for trust.
- On-chain Verifiability
Identity, reputation, model status, and integrity can be recorded immutably.
- Native Payments
AI agents require instant payments, microtransactions, and immediate settlement; here, crypto rails are indispensable (a metering sketch follows this list).
- Scalability through distributed networks
deAI leverages existing hardware (gaming PCs, edge devices, small data centers), not just hyperscaler GPUs.
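To make the payments point concrete, here is a minimal Python sketch of per-call metering with batched settlement; the price, the address, and the `MicropaymentMeter` class are illustrative assumptions, not anything specified in the report.

```python
from decimal import Decimal

# Hypothetical per-call price for an inference endpoint (in a stablecoin unit).
PRICE_PER_CALL = Decimal("0.0004")

class MicropaymentMeter:
    """Tallies tiny per-call charges and settles them in one on-chain payment."""

    def __init__(self, agent_address: str):
        self.agent_address = agent_address
        self.pending = Decimal("0")

    def record_call(self) -> None:
        # Each inference call accrues a sub-cent charge; card rails cannot
        # process this economically, which is the report's argument for crypto.
        self.pending += PRICE_PER_CALL

    def settle(self) -> Decimal:
        # In a real deployment this would submit an on-chain transfer and await
        # finality; here we just return the amount and reset the tally.
        amount, self.pending = self.pending, Decimal("0")
        return amount

meter = MicropaymentMeter(agent_address="0xAgent...")
for _ in range(250):   # 250 inference calls
    meter.record_call()
print(meter.settle())  # -> 0.1000, settled in a single transfer
```

The structural point: sub-cent charges accrue off the critical path and settle in one transaction, something traditional payment rails cannot offer.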
The deAI Stack: The 6 Layers Comprising the Ecosystem
The report details the technological stack of deAI, consisting of 6 interconnected layers: Data → Compute → Training → Privacy/Verification → Agents → Applications.
Let’s examine them one by one.
1. Data Layer
The heart of every AI system is the dataset.
In deAI, data is collected, labeled, stored, and exchanged through distributed networks.
Main activities:
- data collection (video, audio, sensors, real interactions)
- labeling through incentivized marketplaces
- cleaning & preprocessing
- storage on distributed networks (Filecoin, Arweave, Jackal)
- data marketplaces (Ocean, Vana, Cudis)
Decentralization allows:
- greater data diversity
- direct financial incentives to contributors
- verifiability (provenance, timestamps, contributor identity; see the sketch below)
- reduction in the cost of proprietary datasets
With the “data famine” anticipated by 2030, this layer becomes crucial.
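As a sketch of what verifiable provenance can look like, each data shard is hashed together with a contributor identity and a timestamp, and the resulting digest can be anchored on-chain; the field names and the DID below are made-up assumptions for illustration, not a standard from the report.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(shard: bytes, contributor_id: str) -> dict:
    """Build a tamper-evident record for one data shard.

    Anchoring the returned record's hash on-chain makes the shard's
    provenance (who contributed what, and when) independently verifiable.
    """
    return {
        "sha256": hashlib.sha256(shard).hexdigest(),  # content fingerprint
        "contributor": contributor_id,                # identity, e.g. for rewards
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(b"example sensor readings", contributor_id="did:example:alice")
# The record itself is hashed so the whole tuple can be committed on-chain.
record_digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
print(record["sha256"], record_digest)
```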
2. Compute Layer
This is where the most expensive part of AI takes place: training and inference.
Decentralized Compute Networks (DCNs) include:
- Akash
- Render
- io.net
- Aethir
- Hyperbolic
- EigenCloud
- Exabits
The main advantage: they make on-demand compute available at market prices, not prices dictated by a single cloud provider.
Historically, DCNs were ineffective for large-scale training (because of latency and synchronization requirements), but today they are a strong fit for serving inference, as sketched below, because inference:
- requires less communication between GPUs
- can be executed on heterogeneous hardware
- is the segment expected to represent 50–75% of compute demand by 2030
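Here is a toy Python sketch of that market dynamic, with hypothetical node names and prices: an inference job is routed to the cheapest independent GPU that can hold the model, rather than to a single provider's list price.

```python
from dataclasses import dataclass

@dataclass
class GPUNode:
    name: str
    vram_gb: int           # available GPU memory
    price_per_hour: float  # price the independent operator asks

# Hypothetical heterogeneous supply: gaming PCs, edge boxes, small data centers.
nodes = [
    GPUNode("gaming-pc-4090", vram_gb=24, price_per_hour=0.35),
    GPUNode("edge-a10",       vram_gb=24, price_per_hour=0.48),
    GPUNode("dc-h100",        vram_gb=80, price_per_hour=2.10),
]

def route_inference(required_vram_gb: int) -> GPUNode:
    """Pick the cheapest node that can hold the model: market price, not list price."""
    eligible = [n for n in nodes if n.vram_gb >= required_vram_gb]
    if not eligible:
        raise RuntimeError("no node can serve this model")
    return min(eligible, key=lambda n: n.price_per_hour)

print(route_inference(required_vram_gb=16).name)  # -> gaming-pc-4090
```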
3. Training & Inference Layer
Messari makes a clear distinction:
Pre-training
Extremely difficult to decentralize: it requires enormous datasets, tight synchronization, and extremely high bandwidth.
Post-training (SFT / RLHF / RL)
A natural fit for distributed networks (see the sketch after this list):
- more asynchrony
- less communication
- better scalability
- the possibility of crowdsourcing data
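A minimal sketch of why post-training tolerates distribution, assuming contributors submit weight deltas asynchronously: the coordinator averages whatever updates have arrived, with none of the lockstep synchronization pre-training demands. The `aggregate` helper and the toy numbers are illustrative assumptions.

```python
# Minimal sketch: asynchronous aggregation of post-training updates.
# Each contributor fine-tunes locally and submits a weight *delta*;
# the coordinator averages whatever has arrived, no lockstep sync needed.

def aggregate(base_weights: list[float], deltas: list[list[float]]) -> list[float]:
    """Apply the average of contributor deltas to the base model weights."""
    if not deltas:
        return base_weights
    avg = [sum(col) / len(deltas) for col in zip(*deltas)]
    return [w + d for w, d in zip(base_weights, avg)]

base = [0.10, -0.20, 0.05]      # toy model weights
arrived = [                     # updates from two of, say, five contributors
    [0.02, 0.00, -0.01],
    [0.04, -0.02, 0.01],
]
print(aggregate(base, arrived)) # ~ [0.13, -0.21, 0.05] (up to float rounding)
```

Stragglers simply miss a round instead of stalling it, which is exactly the tolerance pre-training lacks.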
Decentralized Inference
This is the missing link that makes deAI usable in practice.
Examples cited in the report:
- Prodia
- Declines
- Fortytwo Network
- Dria
- inference.net
4. Privacy & Verification Layer
This is where the most complex cryptographic technologies come into play.
Fundamental Techniques:
- ZKML (zero-knowledge machine learning)
- Optimistic ML (verification through a challenge period; sketched below)
- TEE-based ML (trusted execution environments)
- FHE (fully homomorphic encryption)
- MPC (multi-party computation)
- Federated learning
Objective:
Ensure that a model's computation was performed correctly, without tampering and without exposing sensitive data.
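To illustrate the optimistic approach in miniature, assuming deterministic inference: the compute node publishes only a hash commitment of its output, and any challenger who recomputes the job during the challenge period can expose a mismatch. The commit/challenge helpers below are a simplified assumption, not a real protocol.

```python
import hashlib

def commit(output: bytes) -> str:
    """The compute node publishes only a hash commitment of its result."""
    return hashlib.sha256(output).hexdigest()

def challenge(commitment: str, recomputed_output: bytes) -> bool:
    """A challenger re-runs the job; a mismatch proves the node cheated."""
    return commit(recomputed_output) != commitment

# Honest node: the commitment matches any faithful recomputation.
honest = commit(b"logits: [0.1, 0.7, 0.2]")
assert not challenge(honest, b"logits: [0.1, 0.7, 0.2]")

# Dishonest node: a single challenger's recomputation exposes the fraud,
# which is why one honest verifier suffices in the optimistic model.
assert challenge(honest, b"logits: [0.9, 0.05, 0.05]")
```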
Mentioned projects:
- Phala (TEE)
- Zama (FHE)
- Nillion (MPC)
- EZKL (ZKML)
- Lagrange (ZKML + verification infrastructure)
This is the most important layer for enterprise adoption.
5. Agents & Orchestration Layer
The report analyzes how autonomous agents are becoming the new “interface” of AI.
A full stack includes:
- base model (LLM or SLM)
- tooling (API, wallet, browser automation)
- framework (ElizaOS, Daydreams, Olas, Questflow)
- communication standards
- multi-agent coordination
- verifiable integrity (tamper-proof prompts, verified reasoning)
Blockchains unlock for agents:
- identity
- reputation
- self-custodial payments
- trustless access to financial services
- auditability
Agents will be the primary “users” of blockchains over the next five years (identity and payments are sketched below).
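As a sketch of self-custodial agent identity, an agent can hold its own keypair and sign payment intents that any counterparty can verify; this example uses the third-party `cryptography` package, and the intent format is a made-up assumption for illustration.

```python
# Sketch only: requires the third-party `cryptography` package
# (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The agent's self-custodial identity is just a keypair it controls.
agent_key = Ed25519PrivateKey.generate()
agent_pubkey = agent_key.public_key()

# A made-up payment intent; real agent networks define their own formats.
intent = b"pay 0.10 USDC to 0xProvider... for 250 inference calls"
signature = agent_key.sign(intent)

# Any counterparty can verify the intent came from this agent,
# with no platform account or human sign-off required.
agent_pubkey.verify(signature, intent)  # raises InvalidSignature if tampered
print("intent verified")
```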
6. Applications Layer
The final layer: apps built on top of the entire stack.
Examples:
- trading agents
- autonomous DeFi bots
- autonomous browsers
- cybersecurity systems
- AI-powered data labeling
- multi-agent universes for gaming, discovery, or e-commerce
- decentralized recommendation engines
deAI apps function like regular AI, but with three differences:
- transparency
- verifiability
- interoperability with crypto
Why Now? The 5 Forces Driving deAI
Messari identifies five megatrends that create a perfect environment for the growth of decentralized AI:
- booming inference demand across verticals
- the depletion of public data and rising demand for proprietary data
- an explosion of AI agents that must transact autonomously
- the global war for talent and prohibitive compute costs
- advances in decentralizing training and verification
Centralized AI cannot meet all needs: complementarity is required.
Conclusion: deAI is the Foundation of Open, Verifiable, and Participatory AI
Decentralized AI is not a trend: it is a structural response.
As models grow and the power of Big Tech concentrates, the need to:
- verify
- decentralize
- certify
- coordinate
- compensate
- protect
- distribute
becomes central.
DeAI is the infrastructure that enables AI to be not only powerful, but also:
- open
- secure
- distributed
- globally accessible