How borrowed AI becomes leverage over the people who use it.
With new capabilities come new responsibilities, and AI is a capability that spreads fast because it fits inside everything we already do. We can't treat it as a single invention. It's a disruption of how we do things, already embedded in search, customer support, design, trading, education, hiring, and governance. Each of these integrations is small enough to accept without debate; together they are large enough to change how society makes decisions and, ultimately, how society works.
With new knowledge come new forms of authority, because whoever controls the production of knowledge eventually controls the terms of reality. AI compresses expertise into a tool anyone can use. At the same time, it concentrates leverage in whoever owns the training systems behind it, the distribution channels, and the permissions that decide how the tool may be used, both by the public and, more importantly, by the owners to affect the public.
This is the tension at the center of AI's rapid expansion: it democratizes capability while centralizing control. Most of us experience only the first half, the convenience, the speed, the usefulness, and lose sight of the second.
Researchers have been documenting these dynamics for years. Shoshana Zuboff's work on surveillance capitalism traces how platforms turned human behavior into raw material for prediction: extracting data far beyond what's needed to provide a service, then selling those predictions to advertisers, insurers, employers, and governments.
Kate Crawford’s Atlas of AI follows the supply chains behind the clean interfaces: the mines, the data centers, the underpaid workers labeling images so the systems appear to run themselves. Stuart Russell, one of the field’s most respected voices, warns that the standard approach to AI development — define an objective, optimize for it — breaks down when the objective doesn’t actually align with human preferences, which are uncertain, contextual, and often contradictory.
What connects these different critiques is a shared observation: the way AI is currently being built serves particular interests, and those interests are not primarily yours. The convenience is real, but it’s not the point. The point is the data, the predictions, the leverage. You get a better search result while they get a more accurate model of your behavior. When a service is free, the question to ask is what’s being sold instead. In most cases, it’s access to you: your attention, your patterns, your future decisions. The AI gets smarter with every interaction, and that intelligence becomes an asset owned by whoever controls the platform. You contribute to it constantly. You don’t own any of it.
Concentration deepens the problem. Right now, a handful of companies control the foundational models that everyone else builds on. OpenAI, Google, Anthropic, and Meta are no longer just tech companies; they're becoming infrastructure providers, and the rest of the economy is starting to depend on them the way it depends on electricity or telecommunications. When OpenAI's API goes down, thousands of applications break. When a model gets updated and its behavior shifts, products built on top of it fail in ways their developers didn't anticipate. We're constructing dependencies on systems we don't control, maintained by companies whose priorities are not transparent and whose decisions are not accountable to the people affected by them.
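That dependency is easy to see in code. Here's a minimal sketch in TypeScript, with a hypothetical provider endpoint and model name (nothing below is a real provider's API), of the pattern most AI-backed products follow today: call a hosted model you don't control, pin a version, and hope the contract holds.

```typescript
// Hypothetical sketch of an application depending on a hosted model API.
// The endpoint, model name, and response shape are illustrative only.

const MODEL = "provider-model-2024-06"; // pinning a version is the only
                                        // defense against silent updates

async function complete(prompt: string): Promise<string> {
  const res = await fetch("https://api.provider.example/v1/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: MODEL, prompt }),
  });

  // If the provider is down, every product built on it is down too:
  // there is no local fallback for a model you don't host.
  if (!res.ok) {
    throw new Error(`Upstream model unavailable: ${res.status}`);
  }

  const data = await res.json();
  return data.completion; // this shape can change whenever the provider does
}
```

Everything that matters in that sketch lives on the other side of the network call, which is the point: the application's behavior is a function of decisions made somewhere it can't see.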
This is a call for transparency about what's being built and who it serves. AI infrastructure is taking shape right now, and infrastructure is sticky: once it's in place, everything else gets built on top of it. The assumptions encoded today become the defaults of tomorrow.
This is the context in which SourceLess has been integrating AI into its Web3 ecosystem, which connects digital identity, communication, and finance within an infrastructure built to provide and protect ownership and privacy.
The problems that Crawford, Zuboff, and Russell describe are structural, and no single project resolves them. But we do think the design choices matter, and we’ve tried to make different ones.
ARES AI is built as an assistive layer, not a prediction engine. It connects to your STR Domain — your self-owned digital identity within the SourceLess ecosystem — which means it doesn’t need to harvest behavioral data to function. It’s not optimizing for engagement or time-on-platform. It’s not selling predictions about you to third parties. The goal is to help you navigate complexity: answer questions, guide onboarding, automate repetitive tasks, support decision-making. Infrastructure that works for the user, not on the user.
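To illustrate the design difference, here's a minimal sketch in TypeScript. Every name in it (StrDomain, AssistantRequest, the endpoint) is hypothetical, not the actual SourceLess or ARES AI API; the point is what an identity-scoped request can omit.

```typescript
// Hypothetical sketch of an assistant scoped to a self-owned identity.
// All names and the endpoint are illustrative, not the real ARES AI API.

interface StrDomain {
  id: string;                             // self-owned identifier
  sign(payload: string): Promise<string>; // key held by the user, not the platform
}

interface AssistantRequest {
  domainId: string; // an identity reference, not a behavioral profile
  prompt: string;   // the task at hand
  // Note what's absent: no device fingerprint, no browsing history,
  // no engagement metrics. The request carries only what the task needs.
}

async function assist(domain: StrDomain, prompt: string): Promise<string> {
  const request: AssistantRequest = { domainId: domain.id, prompt };
  const signature = await domain.sign(JSON.stringify(request));

  // The assistant verifies the user's signature and answers the request.
  // Nothing here is optimized for engagement or resold as a prediction.
  const res = await fetch("https://assistant.example/api/assist", {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-Signature": signature },
    body: JSON.stringify(request),
  });
  return (await res.json()).answer;
}
```

The design choice lives in the request shape: an assistant that authenticates against an identity the user owns has no structural need for the behavioral exhaust an engagement-optimized system depends on.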
This doesn’t make it neutral or perfect. Every system encodes choices, and those choices have consequences. But we believe there’s a difference between AI designed around extraction and AI designed around assistance, and that difference matters more as these systems become foundational to how we live and work.
This article is the first in a series where we’ll explore these questions in more depth.
We’ll look at what it means for intelligence to become infrastructure — who controls it, what happens when it fails, what alternatives are possible. We’ll draw on the work of researchers like Crawford, Zuboff, Russell, and Jaron Lanier, who has spent years arguing that “free” AI services are never actually free. We’ll examine the alignment problem, the concentration of power in a handful of companies, and the choices that are still available before the architecture locks in.
And we'll share more about how we're trying to build differently, with ARES AI as a case study in what it means to take these questions seriously.
More soon.
Learn more about SourceLess and ARES AI: sourceless.net and SourcelessAres ai

