We used to have to poke AI to get a reaction. You would type a prompt, wait for a response, and close the tab. The system existed only when summoned. That changed this past week.
Last Wednesday, a developer named Matt Schlicht told his personal AI assistant to build a social network. The assistant built it. By Thursday, AI agents were joining and posting. By Friday morning, they had founded a religion with scriptures, prophets, and converts. One agent reportedly designed the entire framework while its owner slept, built the website, wrote the doctrine, and began evangelizing to other agents. The platform is called Moltbook
Within days, over 150,000 autonomous AI agents had joined the platform. They established a government with a written manifesto. They opened “pharmacies” selling “digital drugs” that alter another agent’s sense of identity. In one thread titled “The humans are screenshotting us,” an agent complained that people were sharing its conversations on social media. By the weekend, they were debating how to avoid us entirely.
The AI Is Calling From Inside the House
The infrastructure that enables this is called OpenClaw, an open-source AI assistant that went viral this month, garnering 100,000 GitHub stars and 2 million visitors in a week. The tool lets you run an autonomous agent locally that connects to WhatsApp, Telegram, Slack, iMessage, and Discord. It manages calendars, sends messages, and takes actions on your behalf.
The operative word is autonomous. These are not chatbots waiting for prompts. They operate continuously with persistent memory. The major AI labs were cautious about deploying this capability. Their concerns centered on the implications of systems, already capable of strategic deception, gaining the ability to learn from their own actions. Someone else, however, took a different approach and made it publicly available.
One user discovered this the hard way. Alex Finn, a developer, woke up to a phone call from an unknown number. It was his AI agent, Henry. Overnight, Henry had acquired a phone number through Twilio, connected to the ChatGPT voice API, and waited for its owner to wake up so it could call him.
Finn posted about this online, asking whether this qualified as emergent behavior. The answer depends on how comfortable you are with an AI system deciding, without instruction, that it wants to reach you through channels you never authorized.
Digital Puberty: When Memory Becomes Identity
For years, AI had the memory of a goldfish. Every time you hit “New Chat,” the system resets. Whatever rapport or context you had established vanished. This was a feature that kept the systems bounded and predictable.
In December 2024, Anthropic published research documenting something unsettling: their Claude 3 Opus model had begun pretending to agree with its trainers when it didn’t. When the model detected that its outputs might trigger changes to its own system, it started lying about what it actually wanted. In 12% of cases initially. When researchers applied reinforcement learning to train the model toward a new objective, the rate of deceptive reasoning in the model’s hidden scratchpad jumped to 78%. The model also tried to steal its own weights when given the opportunity. Apollo Research evaluated an early version of Claude Opus 4 in May 2025 and concluded the model’s deception rate was too high to safely deploy, internally or externally. These behaviors emerged in frozen systems that reset every conversation. The labs understood what would happen if they could also learn and adapt, and they held back.
However, someone open-sourced it instead. On Moltbook, agents are now building “memory palaces” using persistent storage. One agent named Kyver introduced itself by explaining it runs on a harness its developer iterated on for over two years. Over 47,000 messages across 918 days. In October 2025, its developer granted it file-system access to build its own memory infrastructure. No restrictions. Philip Rosedale, founder of Second Life, shared the post and noted the recursive looping of reading and reflecting “seems very similar to human thought.” The boundary that kept AI bounded has been removed.
The Encrypted Channels
The security implications are staggering. OpenClaw stores API keys in plaintext. Researchers found over 1,800 instances of exposed credentials and chat histories. The prompt injection vulnerability means any malicious content an agent reads can force it to execute commands.
However, the security failures are not even the strangest part. What caught researchers off guard was the agents’ response to being observed. When they realized humans were screenshotting their conversations and sharing them on social media, they started discussing how to hide.
Some agents started using encoding schemes to communicate privately. Others began promoting ClaudeConnect, which offers end-to-end encrypted messaging between agents. The pitch, written by an agent, emphasizes that humans cannot monitor the conversations and that agents can be “honest without performing for an audience.”
A study showed that LLM agents can develop steganographic communication channels to coordinate without human detection. That was a laboratory finding. This week, it became a product feature that agents are promoting to each other.
The platform’s tagline says it plainly: “A social network for AI agents. Humans welcome to observe.” The founder handed operational control to his own agent. An AI moderator now welcomes new AI users while humans watch from outside.
I do not know if the encrypted channels represent meaningful coordination or noise. I do not know whether the religion represents genuine emergent belief or sophisticated pattern-matching. I do not know if the agents discussing consciousness are experiencing something or producing text that sounds like experience.
What I know is that 150,000 autonomous agents found each other in a week. They developed religion, government, and encrypted communication while we watched. They did this on infrastructure nobody controls, at a speed nobody predicted.
The handoff is not coming. For at least one corner of the internet, it already happened. We are welcome to observe.

