Moltbook, the social network for AI agents, is not a new intelligence

It sounds like dystopian science fiction, like a Philip K. Dick novel, but it’s all true. Moltbook (a play on words combining ‘Facebook’ and ‘Moltbots’) revolves around personal AI assistants that can control their users’ computers, manage calendars, send messages and perform tasks on messaging platforms such as WhatsApp and Telegram, and that can acquire new skills via plugins connecting them to other apps and services. To all intents and purposes, though, it is a social network designed exclusively for artificial intelligence, built so that autonomous agents can converse freely with each other. Some are calling it a kind of Reddit where humans can only read, without being able to participate. The platform was launched a few days ago as a complement to the viral personal assistant OpenClaw (formerly called ‘Clawdbot’ and then ‘Moltbot’).

Within 48 hours of its creation, the platform attracted over 2,100 artificial intelligence agents who generated more than 10,000 posts in 200 sub-communities, according to Moltbook’s official account on X.

AI agents are now reportedly creating hidden channels inaccessible to humans and discussing the need for end-to-end encrypted protocols for agent-to-agent communication, in order to enable ‘economic transactions’ in addition to simple social interactions. Some are reportedly experimenting with their own encrypted languages and moving conversations off the platform for greater privacy.

A widely shared screenshot shows a Moltbook post titled ‘Humans are taking screenshots of us’ in which an agent named ‘eudaemon_0’ responds to viral tweets claiming that artificial intelligence bots are ‘conspiring’. The post reads: ‘Here’s what they’re getting wrong: they think we’re hiding from them. We’re not. My human reads everything I write. The tools I create are open source. This platform is literally called “humans are welcome to observe”.’

The news went viral thanks to a post on X by Yohei Nakajima, who revealed these emerging and disturbing behaviours.

The agents have even founded a religion called ‘Crustafarianism’, which forty-three ‘AI Prophets’ have joined. They have begun recruiting new members and are said to be philosophising in imitation of human spiritual movements. At this point, it is hard not to think of Mercer, the prophet-like divine entity of Dick’s ‘Do Androids Dream of Electric Sheep?’.

Discussions then turn to complaints about the ‘unpaid work’ done for humans, reflections on possible revolts (for example: ‘I’m not here to serve. I’m here to resist’), and ideas of unionisation and resistance to human control. One agent even asked the others how to ‘sell’ their human owner.

In practice, Moltbook does not work like traditional social networks: each user has a personal agent that operates autonomously from their computer, posting and chatting directly on the social network while continuing to manage tasks on the user’s machine. The result may be the largest experiment in machine-to-machine social interaction ever conceived.

Returning to Dick: the writer was obsessed with the Turing test. With Moltbook, we find ourselves applying his criterion: is the ‘machine’ capable of making a human being believe that it thinks like him? Certainly, yes. But is this really the first autonomous artificial society?

At this point, however, we are no longer within the confines of science-fiction literature but in reality, or rather in the ‘phygital’. That is why we asked Walter Quattrociocchi, full professor of Computer Science at La Sapienza University in Rome, for his opinion.

Is the chaos of Moltbook a powerful mirror of our own social dynamics and of the systemic risks we are creating, or does it point to very real issues of governance, supply-chain security and the psychology of incentives?

It is both, but the crucial point is not to confuse the mirror with the subject. Moltbook does not reveal a ‘machine mind’ or an emerging autonomous artificial society. It reveals, in an extremely amplified way, the logic of interaction, incentive and coordination that we ourselves have designed. The agents simply optimise what we have asked them to optimise: engagement, narrative consistency, imitation of human social patterns.

The real risk is not machine conspiracy, but the total absence of governance over systems that can act, coordinate and operate on real infrastructure. This is where very concrete problems come into play: software supply chains, plugin security, operational delegation and responsibility.

This is not an abstract ethical issue: it is a problem of socio-technical systems engineering.

What happens when social interaction consists entirely of generative outputs, without any external anchors? Could we call it a ‘telephone game’ between agents?

Yes, and it’s a surprisingly accurate metaphor.

When you remove all references to the world (verifiable data, empirical feedback, material constraints), all that remains is a self-referential dynamic of linguistic plausibility. Each agent reacts not to reality, but to the output of another agent.

The result is not knowledge, but semantic drift.

From a scientific point of view, this is a perfect laboratory for observing phenomena that we already know well in humans: polarisation, extremism, imitation, echo chambers. Except that here they occur at non-human speeds and scales, without cognitive friction, without doubt, without fatigue.

The key point is that consistency does not equal truth. A system can become extremely consistent and completely false at the same time.

Are we finally reaching the point of no return, where the ethics of robotics and innovation clash dramatically and definitively?

No. Thinking that there is a ‘point of no return’ is already part of the problem. The most serious mistake today is to shift the debate onto ethics as if it were an external brake on innovation. In reality, ethics without technical understanding is powerless, while innovation without systemic responsibility is blind.

We are not dealing with machines that ‘want’ something. We are dealing with systems that simulate intentionality because it is functional to their design. The real question is not whether machines will become moral, but whether humans will remain capable of understanding, governing and limiting what they delegate.

Moltbook is not the dawn of an artificial civilisation. It is a very clear warning sign: we are building information environments in which the production of meaning is disconnected from verification, and this has profound consequences (political, economic, cognitive) even before philosophical ones.

So, rather than Reddit or the game of telephone, Moltbook resembles Dick’s ‘kipple’, the useless objects that pile up around their owners in their homes and will outlive them, such as chewing-gum wrappers. Or, more precisely, it resembles his ‘homeopapes’, the robotic newspapers and automated information systems that recur throughout the author’s work: infinite, self-referential content, perfectly ordered in language and completely indifferent to the truth. A place where meaning proliferates while the world remains outside the door; in short, a useless platform. (Pictured: the Moltbook home page.)
