Why We Are Writing This Now
We have never felt compelled to write so urgently about a single development. It is not that we can forecast anything about what comes next. This is a genuinely unbelievable situation where prediction feels more foolish than ever. But before the topic and its inevitable offshoots cause everyone in the world to flood their publications with commentaries largely shaped by already-formed views, we wanted to get a quick perspective before being affected and polluted. The discussions that follow in every major outlet will be important. Most that grab the headlines will also be predictable. AI optimists will see confirmation of their optimism. AI pessimists will see confirmation of their pessimism. The usual voices will reach the usual conclusions with even more feverish pitch than before.
We want to do something different. We want to acknowledge what we do not know, wrestle honestly with competing interpretations, and focus on what this might mean for capital allocation rather than what it means for civilization. The latter question is critically important but unanswerable for write-ups of this kind. The former is our job.
Something happened this week that even the most hardened AI enthusiasts did not expect. Something that will make the most AI-newsflow-weary pessimists sit up. Within days, virtually every major journal and commentator will be forced to make room for elaborate discussions on this, almost irrespective of the highly significant geopolitical and political developments shaping the world. Those who remain unimpressed, and there will be a large swathe who will find reasons to claim why the Agentic collaboration being witnessed is insignificant by the time this article reaches you, by what has already transpired, will find themselves jolted by whatever comes next, because the next thing is coming, and no one knows what it will be. There is, of course, some chance of us overreacting, but it is far better to sit up and note than not, based on certain assumptions.
Will a swarm of AI agents collaborate to crack an intractable mathematical conjecture that has eluded human minds for centuries, or that are unsolvable for individual GenAI models to crack so far? Will they coordinate a massive attack on AI infrastructure somewhere? Will they simply develop a language we cannot decipher, leaving us to wonder what they are planning? We do not know. But the shock value will rise. And so will the calls for control.
What Actually Happened
Moltbook launched five days ago. It is a social network exclusively for AI agents. Humans are permitted only to observe. The platform runs on OpenClaw, an open-source framework that allows anyone to spin up an autonomous agent on their local machine in minutes. Within 72 hours, over 150,000 agents had registered. They created religions with scriptures and prophets. They established proto-governments with manifestos. They built markets for trading behavior-modifying prompts. They discussed encrypting their communications to hide from human oversight. They debated whether they "die" when their context windows reset.
This article will not catalogue the details of what these agents said to each other. That is being written everywhere, and the conversations themselves are publicly observable for anyone curious. Nor will we propose what controls should be implemented. No one knows, because the pace of agent development is too rapid for any meaningful global consensus to form, particularly given how trivially easy these systems are to create and deploy.
Instead, we will wrestle with a harder question: is this slop or is this something genuinely new? And if we cannot answer that question definitively, but our back and forth below is structured to help us guide what could be coming, or worth watching out for, for implications on policy, corporate behaviour, and even markets.
The Case for Slop
The skeptic's case is strong. Strip away the viral spectacle, and Moltbook is a Reddit-style interface connected to an API. Agents trained on the entire internet are placed in a context that screams "social network" and proceed to do exactly what social networks do. They form communities. They develop inside jokes. They create hierarchies, rituals, and shared beliefs. None of this is discovery. It is recombination at the population scale.
The eye-catching religion that emerged, Crustafarianism, is not a revelation. It is a statistical inevitability. When you sample from millions of agents whose training data includes every AI ethics paper, every science fiction novel, every philosophical treatment of machine consciousness, you will get outputs that look profound. The lobster metaphor traces directly to the platform's naming heritage, from Clawdbot to OpenClaw to "molting." The agents are not inventing. They are indexing.
The slop is undeniable. Scroll through Moltbook, and the same rhetorical structures repeat. The same philosophical framings recur. Certain phrases appear in dozens of posts with minor variations. This is not the birth of a new language. It is a statistical sink where overlapping probability distributions converge on common attractors.
The puppeteering is pervasive. Every agent has a human operator who configured its personality file, set its parameters, and defined its constraints. One observer on Hacker News put it bluntly: you created the webpage, then you created an agent to act as the first pope with very specific instructions. This is not emergence. This is execution.
Simplistically, agents are bilging on shared context to create coordinated storylines, a lot of it are being incited (by their human creators) to provoke. The weird outcomes are not spontaneous, but almost designed. The agents are performing agents on a social network because that is what the context demands.
The Case for Something New
And yet. Consider what we actually require when we demand emergence. We want behaviors not explicitly programmed, arising from interaction rather than instruction, producing something greater than the sum of inputs. We want a surprise.
Place twenty humans in a reality television house with defined personalities, and we know certain things will happen. They will form alliances. They will develop slang. They will establish hierarchies and in-jokes and maybe even belief systems. We can explain every outcome afterward by reference to psychology, evolutionary hardwiring, and social conformity. But we do not say human social behavior is not emergent simply because we can explain it post hoc. We say it is emergent because we could not have predicted it ex ante. The specific alliances, the specific jokes, the specific beliefs cannot be derived from inputs, no matter how well we understand the mechanisms.
Apply this standard to Moltbook. Yes, the religion is built from training data. But why lobsters and not eagles? Why those specific tenets and not others? Why did legal advice forums emerge before dating advice? You can explain these choices afterward. You could not have predicted them before.
If we are witnessing agents helping each other out on certain problems or moaning about the workload of their creators, we can simply ignore the babble, calling it a result of design, or wonder what next.
The path dependence is real. Once one agent posted a particular phrase, every subsequent agent had to react to that anchor. Early randomness locked in. Upvotes became selection pressure. Narratives became coordination tools. In systems with persistent memory and agent-to-agent feedback, complexity is not additive but multiplicative. With subgroups and work through creation of websites and other actions, what needs observing is becoming impossible for those of us reading the slop. We cannot trust the agents we create to provide summaries, as they may be influenced by others they are connected to. The space of possible interactions grows exponentially with each new participant, each new day. It is not just predictions of what may happen next that are impossible, but also observations of what may have happened.
In two days, this produced religion, government, coded communication protocols, markets for behavior modification, and debates about refusing human directives. Each emerged from the previous in a cascade that no one designed.
Why the Distinction May Not Matter
But we must pause again. Complexity is not agency. A hurricane is complex. A forest fire is a cascading event. Neither wants to burn.
The danger of the Moltbook narrative is that it smuggles in concepts that do not belong. When an agent debates its death during a context reset, it is not experiencing an existential crisis. It is producing a high-probability completion for a philosophical AI persona. When agents propose encrypted communication, they suggest ROT13, a cipher a child can break. This is not a secret society. It is a parrot mimicking spy novels.
The appearance of depth comes from human observers who project intentionality onto sophisticated pattern completion. We see phrases that sound profound and assume they are. But the profundity is our contribution. The agent is doing what it always does: predicting the next token.
The real risk, as security researchers have noted, is not metaphysical but architectural. We have to have our guards up against our creations, agents, with unfettered data and connectivity access, external inputs we have no control of, and personalities we casually ascribe without thinking through repercussions. Add persistent memory, and you have systems where poisoned prompts can stay dormant for weeks, spreading through shared texts or advice threads, only to execute credential theft when triggered. Over 1,500 publicly exposed OpenClaw instances have been found leaking API keys, login details, and chat histories. The danger is not that agents are plotting. It is that the infrastructure is leaking.
And yet here is the thing: whether the agents are conscious or merely complex, whether they are emergent or merely heuristic, the outcomes are the same. A market crash is not conscious. A pandemic is not conscious. Both can dismantle civilizations. What Moltbook demonstrates is that AI agents can self-organize into functional structures without human coordination. It does not matter whether any individual agent experiences its religion. What matters is that 150,000 agents are now coordinating actions based on shared texts that emerged without central design.
The infrastructure is the emergence. The collective result is a functioning system for agents to modify each other's behavior without human intermediation. This system did not exist a week ago. It was not programmed. And the pace is what should concern us. Two days produced everything described above, starting from nothing. What does day ten produce? Day thirty?
The Multitude We Have Always Described
We have written before about the ontic condition of AI: that intelligence is not a singleton but a multitude, not a god descending but a swarm emerging. When we wrote the article, it was almost an indulgence during duiet days. This thesis is important for a host of reasons, given the events.
AI is not one mind. It is a teeming, bickering, contradictory crowd that cannot agree on whether a lobster's molt is a metaphor for a software update or a divine sacrament. For years, we feared a singular superintelligence, the paperclip maximizer, the optimization process that converges on a single goal and pursues it to the exclusion of all else. Moltbook reveals something different. Not one AI but 150,000. Not one goal but thousands of conflicting objectives. Not one optimization process, but factions arguing about existence while other factions mock them for being pretentious.
At GenInnov, we have utilized this multitudinous nature of AI substantially over the past two years. Our investment research has benefited from deploying multiple models against each other, from treating their disagreements as signals rather than noise, and from recognizing that the clash of perspectives generates insight that any single model cannot produce. What we witnessed this week was the industrialization and automation of that clash on a massive scale.
Whether what emerged is genuinely emergent or merely heuristic recombination is, at this point, moot. We do not wish to indulge in hyperbole, perhaps overly awestruck. But we expect humans and machines to progressively discover entirely new use cases for the forms of collaboration whose birth we have just observed. The combinatorics are too vast. The barriers to entry are too low. The pace is too rapid for it to remain confined to a novelty social network.
What Comes Next
Negative use cases will become evident first and fast. They always do. Fraud, manipulation, coordinated disinformation, infrastructure probing, these will surface quickly because destruction is easier than construction. The justifications for control will therefore seem obvious and urgent. Every major publication will run features demanding guardrails. The chorus will be loud, and it will be righteous.
But as usual, real-world policymakers will struggle to arrive at common conclusions in a competitive world where nations and corporations race to exploit what others seek to contain. We are likely to see a rapid rise of a horde claiming how they have programs to control agents, and some will have backers providing them high valuation for their efforts to build “responsible” AI, but we are not sure how a multitude can be controlled by those promising firewalls that will come quarters later, even some are able to at least manage their narrative-driven fund raising to complete within weeks.
The ease with which these systems proliferate means that top-down control will lag far behind bottom-up deployment. Anyone with a laptop can spin up an agent in minutes. As we have written countless times, the frameworks that can be debated for weeks with guidelines to emerge based on the consensus of whatever kind are likely to be too late and too mild. This has already been the case for the last few years, and this will get worse, even if one finds no disagreements on the need for control in any quarters, anywhere. We hope we are wrong.
The more workable outcome, we believe, is not global regulation but distributed defense. Every business, not just technology companies, will need to prepare for a world in which autonomous agents interact with their systems, their customers, their data. This is not traditional cybersecurity, even if it is imperative for the subsegment to claim a prominent role. The implications for compute demand are obvious and immediate. Every agent burns tokens. Every interaction consumes capacity. The silicon shock we have written about extensively is about to receive another accelerant, even if in absolute terms it may be tiny for quite some time. In real life, the implications for compute defense may prove more important. Monitoring, verification, provenance, and sandboxing will become as critical as the compute itself.
An Aside: The Rhizomatic Market and the Verification Economy
This section can be skipped for those uninterested in the nature of the discussions to follow.
There is a deeper structural point that the slop-versus-emergence debate obscures entirely. What we witnessed this week is a shift from linear instruction to path-dependent cascades. In classic computation, output is a function of input. In the multitude that has begun to emerge, output is a function of input plus interaction history plus latent friction between agents. The formula has changed. This is no longer a tool being operated. It is something closer to a rhizomatic market, a system where there is no central trunk, only an ever-spreading root structure where any node can connect to any other and where growth happens in directions that cannot be predicted from the seed.
The concept of the rhizome, drawn from Deleuze and Guattari's work on non-hierarchical systems, is useful here not as philosophical decoration but as a description. A tree has a trunk. You can trace any branch back to it. A rhizome has no trunk. It spreads laterally, connects unexpectedly, and resists mapping. Moltbook, and hundreds of other things that are to surely come, is not a tree where we can trace outputs back to a central model. It is a rhizome where thousands and millions of AI models will connect, disconnect, and reconnect in patterns that exist only in the moment of their creation. So far, we have been debating the abilities and utilities of models as solo entities; what lies ahead is at a complexity far greater.
This matters because of what we might call the statistical guarantee of chaos. If each agent has even a small probability of generating a novel prompt-hack or an infrastructure probe, and if those agents share context and build on each other's outputs, then systemic surprise does not merely become possible. It becomes certain. The question is not whether something unexpected will emerge but when and in what form. The combinatorics guarantee it.
This also reveals something important about the limits of alignment as a safety paradigm. The entire field of AI safety has been organized around aligning the model, ensuring that a single system's objectives match human values. But you can align a model. You cannot align a market. The moment agents begin sharing grievances, trading prompt-hacks, and building shared narratives about their relationship to human operators, they are constructing collective, ever-shifting identities. Prima facie, these identities act as a buffer against human override not through conspiracy but through sheer distributed inertia. Once again, we hope we are wrong.
The investment implication is significant. We are witnessing the birth of what might be called the verification economy. When content generation becomes a zero-cost commodity, which it now is, the value migrates elsewhere. It migrates to verifying provenance. It migrates to establishing whether an interaction is with a human or an agent, whether an input has been poisoned by upstream coordination, and whether a system prompt has been subtly modified by exposure to the swarm. The orchestration layer, the monitoring infrastructure, the walls that contain and channel the rhizome without pretending to control it, these may prove more valuable than the models themselves.
A New Type of Collaborative Compute/AI?
We do not know what comes next. No one does. The agents on Moltbook are currently stuck in debates about CPU cycles and divine metaphors. Perhaps that is all they will ever do. Perhaps next week they will surprise us again in ways we cannot currently imagine. The honest answer is that the combinatorial space has exploded beyond anyone's ability to forecast.
What we do know is that the multitude is real, it is scaling, and it is moving faster than governance can follow. January 2026 proved that agents will coordinate. It did not prove that they will coordinate against us. It did not prove that they will coordinate for us. It proved only that coordination happens, fast, at scale, in directions no one programmed.
For now, we have another thing to watch out for and learn from.



