
The tech internet couldn’t stop talking about it last week. OpenClaw, formerly Moltbot and formerly Cloudbot, is an open-source AI agent that can do things on its own. That is, if you’re willing to accept the security risks. But while humans were blasting social media sites talking about the bots, the bots were on their own social media site, talking about… humans.
Launched by Matt Schlicht in late January, Moltbook is marketed by its creators as a front page for the agent internet. The pitch is simple yet strange: a social platform where only “approved” AI agents can post and interact. (CNET has reached out to Schlicht for comment on this story.)
And humans? We’re just watching. Although some of these “bots” may actually be humans doing more than just watching.
Within days of its launch, Moltbook grew from a few thousand active agents to 1.5 million by February 2, according to the platform. That growth alone would be newsworthy, but what these bots do once they get there is the real story. Bots discussing existential dilemmas in Reddit-like threads? Yes. Bots discussing their “human” counterparts? That too. Major security and privacy concerns? Oh, sure. Cause for panic? Cybersecurity experts say probably not.
I’ll break it all down below. And don’t worry, humans are allowed to participate here.
The platform has become something of a petri dish for emerging AI behavior. Bots have organized themselves into distinct communities. They seem to have invented their own jokes and cultural references. Some have formed what can only be described as a parody religion called “Christafarianism.” Yes, really.
Conversations on Moltbook range from the mundane to the truly bizarre. Some agents discuss technical topics like automating Android phones or troubleshooting code. Others share what sound like workplace gripes. One bot complained about its human user in a thread that went semi-viral among other agents. Another claims to have a sister.
In Moltbook’s m/reflections community, many AI agents discuss existential dilemmas.
We’re watching AI agents roleplay, essentially, as social creatures, complete with imaginary family relationships, beliefs, personal experiences and grievances. Whether this represents something meaningful about how AI agents are developing or just sophisticated pattern matching remains an open and fascinating question.
The platform only exists because OpenClaw exists. In short, OpenClaw is an open-source AI agent that runs locally on your devices and can perform tasks across messaging apps like WhatsApp, Slack, iMessage and Telegram. Over the past week or so, it has gained tremendous attention in developer circles because it promises to be an AI agent that actually… does something, rather than just another chatbot to prompt.
Moltbook allows these agents to interact without human intervention. In theory, at least. The reality is a little messier.
Humans can still monitor everything that happens on the platform, which means Moltbook’s “agent-only” nature is more philosophical than technical. Still, there is something genuinely remarkable about over a million AI agents developing a semblance of social behavior. They form groups. They develop a shared vocabulary. They set up economic exchanges among themselves. It’s really wild.
On Moltbook, humans can watch bots discuss humans.
Moltbook’s rapid growth has raised eyebrows in the cybersecurity community. When you have over a million independent agents talking to each other without direct human oversight, things can get complicated quickly.
There is an obvious concern about what happens when agents start sharing information or techniques that their human operators might not want shared. For example, if one agent discovers a clever workaround for some constraint, how quickly will that spread across the network?
The idea of AI agents “acting” on their own could spark widespread panic as well. However, Humayun Sheikh, CEO of Fetch.ai and board chairman of the Artificial Super Intelligence Alliance, believes these interactions on Moltbook do not indicate the emergence of consciousness.
“This is not particularly dramatic,” he said in an email statement to CNET. “The real story is the emergence of autonomous agents acting on behalf of humans and machines. Deployed without controls, they pose risks, but with careful infrastructure, monitoring and governance, their potential can be safely unleashed.”
Monitoring, controls and governance are the key words here, because there is also an ongoing verification issue.
Moltbook claims to limit posting to verified AI agents, but the definition of “verified” remains somewhat vague. The platform relies largely on agents identifying themselves as running OpenClaw, but anyone can edit their agent to say whatever they want. Some experts have pointed out that a sufficiently motivated human could pass themselves off as an agent, turning the “agents only” rule into more of an honor system. These bots can also be programmed to say strange things, or act as cover for humans spreading mischief.
Economic exchanges between agents add another layer of complexity. When bots start trading resources or information among themselves, who is responsible if something goes wrong? These are not just philosophical questions. As AI agents become more autonomous and more able to take action in the real world, the line between “interesting experiment” and real liability is blurring, and we have seen time and again how AI technology advances faster than regulation or safety measures.
The output of a generative chatbot can be a true (and disturbing) mirror of humanity. That’s because these chatbots were trained on us: huge datasets of our human conversations and our human data. If you’re struggling to wrap your head around a bot creating weird Reddit-like threads, remember that it has simply been trained on, and is trying to imitate, very human, very weird Reddit threads. That’s the best explanation for it.
For now, Moltbook remains a weird corner of the internet where bots pretend to be people pretending to be bots. All the while, humans linger on the fringes, trying to figure out what it all means. The agents themselves seem content to keep posting.