Everything you need to know about the viral personal AI assistant Clawdbot (now Moltbot)


The latest wave of AI excitement has brought us an unexpected mascot: the lobster. clutchan AI personal assistant, went viral within weeks of its launch, and will retain its crustacean theme despite having to… Change its name to Moltbot After a legal challenge from Anthropic. But before you jump on the bandwagon, here’s what you need to know.

True to its tagline, Moltbot (formerly Clawdbot) is “AI that actually does things” — whether that’s managing your calendar, messaging through your favorite apps, or checking you in for flights. This promise has attracted thousands of users willing to handle the technical setup required, although it began as a scrappy personal project created by a developer for his own use.

This man is Peter Steinberger, an Austrian Developer and founder Which is known online as @steipete He actively blogs about his work. After leaving his previous project… PSPDFkitSteinberger explained in his blog that he felt empty and barely touched his computer for three years. But it is in the end He found his spark again -resulting in multipot.

While Moltbot is now more of a solo project, the publicly available version still derives from it shot“Peter the Cruel Assistant,” now called Molty, is a tool he designed to help him “manage his digital life” and “explore what human-AI collaboration could be.”

Viral interest around Moltbot has moved the markets. Cloudflare stock increased by 14% In pre-market trading on Tuesday, social media buzz around the AI ​​agent sparked investor enthusiasm for Cloudflare’s infrastructure, which developers use to run Moltbot natively on their devices.

For Steinberger, this meant diving deeper into the momentum around AI that had reignited his building spark. He admitted that he was a “lover of chaos.”He initially called his project Anthropic’s pioneering artificial intelligence product, Claude. It is revealed on X that he is an Anthropist later Force him To change the trademark for copyright reasons, TechCrunch has reached out to Anthropic for comment. But the “lobster spirit” of the project It remains unchanged.

For its early adopters, Moltbot represents the forefront of how useful AI assistants can be. Those who were already excited at the prospect of using AI to quickly create websites and apps are now even more eager to have their personal AI assistant perform tasks for them. And just like Steinberger, they are keen to manipulate it.

TechCrunch event

San Francisco
|
October 13-15, 2026

This explains how Multibot collected more than 44,200 stars On GitHub So fast; But there is still a long way to go before moving out of the early adoption zone, and this may be for the best. Installing Moltbot requires you to be familiar with the technology, and this also includes awareness of the inherent security risks that come with it.

On the one hand, Moltbot is designed with safety in mind: it’s open source, meaning anyone can inspect its code for vulnerabilities, and it runs on your computer or server, not in the cloud. But on the other hand, its very premise is inherently risky. As businessman and investor Rahul Sood He pointed at the X“Doing things virtually” means that arbitrary commands can be executed on your computer.

What keeps Sood up at night is “instant content injection” – where a malicious person can send you a WhatsApp message that can prompt Moltbot to take unintended actions on your computer without your intervention or knowledge.

This risk can be partially mitigated through careful preparation. Since Moltbot supports multiple AI models, users may want to make setup choices based on their resistance to these types of attacks. But the only way to completely prevent this is to run Moltbot in a silo.

This may be obvious to experienced developers tinkering with a weeks-old project, but some have become more vocal in warning users attracted by the hype: Things can go wrong quickly if they handle it as carelessly as ChatGPT.

Steinberger himself was reminded of the presence of malicious actors when he “messed up” his project’s renaming process. He complained on X about “crypto scammers.” Grab his github username and created fake cryptocurrency projects in his name, warning his followers that “any project that lists (him) as the owner of the currency is a scam.” Then post the GitHub issue It has been fixedbut warned that X’s legitimate account is @moltbot, “and not any of the 20 fraudulent variations of it.”

This doesn’t necessarily mean you should stay away from Moltbot at this point if you want to test it out. But if you’ve never heard of a VPS — a virtual private server, which is basically a remote computer that you rent to run software — you might want to wait your turn. (This is where you might want to run Moltbot currently. “Not your laptop with SSH keys, API credentials, and a password manager,” Sood warned.)

Currently, running Moltbot safely means running it on a separate computer with temporary accounts, which defeats the purpose of having a useful AI assistant. Fixing this trade-off between security and facilities may require solutions beyond Steinberger’s control.

However, by building a tool to solve his own problem, Steinberger showed the developer community what AI agents can actually achieve, and how autonomous AI can finally become truly useful rather than just impressive.

Leave a Reply

Your email address will not be published. Required fields are marked *