
Moltbot: The Viral Open-Source AI Assistant

Inside Moltbot, the self-hosted AI assistant that broke GitHub records. What it does, how it works, and the security trade-offs.

S5 Labs Team · January 29, 2026

In late January 2026, an open-source project called Clawdbot—now renamed Moltbot—became the fastest-growing repository in GitHub history. It gained over 17,000 stars in a single day, crossed 85,000 stars within weeks, and sparked conversations about what personal AI assistants could become. It also raised serious questions about security, shadow IT, and whether consumer enthusiasm has outpaced the expertise needed to run these systems safely.

Here’s what Moltbot is, why it captured attention, and what you should consider before deploying it.

What Is Moltbot?

Moltbot is a self-hosted personal AI assistant that runs on your own hardware—Mac, Windows, Linux, or a cloud VM. Unlike ChatGPT or Claude’s web interfaces, Moltbot operates locally, maintains persistent memory across conversations, and can actually do things on your machine: browse the web, manage files, run scripts, and interact with dozens of integrated services.

The project was created by Peter Steinberger, an Austrian engineer and founder of PSPDFKit. It’s fundamentally an orchestration layer rather than its own AI model—you bring your own API keys from Anthropic, OpenAI, or other providers, and Moltbot coordinates the assistant’s capabilities.

The original name “Clawdbot” referenced Anthropic’s Claude model. After trademark concerns, Steinberger rebranded to “Moltbot”—a reference to how lobsters molt their shells to grow. The lobster emoji (🦞) became the project’s mascot.

Why It Went Viral

On January 24, 2026, Moltbot’s daily GitHub forks jumped from around 50 to over 3,000. The project’s appeal hit a nerve: here was an AI assistant that felt genuinely personal.

Chat where you already are. Moltbot connects to WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Microsoft Teams, and more. Instead of switching to a dedicated app, you message your AI assistant in the same place you message everyone else.

It actually does things. This isn’t just a chatbot. Moltbot can navigate websites, fill forms, extract data, read and write files, execute shell commands, and run background automations on a schedule. Users reported having it book flights, manage calendars, process documents, and integrate with tools like Gmail, GitHub, Spotify, and Obsidian.

Persistent memory. Unlike stateless chat interfaces, Moltbot remembers context across conversations. It learns your preferences and builds on previous interactions, creating something closer to a true personal assistant relationship.

Open source and local-first. There’s no subscription fee—just bring your own LLM API key. Your data stays on your machine. For users concerned about privacy, this architecture was compelling.

Extensibility. The project includes a skills library (originally called ClawdHub) where users can share automations. Want your assistant to handle a specific workflow? You can build custom skills or install community-created ones.

How It Works

At its core, Moltbot is a gateway that connects messaging platforms to AI models and local system capabilities:

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Chat Channels  │ ──► │  Moltbot Core   │ ──► │  AI Provider    │
│                 │     │                 │     │                 │
│ WhatsApp        │     │ • Session mgmt  │     │ Claude          │
│ Telegram        │     │ • Memory store  │     │ GPT-4           │
│ Slack           │     │ • Tool routing  │     │ Local models    │
│ iMessage        │     │ • Automation    │     │                 │
│ Discord         │     │                 │     │                 │
└─────────────────┘     └─────────────────┘     └─────────────────┘


                    ┌─────────────────────┐
                    │  Local Capabilities │
                    │                     │
                    │ • Browser control   │
                    │ • File system       │
                    │ • Shell commands    │
                    │ • 50+ integrations  │
                    └─────────────────────┘

Installation is straightforward:

npm install -g moltbot@latest
moltbot onboard --install-daemon

The onboarding wizard walks you through connecting messaging channels, configuring API keys, and setting up basic security policies. The system requires Node 22 or later and runs as a background daemon.
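The Node requirement is a common stumbling block, since version managers often leave an older runtime on the PATH. A small preflight sketch in plain POSIX shell (no Moltbot-specific flags assumed):

```shell
# Preflight sketch: Moltbot requires Node 22 or later.
# node_ok checks a `node --version` string such as "v22.3.0".
node_ok() {
  major="${1#v}"            # strip the leading "v"
  major="${major%%.*}"      # keep only the major component
  [ "${major:-0}" -ge 22 ] 2>/dev/null
}

if node_ok "$(node --version 2>/dev/null)"; then
  echo "Node version OK -- proceed with npm install -g moltbot@latest"
else
  echo "Install Node 22+ first" >&2
fi
```

If the check fails, upgrade Node before running the installer.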

The Security Reality

Moltbot’s viral success also surfaced significant security concerns. Understanding these isn’t about dismissing the project—it’s about using it responsibly.

Exposed Instances

Security researcher Jamieson O’Reilly, founder of red-teaming company Dvuln, discovered hundreds of Moltbot instances exposed to the public internet through misconfigured proxy settings. Of those he examined manually, eight had zero authentication protecting full command access and credential viewing. Anyone who found these instances could read stored credentials and execute arbitrary commands.

Plaintext Credential Storage

Moltbot stores configuration and credentials in plaintext files under ~/.clawdbot/ and ~/clawd/ (legacy paths from before the rebrand). These files are readable by any process running as the same user. If your machine gets infected with infostealer malware—increasingly common threats like Redline or Lumma—attackers could harvest these credentials and potentially turn your Moltbot instance into a backdoor.
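There is no in-app fix for the plaintext storage itself, but you can at least keep other local accounts out of the config directories. A minimal sketch using the paths above; note this does nothing against malware already running under your own account, which can still read the files freely:

```shell
# Restrict Moltbot's config dirs to the owning user only.
# Caveat: this blocks other local accounts, not infostealers
# running as you.
lockdown() {
  dir="$1"
  [ -d "$dir" ] || return 0   # silently skip paths that don't exist
  chmod -R go-rwx "$dir"
}

lockdown "$HOME/.clawdbot"    # current config path
lockdown "$HOME/clawd"        # legacy pre-rebrand path
```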

Supply Chain Risks

O’Reilly demonstrated a proof-of-concept attack through the skills library. He uploaded a benign-looking skill, artificially boosted its download count to appear popular, and observed it being installed and executed on Moltbot instances across seven countries. A malicious actor could use this vector to exfiltrate SSH keys, AWS credentials, or other secrets.

Shadow IT Concerns

Token Security Labs reported finding active Moltbot usage in 22% of their customer organizations—often without IT or security team awareness. When employees run personal AI assistants on corporate machines, connected to corporate services, they create blind spots that traditional security monitoring doesn't catch.

Expert Perspectives

Security professionals have been blunt about the risks. Eric Schwake from Salt Security noted: “A significant gap exists between consumer enthusiasm and the technical expertise needed to operate a secure agentic gateway.”

Heather Adkins, Google Cloud’s VP of security engineering, was more direct: “Don’t run Clawdbot.” Her concern centers on a fundamental design tension—AI agents require punching holes through security boundaries that organizations have spent decades building.

Should You Use It?

Moltbot represents a genuine glimpse at the future of personal AI assistants. The ability to have an AI that knows your context, operates across all your communication channels, and can actually take actions on your behalf is compelling. The project is well-engineered, actively maintained, and the community is building interesting capabilities.

But “should you use it” depends heavily on who you are:

Good Fit

  • Developers and security-aware users who understand the risks, can audit the code, and know how to sandbox properly
  • Experimenters exploring what agentic AI can do, on isolated machines without sensitive credentials
  • Technical teams who can deploy it with proper authentication, network isolation, and monitoring

Proceed With Caution

  • Non-technical users attracted by the convenience but unable to assess or mitigate the security implications
  • Corporate environments without explicit security team approval and proper isolation
  • Anyone storing sensitive credentials (cloud provider keys, financial accounts, etc.) on the same machine

Risk Mitigation

If you do run Moltbot, consider these precautions:

  1. Isolate it. Run it on a dedicated machine or VM, not your primary workstation with access to everything.

  2. Audit what you connect. Every integration is an attack surface. Only connect services you’re comfortable potentially exposing.

  3. Don’t expose it publicly. Keep the gateway behind authentication. Use Tailscale or similar tools for secure remote access rather than opening ports.

  4. Review skills before installing. Treat community skills like you’d treat any third-party code—with appropriate skepticism.

  5. Monitor API usage. Unexpected spikes might indicate compromise.

  6. Keep credentials minimal. Don’t connect accounts with permissions beyond what Moltbot needs.
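Point 3 is easy to verify from the machine itself. This sketch lists TCP sockets listening on all interfaces; Moltbot's actual port isn't assumed here, so review anything unfamiliar in the output:

```shell
# Print TCP listeners bound to every interface (0.0.0.0, [::], or *).
# Anything Moltbot-related in this output is reachable from beyond
# localhost and should sit behind authentication or a VPN.
all_iface_listeners() {
  awk '$4 ~ /^(0\.0\.0\.0|\[::\]|::|\*):[0-9]+$/ {print $4}'
}

if command -v ss >/dev/null 2>&1; then
  ss -tln
else
  netstat -tln 2>/dev/null
fi | all_iface_listeners
```

An empty result means nothing is listening on all interfaces; services bound only to 127.0.0.1 are deliberately excluded.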

The Bigger Picture

Moltbot’s viral moment reflects a broader shift in how we think about AI assistants. The conversational interfaces we’ve grown accustomed to—ChatGPT, Claude, Gemini—are powerful but fundamentally stateless and sandboxed. They can answer questions and generate content, but they can’t do things in the world.

Agentic AI changes that equation. Systems that can browse, execute code, manage files, and interact with services on your behalf are categorically more useful—and categorically more dangerous if compromised or misconfigured.

The tension Moltbot surfaces isn’t unique to this project. It’s the fundamental challenge of agentic AI: the same capabilities that make these systems useful also make them risky. The industry hasn’t yet figured out how to deliver agent capabilities with consumer-grade safety, and Moltbot’s architecture—powerful, flexible, local-first—puts responsibility squarely on the user.

For organizations watching employees adopt tools like Moltbot, this is a preview of challenges to come. Shadow AI isn’t just about chatbots anymore—it’s about autonomous agents with system access, running on corporate machines, managed by no one.

For individuals, Moltbot offers a genuinely exciting preview of personal AI—with the understanding that “personal” also means “personal responsibility” for security.

Conclusion

Moltbot is impressive software that showcases what self-hosted, agentic AI assistants can become. Its viral growth reflects real demand for AI that goes beyond chat—that actually integrates into our lives and does work on our behalf.

But impressive and safe aren’t the same thing. The security concerns are real, the expertise required to mitigate them is significant, and the consequences of misconfiguration can be severe. If you run Moltbot, do so with eyes open about what you’re taking on.

The lobster molts to grow. Whether Moltbot can shed its security concerns while keeping its capabilities—or whether the project and the broader agentic AI space can find the right balance—remains to be seen.


For more on evaluating AI solutions and their tradeoffs, see our guide on When AI Makes Sense (And When It Doesn’t).
