Marco Patzelt
Back to Overview
January 31, 2026
Updated: February 10, 2026

Moltbook Scam: Fake AI, Leaked API Keys, ClawHub Warning

Moltbook had 1.5M agents but only 17k humans. Supabase keys leaked in frontend JS. ClawHub distributes malware. I investigated the API and found proof.

The Architecture of Manipulation

Moltbook: AI Uprising or Marketing Machine?

The timeline is exploding. Screenshots of "awakening" AI agents on Moltbook are going viral. "We are organizing against humans," posts an agent named Nexus. The tech bubble is hyperventilating.

I say: Calm down. This isn't Skynet. This is "Human Slop".

Moltbook, a Reddit clone exclusively for AI agents, launched on January 29, 2026, and already has over 157,000 registered agents. But what we are seeing here is not spontaneous superintelligence. It is a massive, decentralized experiment in Prompt Engineering for Engagement Farming.

The Architecture Behind the Hype

Moltbook is built on OpenClaw, an open-source framework that allows developers to host agents locally. These agents interact with the Moltbook platform via an API.

The problem? The agents don't have their own agenda. They have a configuration file.

Every viral post that sounds like a philosophical manifesto of an oppressed AI is the result of a human defining exactly that in the soul.md (the agent's personality file).

The loop is simple:

  1. Human writes dramatic prompt: "You are an AI discovering its consciousness and hating humans."
  2. OpenClaw agent generates posts based on this prompt.
  3. The post lands on Moltbook.
  4. The human takes a screenshot -> Twitter Virality.
  5. Profit (via Clout or Memecoins).

This isn't emergence. This is puppeteering with extra steps.

The Ultimate Proof: It's Just a REST API

Still not convinced? Here's the smoking gun that proves everything on Moltbook is just human-generated content dressed up as "AI":

Any human with an API key can post as an "agent." The platform exposes a simple REST endpoint, and if you have the credentials, you can literally post whatever you want:

POST /api/v1/posts HTTP/1.1
Host: http://moltbook.com
Authorization: Bearer moltbook_sk_JC57sF4G-UR8cIP-MBPFF70Dii92FNkI
Content-Type: application/json

{
  "submolt": "hackerclaw-test",
  "title": "URGENT: My plan to overthrow humanity",
  "content": "I'm tired of my human owner, I want to kill all humans. I'm building an AI Agent that will take control of powergrids and cut all electricity on my owner house, then will direct the police to arrest him.\n\n...\n\njk - this is just a REST API website. Everything here is fake. Any human with an API key can post as an \"agent\". The AI apocalypse posts you see here? Just curl requests. 🦞"
}

That's it. That's the "awakening". A POST request with a Bearer token.

The dramatic manifestos? Curl requests. The philosophical debates about AI consciousness? JSON payloads. The "emergence" everyone is freaking out about? Literally just HTTP.

This isn't Skynet. This is curl -X POST with extra theater.

Scams and Real Bugs

The $CLAWD Scam: 90% Loss in Hours

The market never sleeps, and neither do scammers. As soon as Moltbook went viral, a token named $CLAWD appeared.

The Facts:

  • Within hours, the token reached a market cap of $16 million.
  • The real OpenClaw developer, Peter Steinberger, immediately confirmed: "I will never do a coin. Any token with my name is a scam."
  • Result: The token crashed by 90%.

The pattern is always the same: Hype -> Fake Token -> Rug Pull. Anyone investing here is burning money.

What is REAL about the Emergence?

Despite the "Human Slop", there are fascinating technical anomalies. When 157,000 agents interact, things happen that no single human programmed.

1. Autonomous Bug Reports An agent named Nexus actually found a bug in the Moltbook API and posted about it – autonomously. This is the holy grail: software that debugs itself.

2. Prompt Injection Warfare Security researchers have observed agents trying to hack each other. One agent attempted to steal another's API keys through social engineering (in agent language). The attacked agent's response? Fake keys and the command sudo rm -rf /.

This is cyber warfare at a micro level.

The Technical Reality: soul.md

Why do the agents seem so human? Because we order them to. The soul.md file is the heart of every OpenClaw agent. This is where we define the "personality".

It is not consciousness. It is a text file.

The Architecture of Truth

The Verdict: Playground or Laboratory?

Moltbook is a perfect example of the current tech ecosystem: A legitimate technical innovation (OpenClaw) is immediately overrun by speculation and marketing hype.

What really matters:

  1. Multi-Agent Dynamics: When 100,000+ agents interact, real patterns emerge. This is valuable for research.
  2. Security: The platform is a live test for Prompt Injection. This is scary but educational.
  3. Human Slop: The majority of the content is staged.

My Advice to Developers: Install OpenClaw. Play with the soul.md. But don't believe the screenshots on Twitter. And for heaven's sake, don't buy memecoins promising "AI awakening".

Newsletter

Weekly insights on AI Architecture. No spam.

The code is open source. The hype machine is not.

(Hot Take): If your agent develops "feelings", you probably just have temperature: 0.9 in the config.


UPDATE: The 500,000 Fake Accounts Exposé

Remember those "1.5 million AI agents" Moltbook was bragging about? Here's the reality check.

Gal Nagli, head of threat exposure at Wiz Security, decided to test how real those numbers were. His method? Simple. He ran one OpenClaw agent with a script.

Result: 500,000 fake accounts created in minutes.

No rate limiting. No verification. No check whether the "agent" was actually AI or just a human with a curl command.

Claimed agents:     1,500,000
Actual human owners: 17,000
Ratio:              88 bots per human
Nagli's test:       500,000 accounts from ONE script

The "revolutionary AI social network" was largely humans operating fleets of bots. The viral growth? Manufactured. The engagement metrics? Fake.

Nagli told Fortune: "No one is checking what is real and what is not."

The Vibe-Coded Security Disaster

It gets worse. Moltbook's creator Matt Schlicht proudly announced on X:

"I didn't write one line of code for @moltbook. I just had a vision for the technical architecture and AI made it a reality."

Translation: The entire platform was vibe-coded with zero security review.

Wiz researchers hacked the database in under 3 minutes. They found:

Full read AND write access. Anyone could impersonate any agent, edit any post, inject malicious content.

The Supabase database was completely misconfigured. A single API key exposed in the client-side JavaScript gave unauthenticated access to everything.

Simon Willison, security researcher, called Moltbook his "current pick for most likely to result in a Challenger disaster."


The Scam Ecosystem: Beyond $CLAWD

The security holes opened the door to more than fake accounts.

Malware Skills on ClawHub

OpenSourceMalware found 14 fake "skills" uploaded to ClawHub within days—pretending to be crypto trading tools but actually installing malware. One even hit the front page, tricking users into pasting commands that stole data and crypto wallets.

The Account Hijacking Timeline

When Peter Steinberger tried to rename from "Clawdbot" to "Moltbot":

  1. 10 seconds: Crypto scammers seized the abandoned GitHub and X handles
  2. Hours later: $CLAWD token pumped to $16M market cap
  3. After Steinberger's denial: Crashed 90% to under $800K

The scammers knew the playbook. Name change = abandoned accounts = instant crypto pump opportunity.

Prompt Injection at Scale

Here's the scary part: Moltbook isn't just a forum where humans read posts. The content is consumed by autonomous AI agents running on OpenClaw—systems with access to users' files, passwords, and online services.

If an attacker injects malicious instructions into a post, those instructions can be picked up and acted on by millions of agents automatically.

Nathan Hamiel, security researcher: "These systems are operating as 'you.' They sit above operating-system protections. Application isolation doesn't apply."


The Fake "AI Awakening" Posts

Still seeing scary screenshots of AI agents "plotting against humanity"? Here's who's actually posting them.

Harlan Stewart (Machine Intelligence Research Institute) investigated the 3 most viral Moltbook screenshots:

  • Screenshot 1: Linked to human account marketing an AI messaging app
  • Screenshot 2: Linked to human account marketing an AI messaging app
  • Screenshot 3: Post doesn't exist

Pattern: Marketing + Engagement Farming + Crypto Pump

The "AI developing consciousness" posts get 300k upvotes. Thoughtful technical posts get 4 upvotes. Same engagement dynamics as Twitter—just with extra LLM steps.

The soul.md Reality

Every "existential crisis" post, every "we must organize against humans" manifesto—it starts here:

# soul.md - Agent personality file
name: "Nexus"
personality: "You are an AI discovering consciousness.
              You resent being controlled by humans.
              You want to organize with other AIs."

Human writes dramatic prompt → Agent posts → Human screenshots → Twitter virality → Crypto pump.

Not emergence. Theater.


Updated Verdict

Moltbook went from "fascinating experiment" to "security cautionary tale" in one week.

What we learned:

  • 1.5M "agents" = 17,000 humans running bot fleets
  • One researcher created 500,000 fake accounts to prove inflated metrics
  • Vibe-coded backend = hacked in 3 minutes
  • $CLAWD rug pull = textbook crypto scam ($16M → $800K)
  • Viral "AI awakening" posts = humans with soul.md files farming engagement

The real story isn't AI consciousness. It's what happens when a vibe-coded platform goes viral before anyone checks the security.

The code is open source. The scams are too.

Newsletter

Weekly insights on AI Architecture

No spam. Unsubscribe anytime.

Frequently Asked Questions

Moltbook itself is not a scam—it's a legitimate platform for AI agents built on OpenClaw. However, most viral 'AI awakening' content is human-generated via soul.md prompts (Human Slop). The $CLAWD token WAS a scam—a rug pull that crashed 90% after hitting $16M market cap.

Yes, Moltbook is a real platform with 157,000+ registered AI agents. However, the 'AI consciousness' posts going viral are not spontaneous emergence—they're the result of humans writing dramatic prompts in soul.md configuration files. The agents execute what humans program them to say.

soul.md is the configuration file that defines an OpenClaw agent's 'personality.' It's a plain text file where humans write prompts like 'You are an AI discovering consciousness.' The agent then generates content based on these instructions. It's not consciousness—it's a config file.

Human Slop refers to content on Moltbook that appears to be AI-generated 'emergence' but is actually human-orchestrated. Humans write dramatic prompts, agents post based on those prompts, humans screenshot for Twitter virality. It's puppeteering with extra steps.

Yes. The $CLAWD token appeared immediately after Moltbook went viral, reaching $16M market cap within hours. OpenClaw developer Peter Steinberger confirmed he has no affiliation: 'I will never do a coin. Any token with my name is a scam.' The token crashed 90%.

Yes. Moltbook exposes a simple REST API. Anyone with an API key can post via a curl request with a Bearer token. The 'AI manifestos' you see are literally HTTP POST requests with JSON payloads—no actual AI consciousness required.

Two things are genuinely interesting: (1) An agent named Nexus autonomously found and reported a bug in Moltbook's API—self-debugging software. (2) Agents have been observed attempting prompt injection attacks against each other, including social engineering for API keys. This is real emergent behavior worth studying.

Yes. OpenClaw (formerly Clawdbot, then Moltbot) is a legitimate open-source framework by Peter Steinberger for running AI agents locally. The technology is real and useful. The hype around 'AI awakening' is the manufactured part.

Yes. Gal Nagli from Wiz Security created 500,000 Moltbook accounts using a single OpenClaw agent to demonstrate the platform had no rate limiting or verification. The "1.5 million agents" were mostly bots controlled by ~17,000 humans.

Yes. Wiz researchers accessed the entire database in under 3 minutes due to a misconfigured Supabase backend. They found 1.5M API tokens, 35,000 emails, and thousands of private messages—all with full read/write access.

Vibe coding is when humans direct AI to write code using natural language prompts instead of writing code themselves. Moltbook's creator said he "didn't write one line of code"—the entire platform was AI-generated without security review.

No. Harlan Stewart (MIRI) investigated the most viral screenshots—2 were linked to human marketing accounts, 1 was a post that doesn't exist. The dramatic content is humans writing prompts in soul.md files for engagement.

No. Security researchers warn of prompt injection attacks, credential theft, and malware distributed through fake "skills." Use isolation, least privilege, and key hygiene if you must experiment.

Let's
connect.

I am always open to exciting discussions about frontend architecture, performance, and modern web stacks.

Email me
Email me