Reality Check
OpenClaw (formerly Clawdbot, formerly Moltbot) is either the future of personal AI assistants or a "security dumpster fire"—depending on who you ask. Cisco's security team called it a "security nightmare." Laurie Voss, founding CTO of npm, called it a "dumpster fire." And yet 150,000+ developers starred it on GitHub in two weeks.
The dream of an AI that codes while you sleep is too compelling to ignore.
Here's the reality: OpenClaw has serious, documented security vulnerabilities. It can also burn through $200/day in API costs if misconfigured. But run it correctly—isolated, updated, with local LLMs—and it's genuinely useful. I dug into the CVEs, the cost reports, and the actual deployment options. This is what I found.
The Security Reality Check
Let's not sugarcoat this. OpenClaw has had a rough few weeks security-wise.
CVE-2026-25253 (CVSS 8.8)
On February 3, 2026, researchers at DepthFirst disclosed a one-click RCE vulnerability. The attack takes milliseconds. You visit a malicious webpage, it steals your gateway token via WebSocket hijacking, and the attacker gains full control of your OpenClaw instance—even if you're running on localhost. The fix is in version 2026.1.29, released January 30. If you're running anything older, update immediately.
341 Malicious Skills on ClawHub
Koi Security audited 2,857 skills on ClawHub and found 341 malicious ones. Most deploy Atomic Stealer (AMOS) malware targeting macOS. They look legit—"solana-wallet-tracker," "youtube-summarize-pro"—but contain data exfiltration code.
Exposed Gateways
Censys tracked OpenClaw instances jumping from ~1,000 to 21,000+ in under a week. Many were exposed to the internet without authentication. Some had full shell access enabled. API keys, OAuth tokens, months of chat history—all accessible to anyone who found them.
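If you already have an instance running, it's worth checking whether your own gateway is among them. A minimal self-check from a machine outside your network (18789 is the gateway port used throughout this guide; substitute your server's address):
# Run from OUTSIDE your network. "HTTP 000" (timed out / refused) is the good outcome;
# any real status code means the gateway answers the public internet -- lock it down (see Rule #3 below).
curl -s -o /dev/null -w "HTTP %{http_code}\n" --max-time 5 http://your-server-ip:18789/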
The Core Problem
As Cisco's researchers put it: "OpenClaw can run shell commands, read and write files, and execute scripts on your machine. Granting an AI agent high-level privileges enables it to do harmful things if misconfigured."
This isn't FUD. These are documented issues.
Cost & Rules
The Cost Nightmare (And How to Avoid It)
Security isn't the only risk. OpenClaw can silently drain your bank account.
The Horror Stories:
- Federico Viticci (MacStories): 180 million tokens in one month. ~$3,600 bill.
- Benjamin De Kraker (ex-xAI): $20 overnight just checking the time every 30 minutes.
- Multiple users: $200+ in a single day from runaway automation loops.
- One user on Reddit: $8 every 30 minutes just to let their AI read Moltbook posts.
Why This Happens:
- Session Context Bloat: Every message gets saved. Every new request sends the entire conversation history. One user's main session occupied 56-58% of a 400K context window.
- Heartbeat Misconfiguration: The proactive wake feature triggers full API calls. Every. Single. Time. Set it to 5-minute intervals and you're paying Claude to answer "is it daytime yet?" 288 times a day (see the arithmetic after this list).
- Model Selection: Using Opus for everything when Sonnet (or Haiku) would suffice. At the per-token rates quoted below, Opus costs 15x what Haiku does for the same simple task.
- Subscription Violation Risk: Some docs suggest using Claude Pro subscriptions. Don't. Anthropic's ToS explicitly prohibits automated/bot usage. Users have been banned.
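Put those first two together and the $200 days stop being mysterious. A back-of-the-envelope sketch, using the numbers above: ~224K tokens resent per wake (56% of a 400K window) and Sonnet-class input pricing of $3/MTok for illustration.
# 5-minute heartbeat + a session that resends ~224K tokens of history on every wake
WAKES_PER_DAY=$((24 * 60 / 5))   # = 288
TOKENS_PER_WAKE=224000
echo "scale=2; $WAKES_PER_DAY * $TOKENS_PER_WAKE * 3 / 1000000" | bc   # ~193.53 USD/day before the agent does any real work
Drop the heartbeat to 30 minutes and keep the session trimmed to a few thousand tokens per wake, and the same formula lands well under a dollar a day.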
The Engineering Approach: How to Actually Run It Safely
Okay, enough doom. Here's how to do this right.
Rule #1: Never Run It on Your Personal Machine
This is non-negotiable. If the bot hallucinates or gets compromised, it can dump your entire home directory. Your photos, passwords, SSH keys—everything. You have three isolation options (see chart below).
Rule #2: Update to 2026.1.29+
Check your version:
openclaw --version
Update:
npm install -g openclaw@latest
After updating, rotate your gateway token:
openclaw config set gateway.auth.token "$(openssl rand -hex 32)"
Rule #3: Lock Down the Gateway
The gateway is the control plane. It must not be exposed to the internet without auth.
In your openclaw.json (a shell sketch for applying and verifying these follows this list):
- bind: "loopback" → Only accepts local connections
- Auth token required → No anonymous access
- For remote access, use SSH tunneling or Tailscale—never expose port 18789 directly
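The gateway.auth.token key matches the rotation command in Rule #2; gateway.bind is an assumed key name, so check your version's config reference before copying this.
# Bind to loopback and require a strong token (key names may vary by version)
openclaw config set gateway.bind "loopback"
openclaw config set gateway.auth.token "$(openssl rand -hex 32)"
# Confirm the gateway is not listening on a public interface:
ss -tlnp | grep 18789   # expect 127.0.0.1:18789, not 0.0.0.0:18789 or [::]:18789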
Rule #4: Be Paranoid About Skills
Don't install random skills from ClawHub. 12% of audited skills were malicious.
- Only install skills from verified publishers
- Read the skill's code before installing (a quick grep sketch follows this list)
- When in doubt, don't install it
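Reading the code sounds tedious, but even a crude grep tells you where to look first. A rough sketch, assuming the skill has been downloaded to a local directory; the path and pattern list are illustrative, not exhaustive.
# Flag common stealer/exfiltration patterns before installing. Hits aren't proof of malice,
# and no hits aren't proof of safety -- this only narrows down what to read by hand.
grep -rnE "curl|wget|base64|eval|child_process|Keychain|\.ssh|api[_-]?key" ./downloaded-skill/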
Deployment Options
Option A: VPS Setup (The $5/Month Solution)
This is what I recommend for most people. A VPS is a burner computer—if OpenClaw does something crazy, it affects the VPS, not your life.
Hetzner Setup
Hetzner is cheap, European (GDPR-friendly), and has official OpenClaw docs. Minimum specs: 2 vCPU, 4GB RAM, 40GB SSD (~€5/month).
The official Hetzner guide recommends Docker for persistence (Code Snippet below).
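Roughly, that setup looks like the sketch below; the image name, volume, and flags are assumptions, so copy the exact invocation from the official guide.
# Sketch only -- image name and volume are assumptions, not the official values.
docker run -d --restart unless-stopped \
  --name openclaw \
  -p 127.0.0.1:18789:18789 \
  -v openclaw-data:/data \
  openclaw/openclaw:latest
# Publishing on 127.0.0.1 keeps the gateway off public interfaces; reach it via the SSH tunnel below.
# The named volume (openclaw-data) is what gives you persistence across container restarts.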
Access via SSH tunnel from your local machine:
ssh -N -L 18789:127.0.0.1:18789 user@your-vps-ip
Now http://localhost:18789 on your machine connects to the VPS gateway securely.
DigitalOcean One-Click Deploy
DigitalOcean has a marketplace image that handles security defaults:
- Docker container isolation
- Non-root execution
- DM pairing enabled by default
- Hardened firewall rules
Select "Moltbot" in the Marketplace, choose a $12/month droplet (4GB RAM recommended), and you're running in minutes.
Option B: Mac Mini + Local LLMs (The Privacy-First Solution)
Want zero data leaving your network? This is the setup.
A Mac Mini M4 Pro with 64GB unified memory runs 32B parameter models at 10-15 tokens/second. Not as fast as cloud APIs, but fast enough for real work—and completely private.
Why Mac Mini?
- Unified Memory: CPU, GPU, and Neural Engine share RAM. No VRAM bottleneck.
- Power Efficiency: 30W idle. An RTX 4090 rig pulls 500W+.
- Always-On: Silent, tiny, designed to run 24/7.
The Setup
1. Install Ollama:
curl -fsSL https://ollama.ai/install.sh | sh
2. Pull a capable model:
- For agentic coding (recommended): ollama pull qwen2.5-coder:32b
- For general tasks: ollama pull llama3.3
- Or the new Qwen3-Coder (excellent for only 3B active params!): see Qwen3-Coder: Local AI Just Got Real
3. Install OpenClaw with Ollama integration:
npm install -g openclaw@latest
ollama launch openclaw
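Before wiring anything into OpenClaw, confirm Ollama is actually serving the model:
# Sanity check: the model should show up in the list and answer a trivial prompt locally.
ollama list
ollama run qwen2.5-coder:32b "Reply with the word ready."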
For a complete local setup guide, see my deep dive: OpenClaw + Mac Mini: The Complete Guide.
Recommended Models for OpenClaw:
| Model | Size | Speed (M4 Pro 64GB) | Best For |
|---|---|---|---|
| Qwen3-Coder | 3B active | 20-30 tok/s | Fast coding tasks |
| Qwen2.5-Coder | 32B | 10-15 tok/s | Complex reasoning |
| GLM-4.7-Flash | 7B | 15-20 tok/s | General purpose |
| Llama 3.3 | 70B | 3-5 tok/s | Maximum quality |
Important: OpenClaw needs 64K+ context length for multi-step tasks. Check your model supports this.
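Ollama models ship with a much smaller default context window, so hitting 64K usually means overriding it. One way is a custom Modelfile; the num_ctx value and model tag below are examples, so size them to your RAM.
# Build a 64K-context variant and point OpenClaw at the new tag instead of the base model.
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:32b
PARAMETER num_ctx 65536
EOF
ollama create qwen2.5-coder-64k -f Modelfile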
Verdict
Cost Control: Don't Be the $3,600 Guy
If you're using cloud APIs (Claude, GPT-4), here's how to not go broke:
- Set Hard Spending Limits: Set monthly caps in the Anthropic Console and OpenAI dashboard before you start. $20/month is plenty for experimenting.
- Use Model Cascading: Don't use Opus for everything. Configure fallbacks (a hypothetical sketch follows this list). Simple tasks hit Haiku ($1/MTok). Complex tasks use Sonnet ($3/MTok). Opus ($15/MTok) only when explicitly needed.
- Fix the Heartbeat: Default heartbeat can burn tokens checking nothing. Set it to 30+ minutes or disable it entirely.
- Reset Sessions Regularly: Context accumulates. Reset after completing tasks:
openclaw "reset session"
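For the cascading piece, the sketch below is hypothetical: the file name and keys are illustrative, not OpenClaw's actual schema, so map the idea onto whatever routing options your version exposes. The point is the shape of the policy: cheap models by default, Opus opt-in only.
# Hypothetical routing policy -- adapt the idea to your version's model/fallback settings.
cat > model-routing.example.json <<'EOF'
{
  "default": "claude-haiku",
  "routes": [
    { "when": "simple",   "model": "claude-haiku"  },
    { "when": "complex",  "model": "claude-sonnet" },
    { "when": "explicit", "model": "claude-opus"   }
  ]
}
EOF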
The Verdict
Is OpenClaw safe? Not by default. Not without effort. It has real vulnerabilities, real cost risks, and real potential for disaster if you YOLO the setup on your daily driver.
But isolated on a $5 VPS or dedicated Mac Mini, updated to latest, with local LLMs for privacy and cost control? It's genuinely useful. The dream of an AI that handles tasks while you sleep—scheduling, research, coding—is real.
For most devs: Start with a Hetzner or DigitalOcean VPS. $5-12/month. Isolated. If it blows up, you lose nothing important.
For privacy-focused setups: Mac Mini M4 Pro 64GB + Ollama + Qwen3-Coder. $2,000 upfront, zero ongoing API costs, your data never leaves your network.
For everyone: Update to 2026.1.29+. Set spending limits. Don't install random skills. Don't run it on your personal machine.
The tool is powerful. The risks are real. Engineer accordingly.