Marco Patzelt
February 12, 2026

Claude Code Automation: How I Run 45 Blog Posts With Zero Writers

Claude Code automation runs my entire 45-post blog—writing, SEO, publishing. 68k impressions, zero writers. Full stack, real numbers, honest breakdown.

4% of all public GitHub commits are now written by Claude Code. SemiAnalysis projects 20%+ by the end of 2026. Most people are using it to write software. I'm using it to run an entire content operation.

45 blog posts. 68,000 impressions in 9 days. 200 clicks per day. Zero writers, zero editors, zero content managers. One engineer with a Claude Code terminal and a Supabase database.

This isn't a setup guide. Every Claude Code tutorial shows you how to install it and connect to an API. This is what happens when you actually run it in production for three months.

What "Claude Code Automation" Actually Means

Most "Claude Code automation" content describes the tool. Install it. Connect it to GitHub. Use it in CI/CD pipelines. That's fine for developers automating code workflows.

I'm not automating code. I'm automating an entire content business.

The pipeline handles: topic discovery from real search data, article drafting in my voice, SEO optimization against live Google Search Console metrics, database publishing to Supabase, and ongoing performance monitoring. Every step runs through Claude Code connected to external services via API.

The difference between asking ChatGPT to "write a blog post about X" and what this system does is the difference between asking someone for directions and handing them the car keys. ChatGPT gives you text. This pipeline reads your search performance, identifies what to write, drafts it in your documented voice, optimizes it against real data, and pushes it to your database. You review for 10 minutes. That's it.

The Stack

Four components. Total setup cost: one afternoon for the technical wiring, one weekend for the knowledge files that actually make it work.

Claude Code is the brain. It runs in your terminal with full system access—reads project files, executes commands, connects to external services via environment variables. This isn't a chatbot. It's an agent with tools.

Supabase is the CMS. All blog content lives in a Postgres database. Articles, metadata, titles, descriptions, internal links—structured data that Claude Code queries and updates via the Supabase REST API. One database replaced an entire CMS workflow. I detailed the full Claude Code architecture that replaced an entire agency workflow in a previous post.
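
For a sense of how thin this layer is: reading content is one HTTP call against Supabase's auto-generated PostgREST endpoint. A minimal sketch; the table and column names are simplified placeholders, not my exact schema:

```python
import os
import requests

# Assumptions: an "articles" table with slug, title, meta_description,
# and published_at columns. Names are illustrative placeholders.
SUPABASE_URL = os.environ["SUPABASE_URL"]          # e.g. https://xyz.supabase.co
SUPABASE_KEY = os.environ["SUPABASE_SERVICE_KEY"]

headers = {
    "apikey": SUPABASE_KEY,
    "Authorization": f"Bearer {SUPABASE_KEY}",
}

# Fetch published articles via the auto-generated REST endpoint
resp = requests.get(
    f"{SUPABASE_URL}/rest/v1/articles",
    headers=headers,
    params={"select": "slug,title,meta_description", "order": "published_at.desc"},
)
resp.raise_for_status()
articles = resp.json()
```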

Google Search Console delivers the data. Claude Code reads impressions, clicks, CTR, average position—per page, per query. This is where "automation" becomes "agentic." The system doesn't guess what to optimize. It looks at actual search performance.
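
A minimal sketch of that data pull via the Search Console API, assuming a service account with access to the property (the site URL and date range are placeholders):

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Service account JSON path, property URL, and dates are placeholders.
SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
gsc = build("searchconsole", "v1", credentials=creds)

# Impressions, clicks, CTR, and position per page and query, last 28 days
report = gsc.searchanalytics().query(
    siteUrl="sc-domain:example.com",
    body={
        "startDate": "2026-01-15",
        "endDate": "2026-02-11",
        "dimensions": ["page", "query"],
        "rowLimit": 500,
    },
).execute()

for row in report.get("rows", []):
    page, query = row["keys"]
    print(page, query, row["impressions"], row["clicks"], row["ctr"], row["position"])
```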

Make.com handles the orchestration layer. Scheduled triggers, webhook connections, notification flows. The glue between components that don't natively talk to each other.

The Knowledge Files: 80% of the Work

Everyone skips this part. It's the only part that matters.

Claude Code is powerful. Without context, it produces generic content indistinguishable from every other AI blog. The knowledge files are what turn it from a language model into my content system.

I maintain four files that Claude Code reads before doing anything:

database_schema.md — The agent knows exactly what my database looks like. Table names, column types, relationships. It pushes content, edits metadata, and manages internal links. No guessing, no wrong field names.

marco_context.md — Everything about me. My engineering background, how I think about problems, what the blog is for. This isn't vanity—it's calibration. The agent writes differently when it knows the author built production middleware, not just read about AI on Twitter.

tone.md — How to write. Short paragraphs. Numbers over adjectives. No filler phrases. Direct address. Specific rules the agent follows every time. Things like: never say "in today's rapidly evolving landscape." Always give a concrete number instead of "fast" or "expensive."

internal_links.md — Every published article with its URL. The agent links new content to existing posts without me having to remember what I've written.
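
One way to wire these in is Claude Code's memory file: a CLAUDE.md at the project root can import other files with @path syntax, so the agent loads them at session start. A simplified sketch; the paths and layout are illustrative, not my actual setup:

```markdown
# CLAUDE.md (illustrative sketch)

Read the project context before any content task:

@knowledge/database_schema.md
@knowledge/marco_context.md
@knowledge/tone.md
@knowledge/internal_links.md

## Tone rules (excerpt)

- Never open with "in today's rapidly evolving landscape."
- Replace "fast" or "expensive" with a concrete number.
- Short paragraphs. Direct address. No filler phrases.
- Link new articles to existing posts from internal_links.md.
```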

The output difference is night and day. Without knowledge files, Claude Code writes like a helpful assistant. With them, it writes like a specific person with specific expertise. Google rewards that. Readers trust that.

Writing good knowledge files took me a weekend. The technical setup took an afternoon. If you're trying to replicate this and you skip the knowledge files, you'll get a fast content machine that produces forgettable content. The files are the product.

A Real Session: How It Works

Here's what a typical 10-minute morning session looks like.

Step 1: Data pull. I open Claude Code and tell it to check my search performance. It pulls GSC data, cross-references with my existing content, and delivers a specific report:

Page: /openclaw-mac-mini — 5,200 impressions, 23 clicks (0.4% CTR). Intent mismatch. Users search "setup guide" but page reads like news. Recommendation: restructure intro, add step-by-step section.

Page: /claude-sonnet-5-leak — 3,800 impressions, position 8.2. Meta title doesn't contain primary keyword in first 30 chars. Recommendation: rewrite title, front-load keyword.

Query: "claude code automation" — 1,200 impressions, no dedicated page. Content gap. High-intent keyword with zero competition from us. Recommendation: new article targeting this query.

No vague "you should optimize your titles." A specific list with numbers and reasons.
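
The content-gap check in that report is mechanical enough to sketch. Roughly this logic, with an illustrative impression threshold and a deliberately crude slug-matching heuristic:

```python
# Sketch of gap detection: queries with real demand but no dedicated page
# become new-article candidates. Threshold and matching are illustrative,
# not my production values.

def find_content_gaps(gsc_rows, published_slugs, min_impressions=500):
    """gsc_rows: [{'query': ..., 'page': ..., 'impressions': ...}, ...]"""
    demand = {}
    for row in gsc_rows:
        demand[row["query"]] = demand.get(row["query"], 0) + row["impressions"]

    gaps = []
    for query, impressions in demand.items():
        # Crude coverage test: a slug that contains every word of the query
        covered = any(
            all(word in slug for word in query.split())
            for slug in published_slugs
        )
        if impressions >= min_impressions and not covered:
            gaps.append((query, impressions))
    return sorted(gaps, key=lambda g: -g[1])
```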

Step 2: Agent acts. For existing content, the agent doesn't just report—it fixes. Rewrites meta titles, adjusts descriptions, checks if search intent matches content structure. Then pushes changes directly to Supabase.
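
The write side is the mirror image: a PATCH against the same REST endpoint, filtered PostgREST-style. A sketch with an illustrative replacement title:

```python
import os
import requests

# Hypothetical metadata fix: front-load the primary keyword in a meta
# title, then patch the row. Column name and title text are illustrative.
SUPABASE_URL = os.environ["SUPABASE_URL"]
SUPABASE_KEY = os.environ["SUPABASE_SERVICE_KEY"]

resp = requests.patch(
    f"{SUPABASE_URL}/rest/v1/articles",
    headers={
        "apikey": SUPABASE_KEY,
        "Authorization": f"Bearer {SUPABASE_KEY}",
        "Prefer": "return=representation",  # echo the updated row back
    },
    params={"slug": "eq.claude-sonnet-5-leak"},  # PostgREST filter: WHERE slug = ...
    json={"meta_title": "Claude Sonnet 5 Leak: What We Actually Know"},
)
resp.raise_for_status()
print(resp.json())  # the updated row(s)
```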

For content gaps, it drafts new articles using the knowledge files for voice calibration. The draft follows my exact structure: hook, promise, deliver, verdict, FAQ.

Step 3: Review. I review what the agent changed. 10 minutes instead of 3 hours. The agent handled data analysis, writing, optimization, and database updates. I check whether the take is right and the tone is mine.

That's one session. I do this daily. The compound effect is the whole point.

The Numbers

Three months of data. Started from effectively zero.

Metric | Before (Nov 2025) | Now (Feb 12, 2026)
--- | --- | ---
Weekly impressions | ~5 | 68,000+
Daily clicks | 0 | ~200
Total published posts | 0 | 45
Writers employed | 0 | 0
Daily time investment | N/A | ~10 minutes
Total content budget | $0 | $90/month (Claude Max)

The hockey stick started when I switched from manual content creation to this agentic workflow. Not because AI writes faster—because the system operates on data instead of intuition.

Every article targets queries where GSC data shows real demand. Every meta title is optimized against actual impression data. Every content gap is identified from Search Console numbers, not keyword tool estimates.

For context: my blog now generates more weekly impressions than the marketing agency I work for. Their team has writers, SEO specialists, and a content calendar. I have a terminal and a database.

What This Actually Costs

Let me be specific because "AI is cheap" is vague.

Monthly costs:

  • Claude Max subscription: $90
  • Supabase: Free tier (more than sufficient for 45 posts)
  • Make.com: Free tier for basic automation
  • Domain + hosting: ~$15
  • Google Search Console: Free

Total: ~$105/month.

The real cost is the upfront investment: one weekend writing knowledge files, one afternoon setting up the technical pipeline, and three months of daily 10-minute sessions to build the content library.

Compare that to hiring a content writer ($500-2,000/month for one article per week), an SEO specialist ($1,000-3,000/month), and a content manager ($2,000-4,000/month). The traditional stack costs $3,500-9,000/month for what this system does in 10 minutes daily.

The economics aren't even close.

What Breaks

This isn't magic. Four things that consistently go wrong.

Tone drift. After long sessions, the agent starts sounding generic. The knowledge files help but don't eliminate this completely. Solution: shorter sessions, frequent resets, and always reviewing the first two sentences of any draft. If the hook sounds like "In this article, we will explore..."—reject it immediately.

Hallucinated data. The agent sometimes invents statistics or misattributes sources. Every number in every article gets manually verified. This is non-negotiable. One wrong number and your credibility is gone.

Over-optimization. The agent can make content that's technically perfect for SEO but reads like a keyword-stuffed robot wrote it. The knowledge files help, but you need to watch for this. If a paragraph mentions the primary keyword four times, it needs a human edit.
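
The first and third failure modes are mechanical enough to pre-screen before the human pass. A minimal sketch of that kind of gate; the phrases and thresholds are illustrative, not my actual tooling:

```python
import re

# Flags generic hooks and keyword stuffing in a draft before review.
BANNED_HOOKS = [
    "in this article, we will explore",
    "in today's rapidly evolving landscape",
]

def lint_draft(text: str, primary_keyword: str) -> list[str]:
    problems = []
    opening = text[:200].lower()
    for phrase in BANNED_HOOKS:
        if phrase in opening:
            problems.append(f"Generic hook detected: {phrase!r}")
    # Flag any paragraph that repeats the primary keyword 4+ times
    for i, para in enumerate(text.split("\n\n")):
        hits = len(re.findall(re.escape(primary_keyword), para, re.IGNORECASE))
        if hits >= 4:
            problems.append(f"Paragraph {i + 1} repeats {primary_keyword!r} {hits} times")
    return problems
```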

Knowledge file staleness. As the blog grows and focus shifts, the context files need updates. Outdated context produces generic output. I update mine roughly every two weeks.

The time savings come from the agent doing 90% of the work, not 100%. That last 10% of human review is what separates this from AI content farms. Skip it and you'll rank initially, then tank when Google catches up.

Why This Works Better Than Traditional Content Workflows

Traditional content workflow: open GSC, export data, open spreadsheet, analyze, open CMS, make changes, repeat for every page. Takes hours. You do it once a week if you're disciplined.

Agentic workflow: tell the agent to analyze and optimize. It does everything in one session. You do it daily because it costs 10 minutes. I break down my daily SEO workflow in a separate article.

The compounding effect is brutal. Daily data-driven optimization means every article gets better every day. Traditional SEO operates in weekly or monthly cycles. By the time your competitor's SEO consultant reviews the spreadsheet, my agent has already identified the content gap, written the article, and published it.

Speed isn't the only advantage. The agent operates from a purely data-driven perspective. No gut feelings. No "I think this title sounds better." Impressions are down? Here's why, based on the data. CTR is low? Here's the intent mismatch, proven by the search queries.

It's the same engineering approach to uncertainty that works in system architecture. Data in, logic applied, data out.

The Bigger Picture: Why Claude Code for Content

SemiAnalysis reported this week that 4% of GitHub public commits are now authored by Claude Code. 135,000 commits per day. They project 20%+ by end of 2026.

Most of that is software engineering. But the underlying capability—an agent that reads context, plans multi-step tasks, and executes them autonomously—applies to any information work.

Content operations are information work. Research, write, optimize, publish, monitor, iterate. Every step is: data in, logic applied, data out. The same pattern repeated across layers.

I'm not using Claude Code as a writing assistant. I'm using it as an operating system for content. The writing is one step in a pipeline that includes data analysis, optimization, publishing, and monitoring. The agent doesn't just produce text—it operates the entire system.

That's the shift most people miss. They ask "can AI write good articles?" Wrong question. The right question: "can AI operate the entire pipeline from data to published, optimized content?" The answer is yes. I have 45 posts and 68,000 impressions to prove it.

The Verdict

Claude Code automation isn't about writing faster. It's about operating a content system at a frequency and consistency that humans can't match manually.

$105/month. 10 minutes daily. 45 published posts. 68,000 impressions in 9 days.

The traditional content workflow—manual research, manual writing, manual optimization, weekly cycles—is dead for solo operators. Not because AI writes better content. Because an AI agent that reads your search data, writes in your voice, and publishes to your database turns content marketing from a part-time job into a 10-minute daily review.

Start with the knowledge files. That's the hard part. The rest is plumbing.


Frequently Asked Questions

What is Claude Code automation?

Using Anthropic's CLI agent to operate an entire content pipeline—topic discovery from GSC data, article drafting with knowledge files for voice calibration, SEO optimization, and database publishing. The agent manages the full workflow, not just writing.

How much does it cost?

About $105/month total. Claude Max subscription ($90), Supabase free tier, Make.com free tier, and domain hosting (~$15). Compare that to $3,500-9,000/month for a traditional content team.

Does it replace content writers?

It replaces the writing mechanics but not the expertise. 90% automation, 10% human review. The human provides domain knowledge via knowledge files and reviews output for accuracy and tone.

What are knowledge files?

Markdown documents Claude Code reads before producing content. They include the database schema, personal context, writing style rules, and the internal link library. They're 80% of the system's value. Writing them takes a weekend.

How does Claude Code connect to Google Search Console?

Via the GSC API with a service account. Claude Code reads impressions, clicks, CTR, and position per page and query. It uses this data to identify content gaps, underperforming pages, and optimization opportunities.

Can AI-generated content rank on Google?

Yes, with proper knowledge files. 45 posts generating 68,000+ impressions in 9 days. Without knowledge files, the content is generic and won't rank. The files calibrate voice, tone, and expertise.

What breaks?

Four things: tone drift in long sessions, hallucinated statistics, over-optimization with keyword stuffing, and knowledge file staleness. The 10% human review catches all of these.

How does this compare to a traditional content workflow?

Traditional: hours per article, weekly optimization cycles. Claude Code: 10 minutes daily for the entire pipeline. Daily compounding beats monthly cycles. Speed plus data-driven decisions.
