Marco Patzelt
February 9, 2026

Agentic SEO: 68k Impressions in 9 Days With Claude Code

Agentic SEO with Claude Code, Supabase & GSC—68k impressions in 9 days. Full stack breakdown, real numbers, step-by-step workflow. No theory, just proof.

Everyone's talking about "Agentic SEO." Nobody's showing their stack. Or their numbers.

I built an agentic SEO system that manages my entire blog—writing, optimizing, monitoring, publishing—through Claude Code connected to Supabase and Google Search Console. Since February 1st: 68,000 impressions, 1,300 clicks, 200 clicks/day. Here's exactly how it works.

What Agentic SEO Actually Means

Most "Agentic SEO" content describes a concept: AI agents that handle SEO tasks autonomously. Sounds great in theory. But the articles read like product pitches for enterprise platforms that cost $2,000/month.

Here's what agentic SEO means in practice: your AI agent has direct access to your database, your search console data, and your content. It doesn't suggest changes—it makes them. It doesn't generate reports you read—it reads the reports itself and acts on them.

The difference between AI-assisted SEO and agentic SEO is simple. AI-assisted: you ask ChatGPT to rewrite your meta title. Agentic: your agent notices a page with 5,000 impressions and 12 clicks, analyzes the search intent mismatch, rewrites the meta title, and updates your database. You review the change. That's it.

The Stack

Four components. Nothing exotic.

Claude Code is the brain. It runs in your terminal with full system access. Claude Code can read your project files, execute commands, and connect to external services through environment variables. It's not a chatbot—it's an agent with tools.

Supabase is the CMS. My blog content lives in a Supabase database. Articles, metadata, titles, descriptions—all structured data that Claude Code can query and update directly through the Supabase REST API. I replaced an entire agency workflow with one repo using this setup.
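
To make that read path concrete, here is a minimal sketch using the supabase-js client. The table and column names (articles, meta_title, and so on) are placeholders, not the exact schema:

```typescript
// Minimal sketch: reading article metadata from Supabase with supabase-js.
// Table and column names here are placeholders, not an exact schema.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

async function listArticles() {
  const { data, error } = await supabase
    .from("articles")
    .select("slug, meta_title, meta_description, published_at");
  if (error) throw error;
  return data;
}
```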

Google Search Console provides the data. Claude Code reads impressions, clicks, CTR, average position—per page, per query. This is where the "agentic" part gets real: the agent doesn't guess what to optimize. It looks at actual search performance.
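
A rough sketch of that read access, assuming the googleapis Node client and a service-account key file, looks something like this:

```typescript
// Sketch: pulling per-page, per-query performance from the GSC Search Analytics API.
// Assumes the googleapis Node client and a service-account JSON key (placeholder env var).
import { google } from "googleapis";

const auth = new google.auth.GoogleAuth({
  keyFile: process.env.GSC_KEY_FILE, // path to the service-account key
  scopes: ["https://www.googleapis.com/auth/webmasters.readonly"],
});

const searchconsole = google.searchconsole({ version: "v1", auth });

async function queryPerformance(siteUrl: string, startDate: string, endDate: string) {
  const res = await searchconsole.searchanalytics.query({
    siteUrl,
    requestBody: {
      startDate,
      endDate,
      dimensions: ["page", "query"],
      rowLimit: 500,
    },
  });
  // Each row carries keys (page, query), clicks, impressions, ctr, position.
  return res.data.rows ?? [];
}
```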

Knowledge Files are the personality. This is what makes the output sound like me instead of generic AI content. More on this below.

How the Knowledge Files Work

This is the part everyone skips. Your AI agent is only as good as its context.

I maintain a set of knowledge files that Claude Code reads before doing anything:

database_schema.md — The agent knows exactly how my database looks. Table names, column types, relationships. It can push content, edit metadata, manage internal links. No guessing, no wrong field names.

marco_context.md — Everything about me. My engineering background, how I think about problems, what I value, what my blog is for. This isn't vanity—it's calibration. The agent writes differently when it knows the author built production middleware, not just read about AI on Twitter.

tone.md — How to write. Short paragraphs. Numbers over adjectives. No filler phrases. Direct address. Specific rules the agent follows every time.

internal_links.md — Every published article with its URL. The agent cross-links new content to existing posts without me having to remember what I've written.

The result: Claude Code doesn't write generic content. It writes content that sounds like it came from a specific person with specific expertise. Google rewards this. Readers trust this. The difference between this approach and just building personas is that I'm building an environment, not a character.

The Workflow: From Data to Published Article

Here's what a typical session looks like.

Step 1: Data-Driven Topic Discovery

I open Claude Code and ask it to check my search performance. The agent pulls GSC data, cross-references it with my existing content, and returns a specific report:

  • Page X: 5,200 impressions, 23 clicks (0.4% CTR). Search intent mismatch—users search for "setup guide" but page reads like a news article.
  • Page Y: 3,800 impressions, position 8.2. Meta title doesn't contain primary keyword in first 30 characters.
  • Query Z: 1,200 impressions, no dedicated page. Content gap.

Not a vague "you should optimize your titles"—a specific list with numbers and reasons.
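
The triage logic behind a report like that is simple. Here's a hedged sketch of the kind of filter the agent applies; the thresholds are illustrative, not the ones from my actual reports:

```typescript
// Sketch: flag pages with plenty of impressions but a weak CTR.
// The 1,000-impression and 1% CTR thresholds are illustrative only.
type PageStats = {
  page: string;
  impressions: number;
  clicks: number;
  position: number;
};

function findCtrMismatches(rows: PageStats[]) {
  return rows
    .map((r) => ({ ...r, ctr: r.clicks / Math.max(r.impressions, 1) }))
    .filter((r) => r.impressions > 1_000 && r.ctr < 0.01)
    .sort((a, b) => b.impressions - a.impressions);
}
```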

Step 2: Agent Takes Action

For existing content, the agent doesn't just report—it fixes. It rewrites meta titles, adjusts descriptions, checks if the search intent matches the content structure. Then it pushes the changes directly to Supabase.
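
The write path is one small call. A sketch with supabase-js, again with placeholder table and column names:

```typescript
// Sketch: pushing a rewritten meta title back to the CMS.
// "articles", "meta_title", and "slug" are placeholder names.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

async function updateMetaTitle(slug: string, newTitle: string) {
  const { error } = await supabase
    .from("articles")
    .update({ meta_title: newTitle })
    .eq("slug", slug);
  if (error) throw error;
}
```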

For content gaps, it drafts new articles using my knowledge files for voice calibration. The draft follows my exact structure: hook, promise, deliver, verdict, FAQ.


Step 3: Review

I review what the agent changed. This takes 10 minutes instead of 3 hours. The agent did the data analysis, the writing, the optimization, and the database updates. I check if the take is right and the tone is mine.

This is the key difference from traditional SEO workflows. The agent operates on data and logic—it's an engineering approach to content. Not vibes. Not "best practices from a blog post." Actual performance data driving every decision.

The Numbers

Since February 1st, 2026. Nine days of agentic SEO:

  • Total Impressions: 68,000+
  • Total Clicks: 1,300+
  • Daily Clicks: ~200
  • Daily Impressions: 13,000–18,000
  • Average Position: steadily climbing

For context: my blog had ~5 impressions/week three months ago. The hockey stick started when I switched from manual content to this agentic workflow.

The agent doesn't just write fast. It writes data-driven. Every article targets queries that GSC data shows real demand for. Every meta title is optimized against actual impression data. Every content gap is identified from search console numbers, not keyword tool estimates.

Why This Works Better Than Traditional SEO

Traditional SEO workflow: open GSC, export data, open spreadsheet, analyze, open CMS, make changes, repeat for every page. Takes hours. You do it once a week if you're disciplined.

Agentic SEO workflow: tell your agent to analyze and optimize. It does everything in one session. You do it daily because it costs you 10 minutes.

The compounding effect is brutal. Daily data-driven optimization means every article gets better every day. Traditional SEO operates on weekly or monthly cycles. By the time your competitor's SEO consultant reviews the spreadsheet, my agent has already identified the content gap, written the article, and published it.

Speed isn't the only advantage. The agent operates from a purely data-oriented perspective. No gut feelings. No "I think this title sounds better." Impressions are down? Here's why, based on the data. CTR is low? Here's the intent mismatch, proven by the search queries. It's the same engineering approach to uncertainty that works in system architecture.

What You Need to Build This

Claude Code — Requires a Claude subscription. Install via npm install -g @anthropic-ai/claude-code. The agent needs direct access to your project directory.

A database-backed CMS — Supabase, PlanetScale, or any database you can access via API. WordPress works too via the REST API. The agent needs programmatic access to your content.

Google Search Console API access — Set up a service account, connect it to your GSC property. The agent reads performance data through the API.

Knowledge files — This is 80% of the work. You need to document your database schema, your writing voice and style rules, your personal context, and your current project state.

The technical setup takes an afternoon. Writing good knowledge files takes a weekend. But once they're in place, the agent runs on its own.

The Reality Check

This isn't magic. Three things to keep in mind.

You still need expertise. The agent amplifies what you know—it doesn't replace knowing things. My articles rank because they contain real engineering insights, not because an AI wrote them fast. The agent handles the SEO mechanics. The expertise is yours.

Knowledge files need maintenance. As your blog grows and your focus shifts, the context files need updates. Outdated context produces generic output.

Review is non-negotiable. The agent makes mistakes. Wrong tone, incorrect technical details, bad takes. Every output gets reviewed. The time savings come from the agent doing 90% of the work, not 100%.

The Verdict

Agentic SEO is not a buzzword. It's an engineering approach to content marketing.

Claude Code + Supabase + Google Search Console + good knowledge files. That's the stack. 68,000 impressions in 9 days is the proof.

The traditional SEO workflow—manual research, manual writing, manual optimization, weekly cycles—is dead for solo operators. Not because AI writes better content. Because an AI agent that reads your search data, writes in your voice, and publishes to your database turns content marketing from a part-time job into a 10-minute daily review.

Start with the knowledge files. That's the hard part. The rest is plumbing.


Frequently Asked Questions

What is agentic SEO?

Agentic SEO uses autonomous AI agents to manage SEO end-to-end—analyzing search data, writing content, optimizing metadata, and publishing—with minimal human intervention. The agent acts on data independently and pushes changes directly to your CMS.

What do you need to build an agentic SEO system?

Claude Code, a database-backed CMS like Supabase or WordPress, and Google Search Console API access. Plus well-written knowledge files that define your writing voice, database schema, and project context.

How does Claude Code access Google Search Console?

Claude Code accesses Google Search Console through API credentials stored as environment variables. It queries impressions, clicks, CTR, and position per page and query, then uses that data to identify optimization opportunities.

What results did this setup produce?

In 9 days: 68,000 impressions and 1,300 clicks, averaging 200 clicks per day. The blog grew from near-zero to these numbers in approximately 3 months using this agentic approach.

What are knowledge files?

Markdown documents that give the AI agent context about your writing voice, database structure, expertise, and project state. They're what separates generic AI content from content that sounds like a real person. Setting them up takes a weekend.

How does agentic SEO differ from traditional SEO?

Traditional SEO: weekly manual cycles of data export, analysis, and CMS updates. Agentic SEO: daily 10-minute reviews. The AI agent handles data analysis, writing, optimization, and publishing autonomously. Daily compounding outpaces weekly cycles.

Does this work with CMSs other than Supabase?

Yes. Any CMS with API access works—WordPress REST API, Supabase, Webflow, PlanetScale. The key requirement is programmatic access so the agent can read and update your content.

Do you still have to write the content yourself?

You need expertise, not manual writing. The agent writes in your voice via knowledge files, but the insights must be yours. Review every output—the agent handles 90% of the work, you handle the 10% that ensures quality.
