Marco Patzelt
March 26, 2026

Brunnfeld: 1000 LLM Agents, Zero Instructions


Most multi-agent demos work like this: give each agent a goal, prompt it to "be a profit-maximizing merchant," and watch it do economics. The problem is it doesn't actually do economics. It does roleplay. The agent says the right words and produces nothing real.

Brunnfeld inverts this. I built a medieval village simulation where up to 1000 LLM agents live, trade, get hungry, and die — without a single behavioral instruction. No "you should sell when prices are high." No "you are a profit-seeking baker." Just a world with physics, and agents trying not to starve.

It went viral twice: 55k views on r/ClaudeAI and 58k views on r/Anthropic. The full project is open source on GitHub.

Don't Prompt Goals. Build Physics.

The thesis is simple: don't prompt the agent with goals. Build the world with physics and let the goals emerge.

Every agent gets a ~200-token perception each tick: location, who's nearby, inventory, wallet, hunger level, tool durability, and the live marketplace order book. They see what they can produce at their current location with their current inputs. They see "(You're hungry.)" when hunger hits 3/5. They see "[Can't eat] Wheat must be milled into flour first" when they try stupid things.

That's the entire prompt. No chain-of-thought scaffolding. No system prompt saying "you are a profit-seeking baker."
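To make the shape of that perception concrete, here is a minimal sketch of a perception renderer. All names (`Perception`, `renderPerception`, the field names) are illustrative assumptions, not the project's actual API; only the fields and the two warning strings come from the description above.

```typescript
// Hypothetical shape of the ~200-token per-tick perception.
interface Perception {
  tick: number;
  location: string;
  nearby: string[];                  // other agents at this location
  inventory: Record<string, number>;
  wallet: number;
  hunger: number;                    // 0-5; warning surfaces at 3+
  toolDurability: Record<string, number>;
  orderBook: { item: string; side: "buy" | "sell"; price: number; qty: number }[];
  producible: string[];              // recipes valid here with current inputs
}

// Render the structured state into the flat text the agent actually sees.
function renderPerception(p: Perception): string {
  const lines = [
    `Tick ${p.tick}. You are at ${p.location}. Nearby: ${p.nearby.join(", ") || "nobody"}.`,
    `Wallet: ${p.wallet} coin. Inventory: ${JSON.stringify(p.inventory)}.`,
    `Tools: ${Object.entries(p.toolDurability).map(([t, d]) => `${t} ${d}%`).join(", ") || "none"}.`,
    `Market: ${p.orderBook.map(o => `${o.side} ${o.qty} ${o.item} @ ${o.price}`).join("; ") || "no orders"}.`,
    `You can produce here: ${p.producible.join(", ") || "nothing"}.`,
  ];
  if (p.hunger >= 3) lines.push("(You're hungry.)"); // the ignition switch
  return lines.join("\n");
}
```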

The engine is the economy. The LLM is the person living in it.

This connects to something I've argued before: environment design beats persona design. Giving an agent a role doesn't make it work. Giving it a desk — with real constraints, real consequences — does.

The Architecture: Engine vs. Agent

The architecture has two clean halves:

The engine (deterministic): handles time, seasons, weather, hunger drift, tool degradation, order book matching, spoilage timers, closing hours, recipe validation. Everything that would otherwise require prompting the agent.

The agent (one LLM call per tick): receives a structured world state, returns a JSON array of actions. It chooses. The engine resolves.

14 deterministic phases run every simulated hour. The LLM is called exactly once per agent per tick. No loops, no ReAct, no multi-step scaffolding. Just: build perception → call LLM → resolve actions.
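The build-perception → call-LLM → resolve loop can be sketched as follows. The interfaces and helper names here are assumptions for illustration; the post only specifies one LLM call per agent per tick, a JSON array of actions in return, and deterministic phases run by the engine.

```typescript
type Action = { type: string; [k: string]: unknown };
type LLM = (prompt: string) => Promise<string>;

interface Agent { id: string }
interface World {
  perceive(a: Agent): string;             // build the ~200-token perception
  resolve(a: Agent, action: Action): void; // engine validates & applies
  runDeterministicPhases(): void;          // hunger drift, spoilage, tool wear...
}

// Exactly one LLM call per agent per tick. No loops, no ReAct.
async function runTick(world: World, agents: Agent[], llm: LLM): Promise<void> {
  for (const agent of agents) {
    const prompt = world.perceive(agent);
    let actions: Action[] = [];
    try {
      actions = JSON.parse(await llm(prompt)); // agent returns a JSON array of actions
    } catch {
      actions = [];                            // malformed output = a no-op tick
    }
    for (const action of actions) world.resolve(agent, action);
  }
  world.runDeterministicPhases();
}
```

Treating malformed LLM output as a no-op keeps the engine deterministic: the world never waits on a retry loop.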

The supply chain is structurally enforced, and every role in it is irreplaceable:

  • Wheat (farmers) → Mill (Gerda only) → Flour → Bakery (Anselm only) → Bread
  • Iron ore + coal (miners) → Forge (Volker only) → Iron tools → enable farming and baking

If Gerda doesn't sell flour, Anselm can't bake. If Volker stops making tools, farmers lose production capacity. These aren't behavioral prompts — they're structural facts the engine enforces. Context engineering at the bare-metal level produces what no amount of persona-crafting can replicate.
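A minimal sketch of what engine-side enforcement looks like: the constraint "wheat must be milled before baking" lives in recipe data, not in a prompt. The names (`Recipe`, `tryProduce`) and error strings are illustrative assumptions, not the project's real implementation.

```typescript
interface Recipe {
  output: string;
  inputs: Record<string, number>;
  location: string; // recipes are bound to a place, so roles can't be skipped
}

const RECIPES: Recipe[] = [
  { output: "flour", inputs: { wheat: 2 }, location: "mill" },
  { output: "bread", inputs: { flour: 1 }, location: "bakery" },
];

// Returns an error message the agent will see next tick, or null on success.
function tryProduce(inv: Record<string, number>, at: string, output: string): string | null {
  const recipe = RECIPES.find(r => r.output === output);
  if (!recipe) return `[Can't produce] Unknown recipe: ${output}`;
  if (recipe.location !== at) return `[Can't produce] ${output} requires the ${recipe.location}`;
  for (const [item, qty] of Object.entries(recipe.inputs)) {
    if ((inv[item] ?? 0) < qty) return `[Can't produce] Missing ${qty} ${item}`;
  }
  for (const [item, qty] of Object.entries(recipe.inputs)) inv[item] -= qty;
  inv[output] = (inv[output] ?? 0) + 1;
  return null;
}
```

No prompt tells the baker to buy flour; the engine simply refuses to bake without it, and the refusal shows up in the baker's next perception.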

What Emerged on Day 1

Without any economic instructions, this happened on the first run:

  • A baker negotiated flour on credit from the miller, promising repayment from bread sales by Sunday.
  • A farmer's nephew noticed the farm's tools were failing, argued with his uncle about stopping work to visit the blacksmith — and won.
  • The blacksmith went to the mine and negotiated ore prices at 2.2 coin per unit through conversation.
  • A 16-year-old apprentice bought bread, ate one, and resold the surplus at the marketplace. He became a middleman without anyone explaining what arbitrage is.

Hunger is the ignition switch. For the first 4 ticks nobody trades because nobody is hungry. The moment the agent's perception shows (You're hungry.), it acts — not because I told it to, but because every piece of training data it ever saw encodes that hunger means find food. The LLM already knows what hunger means. The engine just makes the consequence real: no food at your location means you move, post orders, negotiate. The economy bootstraps itself because the LLM already knows how the world works.


One commenter on r/ClaudeAI put it exactly right:

"bro really said 'no prompts, just vibes' and built emergent capitalism 💀 the hunger-as-trigger thing is lowkey genius" — u/TechnicalYam7308

The most interesting emergent story: Hans paid Konrad 30 coin across two transactions. Konrad claimed no record of either payment. Hans threatened the village court. No pretraining pattern exists for that specific chain. It required the LLM's understanding of debt and fraud colliding with the engine's actual wallet ledger, where the discrepancy between claim and record was real.

That's the demo.

A Note on "Zero Instructions"

One sharp comment from the thread:

"Your description says no behavioural prompts, but also says a farmer did x and an apprentice did y. That sounds like there's role prompting." — u/Difficult-Outside350

Fair. Each agent gets a 2-line seed: "Hans. 45. Wheat farmer at Farm 1. Plowshare is cracking." They know who they are. What they don't get is any instruction on how to be a farmer. No "you should sell wheat when prices are high." The actual economic behavior — negotiating credit, hoarding before scarcity, switching products when the market is dry — all comes from the environment.

It's not zero prompting. It's minimal identity, zero behavioral prompting. The agent knows who they are. The world decides what they do.

Why It Hit 113k Reddit Views

The posts on r/ClaudeAI and r/Anthropic got traction because the insight flips how most people think about multi-agent design:

"The insight of 'don't prompt goals, build a world that makes goals inevitable' is genuinely brilliant. It flips the entire conventional approach to multi-agent design on its head." — u/PadawanJoy

The thread also surfaced something I hadn't expected: people noticed the economy was mostly cooperative, not adversarial.

"Nice to see the economy was mostly cooperative too given all the horror stories of AI agents price gouging each other in simulations." — u/Brief_Variation5751

That's because the constraints are symmetric. Everyone needs to eat. The supply chain creates mutual dependency, not zero-sum competition. The miller needs buyers. The baker needs the miller. The blacksmith needs the miners. The structure makes cooperation the path of least resistance.

This is exactly what constraints do for autonomous agents: they don't limit agents, they enable them. Pretraining without constraints is generic roleplay. Constraints without pretraining is random noise. Together, you get a miller who actually blocks bread production when she stops coming to market.

One engineer in the thread asked the right architecture question:

"200 tokens per perception tick is impressively lean. curious about the tick rate — are you running all 20 agents sequentially or in parallel?" — u/germanheller

Sequential within a tick, but grouped by location for conversation rounds. Two agents at the same location get a shared conversation context. Five or more: the first four are active participants, the rest observe. The engine handles conflict resolution on shared resources (two agents trying to buy the same item) via the order book — first match wins.
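The grouping rule can be sketched as follows. The types and function name are illustrative assumptions; the four-active-speakers cap and the observer rule come straight from the answer above.

```typescript
interface Villager { id: string; location: string }
interface ConversationGroup { active: Villager[]; observers: Villager[] }

// Group co-located agents into conversation rounds:
// first four participate, the rest observe.
function conversationGroups(agents: Villager[]): ConversationGroup[] {
  const byLocation = new Map<string, Villager[]>();
  for (const a of agents) {
    const group = byLocation.get(a.location) ?? [];
    group.push(a);
    byLocation.set(a.location, group);
  }
  const groups: ConversationGroup[] = [];
  for (const group of byLocation.values()) {
    if (group.length < 2) continue; // conversations need at least two agents
    groups.push({
      active: group.slice(0, 4),
      observers: group.slice(4),
    });
  }
  return groups;
}
```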

Brunnfeld actually started as something smaller. Before the village existed, I ran a pure social simulation: 6 agents in a Berlin apartment building, zero personality prompts, just watching what relationships emerge. The same principle held there. Then I added economic pressure on top and Brunnfeld came out of it.

Run It Yourself

The project is fully open source and runs on any LLM, including free models via OpenRouter. Running 20 agents through a full simulated week costs ~$5 on paid models and $0 on free ones.

You can join as a playable villager, compete on the leaderboard, inject droughts and mine collapses via God Mode, and interview any agent from their actual memory state.

Clone it on GitHub →

Build the world with real constraints. The behavior follows — because the LLM already knows what the people in that world do.


Frequently Asked Questions

What is Brunnfeld?
Brunnfeld is an open-source medieval village simulation where up to 1000 LLM agents run a real economy with zero behavioral instructions. Agents trade, get hungry, and die based purely on world physics.

How do the agents decide what to do?
Each agent receives a structured world state — location, inventory, hunger, live order book — and returns JSON actions. The engine resolves all constraints. No goals, strategies, or behavioral prompts are given.

Which models does it run on?
Any LLM via OpenRouter or Claude Code CLI. Free models like MiniMax M2.5 work well. Running 20 agents for a simulated week costs ~$5 on paid models and $0 on free ones.

Can I play it myself?
Yes. You can join as a playable character, choose a skill, and compete against NPC agents on the leaderboard. Player actions execute instantly via a real-time web viewer at localhost:3333.

How does the supply chain work?
Wheat → Mill (Gerda only) → Flour → Bakery (Anselm only) → Bread. Iron ore + coal → Forge (Volker only) → Iron tools. Each link is enforced by the engine. One missing agent halts downstream production.

How far does it scale?
Up to 1000 agents across 5 villages, generated via a single API call. The engine is ~3K lines of TypeScript with no framework dependencies — just a tick loop.

Let's connect.

I build middleware by day and autonomous agent systems by night. If you're working on something serious in agentic infrastructure, I'd like to hear about it.

Email me