The Persona Fallacy
A common oversight when architecting AI systems is the tendency to simulate employees. Founders and strategists often write extensive prompts to craft characters like "Steve, the Senior Sales Agent," defining him as "friendly but firm" with "20 years of experience."
This often yields diminishing returns.
The Reality: "Steve" is a probabilistic model predicting tokens. When you attempt to build a software system based on personalities, you introduce unnecessary complexity.
Similarly, many build "Multi-Agent Systems" where a Sales agent passes notes to a Support agent.
I call the cost of these hand-offs the "Context Tax".
Every time an AI agent generates an output and passes it to another agent, we lose signal. It functions much like the "Telephone Game."
- The User provides input.
- Agent A interprets it (Loss 1).
- Agent A summarizes it for Agent B (Loss 2).
- Agent B works only from that summary, so its output may no longer align with the original input (Loss 3), as sketched in the code below.
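To make the loss points concrete, here is a minimal TypeScript sketch of that hand-off chain. The `callModel` function and the agent names are hypothetical stand-ins for whatever LLM client and roles a real system uses; only the shape of the pipeline matters.

```typescript
// Hypothetical stand-in for whatever LLM client the system uses.
declare function callModel(agent: string, prompt: string): Promise<string>;

// Each hop re-encodes the request through the model, compounding the Context Tax.
async function handleRequest(userInput: string): Promise<string> {
  // Loss 1: Agent A collapses the raw request into a single interpretation.
  const interpretation = await callModel("sales-agent", userInput);

  // Loss 2: Agent A condenses that interpretation into a hand-off note.
  const handoffNote = await callModel(
    "sales-agent",
    `Summarize this for the support team: ${interpretation}`,
  );

  // Loss 3: Agent B only ever sees the note, never the original input.
  return callModel("support-agent", handoffNote);
}
```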
By the end, you have a system that mimics human collaboration but leaks signal at every hand-off. A more robust approach focuses less on simulated colleagues and more on Environments.
Physics Over Personality
When architecting an AI system, the first step should be defining the "Physics" of the world the intelligence inhabits.
In the physical world, gravity ensures stability. We do not have to ask gravity to work; it simply exists. In software engineering, Tools and Sandboxes serve as our gravity.
Rather than prompting an agent to double-check its own code—a request it may overlook—I give the AI access to a Python interpreter or a SQL database within an isolated sandbox.
The Scenario: The AI generates a SQL query.
- The "Persona" Approach: A second agent ("The Reviewer") reviews the code and attempts to verify it. (Prone to error).
- The "Environment" Approach: The AI actually executes the code against a sandbox database.
If the query is wrong, the database throws an error. This is "Physics." The environment pushes back. The AI receives immediate, deterministic feedback: Error: Column 'customer_id' does not exist.
The AI corrects itself not due to inherent reasoning capabilities, but because it encountered a hard constraint. Reliability comes not from the model's inference, but from the compiler's rigidity.
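A minimal sketch of that loop in TypeScript follows. Here `generateSql` and `runQuery` are hypothetical placeholders for the LLM client and the sandboxed database driver; the point is that the retry is driven by the database's error string, not by another agent's opinion.

```typescript
// Hypothetical placeholders for the LLM client and the sandboxed database driver.
type QueryResult = { ok: true; rows: unknown[] } | { ok: false; error: string };
declare function generateSql(task: string, feedback?: string): Promise<string>;
declare function runQuery(sql: string): Promise<QueryResult>;

// The sandbox, not a "Reviewer" persona, decides whether the query is acceptable.
async function queryWithPhysics(task: string, maxAttempts = 3): Promise<unknown[]> {
  let feedback: string | undefined;

  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const sql = await generateSql(task, feedback);
    const result = await runQuery(sql); // executed against an isolated sandbox copy

    if (result.ok) return result.rows;

    // Deterministic pushback, e.g. "Error: Column 'customer_id' does not exist"
    feedback = result.error;
  }

  throw new Error(`Query still failing after ${maxAttempts} attempts: ${feedback}`);
}
```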
The Constitution
Once the physics (tools) are established, we need laws. However, these laws should not be buried in complex system prompts inside the code.
We use Markdown Constitutions.
In this architectural style, files named business_rules.md or safety_policy.md serve as the agent's "Constitution." They are loaded at runtime and injected into the context.
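As a sketch of that loading step (assuming a Node.js runtime and the file names above), the Constitution is read from disk at startup and prepended to the model's context:

```typescript
import { readFile } from "node:fs/promises";

// Markdown documents maintained by stakeholders; file names follow the examples above.
const CONSTITUTION_FILES = ["business_rules.md", "safety_policy.md"];

// Assemble the system prompt at runtime from the Markdown Constitution.
async function buildSystemPrompt(): Promise<string> {
  const documents = await Promise.all(
    CONSTITUTION_FILES.map((path) => readFile(path, "utf-8")),
  );

  return [
    "You operate under the following Constitution.",
    "If a user request contradicts the Constitution, the Constitution wins.",
    ...documents,
  ].join("\n\n");
}
```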
The Benefits:
- Separation of Code and Logic: The TypeScript code defines how the agent thinks (the mechanics). The Markdown file defines what it is allowed to do (the morals).
- Scalability for Stakeholders: A non-technical stakeholder can open business_rules.md and modify a rule ("Never offer more than 10% discount") without needing to call a developer or trigger a deployment.
- Deterministic Hierarchy: I explicitly instruct the model: "If a user request contradicts the Constitution, the Constitution wins."
This is "Lean Architecture." Instead of retraining the model (expensive, slow) or hardcoding the system prompt (rigid), we simply update the document that describes the business reality.
The Verdict
We are moving past the phase of "Prompt Engineering" and into the age of "Environment Engineering."
Basing an AI strategy on simulated personalities yields a fragile foundation, because it relies on a probabilistic model happening to be "correct." Building environments, with hard physics (compilers, sandboxes) and clear Markdown laws, creates a solid one.
The industry must move beyond writing scripts for imaginary actors and start building the stage that keeps the system stable.
Recommendation: Remove persona descriptions from system prompts. Replace them with constitution.md and provide the agent with a compiler. This provides a deterministic path forward.