The wiki approach to AI governance fails in a predictable way. You write the rules. You put them somewhere authoritative. The agent reads them once, at session start, and then operates from a compressed representation of those rules for the rest of the session. By the time the agent is 80% through a long task, the system prompt instructions are competing with 50,000 tokens of context for attention weight.

The fix isn't a longer system prompt. It's moving policy out of the context window and into the call path. Every serious governance approach for agentic systems is arriving at this conclusion from a different direction.

Three patterns, one convergence

Look at what's actually shipping in 2025 and 2026 and you find three distinct implementation patterns. They have different names, different toolchains, and different target audiences. They are solving the same problem.

OPA (Open Policy Agent) started in Kubernetes admission control and cloud infrastructure. Its policy language, Rego, is a declarative logic language that evaluates a structured input document against a set of rules and returns a decision. The same architecture that gates a Kubernetes pod deployment can gate an agent tool call: pass the proposed action as a structured document, evaluate it against the policy, allow or deny.

MCP (Model Context Protocol) gateways take a different approach. Rather than a centralised policy engine, each tool in the MCP server carries its own schema and constraints. The tool description is machine-readable. A gateway layer sits between the agent and the tool invocations, inspecting the tool call before forwarding it. InfoQ's analysis of AI agent gateways built with MCP describes this as "the compliance layer moving into the protocol itself" — the tool contract and the enforcement mechanism are the same object.

JSON persona rules, as implemented in ticketyboo's Gatekeep, are the lightest version of the same idea. Each agent role is defined as a structured JSON document that specifies what the agent can do, what it cannot do, and what requires escalation. The rules are evaluated at task assignment, not at session start. An agent that receives a task checks its persona definition before accepting it.

OPA / Rego (evaluation layer): a centralised policy engine. Input: a structured JSON document representing the proposed action. What gets evaluated: data.allow == true, with Rego rules matching the input against allowed action types, resource patterns, caller identity, and time. Best for multi-team, multi-agent systems where policy is a shared service with a built-in audit trail. High operational overhead.

MCP gateway (protocol layer): the tool schema as contract. Input: a tool call with parameters, intercepted before execution. What gets evaluated: tool_name, params, and caller_role; the gateway checks the tool's schema constraints and the caller's permissions before forwarding the invocation. Best for tool-level observability and composable, per-tool enforcement. Works with existing MCP servers, but requires schema discipline.

JSON persona rules (agent layer): a role-bound capability spec. Input: a task assignment plus context, checked at task acceptance. What gets evaluated: allowed_actions[] and deny_patterns[]; the persona file defines allowed tools, resource scopes, and escalation rules per agent role. Best for a single-team system with a known agent set. Low operational overhead, fast to iterate, and the right starting point.
Three implementation patterns, operating at different architectural layers. OPA is the policy engine layer. MCP gateways sit at the protocol layer. JSON persona rules operate at the agent layer. They are not competing choices: they stack.

Where the convergence started

Policy as code has a clear origin in infrastructure tooling. Terraform introduced the idea that infrastructure state should be declared in version-controlled files and validated before apply. Kubernetes took it further: admission webhooks and OPA/Gatekeeper made policy evaluation a first-class part of the resource creation path. Nothing gets into the cluster without passing the policy engine.

OPA (policyascode.dev describes it as "the de facto standard for cloud-native policy") extended this to any structured decision: API authorisation, data access, CI/CD gates. The mental model is consistent: express your rules as code, evaluate them against a structured input, get a machine-readable decision. No wiki, no human in the loop, no ambiguity.

The transition to agentic AI governance is a direct extension. Kyndryl's March 2025 announcement on agentic AI workflow governance described the need for "guardrails evaluated at execution time, not configuration time." The phrase is precise. An agent that reads its constraints once at startup and then operates from memory is operating from configuration time. Constraints evaluated at each tool call are execution-time constraints. The architecture is different, and the reliability properties are different.

2014, Terraform: infrastructure state declared as code.
2016, OPA: Rego, a declarative policy language.
2019, Kubernetes Gatekeeper: policy at admission time, not runtime.
2024, AI guardrails: system prompts as policy (insufficient).
2025-26, agent policy evaluation: OPA, MCP gateways, and JSON personas all ship runtime evaluation.
Policy as code for infrastructure (2014) and policy as code for agents (2025) share the same core idea: rules evaluated at execution time, not read once from a document.

How Gatekeep implements it

ticketyboo's Gatekeep governance system uses the JSON persona pattern. Each agent role in the system has a corresponding persona file. The file is not a system prompt. It is a structured document that is evaluated at task assignment time.

{
  "role": "scanner-agent",
  "version": "1.2",
  "allowed_actions": [
    "read_file",
    "list_directory",
    "run_analysis",
    "write_report"
  ],
  "deny_patterns": [
    "delete_*",
    "write_production_*",
    "modify_terraform_*"
  ],
  "escalation_required": [
    "deploy_*",
    "update_iam_*",
    "publish_*"
  ],
  "resource_scope": {
    "s3_buckets": ["ticketyboo-scan-results", "ticketyboo-reports"],
    "dynamodb_tables": ["ticketyboo"],
    "lambda_invoke": ["ticketyboo-api"]
  },
  "max_task_duration_minutes": 15,
  "requires_human_review_for": ["security_findings_critical"]
}

When a task is routed to the scanner agent, the orchestrator checks the persona file before handing off. If the task contains an action that matches a deny pattern or requires escalation, the task is either rejected or held for human review. The agent never sees the task.

This is the key property: the policy is evaluated before the agent's context window is populated. There is no attention dilution effect. The rules aren't competing with 50,000 tokens of task history. They're evaluated as code, against a structured input, before the agent starts work.
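A minimal sketch of that assignment-time check. The function and constant names here are illustrative, not Gatekeep's internals: the orchestrator matches each action a task declares against the persona's deny, escalation, and allow lists before any context reaches the agent.

```python
import fnmatch

# Abbreviated persona document, same shape as the scanner-agent file above
PERSONA = {
    "allowed_actions": ["read_file", "list_directory", "run_analysis", "write_report"],
    "deny_patterns": ["delete_*", "write_production_*", "modify_terraform_*"],
    "escalation_required": ["deploy_*", "update_iam_*", "publish_*"],
}

def check_task(persona: dict, actions: list[str]) -> str:
    """Decide at assignment time: reject, hold for review, or accept."""
    for action in actions:
        # Deny patterns win outright; the agent never sees the task
        if any(fnmatch.fnmatch(action, p) for p in persona["deny_patterns"]):
            return "reject"
        # Escalation patterns hold the task for a human
        if any(fnmatch.fnmatch(action, p) for p in persona["escalation_required"]):
            return "hold_for_review"
        # Anything not explicitly allowed is rejected
        if action not in persona["allowed_actions"]:
            return "reject"
    return "accept"

check_task(PERSONA, ["read_file", "write_report"])  # "accept"
check_task(PERSONA, ["deploy_service"])             # "hold_for_review"
check_task(PERSONA, ["delete_bucket"])              # "reject"
```

The ordering is the policy: deny beats escalation, and the default for anything unlisted is rejection, which mirrors the default-deny stance of the OPA example below.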

The OPA adaptation for agents

OPA's adaptation to agent governance follows the same pattern as its k8s usage, with the input document changed from "proposed resource" to "proposed tool call." A Rego policy for an agent tool call looks structurally similar to an admission policy:

package agent.tools

import future.keywords.if
import future.keywords.in

# Default deny
default allow := false

# Allow if action is in the agent's permitted set
allow if {
    input.action in data.agent_roles[input.agent_role].allowed_actions
    not deny_by_pattern
    not requires_escalation
}

deny_by_pattern if {
    some pattern in data.agent_roles[input.agent_role].deny_patterns
    glob.match(pattern, [], input.action)
}

requires_escalation if {
    some pattern in data.agent_roles[input.agent_role].escalation_required
    glob.match(pattern, [], input.action)
}

# Emit structured decision with reason
decision := {
    "allow": allow,
    "agent_role": input.agent_role,
    "action": input.action,
    "reason": reason
}

# Fallback keeps `decision` defined when the action is simply
# absent from the allowed set
default reason := "not_in_allowed_actions"

reason := "permitted" if allow

reason := "denied_by_pattern" if deny_by_pattern

reason := "escalation_required" if {
    requires_escalation
    not deny_by_pattern  # avoid conflicting assignments if both match
}

The advantage over JSON persona rules is auditability. OPA emits a structured decision log for every evaluation. You can query the decision history: which agent tried to do what, when, and what the outcome was. For regulated environments or multi-team systems, this audit trail is not optional.

The cost is operational complexity. OPA is a service that needs to be deployed, configured, and maintained. For a single-team system with a known, stable set of agents, JSON persona rules evaluated in the orchestrator are sufficient and easier to iterate. The policyascode.dev guides on agentic AI governance are direct on this: "start with the lightest enforcement mechanism that meets your audit requirements, then upgrade the infrastructure as the system scales."

MCP gateways: policy in the protocol

InfoQ's piece on building AI agent gateways with MCP describes a different insertion point. Rather than a centralised policy engine that all agents consult, MCP gateways sit at the protocol layer: between the agent and the tool execution environment.

Every MCP tool call passes through the gateway. The gateway has access to the tool schema (machine-readable, part of the MCP protocol), the caller identity, and the call parameters. It can allow, deny, transform, or log the call before it reaches the tool handler.

# Simplified MCP gateway middleware
from dataclasses import dataclass
from datetime import datetime, timezone

def utcnow_iso() -> str:
    # ISO-8601 UTC timestamp for the audit record
    return datetime.now(timezone.utc).isoformat()

@dataclass
class AgentIdentity:
    agent_id: str
    role: str

@dataclass
class PolicyDecision:
    allow: bool
    reason: str = ""
    audit_required: bool = False

@dataclass
class GatewayDecision:
    allow: bool
    reason: str = ""
    logged: bool = False

class PolicyGateway:
    def __init__(self, policy_engine) -> None:
        # Any engine exposing `async evaluate(dict) -> PolicyDecision`
        self.policy = policy_engine

    async def intercept(
        self,
        tool_name: str,
        params: dict,
        caller: AgentIdentity
    ) -> GatewayDecision:
        decision = await self.policy.evaluate({
            "tool": tool_name,
            "params": params,
            "caller_role": caller.role,
            "caller_id": caller.agent_id,
            "timestamp": utcnow_iso()
        })
        if not decision.allow:
            return GatewayDecision(
                allow=False,
                reason=decision.reason,
                logged=True
            )
        return GatewayDecision(allow=True, logged=decision.audit_required)

The property that distinguishes MCP gateways from the other patterns: the policy evaluation happens at the protocol boundary, not in the application layer. A tool that bypasses the application logic cannot bypass the gateway. An agent that is manipulated into calling a prohibited tool via prompt injection hits the gateway before the tool handler sees the call.
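A stripped-down illustration of that backstop property; the deny list and function names are hypothetical. Because the check lives in the call path itself, the tool handler is never reached for a denied call, however the agent was led into issuing it.

```python
import fnmatch

# Hypothetical deny patterns enforced at the protocol boundary
DENY_PATTERNS = ["delete_*", "update_iam_*"]

def tool_handler(tool_name: str, params: dict) -> str:
    # The real tool implementation; never reached for a denied call
    return f"executed {tool_name}"

def gateway_call(tool_name: str, params: dict) -> str:
    # Every invocation crosses this boundary, regardless of what the
    # agent was prompted (or injected) into doing
    for pattern in DENY_PATTERNS:
        if fnmatch.fnmatch(tool_name, pattern):
            return f"denied:{tool_name}"
    return tool_handler(tool_name, params)

gateway_call("delete_bucket", {"bucket": "prod-data"})  # "denied:delete_bucket"
gateway_call("read_file", {"path": "report.txt"})       # "executed read_file"
```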

Where each pattern fits

These three patterns are not alternatives. They operate at different layers and address different threat models.

JSON persona rules are the right starting point. Low overhead, easy to version, readable by anyone who can read JSON. They enforce capability boundaries at task assignment time. They don't require infrastructure beyond the orchestrator itself. Start here.

MCP gateways add enforcement at the protocol layer. When you have multiple agents, multiple tool servers, and you can't guarantee that every agent's orchestrator is implementing persona rules correctly, the gateway is the backstop. It is the equivalent of network-layer controls versus application-layer controls.

OPA adds centralised audit and policy management. When policy decisions need to be shared across teams, versioned independently of agent code, and audited over time, OPA provides the infrastructure. The Rego language is expressive enough to encode complex conditions. The decision log is suitable for compliance reporting.

The convergence point: wiki-based rules fail because they're in the context window. Executable policy evaluated at call time succeeds because it's not. Whether you implement that with Rego, an MCP gateway, or a JSON persona file checked in an orchestrator, the structural requirement is the same: the rule must be machine-readable, evaluated before the action executes, and independent of the agent's attention state.



Pattern taxonomy from Antonio Gulli, Agentic Design Patterns (Springer, 2025). All examples are original implementations from the ticketyboo.dev platform.
