Most AI systems know what they can do. OMNI also tracks what it can't — and does something with that information.

Every time OMNI receives a request it can't route to an existing capability, it classifies the gap: what was asked, what category it belongs to, how often that category has come up before. That's the demand signal. Not a feature request, not a backlog item — a structured record of real user intent that the system couldn't satisfy.
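A demand signal could be as small as a dataclass plus a density count. This is a minimal sketch of that shape — the field names and the `signal_density` helper are illustrative, not OMNI's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DemandSignal:
    """One request the system couldn't satisfy, kept as structured intent."""
    raw_request: str   # what was asked
    category: str      # which gap category it was classified into
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def signal_density(signals: list[DemandSignal], category: str, window_days: int = 7) -> int:
    """Count how often a gap category has come up in the trailing window."""
    cutoff = datetime.now(timezone.utc).timestamp() - window_days * 86400
    return sum(
        1 for s in signals
        if s.category == category and s.recorded_at.timestamp() >= cutoff
    )
```

The point of the structure is that density per category is computable at all — a free-text complaint log can't trigger anything downstream.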

Gap → issue → code

Demand signals don't sit in a database waiting for a human to read them. When signal density for a gap reaches a threshold, Engine creates a GitHub issue: "OMNI reports 12 requests for container security scanning in the last 7 days — no matching capability registered." The issue is classified, routed, and enters the AutoDev pipeline.

AutoDev builds it. Spec generation reads the issue and the existing capability patterns — what a registered capability looks like, how it connects to the AgentCore Gateway MCP, what the governance checks require. It produces a YAML task breakdown. The DAG executor runs the tasks in parallel where possible. A branch appears: auto-fix/issue-{number}.
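The "parallel where possible" part of the DAG executor can be sketched as wave-based execution: run every task whose prerequisites are done, then repeat. The task-graph format here (name mapped to prerequisite names) is an illustrative stand-in for AutoDev's YAML breakdown:

```python
from concurrent.futures import ThreadPoolExecutor

def run_dag(tasks: dict[str, list[str]], run) -> list[str]:
    """Execute tasks in dependency order, parallelising each ready wave.

    tasks maps task name -> list of prerequisite task names.
    Returns task names in completion order.
    """
    done: list[str] = []
    remaining = dict(tasks)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # Everything whose prerequisites have all completed can run now.
            ready = [t for t, deps in remaining.items() if all(d in done for d in deps)]
            if not ready:
                raise ValueError("cycle in task graph")
            list(pool.map(run, ready))  # one wave, in parallel
            for t in ready:
                remaining.pop(t)
                done.append(t)
    return done
```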

The PR that gets created isn't going to production. It goes to a test deployment. The same AWS tooling that governs production is used to spin up an isolated environment, deploy the new capability, and run it against real requests from the demand log. Does it handle the cases that created the signal? Does it stay within the cost envelope? Does it pass the governance gate?

The governance gate

Before anything merges, the Sentinel, Auditor, Architect, and Guardian personas run against the PR. Security: does the new Lambda have any obvious vulnerabilities? Cost: what's the projected spend per 1,000 invocations? Architecture: does this follow the existing capability registration pattern?

The scanner that finds vulnerabilities also produces a numerical health score — the same score displayed on the scan results page. This is the actual formula (api/health_score.py):

# api/health_score.py — repository health score
SEVERITY_WEIGHTS: dict[str, int] = {
    "critical": 10,
    "high": 5,
    "medium": 2,
    "low": 1,
    "info": 0,
}

def compute_health_score(findings: list[Finding]) -> int:
    """max(0, 100 - Σ(weight × count))"""
    penalty = sum(SEVERITY_WEIGHTS.get(f.severity, 0) for f in findings)
    return max(0, 100 - penalty)

def compute_layer_scores(findings: list[Finding]) -> dict[str, int]:
    """Per-layer breakdown (dependency, secret, sast, iac, license, quality)."""
    layer_findings: dict[str, list[Finding]] = {}
    for f in findings:
        if f.analysis_layer:
            layer_findings.setdefault(f.analysis_layer, []).append(f)
    return {
        layer: compute_health_score(layer_findings[layer])
        for layer in layer_findings
    }

The governance gate can act on this score: a PR whose new findings drop the health score below a threshold fails the gate automatically. This is the same gate that runs on human-authored PRs. The only difference is who opened the PR. The gate doesn't care.
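A minimal sketch of score-based gating, reusing the weights and formula from the listing above over bare severity strings; the threshold of 80 is an illustrative default, not the real configuration:

```python
SEVERITY_WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1, "info": 0}

def score(severities: list[str]) -> int:
    """Same formula as compute_health_score: max(0, 100 - Σ weight)."""
    return max(0, 100 - sum(SEVERITY_WEIGHTS.get(s, 0) for s in severities))

def passes_gate(severities_after_pr: list[str], threshold: int = 80) -> bool:
    """Fail automatically when the post-PR health score sits below the threshold."""
    return score(severities_after_pr) >= threshold
```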

Register, discover, done

Merge to main. A post-deploy hook registers the new capability in AgentCore Gateway. Next time OMNI receives a request in that category, the capability is in the mesh. OMNI didn't know about it at the start of the conversation. It does now.

The loop is: demand signal → issue → spec → build → test deploy → governance → merge → register → OMNI discovers. No human authored the capability. A human reviewed and approved the PR — that's the governance dial at its current setting. Turn it up and that step goes away too. We haven't done that yet.

Eating the dog food

This site is built with the same tools it writes about. The scanner that scans repos for security issues has been run against this repo. The AutoDev pipeline has opened PRs on this codebase. The governance gate has reviewed them. OMNI can be asked about this site's architecture and answer from actual knowledge of the stack, because it is registered as a capability.

This isn't a marketing position. It's a testability requirement. If the tools don't work on their own code, they don't work. Running on your own infrastructure is the fastest way to find out whether something is actually reliable or just demo-reliable.

The security scanner has found real issues in this codebase. The governance gate has blocked real PRs. The AutoDev pipeline has produced code good enough to merge and code that needed rejection. The demand signal loop has created capabilities that didn't exist before a user asked for something.

Self-learning stacks

OMNI's self-learning spec covers three mechanisms: classification (every interaction tagged by type), honest admission (explicit "I can't do this yet" rather than a hallucinated answer), and auto-governance (data sensitivity classification on every input, tiered handling based on what's in the request).
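The auto-governance step — a sensitivity tier on every input — can be sketched as a tier classifier. The tier names and regex patterns here are illustrative stand-ins for a real classifier:

```python
import re

# Illustrative tiers, checked from most to least sensitive.
PATTERNS = {
    "restricted": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-shaped
    "confidential": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email-shaped
}

def classify_sensitivity(text: str) -> str:
    """Tag an input with the highest matching sensitivity tier."""
    for tier in ("restricted", "confidential"):
        if PATTERNS[tier].search(text):
            return tier
    return "public"
```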

The knowledge engine layer keeps memories across sessions — not full conversation history, but structured facts extracted from interactions. When you ask OMNI about a codebase it has seen before, it draws on those memories. When it encounters a new pattern, it adds to them.

The honest admission mechanism is more important than it sounds. An AI system that confidently makes things up is worse than one that says "I don't have a capability for that yet." The demand signal only works if the signal is accurate. False capability claims produce garbage demand data.

The stack behind this site is described across the agentic stack and capability mesh articles. The scanner runs against real repos including its own.

If the articles or tools have been useful, a coffee helps keep things running.


Related articles

→ OMNI: capability mesh and demand signal loop
→ AutoDev: from issue to pull request
→ Routing work to the right model
→ Governance as code: Gatekeep
