The first version of every multi-tenant platform is usually a monolith with
a `tenant_id` column bolted on. That works until your second client asks for a
custom domain, a branded UI, a different model, or a data residency requirement. Then
the bolt-on becomes a liability.
The omni-whitelabel pattern solves this by making tenant configuration the deployment primitive rather than an application concern. The codebase never sees tenant-specific logic. Infrastructure renders it.
## The problem with `tenant_id` columns
The standard multi-tenant pattern is to store a `tenant_id` on every record and
filter every query. This works fine for data isolation, but it creates two problems
that compound over time.
First, tenant-specific logic accumulates in the application. Custom domains, per-tenant model selection, branded system prompts, different data residency requirements — these all become conditionals in application code. The codebase develops a kind of institutional scar tissue around tenant differences.
Second, deployment becomes opaque. When you deploy, you're deploying changes for all tenants simultaneously. You can't easily deploy a new feature to one tenant before another, and a breaking change is a breaking change for everyone.
## Config-driven deployment
The omni-whitelabel approach inverts this. Each tenant deployment is a separate infrastructure stack — separate Lambda, separate DynamoDB table, separate CloudFront distribution, separate custom domain. The only thing that's shared is the source code.
The tenant configuration lives in a single gitignored file, `tenant.json`.
It contains every tenant-specific value: the display name used in AI system prompts,
the CloudFront custom domain, the AWS region, the Bedrock model preference, the
Slack channel for alerts. Nothing is hardcoded.
`tenant.json` — gitignored, never committed:

```json
{
  "tenant_name": "Acme Corp",
  "tenant_slug": "acme",
  "tenant_domain": "omni.acme.example",
  "aws_region": "eu-west-2",
  "primary_model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
  "bedrock_region": "us-east-1"
}
```
A `configure.sh` script reads `tenant.json` and renders two files:
`terraform/backend.hcl` (the remote state config, scoped to this tenant's
S3 key) and `terraform/terraform.tfvars` (all Terraform variables for this
tenant's stack). Neither file is committed. The Terraform plan is clean and
tenant-specific; `terraform apply` deploys only that tenant.
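The rendering step is simple enough to sketch. Below is an illustrative Python version of the logic (the actual `configure.sh` is a shell script, and the state bucket name here is an assumption):

```python
import json

# Illustrative tenant config, matching the tenant.json example above
tenant = json.loads("""{
  "tenant_name": "Acme Corp",
  "tenant_slug": "acme",
  "tenant_domain": "omni.acme.example",
  "aws_region": "eu-west-2",
  "primary_model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
  "bedrock_region": "us-east-1"
}""")

# backend.hcl: remote state scoped to this tenant's own S3 key
backend_hcl = (
    'bucket = "omni-terraform-state"\n'  # assumed bucket name
    f'key    = "tenants/{tenant["tenant_slug"]}/terraform.tfstate"\n'
    f'region = "{tenant["aws_region"]}"\n'
)

# terraform.tfvars: one variable per tenant.json key, so the Terraform
# stack never hardcodes a tenant-specific value
tfvars = "".join(f'{key} = "{value}"\n' for key, value in tenant.items())

print(backend_hcl)
print(tfvars)
```

Because the state key embeds the tenant slug, two tenants can never collide in remote state even when they share a state bucket.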
## What this enables
The architectural consequence is significant. Each tenant gets:
| Resource | Per-tenant or shared | Why |
|---|---|---|
| Lambda function | Per-tenant | Different environment variables, model config, system prompt |
| DynamoDB table | Per-tenant | Complete data isolation at the infrastructure level |
| CloudFront distribution | Per-tenant | Custom domain, custom TLS certificate, per-tenant cache policy |
| ACM certificate | Per-tenant | Issued against the tenant's own domain |
| Source code | Shared | Single repo, all improvements benefit all tenants |
Upgrading the shared codebase improves all tenants immediately on next deploy.
You can deploy to one tenant without touching others. Tenant-specific bugs are
isolated to one stack. You can run different tenants in different AWS regions for
data residency. A tenant can be decommissioned by running `terraform destroy`.
## CORS as a tenancy signal
One subtle consequence of this pattern: CORS becomes a deployment-time enforcement
mechanism rather than a runtime check. The `ALLOWED_ORIGINS` Lambda environment
variable is set from `tenant.json`'s `tenant_domain` field during
`terraform apply`. A Terraform `check {}` block prevents deployment
with an empty custom domain in production. The application itself validates incoming
`Origin` headers against this allowlist.
This means a request from an origin that doesn't match the tenant's domain gets no CORS headers — not a rejection, just silence. The browser enforces the rest. It's not an elaborate security mechanism, but it's the right default.
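The runtime side of that allowlist fits in a few lines. This is a hypothetical sketch, not the actual handler; the `ALLOWED_ORIGINS` name matches the article, the rest is illustrative:

```python
import os

# Simulate what terraform apply sets from tenant.json (illustrative value)
os.environ["ALLOWED_ORIGINS"] = "https://omni.acme.example"

def cors_headers(origin: str) -> dict:
    """Return CORS headers for an allowed origin, or nothing at all."""
    allowed = os.environ.get("ALLOWED_ORIGINS", "").split(",")
    if origin in allowed:
        return {"Access-Control-Allow-Origin": origin, "Vary": "Origin"}
    # Unknown origin: no headers, no error response. The server stays
    # silent and the browser blocks the cross-origin read.
    return {}
```

An origin that isn't the tenant's own domain simply gets an empty dict back, which is the "silence" described above.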
## The B2B2C model
This architecture is particularly well-suited to B2B2C — where a platform sells to businesses (B2B), and those businesses serve end users (B2C). Education is the clearest example: a platform sells to schools or universities, each of which serves students or staff.
In this model, the "tenant" is the institution. The institution's branding, domain, data, and users are completely isolated from every other institution. The platform vendor operates one codebase. The institution experiences it as a product built specifically for them.
## What this looks like in practice
The omni-whitelabel repo has been deployed in production for enterprise clients serving internal knowledge management, document intelligence, and data governance needs. The same repo is the template for any tenant deployment. Adding a new tenant looks like this:
- Clone the repo (or pull from upstream if already cloned)
- Create `tenant.json` and fill in tenant-specific values
- Run `./scripts/preflight.sh tenant.json` to validate prerequisites
- Run `./scripts/bootstrap.sh tenant.json` to create the S3 state bucket
- Run `./scripts/configure.sh tenant.json` to render the Terraform config
- Run `terraform plan && terraform apply` to deploy the tenant stack
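The preflight step is worth making concrete. This is an illustrative Python version of the kind of checks a script like `preflight.sh` might run (the real script is shell; the required keys mirror the `tenant.json` example above):

```python
import json

# Keys a tenant.json must define before a deploy is allowed to proceed
# (taken from the example config; illustrative, not the script's real list)
REQUIRED_KEYS = {
    "tenant_name", "tenant_slug", "tenant_domain",
    "aws_region", "primary_model", "bedrock_region",
}

def preflight(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the config is deployable."""
    try:
        config = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"tenant.json is not valid JSON: {exc}"]
    problems = [
        f"missing required key: {key}"
        for key in sorted(REQUIRED_KEYS - config.keys())
    ]
    if not config.get("tenant_domain", ""):
        problems.append("tenant_domain must not be empty")
    return problems
```

Failing fast here, before any Terraform runs, keeps a half-configured tenant from ever reaching `terraform apply`.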
That's it: a new tenant running in a new AWS account, with its own domain, its own data, and its own Lambda, and no cross-contamination with any other tenant.
When the upstream repo ships improvements — better model routing, new MCP tools,
security patches — tenants pull the changes and re-run `configure.sh` and
`terraform apply`. No divergence, no drift, no forks to maintain.