Four approaches to Kubernetes multi-tenancy

There is no single right answer to Kubernetes multi-tenancy. The right approach depends on your isolation requirements, tenant count, and operational budget. Here's how the four main approaches compare.

1. Namespace Isolation

How it works: Multiple tenants share a single cluster, separated by namespaces with RBAC and NetworkPolicy.

  • Overhead per tenant: effectively zero
  • Isolation: namespace-level only — CRDs, cluster-scoped resources, and admission webhooks are shared across all tenants
  • Provisioning: instant
  • Practical ceiling: ~50 tenants before policy complexity becomes unmanageable
  • When it breaks: tenants need cluster-admin access, custom CRDs, or independent admission policies
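The pattern can be sketched with standard Kubernetes objects. A minimal per-tenant setup is a namespace, a RoleBinding scoping the tenant's group to that namespace, and a NetworkPolicy blocking cross-tenant traffic; the names (`tenant-a`, `tenant-a-admins`) are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
# Grant the tenant's group admin rights inside its namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-admin
  namespace: tenant-a
subjects:
  - kind: Group
    name: tenant-a-admins          # illustrative group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                      # built-in role, namespace-scoped via RoleBinding
  apiGroup: rbac.authorization.k8s.io
---
# Default-deny ingress from other namespaces; allow same-namespace traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-tenant
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector: {}
```

Note what this does not isolate: the RoleBinding cannot stop a tenant from needing cluster-scoped resources, which is exactly where this approach breaks down.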

2. Synced Virtual Clusters

How it works: Each tenant gets a dedicated API server and backing store running as a pod. A syncer process copies low-level resources (Pods, ConfigMaps, Secrets) from the virtual cluster to a host cluster for scheduling. Higher-level resources (Deployments, StatefulSets, CRDs) stay in the virtual cluster.

  • Overhead per tenant: ~128 MB+ RAM (API server + datastore + syncer)
  • Isolation: strong at the API layer — each tenant has its own API server, though synced workloads still run on shared host nodes
  • Provisioning: 30-60 seconds
  • Practical ceiling: ~100 tenants before resource costs dominate
  • When it breaks: resource types that the syncer doesn't handle behave differently than in a real cluster. At 100+ tenants the per-cluster infrastructure cost becomes significant.
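Conceptually, the syncer translates a Pod created in the virtual cluster into a Pod in a shared host-cluster namespace, rewriting its name to avoid collisions. The naming scheme below is an assumption for illustration; real implementations differ in the details:

```yaml
# Inside the virtual cluster: what the tenant creates.
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: default               # the tenant's own "default" namespace
spec:
  containers:
    - name: web
      image: nginx
---
# On the host cluster: what the syncer actually schedules.
# Name and namespace are rewritten; this scheme is illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: web-x-default-x-tenant-a   # collision-free translated name
  namespace: vcluster-tenant-a     # shared host namespace for this tenant
spec:
  containers:
    - name: web
      image: nginx
```

This translation step is the source of the "partial API compatibility" caveat: any resource the syncer doesn't know how to translate stays in the virtual cluster and never reaches the host.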

3. Hosted Control Planes

How it works: Each tenant gets a real kube-apiserver and etcd instance, running as pods in a management cluster. No syncer — the control plane is real, just hosted centrally.

  • Overhead per tenant: ~512 MB+ RAM (etcd is the bottleneck)
  • Isolation: full — dedicated control plane infrastructure per tenant
  • Provisioning: 60-120 seconds
  • Practical ceiling: ~50 tenants before etcd overhead dominates
  • When it breaks: per-cluster etcd doesn't scale to hundreds of tenants. Often tied to a specific distribution.
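Because the control plane is just pods, a hosted tenant API server can be sketched as an ordinary Deployment in the management cluster. This is a heavily simplified, illustrative sketch — real setups also need TLS certificates, authentication flags, a kube-controller-manager, and a Service to expose the endpoint; all names and the namespace are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant-a-apiserver
  namespace: tenant-a-cp           # one management-cluster namespace per tenant
spec:
  replicas: 1
  selector:
    matchLabels: {app: tenant-a-apiserver}
  template:
    metadata:
      labels: {app: tenant-a-apiserver}
    spec:
      containers:
        - name: kube-apiserver
          image: registry.k8s.io/kube-apiserver:v1.30.0
          command:
            - kube-apiserver
            # Dedicated etcd per tenant — the ~512 MB bottleneck noted above.
            - --etcd-servers=https://tenant-a-etcd:2379
            - --service-cluster-ip-range=10.96.0.0/16
            # Omitted: cert, authn/authz, and service-account flags.
```

The per-tenant etcd this points at is what caps the practical ceiling: each instance carries its own memory floor and write-amplification cost regardless of how small the tenant is.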

4. Virtual Control Planes

How it works: One shared API server serves all control planes. Each control plane is a logical partition within the shared API server's storage layer, with its own RBAC, CRDs, and namespaces. No per-tenant processes. No syncing.

  • Overhead per tenant: ~3 MB
  • Isolation: full — RBAC, CRDs, namespaces, policies per control plane
  • Provisioning: ~2 seconds
  • Practical ceiling: 1,000+ tenants per management plane
  • When it breaks: if you need physically separate infrastructure per tenant (different cloud accounts, air-gapped networks), use dedicated clusters.

Side-by-side

|                         | Namespace Isolation | Synced Virtual Clusters | Hosted Control Planes | Virtual Control Planes |
|-------------------------|---------------------|-------------------------|-----------------------|------------------------|
| Memory per tenant       | ~0                  | ~128 MB                 | ~512 MB               | ~3 MB                  |
| Provisioning            | instant             | 30-60 s                 | 60-120 s              | ~2 s                   |
| CRD isolation           | no                  | yes                     | yes                   | yes                    |
| RBAC isolation          | partial             | yes                     | yes                   | yes                    |
| Syncer required         | no                  | yes                     | no                    | no                     |
| API compatibility       | full                | partial*                | full                  | full                   |
| Max tenants (practical) | ~50                 | ~100                    | ~50                   | 1,000+                 |

* Resources that the syncer doesn't handle may behave differently than in a standard Kubernetes cluster.