Introduction
Every platform team hits the same question: how do you give multiple teams isolated Kubernetes environments without spinning up a cluster per team? Dedicated clusters provide maximum isolation, but you end up managing infrastructure that scales linearly with your team count. Share a single cluster, and you trade operational overhead for a different kind of pain — CRD conflicts, noisy neighbors, and RBAC rules that grow until nobody fully understands them.
Three approaches have emerged to solve this problem, each with a fundamentally different architecture. Namespaces are built into Kubernetes and provide logical partitioning within a single cluster. vcluster creates virtual Kubernetes clusters that run as pods inside a host cluster. kcp workspaces provide API-only isolation — full Kubernetes API scopes without any compute layer attached.
Each solves the multi-tenancy problem differently, with different trade-offs in isolation strength, resource overhead, and complexity. This guide compares all three so you can pick the right one for your situation.
The Multi-Tenancy Problem
You have 10 teams. Each wants their own Kubernetes “environment” where they can deploy applications, install operators, define custom resources, and operate independently. You have two obvious options.
Option 1: Give each team a dedicated cluster. Maximum isolation, maximum cost. You are now managing 10 clusters, each with its own control plane, node pool, monitoring stack, upgrade cycle, and operational burden. When you have 50 teams, you have 50 clusters. The cost and operational complexity scale linearly.
Option 2: Share one cluster. Save money, but teams step on each other. CRDs are cluster-scoped, so two teams cannot use different versions of the same custom resource definition. A runaway pod in one team’s namespace can starve nodes of memory and CPU, affecting everyone. RBAC rules for cluster-scoped resources become a tangled mess of ClusterRoles and ClusterRoleBindings.
The three approaches below are all different answers to “how do we share without pain?” They sit on a spectrum from lightweight metadata boundaries (namespaces) to full virtual clusters (vcluster) to pure API isolation (kcp workspaces).
```mermaid
flowchart LR
  subgraph NS["Namespaces"]
    NS_API[Shared API Server]
    NS_CRD[Shared CRDs]
    NS_A["ns: team-a"]
    NS_B["ns: team-b"]
    NS_API --> NS_A
    NS_API --> NS_B
  end
  subgraph VC["vcluster"]
    Host[Host Cluster]
    VC_A["vcluster A<br/>own API + CRDs<br/>pods synced to host"]
    VC_B["vcluster B<br/>own API + CRDs<br/>pods synced to host"]
    Host --> VC_A
    Host --> VC_B
  end
  subgraph KCP["kcp Workspaces"]
    KCP_API[kcp API machinery]
    WS_A["workspace A<br/>own CRDs + RBAC"]
    WS_B["workspace B<br/>own CRDs + RBAC"]
    Operator[Multi-tenant operator]
    Compute[(Real k8s cluster)]
    KCP_API --> WS_A
    KCP_API --> WS_B
    WS_A -. APIBinding .-> Operator
    WS_B -. APIBinding .-> Operator
    Operator --> Compute
  end
```
Namespaces: The Built-In Approach
How They Work
Kubernetes namespaces are a built-in mechanism for dividing a single cluster into logical partitions. Each namespace gets its own set of resources — pods, services, configmaps, secrets — and its own RBAC rules. You control how much each namespace can consume using ResourceQuotas and LimitRanges. NetworkPolicies can restrict traffic between namespaces.
Every Kubernetes cluster ships with namespaces. No additional installation, no extra processes, no third-party dependencies. You create a namespace with kubectl create namespace team-a, set up some RBAC rules, apply a ResourceQuota, and hand the team a kubeconfig scoped to that namespace.
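As a concrete sketch, the entire onboarding flow can be a handful of manifests. The names and quota values below are illustrative, not prescriptive:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# Cap what the namespace can request in aggregate
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    pods: "50"
---
# Grant the team's group admin rights inside this namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-admins
  namespace: team-a
subjects:
  - kind: Group
    name: team-a               # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                  # built-in aggregated namespace-admin role
  apiGroup: rbac.authorization.k8s.io
```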
Strengths
Zero additional tooling. Namespaces are part of every Kubernetes distribution. They work on EKS, GKE, AKS, KubeOne-managed clusters, and bare metal setups. There is nothing to install.
Low overhead. A namespace is a metadata boundary, not a running process. Creating 100 namespaces does not add 100 API servers or 100 etcd instances. The cluster’s resource consumption stays the same whether you have 5 namespaces or 500.
Well-understood by every Kubernetes user. Anyone who has used Kubernetes has worked with namespaces. The mental model is simple, the documentation is extensive, and troubleshooting is straightforward.
Works well for trusted teams. If your teams are all part of the same organization, trust each other, and run similar workloads, namespaces provide enough separation without adding complexity.
Limitations
CRDs are cluster-scoped. Installing a CRD makes it visible to every namespace; there is no per-namespace registry. Two teams cannot use different versions of the same custom resource definition. If team A needs cert-manager v1.12 and team B needs v1.14, you have a conflict that namespaces cannot resolve.
Noisy neighbor risk. A pod in namespace A that consumes all node memory affects pods in namespace B. ResourceQuotas cap what a namespace can request in aggregate, but a pod that sets no limits at all can still exhaust a node's memory and trigger node-pressure evictions that hit every tenant scheduled there. Enforcing per-pod defaults requires a LimitRange in each namespace.
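A LimitRange closes part of this gap by injecting default requests and limits into containers that omit them. A minimal sketch, assuming a tenant namespace named team-a; the values are placeholders:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-a       # hypothetical tenant namespace
spec:
  limits:
    - type: Container
      default:            # applied as limits when a container sets none
        cpu: "500m"
        memory: 512Mi
      defaultRequest:     # applied as requests when a container sets none
        cpu: "100m"
        memory: 128Mi
```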
RBAC gets complex fast. Cluster-scoped resources — nodes, persistent volumes, CRDs, cluster roles — require careful RBAC rules to prevent cross-tenant access. As the number of namespaces grows, the number of RBAC rules grows faster. A mistake in a ClusterRoleBinding can expose resources across all namespaces.
No true API isolation. All namespaces share the same API server, the same etcd, the same admission webhooks. A misbehaving admission webhook affects every namespace. A slow custom controller watching all namespaces impacts the API server for everyone.
Best for: Small teams, trusted environments, simple applications where CRD conflicts are unlikely.
vcluster: Virtual Clusters
How They Work
vcluster, a project from Loft Labs, runs virtual Kubernetes clusters inside pods on a host cluster. Each vcluster has its own API server — based on k3s, k0s, or vanilla Kubernetes — and its own backing store (etcd or SQLite). From the tenant’s perspective, they have a full Kubernetes cluster with cluster-admin access. From the host cluster’s perspective, a vcluster is just a set of pods running in a namespace.
The key mechanism is the syncer. When a tenant creates a pod in their vcluster, the syncer translates it into a real pod on the host cluster. The pod runs on the host cluster’s nodes, consuming the host cluster’s compute resources. Services, endpoints, and other resources are synced bidirectionally between the vcluster and the host.
You install a vcluster with a single Helm chart. Within 30 to 60 seconds, you have a fully functional virtual cluster with its own kubeconfig.
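A minimal sketch of both installation paths. The names are illustrative, and flags beyond these basics vary between vcluster releases:

```bash
# Option 1: the vcluster CLI (wraps Helm under the hood)
vcluster create team-a --namespace team-a-vcluster

# Option 2: plain Helm against the official chart
helm upgrade --install team-a vcluster \
  --repo https://charts.loft.sh \
  --namespace team-a-vcluster --create-namespace

# Retrieve a kubeconfig and talk to the virtual cluster
vcluster connect team-a --namespace team-a-vcluster
```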
Strengths
Strong isolation. Each tenant gets their own API server, their own set of CRDs, and their own admission configuration. Two tenants can run completely different CRD versions, different operators, and different webhook configurations without any conflict.
Tenants get full cluster-admin access. Unlike namespaces, where tenants are limited to namespace-scoped operations, vcluster tenants can create ClusterRoles, install CRDs, and configure cluster-level settings within their virtual cluster.
Fast creation. Spinning up a new vcluster takes 30 to 60 seconds. This makes vclusters particularly effective for CI/CD pipelines where you need an isolated cluster for each test run.
Real kubeconfig. Tenants receive a standard kubeconfig that works with kubectl, Helm, ArgoCD, and every other Kubernetes tool. No special client-side tooling is required.
Limitations
Resource overhead. Each vcluster runs an API server and a syncer pod. With 50 vclusters, that is 50 API servers consuming memory and CPU on the host cluster. The per-tenant overhead is modest (around 200-500 MB of memory per vcluster), but it adds up at scale.
Compute is still shared. The host cluster runs all workloads from all vclusters. A vcluster tenant cannot consume more resources than their quota on the host cluster allows. Node-level isolation between tenants requires additional tooling like node affinity rules or dedicated node pools.
Networking complexity. The syncer translates services and endpoints between the virtual and host clusters. This translation layer can introduce latency and makes network debugging harder. If a service is not reachable, you need to check both the vcluster and the host cluster to diagnose the issue.
Commercial components. The open-source vcluster is fully functional for basic use cases. The full platform — vCluster Platform (formerly Loft) — adds features like sleep mode, access control, and cost management, but it is a commercial product with a licensing cost.
Best for: CI/CD environments (ephemeral clusters for testing), teams that need cluster-admin access, organizations with moderate tenant counts (10-50).
kcp Workspaces: API-Only Isolation
How They Work
kcp provides workspaces — isolated API scopes that function like independent Kubernetes clusters. Each workspace has its own resources, CRDs, RBAC rules, and admission configuration. You interact with a workspace using standard kubectl commands, and it feels exactly like interacting with a regular Kubernetes cluster.
The fundamental difference between kcp and both namespaces and vcluster is that kcp has no compute layer. There are no pods, no nodes, no scheduler, and no container runtime. kcp is pure API machinery — the Kubernetes control plane stripped down to CRDs, RBAC, admission control, and resource management.
When actual compute is needed, an API provider runs a multi-tenant operator that watches across the workspaces bound to its APIExport and reconciles resources into a real backend — one or more physical Kubernetes clusters, a cloud provider API, or any other system the operator understands. Earlier versions of kcp shipped a syncer component and Transparent Multi-Cluster (TMC) code for workload scheduling, but both were removed in May 2023 to refocus the project on pure API management. Compute lives with the operator, not with kcp itself.
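A short sketch of the workspace workflow, assuming the kcp kubectl-ws plugin is installed; exact plugin syntax varies between kcp releases:

```bash
# Create a child workspace under the current one and switch into it
kubectl ws create team-a --enter

# Inside the workspace, standard Kubernetes API machinery applies
kubectl api-resources            # sees only this workspace's APIs
kubectl create -f my-crd.yaml    # CRDs registered here stay here

# Navigate back up the workspace hierarchy
kubectl ws ..
```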
Strengths
True API isolation with minimal overhead. Workspaces are API scopes in a shared control plane, not running processes. Creating a new workspace is almost free in terms of resource consumption. A single kcp instance can host thousands of workspaces on modest hardware.
CRD isolation. Each workspace has its own CRD registry. Two workspaces can define completely different versions of the same CRD without any conflict. This is the same benefit vcluster provides, but without the per-tenant API server overhead.
Hierarchical workspaces. Workspaces can contain child workspaces, enabling organizational hierarchy. A top-level workspace for a business unit can contain child workspaces for each team, which can contain child workspaces for each environment. Policies and configurations can cascade down the hierarchy.
API sharing via APIExport and APIBinding. Service teams can publish APIs using APIExport, and other workspaces can consume those APIs using APIBinding. This creates a native service catalog mechanism. A database team can export a Database API, and application teams can bind to it and create database instances without understanding the underlying implementation.
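As a rough sketch of what consuming a published API looks like, here is an APIBinding that binds a hypothetical databases APIExport from a provider workspace. The field names follow recent kcp releases and may differ in yours:

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: databases
spec:
  reference:
    export:
      path: root:db-team       # hypothetical workspace path of the provider
      name: databases          # name of the provider's APIExport
```

Once the binding is established, the exported resource types become available in the consuming workspace as if the CRDs were installed locally.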
Scales to thousands of tenants. Since workspaces do not run compute, the control plane can manage far more tenants than vcluster. The limiting factor is the kcp server’s capacity to serve API requests, not the aggregate memory consumption of per-tenant API servers.
Limitations
No native compute. kcp does not run workloads by itself. You need a multi-tenant operator and at least one physical Kubernetes cluster (or equivalent backend) for actual pods. This adds architectural complexity — you are managing both a kcp control plane and the downstream compute that sits behind your APIs.
CNCF Sandbox maturity. kcp is under active development. The API surface may change between releases. Production adoption requires careful evaluation and a willingness to track upstream changes.
Ecosystem tooling gaps. Standard tools like Prometheus, ArgoCD, and Flux do not natively understand kcp workspaces yet. Observing state across workspaces and their downstream compute requires custom integration work. This gap will narrow over time, but today it means additional engineering effort.
Steeper learning curve. The workspace, APIExport, and APIBinding model is new and unfamiliar to most Kubernetes users. Teams need to understand not just Kubernetes concepts, but also kcp-specific concepts like workspace hierarchies, multi-tenant operators, and API sharing.
Warning: kcp is a CNCF Sandbox project. Evaluate it carefully for production use. The APIExport and APIBinding mechanisms are still maturing, and breaking changes are possible between releases.
Best for: Platform teams building Internal Developer Platforms, SaaS control planes, organizations with many tenants (100+) where compute isolation is less critical than API isolation.
Head-to-Head Comparison
| Factor | Namespaces | vcluster | kcp Workspaces |
|---|---|---|---|
| Isolation Level | Weak (shared API, shared CRDs) | Strong (separate API server) | Strong (separate API scope) |
| CRD Isolation | No — cluster-scoped | Yes — per vcluster | Yes — per workspace |
| Resource Overhead | None | Medium (API server per tenant) | Minimal (API scope only) |
| Compute Model | Shared cluster | Shared cluster (synced) | External (via multi-tenant operator) |
| Tenant Count | 10-50 practical | 10-100 practical | 100-1000+ practical |
| Tenant Admin Access | Limited (namespace-scoped) | Full cluster-admin | Full workspace-admin |
| Tooling Maturity | Built-in | Strong (Loft Labs ecosystem) | Early (CNCF Sandbox) |
| Setup Complexity | None | Low (helm install) | Medium (kcp server + per-API operator) |
| Best Fit | Trusted teams, simple apps | Dev/test, CI/CD | Platform engineering, SaaS |
Decision Framework
Use these questions to pick the right approach for your situation.
Start here: How many tenants do you have?
- Under 10 tenants, trusted teams. Namespaces are probably fine. Add ResourceQuotas and NetworkPolicies. Do not over-engineer it.
- 10-50 tenants, need CRD isolation or cluster-admin. vcluster gives you strong isolation with reasonable overhead. Especially good for CI/CD and ephemeral environments.
- 50+ tenants, building a platform. kcp workspaces scale better and provide API-level isolation without the per-tenant compute cost. Worth the investment if you are building an Internal Developer Platform.
Additional factors to consider:
- If tenants need to install their own operators, you need vcluster or kcp. Namespaces cannot provide CRD isolation.
- If you need fast ephemeral environments for CI pipelines, vcluster’s sub-minute spin-up time is hard to beat.
- If you are building a service catalog with self-service APIs, kcp’s APIExport and APIBinding mechanism was designed for exactly this use case.
- If you need proven, production-stable tooling today, vcluster has the most mature ecosystem. Fall back to namespaces if even vcluster feels like too much.
Tip: These approaches are not mutually exclusive. You can use namespaces within a kcp workspace, or run vclusters on clusters that are managed by kcp. Think of them as layers, not alternatives.
Combining Approaches
Real-world platforms often use multiple approaches together. Here are three patterns that work well in practice.
kcp for organizational isolation, namespaces for team separation. Create one kcp workspace per business unit, giving each unit API-level isolation with its own CRDs and RBAC. Within each workspace, use namespaces for team-level separation. This gives you the scalability of kcp at the organization level with the simplicity of namespaces at the team level.
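In practice the layering is mundane, which is the point. A hypothetical sketch using the kcp kubectl-ws plugin:

```bash
# Enter a business unit's workspace (the path is hypothetical)
kubectl ws use root:payments

# Plain namespaces work inside a workspace just as they do in a cluster
kubectl create namespace team-checkout
kubectl create namespace team-billing
```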
vcluster for ephemeral dev/test environments. Run vclusters on a shared compute cluster managed by KubeOne or KKP. Developers spin up a vcluster for a feature branch, run their integration tests against it, and tear it down when the branch merges. The host cluster handles resource management and cost control.
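A hedged sketch of what that pipeline stage might look like; the BRANCH_NAME variable and test script are hypothetical, and flags vary between vcluster releases:

```bash
# One throwaway vcluster per feature branch
vcluster create "ci-${BRANCH_NAME}" --namespace ci --connect=false

# Run the test suite with kubectl pointed at the virtual cluster
vcluster connect "ci-${BRANCH_NAME}" --namespace ci -- \
  sh -c "kubectl apply -f manifests/ && ./run-integration-tests.sh"

# Tear it down when the pipeline finishes
vcluster delete "ci-${BRANCH_NAME}" --namespace ci
```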
Namespaces as a starting point, graduating to stronger isolation. Start small teams with namespaces. When a team’s needs outgrow what namespaces can provide — they need their own CRDs, their own operators, or cluster-admin access — graduate them to a vcluster or a kcp workspace. This avoids over-engineering early while providing a clear growth path.
Next Steps
- What is kcp? — learn the fundamentals of kcp and how it differs from Kubernetes
- Installing kcp and Creating Your First Workspace — try kcp hands-on with a step-by-step guide
- vcluster Documentation — explore the vcluster approach and try it on your own cluster
Summary
Namespaces provide basic isolation with zero overhead but break down when you hit CRD conflicts, noisy neighbor problems, or more than a few dozen tenants. vcluster gives each tenant a full virtual cluster with strong isolation, at the cost of per-tenant resource overhead that limits practical scale to roughly 100 tenants. kcp workspaces provide API-level isolation with minimal overhead, scaling to thousands of tenants, but require a separate compute layer and come with the trade-offs of a still-maturing project.
Pick namespaces for simplicity when your teams trust each other and your tenant count is low. Pick vcluster when you need strong isolation with moderate scale, especially for CI/CD and development environments. Pick kcp when you are building an Internal Developer Platform or SaaS control plane that needs to serve many tenants without the per-tenant cost of running virtual clusters.
The most important thing to remember is that these are not competing technologies. They operate at different layers of the stack and can be combined. Start with the simplest approach that meets your requirements today, and layer on stronger isolation as your platform grows.
