Introduction
kcp is a Kubernetes-like control plane that gives you the API machinery of Kubernetes — CRDs, RBAC, admission control, resource management — without pods, nodes, or container orchestration. It is a CNCF Sandbox project designed for building multi-tenant platforms where every team gets an isolated API scope that feels like its own cluster.
If you have already read What is kcp?, you understand the concepts. This tutorial puts those concepts into practice. You will install kcp, start a server, create workspaces, define a custom API using a CRD, create resources against that API, and prove that workspaces provide true isolation — all on your local machine, in about twenty minutes.
By the end of this walkthrough, you will have a working mental model of how kcp workspaces, custom APIs, and resource isolation fit together. Everything here runs locally. No cloud account, no existing Kubernetes cluster, no special infrastructure required.
What you will learn:
- How to install and start a kcp server
- How to create and navigate workspaces
- How kcp differs from a regular Kubernetes cluster at the API level
- How to define a custom resource (CRD) inside a workspace
- How workspace isolation prevents CRDs and resources from leaking across boundaries
Prerequisites
Before you begin, ensure you have:
- Go 1.21 or later — check with go version. Install Go if needed.
- kubectl — check with kubectl version --client. Install kubectl if needed.
- A terminal — all commands run in a standard shell (bash or zsh).
- Basic Kubernetes knowledge — you should be comfortable with concepts like CRDs, namespaces, and kubectl apply.
Estimated time: 20 minutes
Environment used in this tutorial:
- OS: macOS or Linux (commands work on both)
- kcp: latest release
- kubectl: 1.28+
Step 1: Install kcp
Download the latest pre-built binary from GitHub. This is the fastest way to get started:
# Detect the latest release version
KCP_VERSION=$(curl -s https://api.github.com/repos/kcp-dev/kcp/releases/latest | grep tag_name | cut -d '"' -f 4)
# Download the binary (adjust the platform suffix for your OS)
# Linux: linux_amd64 | macOS Intel: darwin_amd64 | macOS Apple Silicon: darwin_arm64
# Note: the release tag includes a leading "v", but the archive name does not,
# so strip it with ${KCP_VERSION#v}
curl -L -o kcp.tar.gz "https://github.com/kcp-dev/kcp/releases/download/${KCP_VERSION}/kcp_${KCP_VERSION#v}_linux_amd64.tar.gz"
# Extract and install
tar xzf kcp.tar.gz
sudo mv bin/kcp /usr/local/bin/
sudo mv bin/kubectl-kcp /usr/local/bin/
The archive contains both the kcp server binary and the kubectl-kcp plugin, which adds workspace management commands to kubectl.
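If you would rather not hard-code the platform suffix, it can be derived from uname. A minimal sketch, assuming the suffix naming shown above (linux/darwin, amd64/arm64):

```shell
# Derive the release-archive platform suffix from the current machine
OS=$(uname -s | tr '[:upper:]' '[:lower:]')   # "linux" or "darwin"
ARCH=$(uname -m)                              # "x86_64", "aarch64", or "arm64"
case "$ARCH" in
  x86_64) ARCH=amd64 ;;
  aarch64|arm64) ARCH=arm64 ;;
esac
echo "Platform suffix: ${OS}_${ARCH}"
```

You can then substitute ${OS}_${ARCH} for the hard-coded linux_amd64 in the download URL.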
Verify both are installed:
kcp --version
kubectl kcp --help
You should see a version number from the first command and a list of subcommands from the second.
Tip: For alternative installation methods — including building from source — see the detailed guide: Installing kcp and Creating Your First Workspace.
Step 2: Start the kcp Server
Create a working directory for this tutorial and start the server:
mkdir -p ~/kcp-tutorial && cd ~/kcp-tutorial
kcp start
kcp boots an embedded etcd instance, sets up the API machinery, and begins listening for connections. You will see log output as it initializes. Once you see lines indicating the server is ready, it is accepting requests.
kcp writes an admin kubeconfig file to .kcp/admin.kubeconfig in the current directory. You will use this file to connect kubectl.
Leave this terminal running. Open a new terminal for the remaining steps.
In the new terminal, set the KUBECONFIG environment variable:
export KUBECONFIG=~/kcp-tutorial/.kcp/admin.kubeconfig
Warning: Every new terminal session needs this KUBECONFIG export. If kubectl commands return connection errors, the most likely cause is a missing or incorrect KUBECONFIG path.
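A small guard like the following catches a wrong path early (a sketch; the error message is illustrative):

```shell
# Export the kcp admin kubeconfig and warn loudly if the file is missing
export KUBECONFIG=~/kcp-tutorial/.kcp/admin.kubeconfig
if [ ! -f "$KUBECONFIG" ]; then
  echo "kubeconfig not found at $KUBECONFIG (is the kcp server running?)" >&2
fi
```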
Step 3: Create Your First Workspace
When kcp starts, you are in the root workspace — the top of the hierarchy. Verify this:
kubectl kcp workspace .
Expected output:
Current workspace is "root".
Now create a workspace called team-frontend and enter it:
kubectl kcp workspace create team-frontend --type universal --enter
Expected output:
Workspace "team-frontend" (type root:universal) created. Waiting for it to be ready...
Workspace "team-frontend" (type root:universal) is ready to use.
Current workspace is "root:team-frontend".
The --type universal flag creates a workspace with the standard set of kcp APIs. The --enter flag switches your kubectl context into the new workspace automatically.
Step 4: Explore the Workspace
You are now inside team-frontend. This workspace behaves like a Kubernetes API server. Run a familiar command:
kubectl api-resources
You will see resources like ConfigMaps, Secrets, ServiceAccounts, Namespaces, and CustomResourceDefinitions. These are the state-management APIs from Kubernetes.
What you will not see: Pods, Deployments, ReplicaSets, Services, Nodes. Those belong to the compute layer, and kcp does not include them. This is the fundamental difference — you are working with an API server that manages state and schema, not containers.
Check the namespaces:
kubectl get namespaces
Expected output:
NAME STATUS AGE
default Active 30s
Just like a fresh Kubernetes cluster, you get a default namespace. The workspace is a blank slate, ready for your resources and APIs.
Tip: From a developer’s perspective, interacting with a kcp workspace is identical to interacting with a Kubernetes cluster. Your existing kubectl commands, YAML manifests, and tooling all work without modification.
Step 5: Define a Custom API (CRD)
One of kcp’s strengths is that each workspace has its own CRD space. You can define custom APIs in one workspace without affecting any other workspace.
Create a file called environment-crd.yaml that defines a simple Environment resource:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: environments.platform.example.com
spec:
group: platform.example.com
names:
plural: environments
singular: environment
kind: Environment
shortNames:
- env
scope: Namespaced
versions:
- name: v1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
required:
- tier
- region
properties:
tier:
type: string
enum: ["development", "staging", "production"]
region:
type: string
replicas:
type: integer
minimum: 1
maximum: 10
status:
type: object
properties:
phase:
type: string
subresources:
status: {}
Apply the CRD inside the team-frontend workspace:
kubectl apply -f environment-crd.yaml
Expected output:
customresourcedefinition.apiextensions.k8s.io/environments.platform.example.com created
Verify the CRD is registered:
kubectl get crds
You should see environments.platform.example.com listed. The Environment API is now available — but only in this workspace.
Step 6: Create Resources Using the Custom API
Now use the custom API to create environment resources. Create a file called staging-env.yaml:
apiVersion: platform.example.com/v1
kind: Environment
metadata:
name: staging-eu
namespace: default
spec:
tier: staging
region: eu-west-1
replicas: 2
Apply it:
kubectl apply -f staging-env.yaml
Create a second environment inline:
kubectl apply -f - <<EOF
apiVersion: platform.example.com/v1
kind: Environment
metadata:
name: prod-us
namespace: default
spec:
tier: production
region: us-east-1
replicas: 5
EOF
List the environments:
kubectl get environments
Expected output:
NAME AGE
prod-us 5s
staging-eu 15s
Inspect one of them:
kubectl get environment staging-eu -o yaml
You will see the full resource with the spec you defined. The CRD validation is enforced — if you try to set tier to a value outside the enum, kcp rejects it, just as Kubernetes would.
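For example, a manifest with a tier outside the enum is rejected by the schema validation (hypothetical values for illustration):

```yaml
apiVersion: platform.example.com/v1
kind: Environment
metadata:
  name: bad-env
  namespace: default
spec:
  tier: testing        # not in the enum ["development", "staging", "production"]
  region: eu-west-1
  replicas: 2
```

Applying this with kubectl apply fails with a validation error before the resource is stored.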
Tip: Use the short name env for faster commands: kubectl get env works because you defined shortNames: ["env"] in the CRD.
Step 7: Create a Second Workspace and Verify Isolation
This is where kcp’s isolation model becomes concrete. Navigate back to the root workspace:
kubectl kcp workspace ..
Create a second workspace and enter it:
kubectl kcp workspace create team-backend --type universal --enter
Now try to list environments in team-backend:
kubectl get environments
Expected output:
error: the server doesn't have a resource type "environments"
The Environment CRD does not exist in team-backend. It was defined only in team-frontend. In regular Kubernetes, CRDs are cluster-scoped — every namespace sees them. In kcp, each workspace is a separate API scope. CRDs defined in one workspace are invisible to every other workspace.
Verify there are no CRDs at all in team-backend:
kubectl get crds
Expected output:
No resources found
This is true isolation. Team Backend cannot see, use, or interfere with Team Frontend’s custom APIs. Each team has full control over its own API surface.
You can define a completely different Environment CRD in team-backend with different fields, different validation rules, even a different API group — and there will be no conflict with the one in team-frontend.
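For illustration, team-backend could define its own unrelated Environment API. A hypothetical sketch with a different API group and different fields:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: environments.backend.example.com
spec:
  group: backend.example.com   # different group from team-frontend's CRD
  names:
    plural: environments
    singular: environment
    kind: Environment
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                databaseUrl:    # fields chosen purely for illustration
                  type: string
                cacheSizeMb:
                  type: integer
```

Applied inside team-backend, this CRD coexists with team-frontend's version; neither workspace sees the other's.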
flowchart TB
Root["root workspace"]
subgraph FE["root:team-frontend"]
CRD1[CRD: environments.platform.example.com]
R1[Environment/staging-eu]
R2[Environment/prod-us]
CRD1 --> R1
CRD1 --> R2
end
subgraph BE["root:team-backend"]
Empty["(no CRDs)<br/>(no custom resources)"]
end
Root --> FE
Root --> BE
FE -. isolated .-x BE
Warning: Workspace isolation means APIs are not shared by default. If you want Team Backend to use Team Frontend's Environment API, you need to use kcp's APIExport and APIBinding mechanism to explicitly share it. That is a separate topic covered in the kcp series.
Step 8: Clean Up
Stop the kcp server by pressing Ctrl+C in the terminal where it is running.
Remove the working directory and all generated files:
rm -rf ~/kcp-tutorial
This removes the embedded etcd data, the generated kubeconfig, and all workspace state. Since everything runs locally, cleanup is straightforward — there is nothing to tear down in the cloud.
Troubleshooting
Port 6443 Already in Use
Symptom: kcp fails to start with an “address already in use” error.
Cause: Another process — such as minikube, kind, Docker Desktop’s built-in Kubernetes, or a previous kcp instance — is already listening on port 6443.
Solution:
# Start kcp on a different port
kcp start --secure-port=6444
If you change the port, the generated kubeconfig at .kcp/admin.kubeconfig will automatically point to the new port.
KUBECONFIG Not Set
Symptom: kubectl commands return “The connection to the server localhost:8080 was refused” or similar connection errors.
Cause: The KUBECONFIG environment variable is not set, or it points to the wrong file.
Solution:
# Set KUBECONFIG to the kcp admin kubeconfig
export KUBECONFIG=~/kcp-tutorial/.kcp/admin.kubeconfig
# Verify the connection
kubectl cluster-info
Remember that this export applies only to the current terminal session. Each new terminal needs the export.
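To avoid re-exporting in every session, you can append the export to your shell profile. A sketch assuming bash (use ~/.zshrc for zsh):

```shell
# Persist the KUBECONFIG export so future bash sessions pick it up
echo 'export KUBECONFIG=~/kcp-tutorial/.kcp/admin.kubeconfig' >> ~/.bashrc
```

Remove the line again after the cleanup step, since the file it points to will no longer exist.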
Workspace Not Found
Symptom: kubectl kcp workspace use <name> returns an error saying the workspace does not exist.
Cause: You are in the wrong parent workspace. Workspaces are hierarchical — you can only access direct children of your current workspace.
Solution:
# Navigate to the root workspace first
kubectl kcp workspace ..
# List available child workspaces
kubectl get workspaces
# Then switch to the target workspace
kubectl kcp workspace use team-frontend
kubectl kcp: Command Not Found
Symptom: Running kubectl kcp returns “unknown command” or “command not found.”
Cause: The kubectl-kcp plugin binary is not installed or not in your PATH.
Solution:
# Check if the binary exists
which kubectl-kcp
# If not found, install it from the kcp release archive
sudo mv bin/kubectl-kcp /usr/local/bin/
Go Version Mismatch (Building from Source)
Symptom: make build fails with Go version errors.
Cause: kcp requires Go 1.21 or later.
Solution:
go version
# If below 1.21, upgrade:
# macOS: brew upgrade go
# Linux: download from https://go.dev/dl/
Next Steps
Now that you have seen kcp in action, here are the recommended next tutorials:
- Installing kcp and Creating Your First Workspace — covers installation options in more detail, including building from source, and walks through workspace navigation and CRD isolation step by step.
- kcp Workspaces vs Namespaces vs vcluster — a comparison of the three main multi-tenancy approaches in the Kubernetes ecosystem, so you can evaluate where kcp fits for your use case.
- What is kcp? Kubernetes Without the Pods — if you jumped straight to this hands-on guide, the conceptual article explains the architecture, use cases, and design decisions behind kcp.
Summary
In this tutorial, you installed kcp, started a local server, and worked through a complete hands-on workflow. You created two workspaces (team-frontend and team-backend), defined a custom Environment API using a CRD in one workspace, created resources against that API, and verified that the CRD and its resources are completely invisible from the other workspace.
The key takeaways:
- kcp provides Kubernetes API machinery without the compute layer. You get CRDs, RBAC, namespaces, and resource management, but no pods, nodes, or scheduler.
- Workspaces are true isolation boundaries. Each workspace has its own CRD space, its own resources, and its own API surface. Unlike Kubernetes namespaces, workspaces isolate cluster-scoped resources like CRDs.
- The developer experience is familiar. You use kubectl, write standard YAML, and interact with standard Kubernetes APIs. The only difference is what those APIs manage.
This isolation model is what makes kcp a strong foundation for multi-tenant platforms. Each team or tenant gets a workspace that looks and feels like a dedicated cluster, while you run a single kcp instance behind the scenes.
