
Deploying KubeOne Clusters on Hetzner Cloud

Abubakar Siddiq Ango, Senior Developer Advocate
Mar 18, 2026 · 11 min read · Intermediate
getting-started automation multi-cloud

Prerequisites

  • Hetzner Cloud account with an API token (console.hetzner.cloud)
  • KubeOne installed — see Installing KubeOne
  • Terraform 1.5+ installed
  • SSH key pair available at ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
  • kubectl installed on your local machine

Introduction

Hetzner Cloud is a popular choice for Kubernetes deployments in Europe. The pricing is straightforward — shared vCPU servers start at a few euros per month — and the infrastructure is reliable. For teams that do not need the complexity (or cost) of AWS, GCP, or Azure, Hetzner offers everything you need to run production Kubernetes clusters at a fraction of the price.

KubeOne has first-class support for Hetzner Cloud. The official Terraform examples handle all the infrastructure provisioning — servers, networks, load balancers, firewalls, and SSH keys — so you do not need to configure any of that manually. You provide a cluster name and an API token, and KubeOne takes care of the rest.

In this tutorial, you will deploy a highly available Kubernetes cluster with a three-node control plane on Hetzner Cloud using KubeOne and Terraform. By the end, you will have a production-grade cluster with an external load balancer, private networking, automatic worker node provisioning via machine-controller, and a clear upgrade path for future Kubernetes versions.

What you will learn:

  • How to configure the Hetzner Cloud provider for KubeOne
  • How to use the official Terraform examples to provision infrastructure
  • How to create and apply a KubeOneCluster manifest for Hetzner
  • How to verify your cluster and add worker nodes
  • How to estimate and optimize costs

Step 1: Generate a Hetzner Cloud API Token

KubeOne and Terraform both need an API token to interact with Hetzner Cloud. Log into the Hetzner Cloud Console, select your project, and navigate to Security > API Tokens.

Generate a new token with Read & Write permissions. Copy the token immediately — Hetzner only shows it once.

Export the token as an environment variable. Both Terraform and KubeOne read this variable automatically:

export HCLOUD_TOKEN="your-api-token-here"

Tip: For production setups, store the token in a secrets manager or a .env file that is excluded from version control. Do not commit API tokens to your repository.
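
A minimal sketch of that pattern (the .env file name is just a convention here; nothing reads it automatically):

# keep the token in a file that never enters version control
echo 'export HCLOUD_TOKEN="your-api-token-here"' > .env
echo ".env" >> .gitignore

# load it into the current shell whenever you need it
source .env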

Verify the token works by listing your existing servers (the list will be empty if this is a new project):

curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" https://api.hetzner.cloud/v1/servers | jq '.servers | length'

Expected output:

0

Step 2: Set Up the Terraform Configuration

KubeOne ships with official, production-tested Terraform examples for every supported cloud provider. Instead of writing Terraform from scratch, you will use the Hetzner example as your starting point.

Download the KubeOne release and extract the Terraform examples:

# Download the latest KubeOne release
curl -sfL https://get.kubeone.io | sh

# The examples are bundled with the release
# Copy the Hetzner example to your project directory
mkdir kubeone-hetzner && cd kubeone-hetzner
cp -r /usr/local/share/kubeone/examples/terraform/hetzner/* .

If the examples are not at that path, clone them from GitHub:

mkdir kubeone-hetzner && cd kubeone-hetzner
git clone --depth 1 https://github.com/kubermatic/kubeone.git /tmp/kubeone-repo
cp -r /tmp/kubeone-repo/examples/terraform/hetzner/* .
rm -rf /tmp/kubeone-repo

Your directory should now contain:

kubeone-hetzner/
├── main.tf
├── output.tf
├── variables.tf
└── versions.tf

These files define the complete infrastructure: control plane servers, private network, subnet, load balancer, firewall rules, SSH key, and placement groups for server distribution.
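
If you want a quick look at every input the example exposes before editing anything, you can list the variable blocks (assuming the standard file layout shown above):

grep -E '^variable' variables.tf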

Step 3: Configure Terraform Variables

Create a terraform.tfvars file with your cluster settings:

cluster_name = "production"

# Server types — see https://www.hetzner.com/cloud for current pricing
control_plane_type = "cpx21"    # 3 vCPU, 4 GB RAM, 80 GB SSD
worker_type        = "cpx31"    # 4 vCPU, 8 GB RAM, 160 GB SSD

# Datacenter location
datacenter = "nbg1"             # Nuremberg. Alternatives: fsn1 (Falkenstein), hel1 (Helsinki)

# Worker nodes managed by machine-controller
initial_machinedeployment_replicas = 2

# SSH key for node access
ssh_public_key_file = "~/.ssh/id_rsa.pub"

Choosing Server Types

Hetzner Cloud offers shared vCPU (CX/CPX series) and dedicated vCPU (CCX series) servers. For Kubernetes:

Role                 Recommended Type   Specs                       Approx. Monthly Cost
Control plane        cpx21              3 vCPU, 4 GB RAM, 80 GB     ~5 EUR
Workers (general)    cpx31              4 vCPU, 8 GB RAM, 160 GB    ~10 EUR
Workers (compute)    cpx41              8 vCPU, 16 GB RAM, 240 GB   ~19 EUR
Workers (dedicated)  ccx13              2 vCPU, 8 GB RAM, 80 GB     ~14 EUR
Load balancer        lb11               25 targets, 5 services      ~6 EUR

Note: Hetzner regularly updates their server types and pricing. Check the Hetzner Cloud pricing page for current prices before provisioning.
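
You can also pull the list of server types straight from the Hetzner API with the token you already exported; a quick sketch (the response also includes per-location pricing if you want to extend the jq filter):

curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" \
  https://api.hetzner.cloud/v1/server_types \
  | jq -r '.server_types[] | "\(.name)\t\(.cores) vCPU, \(.memory) GB RAM"'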

Available Terraform Variables

The Hetzner Terraform example supports these variables:

Variable                             Default             Description
cluster_name                         (required)          Name for all resources
control_plane_vm_count               3                   Number of control plane nodes
control_plane_type                   cx23                Hetzner server type for control plane
worker_type                          cx23                Hetzner server type for workers
lb_type                              lb11                Hetzner load balancer type
datacenter                           nbg1                Hetzner datacenter
os                                   ubuntu              Operating system (ubuntu or flatcar)
ssh_public_key_file                  ~/.ssh/id_rsa.pub   Path to SSH public key
ssh_port                             22                  SSH port
initial_machinedeployment_replicas   2                   Number of worker nodes
disable_kubeapi_loadbalancer         false               Set true to skip LB creation
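
Any of these can be set in terraform.tfvars as in the previous step, or overridden ad hoc on the command line once Terraform is initialized, for example:

terraform plan -var="worker_type=cpx41" -var="initial_machinedeployment_replicas=3"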

Step 4: Provision the Infrastructure

Initialize Terraform, review the plan, and apply:

terraform init
terraform plan

Review the plan output. You should see resources being created for:

  • 3 control plane servers (hcloud_server.control_plane)
  • 1 private network with a subnet (hcloud_network.net, hcloud_network_subnet.kubeone)
  • 1 load balancer for the API server (hcloud_load_balancer.load_balancer)
  • 1 firewall with Kubernetes-required ports (hcloud_firewall.cluster)
  • 1 SSH key (hcloud_ssh_key.kubeone)
  • 1 placement group to distribute servers across hosts

If everything looks correct, apply:

terraform apply

Type yes when prompted. Terraform creates all resources in about 1-2 minutes.

Export the infrastructure details for KubeOne:

terraform output -json > tf.json

Verify the output contains your infrastructure:

cat tf.json | jq '.kubeone_api.value.endpoint'

Expected output:

{
  "host": "203.0.113.100",
  "port": 6443
}

The host is the public IP of the Hetzner load balancer that fronts your API servers.
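
To cross-check that address against what Hetzner actually provisioned, you can list the project's load balancers via the API (a sketch; adjust the jq filter as needed):

curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" \
  https://api.hetzner.cloud/v1/load_balancers \
  | jq -r '.load_balancers[] | "\(.name)\t\(.public_net.ipv4.ip)"'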

Step 5: Create the KubeOneCluster Manifest

Create kubeone.yaml with the Hetzner-specific configuration:

apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
name: production

versions:
  kubernetes: "v1.30.2"

cloudProvider:
  hetzner: {}
  external: true

containerRuntime:
  containerd: {}

clusterNetwork:
  cni:
    canal: {}

features:
  nodeLocalDNS:
    deploy: true

The important Hetzner-specific settings:

cloudProvider.hetzner: {} tells KubeOne to configure Hetzner Cloud integrations. This includes the Hetzner Cloud Controller Manager (CCM), which handles node lifecycle events and provides metadata to Kubernetes about the underlying infrastructure.

cloudProvider.external: true deploys the cloud controller manager as an external (out-of-tree) component. This is the recommended approach for all cloud providers in modern Kubernetes. The external CCM runs as a Deployment in the cluster rather than being compiled into the core Kubernetes binaries.

clusterNetwork.cni.canal: {} deploys Canal (Calico + Flannel) as the CNI plugin. This is KubeOne’s default and works well on Hetzner. Flannel provides VXLAN overlay networking for pod-to-pod traffic, while Calico handles network policy enforcement.

features.nodeLocalDNS deploys a DNS cache on every node to reduce latency and CoreDNS load.

Tip: You can also use Cilium as your CNI instead of Canal. Replace the cni section with cilium: {}. Cilium provides advanced networking features like eBPF-based load balancing and network observability, but requires Linux kernel 5.10+ on your nodes (which Hetzner’s Ubuntu images provide).

Step 6: Provision the Kubernetes Cluster

With your infrastructure running and the manifest ready, provision Kubernetes:

kubeone apply --manifest kubeone.yaml --tfjson tf.json

KubeOne shows you a summary of what it will do. Review the planned actions — it should list:

  • 3 control plane hosts with their Hetzner IPs
  • Kubernetes version v1.30.2
  • Canal CNI
  • Hetzner external cloud provider

Confirm to proceed. KubeOne then:

  1. Connects to each control plane node over SSH
  2. Installs containerd and Kubernetes packages
  3. Bootstraps the first control plane node with kubeadm
  4. Joins the second and third nodes to form an HA cluster
  5. Configures etcd across all three nodes
  6. Deploys the Hetzner Cloud Controller Manager
  7. Deploys Canal CNI, metrics-server, and node-local DNS
  8. Deploys machine-controller for worker node management
  9. Creates MachineDeployments for worker nodes (based on initial_machinedeployment_replicas)

The process takes 5-8 minutes. Do not interrupt it.

Expected output (final lines):

INFO[00:05:32] Downloading kubeconfig...
INFO[00:05:32] Ensure MachineDeployments...
INFO[00:05:33] Done!

Step 7: Access and Verify the Cluster

KubeOne creates a kubeconfig file in your current directory:

export KUBECONFIG=$(pwd)/production-kubeconfig

Check that all control plane nodes are ready:

kubectl get nodes

Expected output:

NAME                   STATUS   ROLES           AGE   VERSION
production-cp-1        Ready    control-plane   6m    v1.30.2
production-cp-2        Ready    control-plane   5m    v1.30.2
production-cp-3        Ready    control-plane   5m    v1.30.2

Worker nodes are provisioned asynchronously by machine-controller. They appear within 2-3 minutes:

kubectl get nodes --watch

Once workers appear:

NAME                   STATUS   ROLES           AGE   VERSION
production-cp-1        Ready    control-plane   8m    v1.30.2
production-cp-2        Ready    control-plane   7m    v1.30.2
production-cp-3        Ready    control-plane   7m    v1.30.2
production-worker-1    Ready    <none>          2m    v1.30.2
production-worker-2    Ready    <none>          2m    v1.30.2

Verify all system pods are running:

kubectl get pods -A

You should see pods for the API server, controller manager, scheduler, etcd, CoreDNS, Canal, node-local DNS, machine-controller, and the Hetzner cloud controller manager — all in Running state.
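
A quick way to spot anything unhealthy is to filter out Running pods; an empty result (apart from any Completed pods from one-off jobs, which also match this filter) means everything is up:

kubectl get pods -A --field-selector=status.phase!=Running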

Verify the Hetzner Cloud Controller Manager

The CCM handles node metadata and lifecycle events. Verify it is running:

kubectl get pods -n kube-system -l app=hcloud-cloud-controller-manager

Expected output:

NAME                                              READY   STATUS    RESTARTS   AGE
hcloud-cloud-controller-manager-xxxxxxxxxx-xxxxx  1/1     Running   0          6m

Check that nodes have Hetzner-specific labels:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.node\.kubernetes\.io/instance-type}{"\n"}{end}'

This should show the Hetzner server type (e.g., cpx21) for each node, confirming the CCM is correctly reporting node metadata.
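
The cloud integration is also responsible for each node's provider ID, which the lifecycle and load balancer logic relies on; the values should look like hcloud:// followed by the Hetzner server ID:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'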

Step 8: Deploy a Test Workload

Verify the cluster is fully functional by deploying nginx:

kubectl create deployment nginx --image=nginx:latest --replicas=4
kubectl get pods -o wide

The pods should be distributed across your worker nodes. Verify pod-to-pod networking:

kubectl exec -it $(kubectl get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}') -- curl -s -o /dev/null -w "%{http_code}" http://$(kubectl get pods -l app=nginx -o jsonpath='{.items[1].status.podIP}')

A 200 response confirms that overlay networking is working correctly across nodes.
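
If you also want to confirm that the CCM can provision Hetzner load balancers for Services of type LoadBalancer, a short sketch follows. This creates a billable load balancer, and depending on how the CCM was deployed you may need to tell it which location to use via the load-balancer.hetzner.cloud/location annotation (match it to your datacenter):

# this provisions a real (billable) Hetzner load balancer for the Service
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl annotate service nginx load-balancer.hetzner.cloud/location=nbg1

# wait until EXTERNAL-IP is populated, then remove the test load balancer
kubectl get service nginx --watch
kubectl delete service nginx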

Clean up the test deployment:

kubectl delete deployment nginx

Step 9: Scale Worker Nodes

One of the advantages of KubeOne on Hetzner is that worker node scaling is fully automated through machine-controller. You do not need to manually provision servers.

Scale Existing Workers

To increase the worker count, edit the MachineDeployment:

kubectl -n kube-system get machinedeployments
kubectl -n kube-system scale machinedeployment production-worker --replicas=4

Machine-controller automatically provisions new Hetzner servers, installs Kubernetes, and joins them to the cluster. New nodes appear within 2-3 minutes.
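
You can follow the rollout from machine-controller's perspective while the new servers are created and joined:

kubectl -n kube-system get machines --watch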

Add a Different Worker Pool

To add workers with different server types (for example, compute-intensive workloads), create a new MachineDeployment. First, inspect the existing one to use as a template:

kubectl -n kube-system get machinedeployment production-worker -o yaml > worker-pool.yaml

Edit worker-pool.yaml: change the name, adjust the server type in cloudProviderSpec, and update the replica count. Apply the new MachineDeployment:

kubectl apply -f worker-pool.yaml
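
If you prefer to script that edit rather than doing it by hand, here is a sketch using yq v4. The field paths assume machine-controller's layout, where the Hetzner server type sits under cloudProviderSpec.serverType; the pool name, server type, and replica count below are only examples, and you may also need to adjust the selector labels so the new pool does not overlap with the existing one. Apply the file afterwards exactly as shown above.

# hypothetical second pool for compute-heavy workloads
yq -i '
  .metadata.name = "production-worker-compute" |
  .spec.replicas = 2 |
  .spec.template.spec.providerSpec.value.cloudProviderSpec.serverType = "cpx41" |
  del(.metadata.uid) | del(.metadata.resourceVersion) |
  del(.metadata.creationTimestamp) | del(.status)
' worker-pool.yaml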

Step 10: Understand the Cost Breakdown

With the default configuration (3x cpx21 control plane, 2x cpx31 workers, 1x lb11 load balancer), your estimated monthly cost is:

Resource          Type    Count   Approx. Unit Cost   Total
Control plane     cpx21   3       ~5 EUR              ~15 EUR
Workers           cpx31   2       ~10 EUR             ~20 EUR
Load balancer     lb11    1       ~6 EUR              ~6 EUR
Private network   -       1       Free                0 EUR
Total                                                 ~41 EUR/month

Note: Prices are approximate and vary by datacenter. Check Hetzner Cloud pricing for current rates. Traffic within the private network is free. Outbound public traffic is included up to 20 TB/month on most server types.

Cost Optimization Tips

  • Use CX series instead of CPX for control plane nodes if you do not need AMD EPYC processors. CX series (Intel) is slightly cheaper.
  • Start with 2 workers and scale up as needed. Machine-controller makes scaling a single command.
  • Use Hetzner volumes for persistent storage instead of provisioning larger server types for disk space.
  • Set up autoscaling with the Kubernetes Cluster Autoscaler and Hetzner Cloud provider to scale workers based on demand.

Troubleshooting

Terraform Fails with “Unauthorized”

The API token is missing or invalid. Verify it is set:

echo $HCLOUD_TOKEN

If the variable is empty, re-export it. If it is set but Terraform still fails, generate a new token in the Hetzner Cloud Console — the old one may have been revoked.

Worker Nodes Not Appearing

If control plane nodes are ready but workers do not appear after 5 minutes:

  1. Check MachineDeployment status:
kubectl -n kube-system get machinedeployments
kubectl -n kube-system get machines
  2. Check machine-controller logs:
kubectl -n kube-system logs -l app=machine-controller -f

Common causes: the HCLOUD_TOKEN secret is missing from the cluster, the token does not have write permissions, or the Hetzner API rate limit has been hit.

  3. Verify the cloud-init secret exists:
kubectl -n kube-system get secrets | grep cloud-init

Nodes Stuck in NotReady

If nodes appear but stay in NotReady state:

  1. Check kubelet logs on the affected node:
ssh root@<node-ip> journalctl -u kubelet -f
  2. Check that Canal pods are running on all nodes:
kubectl get pods -n kube-system -l k8s-app=canal -o wide

If Canal pods are in CrashLoopBackOff, the private network may not be configured correctly. Verify the Hetzner private network exists and all servers are attached to it in the Hetzner Cloud Console.
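
You can do the same check from the command line; the Hetzner API lists every network along with the IDs of the servers attached to it, so the attached count should match your node count (a sketch):

curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" \
  https://api.hetzner.cloud/v1/networks \
  | jq '.networks[] | {name, attached_servers: (.servers | length)}'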

API Server Unreachable

If kubectl cannot connect after provisioning:

  1. Verify the load balancer is healthy in the Hetzner Cloud Console. All three control plane targets should show “healthy.”
  2. Check that port 6443 is not blocked by a local firewall or corporate network.
  3. Verify the kubeconfig points to the correct load balancer IP:
grep server production-kubeconfig

Next Steps

Your Hetzner Cloud cluster is running and ready for workloads. From here, natural next steps include adding persistent storage with Hetzner volumes, setting up the Cluster Autoscaler, and planning Kubernetes version upgrades with kubeone apply.

Summary

You deployed a highly available Kubernetes cluster on Hetzner Cloud using KubeOne and Terraform. The cluster runs three control plane nodes with distributed etcd, two worker nodes managed by machine-controller, and a Hetzner load balancer fronting the API server. The Hetzner Cloud Controller Manager provides node lifecycle management and infrastructure metadata to Kubernetes. The entire setup costs approximately 41 EUR per month and can be upgraded, scaled, or repaired with a single kubeone apply command.