
Installing KubeVirt on a Kubernetes Cluster

Abubakar Siddiq Ango, Senior Developer Advocate

Prerequisites

  • A running Kubernetes cluster (v1.28 or later)
  • kubectl configured with cluster-admin access
  • Nodes with hardware virtualization support (Intel VT-x or AMD-V)
  • Basic understanding of KubeVirt concepts — see What is KubeVirt?

Introduction

You have a Kubernetes cluster running your containerized workloads. Now you want to add virtual machines to the mix — maybe you have legacy applications that need a full OS, or you are evaluating a migration path away from VMware. Either way, you need KubeVirt installed and working.

In this tutorial, you will install KubeVirt and the Containerized Data Importer (CDI) on an existing Kubernetes cluster, set up the virtctl command-line tool, and launch your first virtual machine to verify that everything works. By the end, you will have a fully functional KubeVirt installation with a running VM that you can access via console.

The whole process takes about 20 minutes if your cluster is already up. Most of that time is waiting for pods to pull images and reach a running state.

Here is what you will set up:

  • KubeVirt operator and custom resource — the core virtualization layer
  • Containerized Data Importer (CDI) — for importing disk images into PersistentVolumes
  • virtctl CLI — the command-line tool for VM-specific operations
  • A test VM — to prove everything works end to end

Step 1: Verify Hardware Virtualization Support

Before installing anything, you need to confirm that your cluster nodes support hardware virtualization. KubeVirt uses KVM under the hood, and KVM requires either Intel VT-x or AMD-V extensions at the CPU level.

If you have SSH access to a cluster node, run this command:

grep -cE 'vmx|svm' /proc/cpuinfo

The output is a number: the count of logical CPUs whose flags include the virtualization extensions. If it is greater than 0, hardware virtualization is available. On a 4-core machine (without hyperthreading) with VT-x enabled, you would see 4.

If the output is 0, your CPU either does not support virtualization or it is disabled. On physical servers, check the BIOS/UEFI settings — hardware virtualization is sometimes disabled by default. On cloud VMs, you need an instance type that supports nested virtualization.

If you cannot SSH into your nodes directly, you can check from within the cluster by running a privileged pod:

kubectl run virt-check --image=alpine --restart=Never --rm -it \
  --overrides='{"spec":{"containers":[{"name":"virt-check","image":"alpine","securityContext":{"privileged":true}}]}}' \
  -- sh -c "grep -cE 'vmx|svm' /proc/cpuinfo"

You can also verify that the /dev/kvm device exists on the node:

kubectl run kvm-check --image=alpine --restart=Never --rm -it \
  --overrides='{"spec":{"containers":[{"name":"kvm-check","image":"alpine","securityContext":{"privileged":true}}]}}' \
  -- ls -la /dev/kvm

If /dev/kvm exists, you are good to go.

Warning: KubeVirt can fall back to software emulation when hardware virtualization is not available, but the performance penalty is severe — expect 10x to 100x slower execution. Software emulation is acceptable for quick testing or development environments where you just need to validate manifests and workflows. It is not viable for production workloads or any scenario where VM performance matters. If you are deploying to cloud VMs, check your provider’s documentation for instance types that support nested virtualization (for example, .metal instances on AWS, or N2/C2 instances with nested virt enabled on GCP).

Step 2: Deploy the KubeVirt Operator

With hardware virtualization confirmed, you can install KubeVirt. The installation follows the standard Kubernetes operator pattern: first you deploy the operator, then you create a custom resource that tells the operator what to deploy.

flowchart LR
    A["kubectl create
kubevirt-operator.yaml"] --> B["virt-operator
pod running"]
    B -->|waits for| C["kubectl apply
KubeVirt CR"]
    C --> D["virt-operator reconciles"]
    D --> E["virt-api
Deployment"]
    D --> F["virt-controller
Deployment"]
    D --> G["virt-handler
DaemonSet"]

Start by fetching the latest stable version and deploying the operator:

export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | grep tag_name | cut -d '"' -f 4)
echo "Installing KubeVirt ${KUBEVIRT_VERSION}"
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml"

This creates several resources, some cluster-scoped and some inside the new kubevirt namespace:

  • The kubevirt namespace itself
  • Custom Resource Definitions (CRDs) for VirtualMachine, VirtualMachineInstance, and related types
  • The virt-operator Deployment, which is the control plane component that manages all other KubeVirt components
  • ServiceAccounts, ClusterRoles, and ClusterRoleBindings for the operator’s RBAC permissions

The operator does not deploy any VM-related components yet. It sits and waits for you to create a KubeVirt custom resource — that is the signal to roll out the full stack. This two-phase approach gives you the chance to customize the configuration before anything else gets deployed.
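You can watch this two-phase pattern in action. Immediately after deploying the operator, only the virt-operator pods exist, but the CRDs are already registered (exact output varies by version):

# Only virt-operator should be running at this stage
kubectl get pods -n kubevirt

# The CRDs are registered, even though nothing uses them yet
kubectl get crds | grep kubevirt.io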

Wait for the operator pod to be ready before proceeding:

kubectl -n kubevirt wait --for=condition=Ready pod -l kubevirt.io=virt-operator --timeout=180s

Step 3: Create the KubeVirt Custom Resource

Now create the KubeVirt custom resource that tells the operator to deploy all the components:

apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      useEmulation: false
  customizeComponents: {}
  imagePullPolicy: IfNotPresent

Save this as kubevirt-cr.yaml and apply it:

kubectl apply -f kubevirt-cr.yaml

Tip: If your nodes do not have hardware virtualization support and you want to proceed with software emulation for testing purposes, change useEmulation: false to useEmulation: true in the manifest above. Remember that this is only suitable for development and testing — never for production.

Once you apply this resource, the virt-operator reads it and begins deploying the KubeVirt components:

  • virt-api — a Deployment that provides the Kubernetes API extension for VM operations. It handles subresource requests like console access, VNC, and migration triggers. It also performs admission validation on VM manifests before they are persisted.
  • virt-controller — a Deployment that watches for VirtualMachine and VirtualMachineInstance resources. When you create a VM, virt-controller creates the corresponding virt-launcher pod and coordinates the VM lifecycle at the cluster level.
  • virt-handler — a DaemonSet that runs on every node eligible to host VMs. It is the node-level agent that manages the actual KVM/libvirt interaction. When a virt-launcher pod lands on a node, virt-handler configures and starts the VM process inside it.

Each VM runs inside its own virt-launcher pod. The pod provides the isolation boundary — cgroups, namespaces, resource limits — while the actual VM process runs as a QEMU/KVM instance managed by libvirt inside that pod.
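As a side note, once the test VM from Step 7 is running, you can see this arrangement for yourself by listing the processes inside the launcher pod's compute container (a quick sketch; it assumes ps is available in the launcher image, which is true in recent releases):

# You should see a QEMU process for the guest inside the launcher pod
kubectl exec $(kubectl get pod -l kubevirt.io/vm=testvm -o name) -c compute -- ps aux | grep -i qemu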

Step 4: Wait for KubeVirt to Deploy

The operator needs a few minutes to pull images and bring all components online. Use the built-in condition check to wait:

kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s

When this command returns successfully, KubeVirt is ready. If it times out, check the operator logs for errors:

kubectl -n kubevirt logs -l kubevirt.io=virt-operator --tail=50

Verify that all pods are running:

kubectl get pods -n kubevirt

You should see output similar to this:

NAME                              READY   STATUS    RESTARTS   AGE
virt-api-7fc5db8b6-4xz8m          1/1     Running   0          2m
virt-api-7fc5db8b6-n9hkl          1/1     Running   0          2m
virt-controller-6b9f5d4c7-8qjrw   1/1     Running   0          2m
virt-controller-6b9f5d4c7-txz5k   1/1     Running   0          2m
virt-handler-7kpnz                1/1     Running   0          2m
virt-handler-qm4x8                1/1     Running   0          2m
virt-operator-5f8bc4c5d-jn7xr     1/1     Running   0          4m
virt-operator-5f8bc4c5d-zw2lp     1/1     Running   0          4m

The exact pod names will differ, and the number of virt-handler pods matches the number of nodes in your cluster (since it is a DaemonSet). The key thing is that all pods show Running with 1/1 ready.

You can also check the KubeVirt resource status directly:

kubectl get kubevirt -n kubevirt

The PHASE column should show Deployed.
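Expected output looks roughly like this (AGE will differ):

NAME       AGE   PHASE
kubevirt   3m    Deployed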

Step 5: Install the Containerized Data Importer (CDI)

KubeVirt handles running VMs. But VMs need disk images — ISOs, QCOW2 files, VMDKs — and those images need to get into PersistentVolumes that the VMs can mount. That is where the Containerized Data Importer comes in.

CDI is a separate project that works alongside KubeVirt. It provides a declarative way to import VM disk images from various sources (HTTP endpoints, container registries, S3 buckets, or local uploads) into PersistentVolumeClaims. Without CDI, you would need to manually provision and populate PVCs before creating VMs — CDI automates that entire workflow.

Install CDI the same way you installed KubeVirt — operator first, then custom resource:

export CDI_VERSION=$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases/latest | grep tag_name | cut -d '"' -f 4)
echo "Installing CDI ${CDI_VERSION}"
kubectl create -f "https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-operator.yaml"
kubectl create -f "https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-cr.yaml"

The first command deploys the CDI operator into the cdi namespace. The second creates the CDI custom resource, which triggers the operator to deploy all CDI components.

Wait for CDI to become available:

kubectl wait --for=condition=Available --timeout=300s cdi/cdi -n cdi

Verify the CDI pods are running:

kubectl get pods -n cdi

You should see the CDI operator, API server, deployment, and upload proxy pods all in a Running state.

CDI becomes important when you start working with real VM images — importing cloud images from public URLs, converting VMDK files from VMware exports, or cloning existing disks. For this tutorial’s test VM, you will use a container disk that does not require CDI, but having CDI installed means you are ready for real workloads.
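For a taste of what that workflow looks like, CDI's central API is the DataVolume custom resource. The sketch below would import a QCOW2 image over HTTP into a dynamically provisioned 10Gi PVC; the name, URL, and size are placeholders you would adapt to your environment and storage class:

cat <<EOF | kubectl apply -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: imported-disk              # placeholder name
spec:
  source:
    http:
      url: "https://example.com/images/disk.qcow2"   # placeholder URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
EOF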

Step 6: Install virtctl

virtctl is the KubeVirt command-line tool that handles VM-specific operations that kubectl cannot do natively. You need it for:

  • Accessing a VM’s serial console or VNC display
  • Starting and stopping VMs
  • Live migrating VMs between nodes
  • Port forwarding to VM ports
  • SSH access to VMs
  • Uploading disk images

Install it by downloading the binary that matches your KubeVirt version:

curl -L -o virtctl "https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64"
chmod +x virtctl
sudo mv virtctl /usr/local/bin/

For macOS, replace linux-amd64 with darwin-amd64 (Intel) or darwin-arm64 (Apple Silicon):

curl -L -o virtctl "https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-darwin-arm64"
chmod +x virtctl
sudo mv virtctl /usr/local/bin/

Verify the installation:

virtctl version

If you prefer managing kubectl plugins through krew, you can install virtctl as a kubectl plugin instead:

kubectl krew install virt

This lets you use kubectl virt instead of virtctl — the functionality is identical. Both approaches work; pick whichever fits your workflow.
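A few representative commands, shown against the test VM you will create in Step 7 (the port numbers in the last line are arbitrary examples):

virtctl console testvm                   # attach to the serial console
virtctl stop testvm                      # graceful shutdown
virtctl start testvm                     # boot it again
virtctl port-forward vm/testvm 2222:22   # local port 2222 to the VM's port 22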

Step 7: Launch Your First Virtual Machine

Time to verify the installation by running an actual VM. You will use a CirrOS container disk — a minimal Linux distribution designed specifically for cloud testing. The container disk approach packages a VM image inside a container image, so there is no need to provision storage or import disk images. It is the fastest way to get a VM running for validation purposes.

Create a file called testvm.yaml with the following content:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 1Gi
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGVsbG8sIFdvcmxkIQ==

Apply it:

kubectl apply -f testvm.yaml

A few things to understand about this manifest:

  • spec.running: true tells KubeVirt to start the VM immediately after creation. If you set this to false, the VirtualMachine resource is created but the VM does not boot until you explicitly start it with virtctl start testvm.
  • containerDisk is a volume type that pulls a VM disk image packaged as a container image. The image quay.io/kubevirt/cirros-container-disk-demo contains a CirrOS disk image. Container disks are ephemeral — any data written inside the VM is lost when the VM is deleted. They are ideal for testing and stateless workloads.
  • cloudInitNoCloud provides basic cloud-init configuration. The base64-encoded value here decodes to “Hello, World!” — a minimal user data payload. In production VMs, you would use this to inject SSH keys, configure networking, install packages, and run setup scripts.
  • masquerade networking uses NAT to connect the VM to the pod network. The VM gets an internal IP address and can reach external services. Incoming connections require explicit port forwarding or a Kubernetes Service (see the example after this list).
  • bus: virtio specifies paravirtualized disk controllers, which offer significantly better I/O performance than emulated IDE or SATA controllers. CirrOS includes virtio drivers by default. Windows VMs may need virtio drivers installed separately.
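Picking up the masquerade point above: one way to accept incoming connections is a Kubernetes Service created with virtctl expose. This is a sketch; the Service name, type, and port are arbitrary choices for the test VM:

# Create a NodePort Service that targets port 22 inside the VM
virtctl expose vm testvm --name testvm-ssh --type NodePort --port 22

# Find the assigned node port
kubectl get svc testvm-ssh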

Watch the VM come up:

kubectl get vmi -w

The VirtualMachineInstance (VMI) will transition through several phases: Pending, Scheduling, Scheduled, and finally Running. Once you see Running, the VM is booted and ready.
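Once it reaches Running, the output should look something like this (IP and node name will differ):

NAME     AGE   PHASE     IP            NODENAME   READY
testvm   45s   Running   10.244.1.12   node01     True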

You can also check the virt-launcher pod that hosts the VM:

kubectl get pods -l kubevirt.io/vm=testvm

Step 8: Access the VM Console

With the VM running, connect to its serial console:

virtctl console testvm

You will see the CirrOS boot output, followed by a login prompt. Log in with the default credentials:

  • Username: cirros
  • Password: gocubsgo

Once logged in, run a few commands to confirm the VM is functioning:

hostname
ip addr
uname -a

You should see the hostname set to testvm, a network interface with an IP address from the VM’s internal network, and the Linux kernel version that CirrOS ships with.

To exit the console, press Ctrl+].

You can also access the VM’s graphical console through VNC if needed:

virtctl vnc testvm

This opens a VNC viewer if one is installed on your local machine. For a headless CirrOS VM this is not particularly useful, but it becomes valuable when working with desktop operating systems or VMs with graphical installers.

Step 9: Clean Up

Once you have verified that the VM works, clean it up:

kubectl delete vm testvm

This deletes both the VirtualMachine resource and the associated VirtualMachineInstance. The virt-launcher pod is terminated and the container disk is released.

Tip: Deleting a VirtualMachine also deletes the associated VirtualMachineInstance and stops the VM. If you only want to stop the VM without deleting the definition, use virtctl stop testvm instead. You can then start it again later with virtctl start testvm. This is analogous to shutting down vs. destroying a VM in traditional hypervisors — the “hardware definition” persists even when the VM is powered off.

If you want to remove the entire KubeVirt installation later (not recommended if you plan to continue with the series), the removal order matters — delete the custom resources first, then the operators:

kubectl delete -f kubevirt-cr.yaml
kubectl delete -f "https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml"
kubectl delete -f "https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-cr.yaml"
kubectl delete -f "https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-operator.yaml"

Troubleshooting Common Installation Issues

If something went wrong during the installation, here are the most common issues and how to resolve them.

KVM device not found

If virt-handler pods fail with errors about /dev/kvm not being found, the node does not have hardware virtualization available. Verify that:

  1. The physical CPU supports Intel VT-x or AMD-V
  2. Virtualization is enabled in the BIOS/UEFI
  3. If running on cloud VMs, nested virtualization is enabled for the instance type
  4. The KVM kernel modules are loaded: lsmod | grep kvm (if they are missing, see below)
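If the check in item 4 comes back empty, you can usually load the modules manually on the node itself, not through kubectl (use kvm_amd instead of kvm_intel on AMD hardware):

sudo modprobe kvm
sudo modprobe kvm_intel   # or: sudo modprobe kvm_amd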

As a temporary workaround for testing, you can edit the KubeVirt custom resource to enable software emulation:

kubectl edit kubevirt kubevirt -n kubevirt

Set spec.configuration.developerConfiguration.useEmulation to true. The operator will reconfigure the components automatically.
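If you prefer a non-interactive change, a merge patch accomplishes the same thing:

kubectl patch kubevirt kubevirt -n kubevirt --type merge \
  -p '{"spec":{"configuration":{"developerConfiguration":{"useEmulation":true}}}}'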

Pods stuck in Pending

If KubeVirt pods remain in Pending state, the most common causes are:

  • Insufficient resources — virt-handler and virt-launcher pods have resource requests. Check that your nodes have enough CPU and memory available with kubectl describe nodes.
  • Resource quotas — if your namespace has ResourceQuotas configured, they may block pod creation. Check with kubectl get resourcequota -n kubevirt.
  • Node taints — virt-handler is a DaemonSet that needs to run on worker nodes. If your nodes have taints, you may need to add tolerations to the KubeVirt configuration (see the sketch after this list).
  • PodSecurityPolicy or PodSecurity admission — virt-handler requires privileged access. If your cluster enforces pod security standards, you may need to configure exemptions for the kubevirt namespace.
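For the taint case, the KubeVirt custom resource accepts node placement settings for the node-level components. A sketch, assuming a hypothetical example.com/dedicated taint; swap in your own key and effect:

kubectl patch kubevirt kubevirt -n kubevirt --type merge \
  -p '{"spec":{"workloads":{"nodePlacement":{"tolerations":[{"key":"example.com/dedicated","operator":"Exists","effect":"NoSchedule"}]}}}}'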

CDI importer pod errors

If CDI data import pods fail, check these common causes:

  • No default StorageClass — CDI needs to create PersistentVolumeClaims dynamically. Verify you have a default StorageClass: kubectl get storageclass. One should be marked (default); if none is, see the fix after this list.
  • Insufficient storage — CDI creates scratch space PVCs during import operations. Ensure your storage backend has enough capacity.
  • Network restrictions — if importing images from external URLs, the CDI importer pods need outbound network access. Check network policies that might block egress traffic from the cdi namespace.
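On the first point: if a StorageClass exists but none is marked default, you can set the standard default-class annotation yourself (replace my-storage-class with your class name):

kubectl patch storageclass my-storage-class \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'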

virt-operator CrashLoopBackOff

If the virt-operator itself is crashing, check the logs:

kubectl -n kubevirt logs -l kubevirt.io=virt-operator --previous

Common causes include RBAC misconfigurations (especially on clusters with strict security policies) and incompatibilities between the KubeVirt version and the Kubernetes version. Check the KubeVirt releases page for the compatibility matrix.

What You Installed

Here is a quick recap of what is now running on your cluster:

Component         Type         Namespace   Purpose
virt-operator     Deployment   kubevirt    Manages KubeVirt component lifecycle
virt-api          Deployment   kubevirt    API extension for VM operations
virt-controller   Deployment   kubevirt    Cluster-level VM lifecycle controller
virt-handler      DaemonSet    kubevirt    Node-level VM management agent
cdi-operator      Deployment   cdi         Manages CDI component lifecycle
cdi-apiserver     Deployment   cdi         API extension for data import operations
cdi-deployment    Deployment   cdi         Core CDI controller
cdi-uploadproxy   Deployment   cdi         Handles local disk image uploads

Next Steps

Your cluster can now run virtual machines alongside containers. From here you can continue with the next tutorial in this series, start importing real disk images with CDI, or explore the rest of virtctl's lifecycle commands on your own VMs.

Summary

You installed KubeVirt and the Containerized Data Importer on a Kubernetes cluster, set up the virtctl CLI tool, and launched a test virtual machine using a CirrOS container disk. You verified that the VM booted correctly by accessing its serial console.

Your cluster is now capable of running virtual machines alongside containers, managed through the same Kubernetes API and tooling. The KubeVirt operator handles component lifecycle, CDI is ready to import disk images when you need persistent VM storage, and virtctl gives you the CLI tools for day-to-day VM operations.

In the next tutorial, you will move beyond test VMs and create a production-style virtual machine with persistent storage, proper networking, and cloud-init configuration.