virtkube implements a local Kubernetes cluster running on multiple virtual machines provisioned by Vagrant. It uses kubeadm to set up the cluster from scratch, based on minimal Ubuntu images for each node. Unlike minikube, this tool does not provision a single-node cluster: the smallest possible infrastructure consists of one controlplane node and one worker node. This is intentional, because virtkube's primary goal is to provide a development cluster that is as close as possible to a production Kubernetes deployment.
Vagrant can be installed on macOS, Linux, and Windows. Follow HashiCorp's installation manual for your desired operating system.
Vagrant works with multiple virtualization tools. This project is tested with VirtualBox. Consult the official VirtualBox documentation for installation instructions.
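As a quick sanity check (assuming both tools are already on your PATH), you can verify the installations from a terminal:

vagrant --version       # prints the installed Vagrant version
VBoxManage --version    # prints the installed VirtualBox version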
vagrant up
Use the kubeconfig file in the sync folder for kubectl authentication.
export KUBECONFIG=/path/to/sync/kubeconfig
kubectl get pods -A
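To confirm that the kubeconfig works and that every node has joined the cluster, a quick check could look like this (node names depend on your Vagrantfile settings):

kubectl cluster-info
kubectl get nodes    # each controlplane and worker VM should be listed as Ready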
You can access a specific cluster VM by running vagrant ssh <VM-name>, e.g. vagrant ssh worker_0 to connect to the first worker node.
vagrant destroy -f
Use the Helm chart in the canary folder to test the Kubernetes cluster setup. You might need to install Helm first.
export KUBECONFIG=/path/to/sync/kubeconfig
helm install canary ./canary
helm test canary
You might need to manually delete the canary-test-connection pod by running kubectl delete pod canary-test-connection before re-executing the tests.
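If a test run fails, inspecting the test pod before deleting it can help. A minimal sketch, assuming the chart was installed into the current (default) namespace:

kubectl get pod canary-test-connection     # test pod created by helm test
kubectl logs canary-test-connection        # output of the connection test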
helm uninstall canary
The number of controlplane and worker nodes can be configured using the num_controlplanes and num_workers variables in the Vagrantfile. If multiple controlplane nodes are configured, a corresponding load balancer based on HAProxy will be provisioned.
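For example, after raising either value in the Vagrantfile, re-running vagrant up provisions the additional VMs. A hedged way to verify the resulting topology:

vagrant status       # lists every VM defined by the Vagrantfile
kubectl get nodes    # expects one Kubernetes node per controlplane/worker VM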
The following dependencies are used to provision the cluster:
- Container runtime: containerd
- CNI plugin: flannel
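As a rough check that both components came up (a sketch; worker_0 is the first worker VM from above, and depending on the flannel release its pods live in either the kube-flannel or kube-system namespace):

vagrant ssh worker_0 -c "systemctl status containerd --no-pager"    # container runtime on the node
kubectl get pods -n kube-flannel                                    # flannel DaemonSet pods, one per node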