Kubernetes has emerged as the leading platform for container orchestration, widely adopted by organizations to deploy, scale, and manage containerized applications efficiently. Its popularity stems from its ability to automate deployment, scaling, and operations of application containers across clusters of hosts, ensuring high availability and resource optimization.
While major cloud providers like AWS, Azure, and Google Cloud offer managed Kubernetes services, which simplify cluster creation and maintenance, there is significant value in setting up your own Kubernetes cluster from scratch. Doing so allows you to gain a deeper understanding of the underlying infrastructure, including how nodes communicate, how pods are scheduled, how networking and storage are configured, and how control plane components like the API server, scheduler, and etcd interact.
By building a cluster manually, you also learn essential operational skills, such as troubleshooting cluster issues, managing certificates, configuring networking plugins, and handling persistent storage, which are crucial for becoming proficient in real-world Kubernetes environments. In essence, setting up your own cluster is an excellent way to bridge the gap between theoretical knowledge and practical expertise.
In this guide, I will walk you through creating a mini Kubernetes cluster on Hyper-V, consisting of:
1 Master Node
2 Worker Nodes
Ubuntu OS on all nodes
containerd as the container runtime
Calico for networking
This cluster is great for experimenting with workloads, testing deployments, and learning core Kubernetes concepts.
A Kubernetes cluster is made up of master nodes (control plane) and worker nodes (where workloads run). Each node runs different components, working together to ensure your applications run reliably and efficiently.
The master node is responsible for managing the cluster. It controls scheduling, scaling, and overall cluster state. The main components running on the master node are:
a) kube-apiserver
Acts as the front-end of the Kubernetes control plane.
Exposes the Kubernetes API and handles REST requests from kubectl or other clients.
All communication (worker nodes, scheduler, controllers) goes through the API server.
b) etcd
A distributed key-value store that stores the entire cluster state.
Keeps information about pods, deployments, services, and nodes.
Highly reliable and consistent; critical for the cluster to function.
c) kube-scheduler
Responsible for scheduling pods to worker nodes based on resource availability, constraints, and policies.
Ensures pods are efficiently placed in the cluster.
d) kube-controller-manager
Runs various controller processes that manage cluster tasks:
Node controller → Monitors node health
Replication controller → Ensures the correct number of pod replicas
Endpoints controller → Manages service endpoints
Works in the background to maintain the desired cluster state
Worker nodes run the actual application workloads (pods). They also contain components that communicate with the master and manage containers.
a) kubelet
The primary agent running on each node.
Ensures containers are running in pods as defined by the master.
Communicates with the API server for instructions and reports node status.
b) kube-proxy
Manages networking rules on the node.
Ensures that pods and services can communicate within the cluster.
Handles load balancing and forwards traffic to the correct pods.
c) Container Runtime (containerd / Docker)
Runs and manages containers on the node.
Kubernetes uses the container runtime to start, stop, and manage container lifecycles.
d) CNI Plugin (Calico / Flannel / Weave)
Provides pod networking so that pods can communicate across nodes.
Handles IP addressing, routing, and network policies for security.
The master node decides where pods should run and stores this info in etcd.
kube-scheduler assigns pods to worker nodes based on available resources.
Worker nodes use kubelet to communicate with the master and manage pods.
kube-proxy and the CNI plugin ensure pods can communicate across nodes.
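You can watch this placement happen once the cluster is up. The NODE column of kubectl's wide output shows where the scheduler placed each pod:

# The NODE column shows which node each pod was scheduled to
kubectl get pods -A -o wide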
When setting up a Kubernetes cluster, some configurations must be applied on all nodes (both master and worker) to ensure a consistent environment across the cluster. These include steps such as updating the system, disabling swap, loading kernel modules, configuring containerd, and installing Kubernetes components (kubeadm, kubelet, and kubectl).
Once these common steps are completed, additional tasks are specific to each role. The master node is initialized with kubeadm init, which sets up the control plane and cluster networking (such as Calico), while the worker nodes join the cluster using the kubeadm join command printed during initialization. This separation of steps ensures that the master node functions as the control plane while the worker nodes are ready to host workloads.
Kubernetes requires swap to be disabled for stable resource management. If swap is enabled, the kubelet may behave unpredictably when scheduling pods.
# Update system package index and upgrade installed packages
sudo apt update && sudo apt upgrade -y
apt update → Refreshes the list of available packages from repositories.
apt upgrade -y → Installs the latest versions of all installed packages.
# Disable swap immediately
sudo swapoff -a
swapoff -a → Turns off swap for the current session.
# Disable swap permanently
sudo nano /etc/fstab
This opens the configuration file for mounted filesystems.
Remove or comment out the line containing swap to ensure swap stays disabled after reboot.
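If you prefer a non-interactive alternative to editing the file by hand, a sed one-liner can comment out the swap entry. This is a sketch that assumes your /etc/fstab uses a standard whitespace-separated swap line, so review the file afterwards:

# Comment out any fstab line whose type field is swap
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab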
# Verify swap is off
free -h
Shows memory and swap usage. Swap should show 0B.
Kubernetes networking requires certain kernel modules and sysctl parameters to allow containers to communicate properly.
# Create a config file to load required modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
overlay → Enables the overlay filesystem, required by container runtimes.
br_netfilter → Allows Linux bridge traffic to be processed by iptables.
sudo modprobe overlay
sudo modprobe br_netfilter
Loads the above modules immediately into the kernel.
# Configure networking parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
net.bridge.bridge-nf-call-ip6tables = 1 → Ensures bridged IPv6 traffic passes through ip6tables.
net.bridge.bridge-nf-call-iptables = 1 → Ensures bridged IPv4 traffic passes through iptables.
net.ipv4.ip_forward = 1 → Enables packet forwarding between network interfaces (required for pod-to-pod communication).
sudo sysctl --system
Applies all sysctl configuration changes immediately.
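Before moving on, confirm the modules are loaded and the parameters took effect:

# Verify the kernel modules are loaded
lsmod | grep -e overlay -e br_netfilter

# All three values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward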
containerd is a lightweight container runtime that Kubernetes uses to run the containers in your Pods. Kubernetes talks to the runtime through the Container Runtime Interface (CRI). If you don’t specify a runtime, kubeadm automatically tries to detect an installed container runtime by scanning through a list of known endpoints; if multiple or no container runtimes are detected, kubeadm will throw an error and request that you specify which one you want to use.
# Install containerd
sudo apt install -y containerd
# Generate default containerd configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
Creates config directory and generates a default config.toml file.
# Set cgroup driver to systemd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
By default, containerd uses cgroupfs.
Kubernetes prefers systemd because it aligns with the host’s process manager, reducing resource management conflicts.
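You can confirm the change before restarting the service:

# Should print a line containing SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml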
# Restart and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
Restarts containerd with new config and ensures it auto-starts on boot.
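A quick check confirms the runtime came back up cleanly:

# Should print "active"
systemctl is-active containerd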
We need three main tools:
kubeadm → Bootstraps the cluster.
kubelet → The agent that runs on every machine in your cluster and does things like starting pods and containers.
kubectl → CLI tool to interact with the cluster.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
Update the apt package index and install packages needed to use the Kubernetes apt repository.
# If the directory `/etc/apt/keyrings` does not exist, create it before running the curl command:
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL.
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Add the appropriate Kubernetes apt repository for version v1.33.
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Update the apt package index, install kubelet, kubeadm and kubectl.
apt-mark hold prevents accidental upgrades that could break compatibility.
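To confirm the tools installed correctly and see which versions are now pinned:

# Check the installed versions
kubeadm version
kubelet --version
kubectl version --client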
sudo kubeadm init \
--apiserver-advertise-address=<MASTER_IP> \
--pod-network-cidr=192.168.0.0/16
--apiserver-advertise-address=<MASTER_IP> → Sets the IP that the control plane advertises.
Replace <MASTER_IP> with the actual IP address assigned to your Master Node VM.
--pod-network-cidr=192.168.0.0/16 → Defines the pod network range (needed by Calico or other CNI plugins).
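As an optional step, the control plane images can be pre-pulled before running kubeadm init; this speeds up initialization and surfaces registry connectivity problems early:

# Optional: pre-fetch the control plane images
sudo kubeadm config images pull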
To start using your cluster, run the following as a regular user. These commands copy the admin kubeconfig into your home directory so kubectl can talk to the cluster without root access:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
mkdir -p $HOME/.kube → Creates a hidden .kube directory in your home folder.
cp -i ... → Copies the admin kubeconfig file into it.
chown ... → Changes ownership so your user (not root) can access it.
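At this point kubectl can talk to the API server. Note that the master node will report NotReady until a CNI plugin is installed in the next step:

# Verify kubectl connectivity; the node stays NotReady until the CNI is applied
kubectl get nodes
kubectl cluster-info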
Kubernetes itself does not handle pod-to-pod networking. Instead, it relies on a CNI (Container Network Interface) plugin to provide networking capabilities.
When pods are created, they need:
A unique IP address.
The ability to communicate with other pods (even on different nodes).
Support for Kubernetes features like Network Policies.
This is where CNI plugins come in. They implement the networking model that allows:
Pod-to-pod communication across nodes.
Pod-to-service communication.
Enforcing security rules between workloads.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Installs Calico, which provides pod networking and network policy enforcement.
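Calico rolls out as pods in the kube-system namespace, and the master node should flip to Ready once they are all Running. The label selector below matches the standard manifest; adjust it if your install differs:

# Watch the Calico pods come up (Ctrl+C to stop)
kubectl get pods -n kube-system -l k8s-app=calico-node --watch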
You can join any number of worker nodes by running the following command on each worker node.
kubeadm join 192.168.x.xx:xxx3 --token 2rexxx.xxx8pg01xxx19xxx \
--discovery-token-ca-cert-hash xxxxxx:63a1b32e347a6xxxb188xxx0c00xxxeb3xxx3862f61078xxxx5ee59af477xxxx
If the join command is lost, run the following on the master node to regenerate it:
kubeadm token create --print-join-command
kubectl get nodes
It should display the master node and both worker nodes in the Ready state (newly joined nodes can take a minute or two to become Ready).
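Worker nodes show <none> in the ROLES column by default. If you like, you can add a cosmetic role label; the node name below is just an example, so substitute your own:

# Optional: give a worker node a role label for nicer kubectl output
kubectl label node worker-1 node-role.kubernetes.io/worker=worker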
With this setup, you now have a fully functional 3-node Kubernetes lab cluster on Hyper-V. This environment serves as a powerful sandbox for:
Practicing deployments, scaling, and rolling updates
Testing Kubernetes features like ConfigMaps, Secrets, and Ingress
Exploring monitoring, logging, and CI/CD integrations with real-world tools
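As a quick smoke test, here is a minimal sketch that deploys nginx as a stand-in workload (the deployment name "hello" is illustrative):

# Deploy a sample workload and expose it on a NodePort
kubectl create deployment hello --image=nginx --replicas=2
kubectl expose deployment hello --port=80 --type=NodePort

# Pods should land on the worker nodes; note the NodePort, then curl <worker-ip>:<nodeport>
kubectl get pods -o wide
kubectl get svc hello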
By not only executing the commands but also understanding the reasoning behind each step, you will develop a stronger foundation in Kubernetes concepts and gain hands-on experience with how clusters are architected and managed from the ground up. This knowledge will prepare you for working with Kubernetes in both lab and production environments.