K3s Homelab Setup Guide - Running Kubernetes on Raspberry Pi 5


Running Kubernetes at home used to mean either expensive hardware or a noisy server rack. K3s changes that - it’s a lightweight Kubernetes distribution that runs comfortably on Raspberry Pi devices.

This guide walks through setting up a three-node K3s cluster on Raspberry Pi 5 devices. One control plane, two workers. Real Kubernetes, pocket-sized infrastructure.

TL;DR

  • K3s runs Kubernetes on Raspberry Pi with ~512MB RAM overhead
  • Three Pi 5 devices: one control plane, two workers
  • Install takes about 30 minutes end-to-end
  • Includes Traefik ingress and local-path storage by default

Cluster Overview

Hardware:

  • 3x Raspberry Pi 5 (4GB or 8GB recommended)
  • 3x microSD cards (32GB minimum)
  • Power supplies and network connectivity

Node Configuration:

| Node | Role          | IP (Example)  |
|------|---------------|---------------|
| pi1  | Control Plane | 192.168.1.159 |
| pi2  | Worker        | 192.168.1.160 |
| pi3  | Worker        | 192.168.1.161 |

Prerequisites

Before starting:

  • Raspberry Pi Imager installed on your computer
  • Raspberry Pi OS Desktop flashed to each microSD card
  • All Pis connected to the same network
  • SSH access or direct terminal access to each Pi
  • Internet connectivity on all devices

Step 1: Initial Configuration (All Pis)

Perform these steps on all three Pis.

1.1 Update System Packages

sudo apt update && sudo apt upgrade -y

1.2 Set Static IP Addresses

Static IPs prevent cluster issues when DHCP leases change.

On pi1 (Control Plane):

sudo nano /etc/dhcpcd.conf

Add at the end:

interface wlan0
static ip_address=192.168.1.159/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1 8.8.8.8

On pi2 (Worker 1):

sudo nano /etc/dhcpcd.conf

Add:

interface wlan0
static ip_address=192.168.1.160/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1 8.8.8.8

On pi3 (Worker 2):

sudo nano /etc/dhcpcd.conf

Add:

interface wlan0
static ip_address=192.168.1.161/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1 8.8.8.8

Note: Adjust 192.168.1.1 if your router uses a different gateway IP, and use eth0 instead of wlan0 if your Pis are wired. On Raspberry Pi OS Bookworm and later, dhcpcd has been replaced by NetworkManager, so /etc/dhcpcd.conf has no effect; set the static IP with nmcli or the desktop network settings instead.

Restart networking:

sudo systemctl restart dhcpcd
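
If your Raspberry Pi OS image uses NetworkManager rather than dhcpcd (the default since Bookworm), the static-IP setup for pi1 looks like this instead. This is a sketch that echoes the commands for review rather than executing them; the connection name "preconfigured" is an assumption (the Raspberry Pi Imager default) - check yours with `nmcli con show`.

```shell
# Assumed values for pi1 on a NetworkManager-based Raspberry Pi OS.
CONN="preconfigured"          # assumption: default connection name from Raspberry Pi Imager
STATIC_IP="192.168.1.159/24"
GATEWAY="192.168.1.1"
DNS="192.168.1.1 8.8.8.8"

# Echoed for review; drop the leading echo to apply on a real Pi.
echo sudo nmcli con mod "$CONN" ipv4.method manual \
     ipv4.addresses "$STATIC_IP" ipv4.gateway "$GATEWAY" ipv4.dns \""$DNS"\"
echo sudo nmcli con up "$CONN"
```

Repeat on pi2 and pi3 with their respective addresses.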

1.3 Enable Container Features

K3s requires cgroups for container resource management.

sudo nano /boot/firmware/cmdline.txt

Add to the end of the existing line (don’t create a new line):

cgroup_memory=1 cgroup_enable=memory

The full line should look something like:

console=serial0,115200 console=tty1 root=PARTUUID=... rootfstype=ext4 ... cgroup_memory=1 cgroup_enable=memory
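
If you prefer to script this edit across all three Pis, the append can be made idempotent so re-running it never duplicates the flags. A minimal sketch that works on a scratch copy; on a real Pi, point CMDLINE at /boot/firmware/cmdline.txt and run the sed with sudo:

```shell
# Work on a scratch copy; on a real Pi set CMDLINE=/boot/firmware/cmdline.txt.
CMDLINE=./cmdline.txt
printf 'console=serial0,115200 console=tty1 root=PARTUUID=deadbeef rootfstype=ext4 rootwait\n' > "$CMDLINE"

# Append the cgroup flags to the single kernel command line, but only if
# they are not already present (idempotent).
if ! grep -q 'cgroup_enable=memory' "$CMDLINE"; then
  sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' "$CMDLINE"
fi

cat "$CMDLINE"
```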

1.4 Reboot

sudo reboot

Step 2: Install K3s Control Plane (pi1)

SSH into pi1 or open a terminal directly.

2.1 Install K3s Server

curl -sfL https://get.k3s.io | sh -

This will:

  • Install K3s as a systemd service
  • Start the K3s server automatically
  • Configure kubectl

2.2 Verify Installation

sudo systemctl status k3s

Check if the node is ready:

sudo kubectl get nodes

Expected output:

NAME   STATUS   ROLES                  AGE   VERSION
pi1    Ready    control-plane,master   30s   v1.xx.x+k3s1

2.3 Get the Node Token

Worker nodes need this token to join the cluster:

sudo cat /var/lib/rancher/k3s/server/node-token

Save this token. It looks like:

K10abc123def456ghi789jkl012mno345pqr678stu901vwx234yz::server:abc123def456ghi789

2.4 Configure kubectl for Regular User (Optional)

To use kubectl without sudo:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
chmod 600 ~/.kube/config

Step 3: Install K3s Workers (pi2 and pi3)

Perform these steps on pi2 and pi3.

3.1 Install K3s Agent

Replace <NODE_TOKEN> with the token from Step 2.3, and <CONTROL_PLANE_IP> with pi1’s IP:

curl -sfL https://get.k3s.io | K3S_URL=https://<CONTROL_PLANE_IP>:6443 K3S_TOKEN=<NODE_TOKEN> sh -

Example:

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.159:6443 K3S_TOKEN=K10abc123def456ghi789jkl012mno345pqr678stu901vwx234yz::server:abc123def456ghi789 sh -
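
Since the same command runs on both workers, it can help to build it once and review it before pasting. A dry-run sketch that only echoes the join command per worker; it reuses the example token from Step 2.3, so substitute your own:

```shell
# Hypothetical values: substitute your control-plane IP and real node token.
CONTROL_PLANE_IP=192.168.1.159
NODE_TOKEN='K10abc123def456ghi789jkl012mno345pqr678stu901vwx234yz::server:abc123def456ghi789'
JOIN_CMD="curl -sfL https://get.k3s.io | K3S_URL=https://${CONTROL_PLANE_IP}:6443 K3S_TOKEN=${NODE_TOKEN} sh -"

# Echo the command to run on each worker; paste it into pi2 and pi3.
for worker in pi2 pi3; do
  echo "# run on ${worker}:"
  echo "${JOIN_CMD}"
done
```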

3.2 Verify Agent is Running

sudo systemctl status k3s-agent

Step 4: Verify the Cluster

Back on pi1, check all nodes:

kubectl get nodes

Expected output:

NAME   STATUS   ROLES                  AGE   VERSION
pi1    Ready    control-plane,master   5m    v1.xx.x+k3s1
pi2    Ready    <none>                 2m    v1.xx.x+k3s1
pi3    Ready    <none>                 1m    v1.xx.x+k3s1

For more details:

kubectl get nodes -o wide

Post-Installation Setup

Label Worker Nodes

Give worker nodes a proper role label:

kubectl label node pi2 node-role.kubernetes.io/worker=worker
kubectl label node pi3 node-role.kubernetes.io/worker=worker

Now kubectl get nodes shows:

NAME   STATUS   ROLES                  AGE   VERSION
pi1    Ready    control-plane,master   10m   v1.xx.x+k3s1
pi2    Ready    worker                 7m    v1.xx.x+k3s1
pi3    Ready    worker                 6m    v1.xx.x+k3s1

Test Your Cluster

Deploy a Test Application

Create a deployment:

kubectl create deployment nginx --image=nginx --replicas=3

Expose it as a service:

kubectl expose deployment nginx --type=NodePort --port=80

Check the deployment:

kubectl get pods -o wide
kubectl get svc nginx

Get the NodePort and access nginx using any Pi’s IP:

# Get the port
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'

# Access it (example: http://192.168.1.159:30080)
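
Because NodePort publishes the same port on every node, the URL works against pi1, pi2, or pi3 interchangeably. A small sketch with placeholder values; on a live cluster, fill NODE_PORT from the jsonpath command above:

```shell
# Placeholder values - on a live cluster, fill NODE_PORT from:
#   kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
NODE_IP=192.168.1.159
NODE_PORT=30080
URL="http://${NODE_IP}:${NODE_PORT}"
echo "$URL"
# On a live cluster: curl -s "$URL" | grep "Welcome to nginx"
```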

Clean Up

kubectl delete deployment nginx
kubectl delete service nginx

Useful Commands

Cluster Management

# View all nodes
kubectl get nodes

# View all pods across namespaces
kubectl get pods -A

# View cluster info
kubectl cluster-info

# View K3s logs on control plane
sudo journalctl -u k3s -f

# View K3s logs on worker nodes
sudo journalctl -u k3s-agent -f

Service Management

# Restart K3s on control plane
sudo systemctl restart k3s

# Restart K3s on worker nodes
sudo systemctl restart k3s-agent

# Stop K3s
sudo systemctl stop k3s        # Control plane
sudo systemctl stop k3s-agent  # Worker nodes
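
If a node ever needs a clean slate, the install script also leaves uninstall helpers behind at standard locations. Echoed here for review; run them directly (they remove K3s and its data from that node):

```shell
# The K3s installer drops uninstall scripts on each node; echoed for review.
echo "control plane: sudo /usr/local/bin/k3s-uninstall.sh"
echo "workers:       sudo /usr/local/bin/k3s-agent-uninstall.sh"
```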

Troubleshooting

Node Not Joining Cluster

1. Check firewall on pi1:

sudo ufw status

If active, allow K3s ports:

sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp

2. Verify the token is correct:

# On pi1
sudo cat /var/lib/rancher/k3s/server/node-token

3. Check connectivity from worker:

# From pi2 or pi3
ping 192.168.1.159
curl -k https://192.168.1.159:6443
# A 401 Unauthorized response here is fine - it proves the API port is reachable

Node Shows NotReady

Check the logs:

# On control plane
sudo journalctl -u k3s -n 50

# On worker node
sudo journalctl -u k3s-agent -n 50

Pods Not Starting

Check pod events:

kubectl describe pod <pod-name>
kubectl get events --sort-by='.lastTimestamp'

What’s Included with K3s

K3s comes batteries-included:

  • Traefik - Ingress controller for exposing services
  • CoreDNS - Cluster DNS
  • local-path-provisioner - Persistent storage using local disks
  • Metrics Server - Resource metrics for pods and nodes
  • ServiceLB - Load balancer for bare-metal

All configured and ready to use out of the box.


Next Steps

With your cluster running, you can:

  1. Remote Access - Copy ~/.kube/config to your laptop to manage remotely
  2. Deploy Apps - Use Helm charts or kubectl manifests
  3. Set Up Ingress - Configure Traefik for external access
  4. Add Storage - Configure NFS or Longhorn for distributed storage
  5. Install Monitoring - Deploy Prometheus and Grafana
  6. Try GitOps - Set up ArgoCD or Flux for declarative deployments
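
One detail for step 1: K3s writes its kubeconfig with `server: https://127.0.0.1:6443`, which only works on pi1 itself, so the address must be rewritten before the file is usable from a laptop. A sketch working on a scratch file; on a real setup, fetch the original from pi1 first (e.g. `scp`, noting the file is root-readable, so copy it to your home directory with sudo on pi1 beforehand):

```shell
# Rewrite the K3s kubeconfig server address for remote use.
CONTROL_PLANE_IP=192.168.1.159

# Scratch stand-in for /etc/rancher/k3s/k3s.yaml fetched from pi1.
cat > ./k3s.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
EOF

# Point the kubeconfig at pi1's LAN IP instead of loopback.
sed -i "s/127\.0\.0\.1/${CONTROL_PLANE_IP}/" ./k3s.yaml
grep 'server:' ./k3s.yaml
```

Save the result as ~/.kube/config on your laptop (or point KUBECONFIG at it) and kubectl will talk to the cluster.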

Resource Usage

K3s is remarkably lightweight:

| Component      | RAM Usage |
|----------------|-----------|
| Control Plane  | ~512MB    |
| Worker Agent   | ~256MB    |
| Total (3-node) | ~1GB      |

Compare that to a full Kubernetes cluster that needs 2-4GB per node minimum.


Why K3s for Homelab?

  • Lightweight - Runs on low-power devices
  • Simple - Single binary, easy install
  • Real Kubernetes - Same APIs, same tools
  • Production-ready - Used in edge and IoT deployments
  • Active Community - Backed by SUSE/Rancher

For learning Kubernetes, testing deployments, or running actual workloads at home, K3s on Raspberry Pi hits the sweet spot of capability vs. cost.


Running Kubernetes doesn’t require a data center. Three Raspberry Pis, a weekend afternoon, and you’ve got a fully functional cluster. Happy homelabbing.
