Deploying a Production-Ready Kubernetes Cluster in 5 Minutes

Kubernetes is a widely used orchestration system for managing containerized deployments. While it provides a multitude of features essential to modern applications, its scope and complexity can be exhausting for beginners. The setup of Kubernetes clusters in particular can be difficult and time-consuming, or lead to insecure installations. The k3s distribution aims to make deploying Kubernetes clusters easier, providing highly automated installation and management tools to set up and maintain them.

Why k3s?

Installing Kubernetes is such a complex task that a lot of automation tools have gained traction: kops, kubespray, kubeadm, the list goes on. They all aim to ease the installation process of production-grade Kubernetes clusters. A tool that is often misconstrued as one of them is k3s. But that's not the entire story: k3s is not just an installation tool (although it does heavily automate installations) - it is an entire CNCF-certified Kubernetes distribution. While it provides the same features as the mainstream k8s distribution, it differs in a few key aspects:

  • it is packaged as a single binary
  • its resource usage is reduced, allowing it to run even on low-power devices like the Raspberry Pi
  • it can run with different datastores, using SQLite by default
  • its installation scripts set up an opinionated cluster instance, with the Traefik ingress controller, the ServiceLB load balancer and a Helm controller installed by default

Creating a simple cluster

K3s divides nodes in a cluster into two distinct types:

  1. Server: This node type is a Kubernetes master, typically hosting the control plane and datastore
  2. Agent: This node type is a Kubernetes worker, designed to be orchestrated by masters and to have applications deployed on it

While it is perfectly possible to host deployments on master nodes, it is not always a good idea: High resource usage may bring down the master, causing issues throughout the entire cluster.
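If you want to keep ordinary workloads off your servers entirely, the k3s installer (used in the next sections) accepts extra server arguments; a minimal sketch, assuming the CriticalAddonsOnly taint shown in the k3s high-availability documentation:

```shell
# Taint the server at install time so that only pods which explicitly
# tolerate CriticalAddonsOnly are scheduled on it; regular workloads
# will then only land on agent nodes.
curl -sfL https://get.k3s.io | K3S_TOKEN=mynodetoken sh -s - server \
  --node-taint CriticalAddonsOnly=true:NoExecute
```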

A cluster consists of one or more servers, and optionally any number of agents.

Before installing, we need to ensure our hosts meet the minimum system requirements: a properly configured hostname and curl installed. On a Debian-based system, this is trivial:

apt install curl -y
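The "properly configured hostname" requirement means each node needs a unique hostname that resolves to its address; a minimal sketch, assuming a systemd-based distribution and the placeholder domain master.example.com:

```shell
# Give the node a unique hostname (it becomes the Kubernetes node name)
hostnamectl set-hostname master.example.com

# Verify that the name resolves to the node's IP address
getent hosts master.example.com
```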

For our first Kubernetes cluster, we will have one master and one agent; in the commands below, we use the placeholder domains master.example.com and agent.example.com for them. We also assume these domains are valid and point to the correct server IP addresses.

The master node is installed with the following command, run on master.example.com:

curl -sfL https://get.k3s.io | K3S_TOKEN=mynodetoken sh -
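The mynodetoken value above is just a placeholder; one way to generate a stronger token (an assumption on my part - any sufficiently long random string works) is with openssl:

```shell
# Generate a 64-character random hex string to use as the cluster token
K3S_TOKEN=$(openssl rand -hex 32)
echo "$K3S_TOKEN"
```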

You should set your own value for the K3S_TOKEN environment variable, as it will be used to connect nodes to this cluster. It should be a secret and reasonably long string that can't be guessed easily. With this token, you can then add an agent by running this command from agent.example.com:

curl -sfL https://get.k3s.io | K3S_URL=https://master.example.com:6443 K3S_TOKEN=mynodetoken sh -

And that's all! You now have a fully working Kubernetes deployment with one control plane and one worker node ready to go. You can access it with kubectl from the master node.
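To sanity-check the result, you can list the nodes from the master; on k3s, kubectl is bundled into the k3s binary itself:

```shell
# On the master: both nodes should eventually report STATUS Ready
sudo k3s kubectl get nodes
```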

Creating a highly-available cluster

Since one of Kubernetes' main features is high availability, deploying multiple master nodes is usually desirable. This also requires switching from the default SQLite datastore to an embedded etcd cluster. K3s supports this with a slightly adjusted installation command. For the first master server, you need to add the --cluster-init flag to its arguments:

curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --cluster-init

Remember to pick a secret and secure K3S_TOKEN value!

Next, you add your other master nodes by running this command on each of them:

curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --server https://master1.example.com:6443

Replace master1.example.com with the domain of your first master server.

Lastly, you can add any number of agent nodes with this command:

curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - agent --server https://master1.example.com:6443

Connecting to the cluster remotely

You will often want to interact with clusters from remote machines, such as deployment servers or local development environments. To achieve this, you can simply copy the file /etc/rancher/k3s/k3s.yaml from the master server to your local machine at ~/.kube/config. Before you can use kubectl locally, you need to slightly adjust this file by changing the line

server: https://127.0.0.1:6443

to

server: https://master.example.com:6443

(assuming master.example.com is your master server).
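This edit can also be scripted; a minimal sketch, assuming the master.example.com placeholder from above and SSH access as root:

```shell
# Fetch the kubeconfig from the master and point it at the public address
scp root@master.example.com:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's|https://127.0.0.1:6443|https://master.example.com:6443|' ~/.kube/config
```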

And that's all you need to do to get a production-ready Kubernetes cluster up and running. Of course, there are many more options for this installation process if you are unhappy with the k3s defaults or want a different ingress controller or network manager. Have a look at the k3s configuration options if you want something fancier.
