Kubernetes is a widely used orchestration system for managing containerized deployments. While it provides a multitude of essential features needed for modern applications, its scope and complexity can be exhausting for beginners. The setup of Kubernetes clusters in particular can be difficult and time-consuming, or lead to insecure installations. The k3s distribution aims to make deploying Kubernetes clusters easier, providing highly automated installation and management tools to set up and maintain them.
Why k3s?
Installing Kubernetes is such a complex task that a lot of automation tools have gained traction: kops, kubespray, kubeadm, the list goes on. They all aim to ease the installation process of production-grade Kubernetes clusters. A tool that is often misconstrued as one of them is k3s. But that's not the entire story: k3s is not just an installation tool (although it does heavily automate installations) - it is an entire CNCF-certified Kubernetes distribution. While it provides the same features as the mainstream k8s distribution, it differs in a few key aspects:
- it is packaged as a single binary
- its resource usage is reduced, allowing it to run even on low-power devices like the Raspberry Pi
- it can run with different datastores, using SQLite by default
- its installation script sets up an opinionated cluster instance, with the Traefik ingress controller, the ServiceLB load balancer and a Helm controller installed by default
Creating a simple cluster
K3s divides the nodes in a cluster into two distinct types:
- Server: This type of node is a Kubernetes master, typically hosting the control plane and datastore
- Agent: This node type is a Kubernetes worker, designed to be orchestrated by masters and to run application workloads
While it is perfectly possible to host deployments on master nodes, it is not always a good idea: high resource usage may bring down the master, causing issues throughout the entire cluster.
A cluster consists of one or more servers, and optionally any number of agents.
Before installing, we need to ensure our hosts meet the minimum system requirements: a properly configured hostname and curl installed. On a Debian-based system, this is trivial:
hostnamectl set-hostname master1.myk8s.com
apt install -y curl
For our first Kubernetes cluster, we will have one master named master1.myk8s.com and one agent named worker1.myk8s.com. We also assume these domains are valid and point to the correct server IP addresses.
The master1.myk8s.com node is installed with the following command:
curl -sfL https://get.k3s.io | K3S_TOKEN=mynodetoken sh -
You should set your own value for the K3S_TOKEN environment variable instead of mynodetoken, as it is used to connect nodes to this cluster. It should be a secret, reasonably long string that can't be guessed easily. With this token, you can then add an agent by running this command from worker1.myk8s.com:
curl -sfL https://get.k3s.io | K3S_URL=https://master1.myk8s.com:6443 K3S_TOKEN=mynodetoken sh -
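If you don't want to invent a token by hand, a hard-to-guess value can be generated on the shell. This is just a sketch, assuming openssl is installed on the host:

```shell
# Generate 32 random bytes and encode them as 64 hex characters -
# long and random enough that it can't be guessed easily.
K3S_TOKEN=$(openssl rand -hex 32)
echo "$K3S_TOKEN"
```

Use the same generated value on the server and every agent that should join the cluster.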
And that's all! You now have a fully working Kubernetes deployment with one control-plane and one worker node ready to go. You can access it by running kubectl from master1.myk8s.com.
Creating a highly-available cluster
Since one of Kubernetes' main features is high availability, deploying multiple master nodes is usually desirable. This also requires switching from the default SQLite datastore to an embedded etcd cluster. K3s is capable of this with a slightly adjusted installation command. For the first master server, you need to add the --cluster-init flag to its arguments:
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --cluster-init
Remember to pick a secret and secure K3S_TOKEN value!
Next, you add your other master nodes by running this command on each of them:
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --server https://master1.myk8s.com:6443
Replace master1.myk8s.com with the domain of your first master server.
Lastly, you can add any number of agent nodes with this command:
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - agent --server https://master1.myk8s.com:6443
Connecting to the cluster remotely
You will often want to interact with clusters from remote machines, such as deployment servers or local development environments. To achieve this, you can simply copy the file /etc/rancher/k3s/k3s.yaml from master1.myk8s.com to your local machine at ~/.kube/config. Before you can use kubectl locally, you need to slightly adjust this file by changing the line
server: https://127.0.0.1:6443
to
server: https://master1.myk8s.com:6443
(assuming master1.myk8s.com is your master server).
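The edit can also be scripted with sed. The following sketch demonstrates the substitution on a stand-in file rather than your real kubeconfig; the scp step is shown commented out, and master1.myk8s.com is our example hostname:

```shell
# In practice, fetch the kubeconfig from the master first, e.g.:
#   scp root@master1.myk8s.com:/etc/rancher/k3s/k3s.yaml ~/.kube/config
cfg=$(mktemp)                                     # stand-in for ~/.kube/config
echo 'server: https://127.0.0.1:6443' > "$cfg"    # the line as written by k3s
sed -i 's|127.0.0.1|master1.myk8s.com|' "$cfg"    # point kubectl at the master
cat "$cfg"    # -> server: https://master1.myk8s.com:6443
```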
And that's all you need to do to get a production-ready Kubernetes cluster up and running. Of course, there are many more options to this installation process if you are unhappy with the k3s defaults or want a different ingress or network manager. Have a look at the k3s configuration options if you want something fancier.
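For example, the bundled Traefik ingress controller can be left out at install time via the server's --disable flag, leaving you free to install a different ingress controller afterwards. A sketch, following the same installation pattern as above:

```shell
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --disable traefik
```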