Automating RKE2 kubernetes installation with ansible

Table of contents

Kubernetes is a preferred deployment choice for companies of all sizes, with features that enable operators to easily scale applications to any size necessary. While interacting with a kubernetes cluster is fairly simple using tools like helm, many administrators shy away from running their own clusters, since the setup process can be quite involved. This guide aims to provide an automated installation procedure that works for most users.

Planing & infrastructure choices

In order to make this guide as easily approachable as possible, we will assume the following for the cluster setup:

  • your local machine (not the nodes for the cluster) has kubectl and ansible installed (and optionally helm if you want longhorn volumes)
  • all nodes are running a recent version of linux debian, with an SSH server enabled and running
  • the cluster has no private network available, all communication happens over the internet (no control over switches/routes either)

To meet our requirements, we make these configuration choices:

  • RKE2 as the kubernetes distribution for ease of use and reliable defaults
  • calico as the networking plugin (cni), because it encrypts traffic by default and needs no extra hardware/vxlan setup
  • [optional] longhorn for automatic persistent volumes using the local disks on nodes, without any external storage provider

With these choices made, we are ready for the ansible setup.

Inventory and node prerequisites

Before we can run any automated tasks, we need to define an inventory for all our machines and ensure we can reach them. It will look like this:

inventory.yaml

all:
  vars:
    primary_master: master1
  hosts:
    master1:
      ansible_host: 192.168.56.101
      hostname: master1.rke2.local
    master2:
      ansible_host: 192.168.56.102
      hostname: master2.rke2.local
    master3:
      ansible_host: 192.168.56.103
      hostname: master3.rke2.local
    worker1:
      ansible_host: 192.168.56.151
      hostname: worker1.rke2.local
    worker2:
      ansible_host: 192.168.56.152
      hostname: worker2.rke2.local
    worker3:
      ansible_host: 192.168.56.153
      hostname: worker3.rke2.local
  children:
    masters:
      hosts:
        master1:
        master2:
        master3:
    workers:
      hosts:
        worker1:
        worker2:
        worker3:

The sample inventory defines 6 nodes, 3 masters for etcd and the control plane service, and 3 workers to run deployed pods and services.Every node defines a hostname key, because RKE2 requires a unique hostname for all nodes in the kubernetes cluster, so by automatically setting them from the playbook we ensure no node was accidentally forgotten or received a duplicate hostname value.

One master node, master1 in this example, is defined as the primary_master in the inventory variable section. Even though the masters will later be highly available, we need one master node to initially start the new cluster. Other nodes can then use the primary master's address to join the newly created cluster. Once the cluster is created, nodes can freely connect to any of the master servers to (re-)join the cluster.

When you are done filling the inventory file with all nodes you want in the kubernetes cluster, try pinging them:

ansible -i inventory.yaml -m ping all

Ensure all nodes are reachable, to prevent authentication issues later.

Validate inventory & primary master

Since the primary_master node needs to be an existing member of the masters group, we start with an assertion to fail early if the inventory is misconfigured:

validate.yaml

- name: Validate primary master and distribute variable
  hosts: all
  gather_facts: no
  tasks:
   - name: Check if primary_master is defined
     assert:
       that:
         - primary_master is defined
         - primary_master in groups['masters']
       fail_msg: "primary_master must be a valid master node"

With this check in place, we can be sure that there are no typos or missing values in the primary_master variable.

Node preparation

It's time for the first part of the playbook, where we install base requirements on all nodes and ensure they are ready for the RKE2 installation. Specifically, we need to set the hostname from the inventory.yaml, install required packages and start services if necessary:

prepare_nodes.yaml

- name: Prepare cluster nodes
  hosts: all
  tasks:
   - name: Set hostname
     become: true
     ansible.builtin.hostname:
       name: "{{ hostname }}"
       use: systemd

   - name: Update package cache
     become: true
     ansible.builtin.apt:
       name: "*"
       state: latest
       update_cache: True

   - name: Install base packages
     become: true
     ansible.builtin.apt:
       name:
        - curl
        - open-iscsi
        - nfs-common
        - bash
        - cryptsetup
        - dmsetup
       state: present

   - name: Start open-iscsi service
     become: true
     ansible.builtin.service:
       name: iscsid
       state: started
       enabled: true

Even though RKE2 would only need the bash and curl packages, we included the dependencies for longhorn as well, so we can later skip dependency installation should you want to use persistent volumes. If not, having these installed won't hurt your cluster. The iscsid service is also needed for longhorn storage.

Preparing master nodes

Before we can start installing the cluster, we need to install the rke2 server onto all master nodes and create a new fact that holds the addresses of all master nodes for the tls-san config of RKE2. The tls-san specifies which master servers can later be accessed with the generated kubeconfig, and using all master nodes ensures we can still interact with the cluster if at least one master survives an outage.

prepare_masters.yaml

- name: Prepare master nodes
  hosts: masters
  tasks:
   - name: Install RKE2 binaries
     become: true
     ansible.builtin.shell:
       cmd: curl -sfL https://get.rke2.io | sh -
       creates: /usr/local/bin/rke2

   - name: Collect the addresses of all master nodes
     set_fact:
       master_ips: "{{ master_ips | default([]) + [ansible_ssh_host] }}"
     delegate_to: "{{ item }}"
     with_inventory_hostnames: masters

Note the use of creates to prevent re-downloading the executable if it already exists. The second task loops through all hosts in the masters group and adds their IPs to the master_ips fact on all master nodes, so they can later access it when generating their own configuration.

Starting the first master node

With all the setup out of the way, we are now ready to start the first master node to bootstrap the cluster. With RKE2, this step is necessary before other masters / agents can join and share load, which is why we previously set the primary_master fact.

start_primary_master.yaml

- name: Seed first cluster master
  hosts: "{{ hostvars['localhost']['primary_master'] }}"
  tasks:
   - name: Create cluster config dir
     become: true
     file:
       path: /etc/rancher/rke2
       state: directory

   - name: Set initial server config
     become: true
     copy:
       dest: /etc/rancher/rke2/config.yaml
       content: |
         cni: calico
         tls-san:
         {{ master_ips | to_nice_yaml }}
         node-external-ip:
         - "{{ ansible_ssh_host }}"
         node-ip:
         - "{{ ansible_ssh_host }}"

   - name: Start server instance
     become: true
     ansible.builtin.service:
       name: rke2-server
       state: started
       enabled: true

The play is restricted to the primary master only, where it creates the config dir /etc/rancher/rke2 and stores a config.yaml file in it. The config sets the node's ip dynamically, enforces the chosen calico networking interface and adds all master nodes to the tls-san, enabling kubernetes api access from all master nodes.

Note the verbose syntax for the hosts: field; unfortunately ansible can't use host variables directly because no host is selected at that stage, so we specifically pass in the host variable primary_master from a node, in this case localhost.

Once the config is ready, the rke2-server service is started and enabled at boot. To give the initial master a little time to pull needed container images and start services after initializing the cluster, you might consider waiting 2 minutes before ending the play, but our version doesn't since we are using decent hardware.

Exposing cluster config variables to other nodes

When the first master node started, it automatically generated a token to authenticate new members joining the cluster. This and the address of the primary master node are needed in all node configurations, so we need to expose them:

expose_cluster_config.yaml

- name: Expose cluster config to all nodes
  hosts: all
  tasks:
   - name: Read cluster node-token
     become: true
     slurp:
       src: /var/lib/rancher/rke2/server/node-token
     register: node_token_b64
     when: inventory_hostname == primary_master

   - name: Set cluster facts on all nodes
     set_fact:
       master_url: "https://{{ ansible_ssh_host }}:9345"
       node_token: "{{ node_token_b64.content | b64decode }}"
     delegate_to: "{{ primary_master }}"
     run_once: true

Once again we use a little trick to move the information from our primary master to all nodes, by executing the play on all nodes, but delegating execution to the primary master, and only running it once. This way the set_fact task has access to the variables in primary_master and can distribute it.

Joining the master nodes

With all the configuration out of the way, we can start the remaining master nodes and join them into the cluster

join_masters.yaml

- name: Join master nodes
  hosts: masters
  serial: true
  tasks:
   - name: Create config dir
     become: true
     file:
       path: /etc/rancher/rke2
       state: directory

   - name: Create cluster config file
     when: inventory_hostname != primary_master
     become: true
     copy:
       dest: /etc/rancher/rke2/config.yaml
       content: |
         server: "{{ master_url }}"
         token: "{{ node_token | trim }}"
         cni: calico
         tls-san:
         {{ master_ips | to_nice_yaml }}
         node-external-ip:
         - "{{ ansible_ssh_host }}"
         node-ip:
         - "{{ ansible_ssh_host }}"

   - name: Start server instance
     become: true
     ansible.builtin.service:
       name: rke2-server
       state: started
       enabled: true

Similar to the primary master node, we create the config dir and store a config.yaml file in it. Only this time, it includes two more keys, namely server: and token: to contact and authenticate with the primary master and join the cluster. Note that the play is marked as serial: true, to ensure masters start and join one after another and not all at once, since etcd may take a moment to fully accept a new master node and we don't want to overload it.

Joining the worker nodes

The last missing nodes are the workers, which are very similar to the masters from the previous step, but with a slightly different install command and less configuration contents:

join_workers.yaml

- name: Join worker nodes
  hosts: workers
  tasks:
   - name: Install RKE2 binaries
     become: true
     ansible.builtin.shell:
       cmd: curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
       creates: /usr/local/bin/rke2

   - name: Create cluster config dir
     become: true
     file:
       path: /etc/rancher/rke2
       state: directory

   - name: Set initial server config
     become: true
     copy:
       dest: /etc/rancher/rke2/config.yaml
       content: |
         server: "{{ master_url }}"
         token: "{{ node_token | trim }}"
         node-external-ip:
         - "{{ ansible_ssh_host }}"
         node-ip:
         - "{{ ansible_ssh_host }}"

   - name: Start server instance
     become: true
     ansible.builtin.service:
       name: rke2-agent
       state: started
       enabled: true

Note the environment variable used to install RKE2 as an agent (aka worker) rather than a server (aka master). Since the workers won't manage the cluster they don't need the tls-san and cni config keys, and can all be started at once since they won't join the master's etcd cluster, thus causing much less load on the system.

Retrieving the kubeconfig

The cluster is now fully operational, but we can't access it yet. To do so, we need the kubeconfig file generated by the primary master when it first started. Keeping with the spirit of ansible, retrieving it can be just another play in our playbook:

kubeconfig.yaml

- name: Update kubeconfig to use primary_master IP
  hosts: localhost
  tasks:
   - name: Get primary_master's ansible_ssh_host IP
     set_fact:
       primary_master_ip: "{{ ansible_ssh_host }}"
     delegate_to: "{{ primary_master }}"

   - name: Read kubeconfig from primary_master
     become: true
     slurp:
       src: /etc/rancher/rke2/rke2.yaml
     register: kubeconfig_b64
     delegate_to: "{{ primary_master }}"

   - name: Decode kubeconfig content
     set_fact:
       kubeconfig: "{{ kubeconfig_b64.content | b64decode }}"

   - name: Replace default IP (127.0.0.1) with primary_master IP
     set_fact:
       kubeconfig_updated: "{{ kubeconfig | regex_replace('127.0.0.1', primary_master_ip) }}"

   - name: Write the updated kubeconfig file to localhost
     copy:
       dest: "{{ ansible_env.HOME }}/.kube/config"
       content: "{{ kubeconfig_updated }}"
       mode: '0644'

Be warned that this overrides any existing kubernetes cluster config you had previously! Adjust the path in the last task to store the kubeconfig somewhere else, without touching your existing one. Because RKE2 always generates the kubeconfig with the server address set to 127.0.0.1 (aka localhost), we replace that with the address or our primary master, so we can reach the api from our local machine.

If everything worked, you can now test your new cluster:

kubectl get nodes

It should print a list of all master and worker nodes.

Optional: Installing longhorn for persistent volumes

Since most kubernetes clusters need a form of persistent storage, we include this optional play here. We assume your local machine has successfully installed the kubeconfig from the previous step and you have helm installed.

longhorn.yaml

- name: Install Longhorn using Helm
  hosts: localhost
  gather_facts: false
  tasks:
   - name: Add Longhorn Helm repository
     kubernetes.core.helm_repository:
       name: longhorn
       repo_url: https://charts.longhorn.io

   - name: Install Longhorn chart
     kubernetes.core.helm:
       name: longhorn
       chart_ref: longhorn/longhorn
       release_namespace: longhorn-system
       create_namespace: true
       update_repo_cache: true
       chart_version: "1.8.0"
       state: present

Longhorn uses the disks on the kubernetes member nodes to provide highly-available persistent volumes without any additional configuration necessary, making it a perfect choice for our automated installation. The play adds the longhorn chart, updates it's chart cache and installs the 1.8.0 release of longhorn. Check out their release page if you want a more recent version, all you need to do is adjust the chart_version: field in the last task.

Automating every step

If you like automation, you can combine all these individual playbooks into a single one and run it without any human input.

- name: Validate primary master and distribute variable
  hosts: all
  gather_facts: no
  tasks:
   - name: Check if primary_master is defined
     assert:
       that:
         - primary_master is defined
         - primary_master in groups['masters']
       fail_msg: "primary_master must be a valid master node"

- name: Prepare cluster nodes
  hosts: all
  tasks:
   - name: Set hostname
     become: true
     ansible.builtin.hostname:
       name: "{{ hostname }}"
       use: systemd
   - name: Update package cache
     become: true
     ansible.builtin.apt:
       name: "*"
       state: latest
       update_cache: True
   - name: Install base packages
     become: true
     ansible.builtin.apt:
       name:
        - curl
        - open-iscsi
        - nfs-common
        - bash
        - cryptsetup
        - dmsetup
       state: present
   - name: Start open-iscsi service
     become: true
     ansible.builtin.service:
       name: iscsid
       state: started
       enabled: true

- name: Prepare master nodes
  hosts: masters
  tasks:
   - name: Install RKE2 binaries
     become: true
     ansible.builtin.shell:
       cmd: curl -sfL https://get.rke2.io | sh -
       creates: /usr/local/bin/rke2
   - name: Collect the addresses of all master nodes
     set_fact:
       master_ips: "{{ master_ips | default([]) + [ansible_ssh_host] }}"
     delegate_to: "{{ item }}"
     with_inventory_hostnames: masters

- name: Seed first cluster master
  hosts: "{{ hostvars['localhost']['primary_master'] }}"
  tasks:
   - name: Create cluster config dir
     become: true
     file:
       path: /etc/rancher/rke2
       state: directory
   - name: Set initial server config
     become: true
     copy:
       dest: /etc/rancher/rke2/config.yaml
       content: |
         cni: calico
         tls-san:
         {{ master_ips | to_nice_yaml }}
         node-external-ip:
         - "{{ ansible_ssh_host }}"
         node-ip:
         - "{{ ansible_ssh_host }}"
   - name: Start server instance
     become: true
     ansible.builtin.service:
       name: rke2-server
       state: started
       enabled: true

- name: Expose cluster config to all nodes
  hosts: all
  tasks:
   - name: Read cluster node-token
     become: true
     slurp:
       src: /var/lib/rancher/rke2/server/node-token
     register: node_token_b64
     when: inventory_hostname == primary_master
   - name: Set cluster facts on all nodes
     set_fact:
       master_url: "https://{{ ansible_ssh_host }}:9345"
       node_token: "{{ node_token_b64.content | b64decode }}"
     delegate_to: "{{ primary_master }}"
     run_once: true

- name: Join master nodes
  hosts: masters
  serial: true
  tasks:
   - name: Create config dir
     become: true
     file:
       path: /etc/rancher/rke2
       state: directory
   - name: Create cluster config file
     when: inventory_hostname != primary_master
     become: true
     copy:
       dest: /etc/rancher/rke2/config.yaml
       content: |
         server: "{{ master_url }}"
         token: "{{ node_token | trim }}"
         cni: calico
         tls-san:
         {{ master_ips | to_nice_yaml }}
         node-external-ip:
          - "{{ ansible_ssh_host }}"
         node-ip:
          - "{{ ansible_ssh_host }}"
   - name: Start server instance
     become: true
     ansible.builtin.service:
       name: rke2-server
       state: started
       enabled: true

- name: Join worker nodes
  hosts: workers
  tasks:
   - name: Install RKE2 binaries
     become: true
     ansible.builtin.shell:
       cmd: curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
       creates: /usr/local/bin/rke2
   - name: Create cluster config dir
     become: true
     file:
       path: /etc/rancher/rke2
       state: directory
   - name: Set initial server config
     become: true
     copy:
       dest: /etc/rancher/rke2/config.yaml
       content: |
         server: "{{ master_url }}"
         token: "{{ node_token | trim }}"
         node-external-ip:
         - "{{ ansible_ssh_host }}"
         node-ip:
          - "{{ ansible_ssh_host }}"
   - name: Start server instance
     become: true
     ansible.builtin.service:
       name: rke2-agent
       state: started
       enabled: true

- name: Update kubeconfig to use primary_master IP
  hosts: localhost
  tasks:
   - name: Get primary_master's ansible_ssh_host IP
     set_fact:
       primary_master_ip: "{{ ansible_ssh_host }}"
     delegate_to: "{{ primary_master }}"
   - name: Read kubeconfig from primary_master
     become: true
     slurp:
       src: /etc/rancher/rke2/rke2.yaml
     register: kubeconfig_b64
     delegate_to: "{{ primary_master }}"
   - name: Decode kubeconfig content
     set_fact:
       kubeconfig: "{{ kubeconfig_b64.content | b64decode }}"
   - name: Replace default IP (127.0.0.1) with primary_master IP
     set_fact:
       kubeconfig_updated: "{{ kubeconfig | regex_replace('127.0.0.1', primary_master_ip) }}"
   - name: Write the updated kubeconfig file to localhost
     copy:
       dest: "{{ ansible_env.HOME }}/.kube/config"
       content: "{{ kubeconfig_updated }}"
       mode: '0644'

- name: Install Longhorn using Helm
  hosts: localhost
  gather_facts: false
  tasks:
   - name: Add Longhorn Helm repository
     kubernetes.core.helm_repository:
       name: longhorn
       repo_url: https://charts.longhorn.io
   - name: Install Longhorn chart
     kubernetes.core.helm:
       name: longhorn
       chart_ref: longhorn/longhorn
       release_namespace: longhorn-system
       create_namespace: true
       update_repo_cache: true
       chart_version: "1.8.0"
       state: present

Save the file locally as rke2-install.yaml and run

ansible-playbook -i inventory.yaml rke2-install.yaml

And after a few minutes, you have a fully working production-ready kubernetes cluster up and ready to use.

If you want to change add / remove nodes from your cluster, simply edit the inventory.yaml file and re-run the command. Only if the primary_master node changes (is replaced / removed), you must set a different node as primary master (one that was part of the old cluster, or else everything gets deleted and a new cluster will be set up).

More articles

Switching from docker to podman

Embracing modern and standardized container management

Writing a pastebin application in bash

You don't always need a heavy web framework

Load testing websites with siege

Measuring performance of web applications