K3s on DigitalOcean, fully automated
In this tutorial we use Terraform and Ansible to deploy a K3s cluster on inexpensive DigitalOcean Droplets.
We’re going to build a scalable, automated cluster deployment that’s perfect for testing and training.
K3s is a lightweight yet fully featured Kubernetes distribution from the Rancher project.
Kubernetes may be the cloud-portability saviour we are all looking for! Clusters consist of a control plane and many “worker nodes”; the goal of this deployment is to allow N workers to be provisioned and automatically connected to the cluster controller.
Bootstrapping
We are using DigitalOcean, Terraform, Ansible and K3s as the tech stack to make this deployment a reality. If you want more information about when and where these different technologies shine, check out my article about the past, present and future of infra-as-code tools and ideas.
First we use Terraform to deploy a controller and a worker node.
# Controller
resource "digitalocean_droplet" "k3sController" {
  image     = "debian-12-x64" # doctl compute image list-distribution
  name      = "debian12-k3sController"
  region    = "sfo3"
  ipv6      = false
  size      = "s-2vcpu-2gb" # doctl compute size list | awk '{print $1}'
  ssh_keys  = [999] # doctl compute ssh-key list | awk '{print $1}'
  tags      = ["name:k3sController"]
  user_data = file("setup.sh")
}

output "k3sController" {
  value = digitalocean_droplet.k3sController[*].ipv4_address
}

# Worker
resource "digitalocean_droplet" "k3sWorker" {
  image     = "debian-12-x64" # doctl compute image list-distribution
  name      = "debian12-k3sWorker"
  region    = "sfo3"
  ipv6      = false
  size      = "s-1vcpu-1gb" # doctl compute size list | awk '{print $1}'
  ssh_keys  = [999] # doctl compute ssh-key list | awk '{print $1}'
  tags      = ["name:k3sWorker"]
  user_data = file("setup.sh")
}

output "k3sWorker" {
  value = digitalocean_droplet.k3sWorker[*].ipv4_address
}
These two resources invoke the same user_data script, which does the initial setup and bootstrapping of Docker and other tools on each Droplet.
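The exact contents of setup.sh are up to you; a minimal sketch might look like the following, installing curl for the K3s installer and the python3-kubernetes package that the kubernetes.core.k8s Ansible module (used later on) needs on the controller.
#!/bin/bash
# setup.sh - runs once at first boot via cloud-init user_data (a sketch; adjust to taste)
set -eux
apt-get update
# curl is used by the K3s install script; python3-kubernetes is needed later by
# the kubernetes.core.k8s Ansible module on the controller
apt-get install -y curl python3-kubernetes
# Docker is optional here: K3s ships with containerd, but add docker.io if you rely on it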
Later we use Ansible to perform specific setup on each node.
To set up the workers, we need to reference some variables set on the controller. We want to make this process as automated as possible.
Terraform is great for managing cloud state, but once we get into the realm of multi-server orchestration, I think Ansible is the right tool for the job.
Let’s build an inventory file for Ansible to target our systems using groups. We’ll also give each host an alias so we can target individual systems by name instead of by IP.
echo "[controller]" > inventory
echo "controller01 ansible_host="`terraform output -json | jq '.k3sController.value[0]'` >> inventory
echo "[workers]" >> inventory
echo "worker01 ansible_host="`terraform output -json | jq '.k3sWorker.value[0]'` >> inventory
Now that we have an inventory file, we can confirm connectivity:
ansible -u root -i inventory -m ping all
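A successful run prints something along these lines for each host:
controller01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
worker01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}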
The response “pong” means our hosts received and executed our instructions, and we are good to go! This style of setup lets us configure as many or as few controllers and workers as we want, with the behaviour remaining consistent across the cluster.
To achieve this, I use an Ansible playbook which retrieves data from one server and sends it to the other system.
I think this is a really powerful feature of Ansible: the ability to dynamically move data between the systems we are orchestrating, sequentially and verifiably.
We are specifically interested in the controller’s node token and its address as recorded in the inventory. These values need to be passed to the workers before they can be configured.
# ansible-setup.yaml
- hosts: controller
  gather_facts: true
  tasks:
    - name: Setup controller
      shell: "curl -sfL https://get.k3s.io | sh -"
      register: setup
    - debug: var=setup
    - name: Get contents of token file
      shell: "cat /var/lib/rancher/k3s/server/node-token"
      register: token
    - set_fact:
        token: "{{ token.stdout }}"

- hosts: workers
  gather_facts: true
  tasks:
    - name: Install the node agent
      shell: "curl -sfL https://get.k3s.io | K3S_URL=https://{{ hostvars['controller01'].ansible_host }}:6443 K3S_TOKEN={{ hostvars['controller01'].token }} sh -"
      register: setup
    - debug: var=setup
ansible-playbook -u root -i inventory ansible-setup.yaml
When we execute the playbook, we first connect to the group containing the systems labeled as controllers (in this case, a single system). We register the token as a variable and then promote it to an Ansible fact.
Then we connect to the systems in the workers group. Here we fetch the “token” and “ansible_host” variables registered on ‘controller01’ and use them on each worker to install the agent and join the cluster.
If the setup scripts succeed, we should have a fully functioning K3s cluster with one worker node.
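We can confirm the worker has joined by asking the controller for its node list, for example with an ad-hoc command (the K3s installer places kubectl on the controller):
ansible -u root -i inventory controller -m shell -a "kubectl get nodes"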
Expanding the worker node pool should be as simple as increasing the count in Terraform and exporting the subsequent IPs. Regenerating the inventory is currently a manual step; in a production environment we would use a mechanism which automatically finds systems we have launched and labeled, and works to provision them based on their label.
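In the meantime, one way to handle more workers would be to add a count to the k3sWorker resource and replace the two worker lines in the inventory script with a loop over the output list; a rough sketch:
echo "[workers]" >> inventory
i=1
for ip in $(terraform output -json | jq -r '.k3sWorker.value[]'); do
  echo "worker0${i} ansible_host=${ip}" >> inventory
  i=$((i+1))
done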
Kubernetes Dashboard
The final step is to begin installing useful services on our cluster. We can do this elegantly with the Ansible kubernetes.core.k8s module, but be aware that it requires the Python kubernetes library (the python3-kubernetes package) to be present on the remote host. The best place to take care of this dependency is in the user_data setup.sh, as sketched earlier.
- hosts: controller
  vars:
    ansible_host_key_checking: false
  tasks:
    - name: Install dashboard
      shell: "kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml"
    - name: Set up service account
      kubernetes.core.k8s:
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        state: present
        definition:
          apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: admin-user
            namespace: kubernetes-dashboard
    - name: Create a cluster role binding
      kubernetes.core.k8s:
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        state: present
        definition:
          apiVersion: rbac.authorization.k8s.io/v1
          kind: ClusterRoleBinding
          metadata:
            name: admin-user
          roleRef:
            apiGroup: rbac.authorization.k8s.io
            kind: ClusterRole
            name: cluster-admin
          subjects:
            - kind: ServiceAccount
              name: admin-user
              namespace: kubernetes-dashboard
    - name: Create a long-lived bearer token in the secrets store
      kubernetes.core.k8s:
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          metadata:
            name: admin-user
            namespace: kubernetes-dashboard
            annotations:
              kubernetes.io/service-account.name: "admin-user"
          type: kubernetes.io/service-account-token
    - name: Retrieve and decode the dashboard token
      shell: "kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath='{.data.token}' | base64 -d"
      register: dashboard_token
    - debug: var=dashboard_token.stdout
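This playbook is run the same way as the first one; assuming we saved it as dashboard.yaml:
ansible-playbook -u root -i inventory dashboard.yaml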
We can now copy the contents of /etc/rancher/k3s/k3s.yaml into our local ~/.kube/config. Note that the file points the client at https://127.0.0.1:6443, so the server address needs to be replaced with the controller’s public IP.
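For example (a sketch assuming GNU sed; on macOS pass an empty suffix to -i):
CONTROLLER_IP=$(terraform output -json | jq -r '.k3sController.value[0]')
scp root@${CONTROLLER_IP}:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i "s/127.0.0.1/${CONTROLLER_IP}/" ~/.kube/config
With the kubeconfig in place, run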
kubectl proxy
And browse to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
The decoded dashboard_token value printed by the playbook can be used to authenticate with the dashboard app.
Conclusion
This forms a basic cluster for orchestrating K8s deployments. In the next article I’ll cover deploying an application utilizing K3s’s Traefik ingress controller, and dive into some of the cool features unlocked by K8s.
Let me know your thoughts or ideas in the comments. Full source code behind this project is available on GitHub.