Update local Vagrant cluster guides with prerequisites and installation instructions for Traefik and Longhorn

This commit is contained in:
jon brookes 2025-08-25 15:13:37 +01:00
parent c113ad1107
commit c1cea80b12
3 changed files with 81 additions and 59 deletions


@@ -3,17 +3,53 @@ title: Traefik ingress
description: A guide to adding ingress.
---
## Prerequisites
This section follows on from [Add Longhorn Storage](/guides/local-vagrant-cluster-storage/). Be sure to complete that before commencing with this.
If not already shelled into the workstation, from `/home/user/projects/infctl-cli` we need to change directory to
```bash
cd vagrant/dev/ubuntu/
```
so that we can SSH to the workstation:
```bash
vagrant ssh workstation
```
## Familiarise yourself with …
* [install_metallb.sh](https://codeberg.org/headshed/infctl-cli/src/branch/main/vagrant/dev/ubuntu/scripts/install_metallb.sh) applies a Kubernetes manifest to install metallb and then applies an `IPAddressPool` and an `L2Advertisement`, which `traefik` will later use to obtain a load balancer and expose services so that we can access them from outside of Kubernetes. This is a key part of MVK, and one which vanilla Kubernetes does not provide: a load balancer is typically expected to come from a managed Kubernetes cloud service.
* [install_traefik.sh](https://codeberg.org/headshed/infctl-cli/src/branch/main/vagrant/dev/ubuntu/scripts/install_traefik.sh) sets some custom variables to enable an internal dashboard and to configure the ingress ports, log level and load balancer.
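For orientation, the pair of metallb resources mentioned above typically look like the sketch below. The pool name and address range here are illustrative assumptions, not values taken from `install_metallb.sh`; we only generate the file, as applying it needs a live cluster.

```shell
# Illustrative only: names and the address range are assumptions,
# not the values used by install_metallb.sh.
cat <<'EOF' > /tmp/metallb-example.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.56.230-192.168.56.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - example-pool
EOF
# On a live cluster you would then apply it:
# kubectl apply -f /tmp/metallb-example.yaml
grep -c '^kind:' /tmp/metallb-example.yaml
```

The `L2Advertisement` tells metallb to answer ARP for addresses in the pool, which is what lets a `LoadBalancer` service get a reachable IP on the host-only network.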
## Run the pipelines …
The [metallb pipeline](https://codeberg.org/headshed/infctl-cli/src/branch/main/vagrant/dev/ubuntu/pipelines/vagrant-metallb.json) can be run with:
```bash
LOG_FORMAT=basic infctl -f pipelines/vagrant-metallb.json
```
Traefik ingress can be installed with:
```bash
LOG_FORMAT=basic infctl -f pipelines/vagrant-ingress.json
```
## Smoke test ingress
If all has gone well, we should now be able to get the service for `traefik` and see an external IP address and a type of `LoadBalancer`:
```bash
kubectl -n traefik get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik LoadBalancer 10.43.5.252 192.168.56.230 80:32066/TCP,443:32410/TCP 16s
```
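If you want just the external IP (for example to script a check or add a hosts entry), you can pull it out of the service listing. A minimal sketch, parsing the sample row above; on a live cluster you could instead pipe in `kubectl -n traefik get svc traefik --no-headers`:

```shell
# Parse the EXTERNAL-IP column (field 4) from a `kubectl get svc` row.
svc_row='traefik   LoadBalancer   10.43.5.252   192.168.56.230   80:32066/TCP,443:32410/TCP   16s'
external_ip=$(echo "$svc_row" | awk '{print $4}')
echo "$external_ip"
```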
Here the address `192.168.56.230` is available for ingress routing to our services, pods and Kubernetes-hosted apps, on both plain-text port 80 and TLS (HTTPS) port 443.
Next we will be able to assign a certificate to this second port so that `traefik` can serve URLs on it to our pods and apps.
Initially this will use a self-signed certificate, but we will be able to create our own CA and have a secure connection without browser warnings in our local vagrant MVK cluster environment.
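As a hedged preview of that self-signed starting point, a throwaway certificate can be produced with `openssl`; the common name and file paths here are placeholders, not what a later guide will necessarily use:

```shell
# Generate a short-lived self-signed certificate for a local domain.
# CN and filenames are illustrative assumptions.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/local.key -out /tmp/local.crt \
  -days 30 -subj '/CN=example.local' 2>/dev/null
# Inspect the subject to confirm what was issued
openssl x509 -in /tmp/local.crt -noout -subject
```

A certificate like this would be loaded into a Kubernetes TLS secret for `traefik` to serve; browsers will still warn until it is signed by a CA they trust.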


@@ -3,15 +3,19 @@ title: Longhorn storage Layer
description: A guide to adding Longhorn storage.
---
## Prerequisites
This follows on from [Create a vagrant 3 node cluster](/guides/local-vagrant-cluster/) so be sure to complete that section before proceeding with this one.
## Familiarise yourself with …
A pipeline file [vagrant-longhorn.json](https://codeberg.org/headshed/infctl-cli/src/branch/main/vagrant/dev/ubuntu/pipelines/vagrant-longhorn.json) uses
* [longhorn_prereqs.sh](https://codeberg.org/headshed/infctl-cli/src/branch/main/vagrant/dev/ubuntu/scripts/longhorn_prereqs.sh) which uses an ansible playbook, [install_longhorn_prereqs.yaml](https://codeberg.org/headshed/infctl-cli/src/branch/main/vagrant/dev/ubuntu/ansible/install_longhorn_prereqs.yaml)
* [install_longhorn.sh](https://codeberg.org/headshed/infctl-cli/src/branch/main/vagrant/dev/ubuntu/scripts/install_longhorn.sh) which installs Longhorn storage with `kubectl`
* [wait_for_longhorn.sh](https://codeberg.org/headshed/infctl-cli/src/branch/main/vagrant/dev/ubuntu/scripts/wait_for_longhorn.sh) which waits for the Longhorn setup to complete.
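The wait step follows a common shell pattern: poll until a condition holds. A minimal, self-contained sketch of that pattern (the real script's check and timeouts may differ; `check` here is a stand-in that succeeds on the third attempt):

```shell
# Generic wait-until-ready loop, as used when waiting for Longhorn pods.
# On a live cluster the check might be something like:
#   kubectl -n longhorn-system get pods --no-headers | grep -vq Running
attempt=0
check() { [ "$attempt" -ge 3 ]; }   # stand-in condition for this sketch
until check; do
  attempt=$((attempt + 1))
  echo "waiting... attempt $attempt"
  # sleep 5   # back off between polls on a real cluster
done
echo "ready after $attempt attempts"
```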
## Run the pipeline …
From the previous directory in our example, which was `/home/user/projects/infctl-cli`, we need to change directory with `cd vagrant/dev/ubuntu/`


@@ -3,7 +3,17 @@ title: Create a Local Vagrant K3s cluster
description: A guide to creating a virtualized local k3s instance.
---
# Prerequisites
MVK on vagrant represents a 'real' 3-node VM cluster as you would run in the cloud, but on a local, self-hosted environment.
This is developed and tested on a single machine with more than 16GB of memory, 8 CPUs and around a 500GB hard drive.
Your mileage may vary depending on CPU and memory, but in my experience 16GB or more of memory is better suited to this kind of work, and ideally more. It is not uncommon to use 32GB for a 'home lab' server or desktop these days.
# Getting Started
Clone the [`infctl-cli`](https://codeberg.org/headshed/infctl-cli) repo in order to have local access to its scripts, files and manifests to use later on.
Where you put this is up to you but we will work on the assumption that this will be in `$HOME/projects`:
@@ -14,31 +24,37 @@ git clone https://codeberg.org/headshed/infctl-cli.git
cd infctl-cli
```
## Familiarise yourself with ...
[./scripts/install_vagrant_nodes.sh](https://codeberg.org/headshed/infctl-cli/src/branch/main/scripts/install_vagrant_nodes.sh) will run `vagrant up` on what is to be your local cluster.
[`./scripts/configure_vagrant_k3s.sh`](https://codeberg.org/headshed/infctl-cli/src/branch/main/scripts/configure_vagrant_k3s.sh) checks and interrogates the vagrant hosts to create an inventory file for later use with Ansible.
[`vagrant/dev/ubuntu/Vagrantfile`](https://codeberg.org/headshed/infctl-cli/src/branch/main/vagrant/dev/ubuntu/Vagrantfile) is used by `vagrant` to coordinate the cluster and workstation build.
[`scripts/install_vagrant_workstation.sh`](https://codeberg.org/headshed/infctl-cli/src/branch/main/scripts/install_vagrant_workstation.sh) brings the workstation up after the server nodes have been established.
[`./vagrant/dev/ubuntu/ansible/provision_workstation.sh`](https://codeberg.org/headshed/infctl-cli/src/commit/bd222ce39e363bcdb536362d5dcb0699b1dbb2ee/vagrant/dev/ubuntu/ansible/provision_workstation.sh) is quite a bit longer and more involved than the previous scripts, as it is used by `vagrant` to provision our workstation and uses [ansible scripts](https://codeberg.org/headshed/infctl-cli/src/branch/main/vagrant/dev/ubuntu/ansible) to:
* [ansible ping](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/ping_module.html) the 3 cluster nodes
* [ansible-playbook](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html) install [`keepalived`](https://www.keepalived.org/) which MVK uses as a 'cheap loadbalancer' by which k3s can access the current, active k8s interface
* ansible-playbook install the 3 node k3s cluster
* ansible-playbook install the k8s config file onto the workstation for management of k8s
* ansible-playbook install [`dnsmasq`](https://thekelleys.org.uk/dnsmasq/doc.html) for later use in development and testing the local cluster
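For orientation, the generated ansible inventory for a 3-node cluster typically looks like the sketch below. The hostnames, IPs and group name are assumptions for illustration, not the actual output of `configure_vagrant_k3s.sh`:

```shell
# Illustrative inventory only; real addresses come from interrogating vagrant.
cat <<'EOF' > /tmp/inventory-example.ini
[k3s_nodes]
vm1 ansible_host=192.168.56.101
vm2 ansible_host=192.168.56.102
vm3 ansible_host=192.168.56.103

[k3s_nodes:vars]
ansible_user=vagrant
EOF
# A file like this is what `ansible -i /tmp/inventory-example.ini k3s_nodes -m ping`
# and the playbooks above would consume.
grep -c 'ansible_host' /tmp/inventory-example.ini
```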
## Run the pipeline ...
If you are ready to run the pipeline, this can be done in a single command with `infctl`, which we can configure to use a pipeline file at [`pipelines/dev/vagrant-k3s.json`](https://codeberg.org/headshed/infctl-cli/src/branch/main/pipelines/dev/vagrant-k3s.json)
This marshals each of the above tasks into a single, repeatable operation.
```bash
LOG_FORMAT=none infctl -f pipelines/dev/vagrant-k3s.json
```
# Smoke test the cluster ...
If all has gone well, a cluster will now be running on your local system, comprising 3 nodes and a workstation.
We can check status by switching to the vagrant dev folder and running a `vagrant status` command:
```bash
@@ -55,51 +71,17 @@ This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
```
To work on our cluster we must first connect to the `workstation` and then use `kubectl` commands to interact with `k3s`:
```bash
vagrant ssh workstation
```
From the workstation, use [`kubectl`](https://kubernetes.io/docs/reference/kubectl/) to access our cluster
```bash
kubectl get nodes
NAME   STATUS   ROLES                       AGE     VERSION
vm1    Ready    control-plane,etcd,master   4h11m   v1.33.3+k3s1
vm2    Ready    control-plane,etcd,master   4h11m   v1.33.3+k3s1
vm3    Ready    control-plane,etcd,master   4h10m   v1.33.3+k3s1
```
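A scripted version of the same check can count nodes reporting `Ready`. This sketch parses the sample output shown above; on a live cluster you could pipe in `kubectl get nodes --no-headers` instead:

```shell
# Count nodes whose STATUS column (field 2) reads "Ready".
nodes='vm1 Ready control-plane,etcd,master 4h11m v1.33.3+k3s1
vm2 Ready control-plane,etcd,master 4h11m v1.33.3+k3s1
vm3 Ready control-plane,etcd,master 4h10m v1.33.3+k3s1'
ready_count=$(echo "$nodes" | awk '$2 == "Ready"' | wc -l)
echo "$ready_count"
```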
If you have got this far, congratulations: you have a locally hosted k3s cluster running in 3 virtual machines, and a workstation that can be used to manage it using `kubectl`, and `ansible` if you need it.