Remove outdated guides and restructure documentation for infctl and MVK

- Deleted the following guides:
  - Introduction to infctl
  - Local K3d Instance creation
  - Traefik ingress setup
  - Longhorn storage setup
  - Local Vagrant cluster setup
  - Quick Start guide for MVK
  - Minimal Viable Kubernetes overview

- Added new guides:
  - Local Development Environment setup
  - Initial Pipeline Run for infctl
  - Create a Vagrant 3 node cluster
  - Smoke test for Vagrant cluster
  - Add Longhorn Storage guide
  - Add Ingress guide
  - Smoke test for Ingress

- Updated index and navigation links to reflect new structure.
jon brookes 2025-08-25 18:49:33 +01:00
parent c1cea80b12
commit ff6341edf1
16 changed files with 97 additions and 6811 deletions

View file

@ -1,5 +1,5 @@
---
-title: Local Development Environment
+title: Local Dev Environment
description: A guide to checking a local environment.
---
@ -88,4 +88,4 @@ After installation, verify both are available in your `$PATH` by running:
```bash
vagrant --version
VBoxManage --version
```

View file

@ -1,54 +0,0 @@
---
title: Configuration
description: infctl configuration
---
`infctl` uses `json` files for configuration.
It is designed around the idea of pipelines where each pipeline performs a series of steps.
A short, two-step pipeline configuration can look like:
```json
[
  {
    "name": "ensure inf namespace exists",
    "function": "k8sNamespaceExists",
    "params": ["infctl"],
    "retryCount": 0,
    "shouldAbort": true
  },
  {
    "name": "create php configmap",
    "function": "RunCommand",
    "params": ["./scripts/create_php_configmap_ctl.sh"],
    "retryCount": 0,
    "shouldAbort": true
  }
]
```
Each object is a task.
Tasks are listed in a `[]` array and are executed in sequence.
Each task has a `name` that is displayed in logs.
Each task calls a `function` that is registered within `infctl` and accepts `params`, a list of strings passed to that function, script or executable. The simplest example is `RunCommand`, which accepts the path to a script or executable as its `params`. This can be anything, even something as simple as:
```bash
echo "hello world"
exit 0
```
Because each task can run any script or executable, `infctl` is effectively unlimited in what its pipeline files can use to achieve any kind of automation.
If a task fails (the script or program that is run returns a non-zero exit code) it may be retried up to `retryCount` times.
If a task still fails, the pipeline will stop running unless `shouldAbort` is set to `false`, in which case the pipeline will continue with the next task in the list.
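For illustration, a task that tolerates failure without stopping the pipeline could be written as below. This is a hypothetical sketch following the same schema as the example above; the task name and script path are placeholders, not part of the MVK pipelines.
```json
[
  {
    "name": "optional warm-up step (placeholder example)",
    "function": "RunCommand",
    "params": ["./scripts/example_optional_step.sh"],
    "retryCount": 2,
    "shouldAbort": false
  }
]
```
Here a failure is retried twice and, if the task still fails, the pipeline carries on to the next task.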

View file

@ -1,11 +0,0 @@
---
title: Example Guide
description: A guide in my new Starlight docs site.
---
Guides lead a user through a specific task they want to accomplish, often with a sequence of steps.
Writing a good guide requires thinking about what your users are trying to do.
## Further reading
- Read [about how-to guides](https://diataxis.fr/how-to-guides/) in the Diátaxis framework

View file

@ -1,14 +0,0 @@
---
title: Introduction
description: introducing infctl and its guiding principles
---
Kubernetes is complicated, so getting started can be a pain.
There are many tools out there to create a development Kubernetes environment.
`infctl` is another such tool; however, it is designed with simplicity in mind, while also being intended to extend into production and beyond.

View file

@ -1,5 +1,5 @@
---
-title: Creating Initial Infrastructure
+title: Initial Pipeline Run
description: A guide to running infctl for the first time.
---

View file

@ -1,39 +0,0 @@
---
title: Minimal Viable Kubernetes
description: introducing minimal viable Kubernetes and its guiding principles
---
Kubernetes is complicated and, in addition to this, it is designed with cloud compute in mind.
Thus, self-hosting options can be limited.
Out of the box, Kubernetes expects key ingredients of infrastructure to be present, for example ingress, storage and load balancers.
If any of these are not present, they will need to be supplied in a form that Kubernetes can consume.
Adding the missing prerequisites can be a challenge. Even learning about Kubernetes, let alone setting up a development or 'lab' environment, can quickly become messy.
Differences between what is created in 'dev' and what runs in 'prod' may introduce inconsistent configurations and can add technical debt.
Production environments can also become complex, difficult to manage and even over-engineered.
In response to these challenges, Minimal Viable Kubernetes (MVK) provides tooling and design patterns aimed at solving these issues.
MVK helps create a Kubernetes setup that is minimal and yet a viable implementation of Kubernetes. Initially this may be used for test and development work, but extending it into production is also a goal.
Typical use cases for MVK are entirely self-hosted environments, outside of any managed Kubernetes or cloud platform, which are as often as not proprietary in their design and provisioning of k8s.
The goals of MVK are to increase portability and reduce 'vendor lock-in'.
It can also reduce costs, provided that its users are already, or become, familiar with Kubernetes and its day-to-day operation and management.
*We think that **everyone** should know more about Kubernetes*.
A greater understanding of k8s results in us all becoming less reliant on others providing this as a service for us. It protects our industry from losing these skills entirely.
**Above all it can help us to gain control over our digital sovereignty.**

View file

@ -1,15 +0,0 @@
---
title: Quick Start Guide
description: A guide to setting up MVK quickly.
---
Install the `infctl` command line tool with:
```bash
curl -L https://codeberg.org/headshed/infctl-cli/raw/branch/main/install.sh | bash
```
Alternatively, go to [Releases](https://codeberg.org/headshed/infctl-cli/releases) to find a current version for your operating system, download it and place a copy in a directory on your `PATH`.
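Either way, you can confirm that the binary is visible to your shell. The check below only tests that `infctl` is found on the `PATH`; it does not assume any particular command line flags.
```bash
# confirm the infctl binary can be found on the PATH
command -v infctl || echo "infctl not found in PATH"
```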

View file

@ -1,5 +1,5 @@
---
-title: Create a Local Vagrant K3s cluster
+title: Create a vagrant 3 node cluster
description: A guide to creating a virtualized local k3s instance.
---
@ -50,38 +50,3 @@ This marshals each of the above tasks into a single, repeatable operation.
```bash
LOG_FORMAT=none infctl -f pipelines/dev/vagrant-k3s.json
```
# Smoke test the cluster ...
If all has gone well, a cluster will now be running on your local system, comprising 3 nodes and a workstation.
We can check its status by switching to the vagrant dev folder and running `vagrant status`:
```bash
cd vagrant/dev/ubuntu/
vagrant status
Current machine states:
vm1                       running (virtualbox)
vm2                       running (virtualbox)
vm3                       running (virtualbox)
workstation               running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
```
To work on our cluster we must first connect to the `workstation` and then use `kubectl` commands to interact with `k3s`:
```bash
vagrant ssh workstation
```
From the workstation, use [`kubectl`](https://kubernetes.io/docs/reference/kubectl/) to access our cluster:
```bash
kubectl get nodes
NAME   STATUS   ROLES                       AGE     VERSION
vm1    Ready    control-plane,etcd,master   4h11m   v1.33.3+k3s1
vm2    Ready    control-plane,etcd,master   4h11m   v1.33.3+k3s1
vm3    Ready    control-plane,etcd,master   4h10m   v1.33.3+k3s1
```
If you have got this far, congratulations: you have a locally hosted k3s cluster running in 3 virtual machines, plus a workstation that can be used to manage it with `kubectl` and `ansible`.

View file

@ -0,0 +1,40 @@
---
title: Smoke test the cluster
description: Basic smoke tests
---
# Smoke test the cluster ...
If all has gone well, a cluster will now be running on your local system, comprising 3 nodes and a workstation.
We can check its status by switching to the vagrant dev folder and running `vagrant status`:
```bash
cd vagrant/dev/ubuntu/
vagrant status
Current machine states:
vm1                       running (virtualbox)
vm2                       running (virtualbox)
vm3                       running (virtualbox)
workstation               running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
```
To work on our cluster we must first connect to the `workstation` and then use `kubectl` commands to interact with `k3s`:
```bash
vagrant ssh workstation
```
From the workstation, use [`kubectl`](https://kubernetes.io/docs/reference/kubectl/) to access our cluster:
```bash
kubectl get nodes
NAME   STATUS   ROLES                       AGE     VERSION
vm1    Ready    control-plane,etcd,master   4h11m   v1.33.3+k3s1
vm2    Ready    control-plane,etcd,master   4h11m   v1.33.3+k3s1
vm3    Ready    control-plane,etcd,master   4h10m   v1.33.3+k3s1
```
If you have got this far, congratulations: you have a locally hosted k3s cluster running in 3 virtual machines, plus a workstation that can be used to manage it with `kubectl` and `ansible`.
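As an extra, optional check you can schedule a throwaway workload and confirm that scheduling, image pulls and pod networking all work. The commands below use standard `kubectl` features; the deployment name and image are arbitrary examples and not part of the MVK pipelines.
```bash
# run a short-lived test deployment and wait for it to become available
kubectl create deployment smoke-test --image=nginx
kubectl rollout status deployment/smoke-test --timeout=120s
kubectl get pods -l app=smoke-test -o wide

# clean up once satisfied
kubectl delete deployment smoke-test
```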

View file

@ -1,5 +1,5 @@
---
-title: Longhorn storage Layer
+title: Add Longhorn Storage
description: A guide to adding Longhorn storage.
---

View file

@ -1,5 +1,5 @@
---
-title: Traefik ingress
+title: Add Ingress
description: A guide to adding ingress.
---
@ -38,18 +38,3 @@ traefik ingress can be installed with
LOG_FORMAT=basic infctl -f pipelines/vagrant-ingress.json
```
## Smoke test ingress
If all has gone well, we should now be able to get the service for `traefik` and see an external IP address and a type of `LoadBalancer`:
```bash
kubectl -n traefik get svc
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
traefik   LoadBalancer   10.43.5.252   192.168.56.230   80:32066/TCP,443:32410/TCP   16s
```
Here the address `192.168.56.230` is available for routing ingress traffic to our services, pods and Kubernetes-hosted apps, both on plain-text port 80 and over TLS (https) on port 443.
Next we will be able to assign a certificate to this second port so that `traefik` can serve URLs on it to our pods and apps.
Initially this will use a self-signed certificate, but we will be able to create our own CA and have a secure connection without browser warnings on our local Vagrant MVK cluster environment.

View file

@ -0,0 +1,20 @@
---
title: Smoke Test Ingress
description: Simple test for ingress.
---
## Smoke test ingress
If all has gone well, we should now be able to get the service for `traefik` and see an external IP address and a type of `LoadBalancer`:
```bash
kubectl -n traefik get svc
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
traefik   LoadBalancer   10.43.5.252   192.168.56.230   80:32066/TCP,443:32410/TCP   16s
```
Here the address `192.168.56.230` is available for routing ingress traffic to our services, pods and Kubernetes-hosted apps, both on plain-text port 80 and over TLS (https) on port 443.
Next we will be able to assign a certificate to this second port so that `traefik` can serve URLs on it to our pods and apps.
Initially this will use a self-signed certificate, but we will be able to create our own CA and have a secure connection without browser warnings on our local Vagrant MVK cluster environment.
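Beyond listing the service, you can send a request to the external address and confirm that `traefik` answers. With no routes configured yet, an HTTP `404` from traefik is the expected, healthy response; the IP below is the one reported by the service output above and may differ on your cluster.
```bash
# expect an HTTP 404 status from traefik while no routes are defined
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.56.230/

# the TLS port should answer too; -k skips verification of the self-signed certificate
curl -sk -o /dev/null -w "%{http_code}\n" https://192.168.56.230/
```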