update: Added Longhorn installation process and updated memory allocation for VMs

update: Added 'git' and 'vagrant' to required tools in pre-flight checks

fix: configured k3s install to use internal NIC for flannel network

update: Added Longhorn installation process and updated memory allocation for VMs

update: Added 'git' and 'vagrant' to required tools in pre-flight checks

fix: configured k3s install to use internal NIC for flannel network

fix: corrected JSON formatting for config JSON

update: reduce VM memory allocation to 2GB, add Longhorn installation scripts and prerequisites, and implement checks for existing pods

fix: merge issues

fix: merge issues

update: Added Longhorn installation process and updated memory allocation for VMs

update: Added 'git' and 'vagrant' to required tools in pre-flight checks

fix: configured k3s install to use internal NIC for flannel network

update: Added Longhorn installation process and updated memory allocation for VMs

update: Added 'git' and 'vagrant' to required tools in pre-flight checks

fix: configured k3s install to use internal NIC for flannel network

fix: corrected JSON formatting for config JSON

update: reduce VM memory allocation to 2GB, add Longhorn installation scripts and prerequisites, and implement checks for existing pods

update: improve error logging in RunJsonDeployment and RunCommand functions

update: add jq installation to provision script

update: add version flag

bump version

fix: improve error messages for config file reading

feat: add Windows Git Bash installation support and improve binary download process

clean up tmp code

fix: increase timeout for some slower windows clients

feat: add Ingress and Service configurations for nginx deployment, and implement MetalLB and Traefik installation scripts

refactor: remove obsolete Traefik installation script

feat: add environment checks and configurations for Vagrant setup, including dnsmasq, MetalLB, and ingress

feat: add deployment and installation scripts for infmon-cli, including Kubernetes configurations

feat: refactor customer project creation and add success/failure job scripts

refactor: rename customer references to project in configuration and application logic

feat: enhance JSON deployment handling with retry logic and command execution improvements

feat: enhance RunJsonDeployment with error handling and retry logic; add tests for configuration reading

feat: add automatic creation of base and config JSON files from examples if they do not exist

refactor: remove database package and related functionality; update app state initialization and error handling

refactor: update deployment handling to use ProjectConfig; improve error messages and logging

feat: enhance RunJsonDeployment retry logic with configurable delay; improve logging for retries

feat: implement LoadConfigs function for improved configuration loading; add logger setup

refactor: remove unused fields from BaseConfig and ProjectConfig structs for cleaner configuration management

refactor: clean up tests by removing obsolete functions and simplifying test cases

chore: update version to v0.0.5 in install script

feat: implement default configuration creation for BaseConfig and ProjectConfig; enhance validation logic

fix: enhance configuration parsing and loading; streamline flag handling and error reporting

refactor: remove obsolete configuration download logic from installation script
jon brookes 2025-08-16 18:00:28 +01:00
parent d839fd5687
commit 11b1f1b637
61 changed files with 1573 additions and 761 deletions

View file

@@ -1 +0,0 @@
export VAGRANT_BRIDGE=<preferred interface to bridge to>

View file

@@ -21,6 +21,7 @@ Vagrant.configure("2") do |config|
  # VM 1 Configuration
  config.vm.define "vm1" do |vm1|
    vm1.vm.box = "ubuntu/jammy64"
    vm1.vm.boot_timeout = 600
    vm1.vm.hostname = "vm1"
    # Fixed private network IP
@@ -34,7 +35,7 @@ Vagrant.configure("2") do |config|
    end
    vm1.vm.provider "virtualbox" do |vb|
      vb.memory = "2048" # 2GB memory
      vb.memory = "2048" # 4GB memory
      vb.cpus = 2
    end
@@ -48,6 +49,7 @@ Vagrant.configure("2") do |config|
  # VM 2 Configuration
  config.vm.define "vm2" do |vm2|
    vm2.vm.box = "ubuntu/jammy64"
    vm2.vm.boot_timeout = 600
    vm2.vm.hostname = "vm2"
    # Fixed private network IP
@@ -61,7 +63,7 @@ Vagrant.configure("2") do |config|
    end
    vm2.vm.provider "virtualbox" do |vb|
      vb.memory = "2048" # 2GB memory
      vb.memory = "2048" # 4GB memory
      vb.cpus = 2
    end
@@ -75,6 +77,7 @@ Vagrant.configure("2") do |config|
  # VM 3 Configuration
  config.vm.define "vm3" do |vm3|
    vm3.vm.box = "ubuntu/jammy64"
    vm3.vm.boot_timeout = 600
    vm3.vm.hostname = "vm3"
    # Fixed private network IP
@@ -88,7 +91,7 @@ Vagrant.configure("2") do |config|
    end
    vm3.vm.provider "virtualbox" do |vb|
      vb.memory = "2048" # 2GB memory
      vb.memory = "2048" # 4GB memory
      vb.cpus = 2
    end
@@ -102,6 +105,7 @@ Vagrant.configure("2") do |config|
  # Ansible Controller/Workstation Configuration
  config.vm.define "workstation" do |ws|
    ws.vm.box = "ubuntu/jammy64"
    ws.vm.boot_timeout = 600
    ws.vm.hostname = "ansible-workstation"
    ws.vm.synced_folder ".", "/vagrant"

View file

@@ -0,0 +1,78 @@
---
- name: Install Dnsmasq on workstation
  hosts: localhost
  become: true
  become_user: root
  serial: 1 # Ensure tasks are executed one host at a time
  vars_files:
    - vars.yaml
  tasks:
    - name: Install dnsmasq
      ansible.builtin.apt:
        name: dnsmasq
        state: present
    - name: Stop systemd-resolved
      ansible.builtin.systemd:
        name: systemd-resolved
        state: stopped
    - name: Disable systemd-resolved
      ansible.builtin.systemd:
        name: systemd-resolved
        enabled: false
    - name: Check to see if /etc/resolv.conf is a symlink
      ansible.builtin.stat:
        path: /etc/resolv.conf
      register: resolv_conf
    - name: Remove /etc/resolv.conf if it is a symlink
      ansible.builtin.file:
        path: /etc/resolv.conf
        state: absent
      when: resolv_conf.stat.islnk
    - name: Ensure /etc/resolv.conf is a regular file
      ansible.builtin.file:
        path: /etc/resolv.conf
        state: touch
    - name: Ensure /etc/resolv.conf uses 127.0.0.1 for server
      ansible.builtin.lineinfile:
        path: /etc/resolv.conf
        regexp: '^nameserver'
        line: 'nameserver 127.0.0.1'
        state: present
    - name: Configure dnsmasq
      ansible.builtin.copy:
        dest: /etc/dnsmasq.d/k3s-cluster.conf
        content: |
          address=/{{ dnsmasq_k3s_domain }}
          server=1.1.1.1
          server=8.8.8.8
        owner: root
        group: root
        mode: "0644"
      notify: Restart dnsmasq
    - name: Ensure conf-dir is uncommented in /etc/dnsmasq.conf
      ansible.builtin.lineinfile:
        path: /etc/dnsmasq.conf
        regexp: '^#?conf-dir=/etc/dnsmasq.d'
        line: 'conf-dir=/etc/dnsmasq.d'
        state: present
        owner: root
        group: root
        mode: '0644'
  handlers:
    - name: Restart dnsmasq
      ansible.builtin.systemd:
        name: dnsmasq
        state: restarted

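With the default vars this template renders as a wildcard record pointing every name under the lab domain at the cluster VIP, plus two upstream resolvers. A quick check from the workstation (a sketch; the domain/VIP pair is the dnsmasq_k3s_domain default from vars.yaml):

# rendered /etc/dnsmasq.d/k3s-cluster.conf with the default vars
address=/headshed.it/192.168.56.230
server=1.1.1.1
server=8.8.8.8

# any name under the domain should now resolve to the VIP
dig +short nginx.headshed.it @127.0.0.1   # expect 192.168.56.230
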
View file

@@ -55,7 +55,7 @@
    - name: Install k3s on first node
      ansible.builtin.shell: |
        set -o pipefail
        K3S_TOKEN=$(cat /opt/k3s-token) /bin/bash /tmp/k3s_install.sh server --cluster-init --disable traefik --disable servicelb --tls-san {{ k3s_url_ip }} --node-name vm1 --node-ip {{ vm1_ip }}
        K3S_TOKEN=$(cat /opt/k3s-token) /bin/bash /tmp/k3s_install.sh server --cluster-init --disable traefik --disable servicelb --tls-san {{ k3s_url_ip }} --node-name vm1 --node-ip {{ vm1_ip }} --flannel-iface=enp0s8
        if [ $? -eq 0 ]; then
          mkdir -p /home/vagrant/.kube && cp /etc/rancher/k3s/k3s.yaml /home/vagrant/.kube/config && chown vagrant:vagrant /home/vagrant/.kube/config
        fi
@@ -91,7 +91,7 @@
        {% endif %}
        K3S_URL=https://{{ k3s_url_ip }}:6443 \
        K3S_TOKEN={{ k3s_token_content.stdout }} \
        INSTALL_K3S_EXEC="server --server https://{{ k3s_url_ip }}:6443 --disable traefik --disable servicelb --node-name={{ inventory_hostname }} --node-ip ${NODE_IP}" \
        INSTALL_K3S_EXEC="server --server https://{{ k3s_url_ip }}:6443 --disable traefik --disable servicelb --node-name={{ inventory_hostname }} --node-ip ${NODE_IP} --flannel-iface=enp0s8" \
        /bin/bash /tmp/k3s_install.sh 2>&1
        exit_code=$?
        if [ $exit_code -ne 0 ]; then

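Pinning --flannel-iface matters here because VirtualBox attaches Vagrant's NAT adapter first (enp0s3, with the same 10.0.2.15 address on every VM); left to its default, flannel binds to that interface and inter-node pod traffic breaks. A quick post-install check (a sketch assuming the usual VirtualBox NIC naming):

# enp0s8 is the host-only adapter carrying the 192.168.56.0/24 cluster network
ip -4 addr show enp0s8
# each node should advertise its host-only IP, not the shared NAT address
kubectl get nodes -o wide
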
View file

@@ -0,0 +1,47 @@
---
- name: Install Longhorn prerequisites on 3-node cluster
  hosts: vm1,vm2,vm3
  become: true
  become_user: root
  serial: 1 # Ensure tasks are executed one host at a time
  vars_files:
    - vars.yaml
  tasks:
    - name: Install open-iscsi on all nodes
      ansible.builtin.package:
        name: open-iscsi
        state: present
    - name: Install nfs-common on all nodes
      ansible.builtin.package:
        name: nfs-common
        state: present
    - name: Install cryptsetup and dmsetup packages
      ansible.builtin.package:
        name:
          - cryptsetup
          - dmsetup
        state: present
    - name: Load dm_crypt kernel module
      community.general.modprobe:
        name: dm_crypt
        state: present
    - name: Make dm_crypt module load on boot
      ansible.builtin.lineinfile:
        path: /etc/modules
        line: dm_crypt
        create: true
    - name: Check if dm_crypt module is loaded
      ansible.builtin.shell: lsmod | grep dm_crypt
      register: dm_crypt_check
      failed_when: false
      changed_when: false
    - name: Show dm_crypt status
      ansible.builtin.debug:
        msg: "dm_crypt module is {{ 'loaded' if dm_crypt_check.rc == 0 else 'not loaded' }}"

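Once the playbook has run, the prerequisites can be spot-checked per node with an ad-hoc command (a sketch, reusing the inventory path from the provision script):

ansible vm1 --inventory-file /home/vagrant/ansible/ansible_inventory.ini -m shell -a 'iscsiadm --version && lsmod | grep dm_crypt'
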
View file

@@ -1,18 +1,27 @@
#!/usr/bin/env bash
sudo apt-get update
sudo apt-get install -y software-properties-common git vim python3.10-venv
sudo apt-get update
sudo apt-get install -y software-properties-common git vim python3.10-venv jq figlet
source /vagrant/.envrc
# Set up ansible environment for vagrant user
sudo -u vagrant mkdir -p /home/vagrant/.ansible
sudo -u vagrant touch /home/vagrant/.ansible/ansible.cfg
# Create workspace and SSH directories
sudo -u vagrant mkdir -p /home/vagrant/ansible
sudo -u vagrant mkdir -p /home/vagrant/.ssh
sudo chmod 700 /home/vagrant/.ssh
# create directories and copy files to /home/vagrant
mkdir -p /home/vagrant/{ansible,scripts,pipelines,k8s}
sudo cp -r /vagrant/ansible/* /home/vagrant/ansible/
sudo cp -r /vagrant/scripts/* /home/vagrant/scripts/
sudo cp -r /vagrant/pipelines/* /home/vagrant/pipelines
sudo cp -r /vagrant/k8s/* /home/vagrant/k8s
sudo chmod +x /home/vagrant/pipelines/*.sh
# Copy the Vagrant private keys (these will be synced by Vagrant)
for i in {1..3}; do
  sudo -u vagrant cp /vagrant/.vagrant/machines/vm$i/virtualbox/private_key /home/vagrant/.ssh/vm${i}_key
@@ -82,7 +91,6 @@ if [ $? -ne 0 ]; then
  exit 1
fi
cp -r /vagrant/ansible/* /home/vagrant/ansible/
eval `ssh-agent -s`
ssh-add # ~/machines/*/virtualbox/private_key
@@ -98,12 +106,26 @@ if ! grep -qF "$BLOCK_START" "$BASHRC"; then
eval `ssh-agent -s`
ssh-add ~/machines/*/virtualbox/private_key
ssh-add -L
source /vagrant/.envrc
EOF
else
  echo "Provisioning block already present in $BASHRC"
fi
ANSIBLE_HOST_KEY_CHECKING=False ansible --inventory-file /home/vagrant/ansible/ansible_inventory.ini -m ping vm1,vm2,vm3
echo
echo -------------------------
echo
su - vagrant
id
echo
echo -------------------------
echo
ssh-add ~/.ssh/vm*_key
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible --inventory-file /home/vagrant/ansible/ansible_inventory.ini -m ping vm1,vm2,vm3
if [ $? -ne 0 ]; then
  echo "Ansible ping failed. Please check your Vagrant VMs and network configuration."
@@ -111,7 +133,7 @@ if [ $? -ne 0 ]; then
fi
# install_keepalived.yaml
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_keepalived.yaml --inventory-file ansible_inventory.ini
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_keepalived.yaml --inventory-file ansible_inventory.ini
if [ $? -ne 0 ]; then
  echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
  exit 1
@@ -119,17 +141,32 @@ fi
echo "Keepalived installation completed."
# install_k3s_3node.yaml
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_k3s_3node.yaml --inventory-file ansible_inventory.ini
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_k3s_3node.yaml --inventory-file ansible_inventory.ini
if [ $? -ne 0 ]; then
  echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
  exit 1
fi
# copy_k8s_config.yaml
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook copy_k8s_config.yaml --inventory-file ansible_inventory.ini
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook copy_k8s_config.yaml --inventory-file ansible_inventory.ini
if [ $? -ne 0 ]; then
  echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
  exit 1
fi
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_dnsmasq.yaml --inventory-file ansible_inventory.ini
if [ $? -ne 0 ]; then
  echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
  exit 1
fi
# check infctl
cd /home/vagrant
bash /home/vagrant/scripts/check_install_infctl.sh
if [ $? -ne 0 ]; then
  echo "infctl check failed. Please check your installation."
  exit 1
fi

View file

@@ -7,6 +7,8 @@ k3s_url_ip: "{{ lookup('env', 'K3S_URL_IP') | default('192.168.56.250', true) }}
workstation_ip: "{{ lookup('env', 'WORKSTATION_IP') | default('192.168.56.10', true) }}"
network_prefix: "{{ lookup('env', 'VAGRANT_NETWORK_PREFIX') | default('192.168.56', true) }}"
dnsmasq_k3s_domain: "{{ lookup('env', 'DNSMASQ_K3S_DOMAIN') | default('headshed.it/192.168.56.230', true) }}"
# K3s configuration
k3s_cluster_name: "dev-cluster"
k3s_token_file: "/opt/k3s-token"

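Each lookup falls back to the default shown, so pointing the lab at a different network only requires exporting the variables before provisioning, for example (a sketch; the values shown are the defaults):

export K3S_URL_IP="192.168.56.250"
export VAGRANT_NETWORK_PREFIX="192.168.56"
export DNSMASQ_K3S_DOMAIN="headshed.it/192.168.56.230"
vagrant up
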
View file

@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-with-storage
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx-storage
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-storage
    spec:
      initContainers:
        - name: init-nginx-content
          image: busybox
          command: ["sh", "-c", "echo '<html><body><h1>Welcome to nginx!</h1><h2>using MVK</h2><p><a href=\"https://mvk.headshed.dev/\">https://mvk.headshed.dev/</a></p></body></html>' > /usr/share/nginx/html/index.html"]
          volumeMounts:
            - name: nginx-data
              mountPath: /usr/share/nginx/html
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nginx-data
          persistentVolumeClaim:
            claimName: nginx-data-pvc

View file

@@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: infmon-cli
  namespace: default
spec:
  selector:
    matchLabels:
      app: infmon-cli
  replicas: 1
  template:
    metadata:
      labels:
        app: infmon-cli
    spec:
      containers:
        - name: infmon-cli
          image: 192.168.2.190:5000/infmon-cli:0.0.1
          command: ["sleep", "infinity"]
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          volumeMounts:
            - name: kubeconfig
              mountPath: /root/.kube/config
              subPath: config
      volumes:
        - name: kubeconfig
          secret:
            secretName: infmon-kubeconfig

View file

@@ -0,0 +1,27 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik-ingress
  namespace: default
  # This annotation is good practice to ensure it uses the right entrypoint
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  # This block is the key. It tells Ingress controllers like Traefik
  # to use the specified secret for TLS termination for the listed hosts.
  tls:
    - hosts:
        - "*.headshed.it" # Or a specific subdomain like test.headshed.it
      secretName: wildcard-headshed-it-tls # <-- The name of the secret you created
  rules:
    - host: nginx.headshed.it # The actual domain you will use to access the service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-storage # The name of the k8s service for your app
                port:
                  number: 80 # The port your service is listening on

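With dnsmasq resolving *.headshed.it to the MetalLB VIP and the wildcard secret in place, the whole chain can be exercised from the workstation (a sketch; -k skips verification while the certificate is not locally trusted):

curl -k https://nginx.headshed.it/   # expect the "Welcome to nginx!" page seeded by the init container
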
View file

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
kubectl apply -f pvc.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml

View file

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi

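A quick way to confirm Longhorn has picked up the claim (a sketch):

kubectl -n default get pvc nginx-data-pvc   # STATUS should reach Bound once the volume is provisioned
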
View file

@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
  name: nginx-storage
  namespace: default
spec:
  selector:
    app: nginx-storage
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

View file

@@ -0,0 +1,8 @@
apiVersion: traefik.io/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: traefik
spec:
  defaultCertificate:
    secretName: wildcard-headshed-it-tls

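The wildcard secret itself is not created anywhere in this commit. Assuming certificate and key files obtained out of band (the file names below are placeholders), it would be created along these lines; note the Ingress above references the same secretName from the default namespace, where Kubernetes resolves Ingress TLS secrets, so a copy is needed there too:

kubectl -n traefik create secret tls wildcard-headshed-it-tls --cert=fullchain.pem --key=privkey.pem
kubectl -n default create secret tls wildcard-headshed-it-tls --cert=fullchain.pem --key=privkey.pem
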
View file

@@ -0,0 +1,20 @@
[
  {
    "name": "Install Helm",
    "function": "RunCommand",
    "params": [
      "./scripts/helm_check_install.sh"
    ],
    "retryCount": 0,
    "shouldAbort": true
  },
  {
    "name": "Install traefik",
    "function": "RunCommand",
    "params": [
      "./scripts/install_traefik.sh"
    ],
    "retryCount": 0,
    "shouldAbort": true
  }
]

View file

@@ -0,0 +1,29 @@
[
  {
    "name": "Install Longhorn pre-requisites",
    "function": "RunCommand",
    "params": [
      "./scripts/longhorn_prereqs.sh"
    ],
    "retryCount": 0,
    "shouldAbort": true
  },
  {
    "name": "Install Longhorn",
    "function": "RunCommand",
    "params": [
      "./scripts/install_longhorn.sh"
    ],
    "retryCount": 0,
    "shouldAbort": true
  },
  {
    "name": "Wait for Longhorn pods to come up",
    "function": "RunCommand",
    "params": [
      "./scripts/wait_for_longhorn.sh"
    ],
    "retryCount": 10,
    "shouldAbort": true
  }
]

View file

@@ -0,0 +1,11 @@
[
  {
    "name": "Install metallb",
    "function": "RunCommand",
    "params": [
      "./scripts/install_metallb.sh"
    ],
    "retryCount": 0,
    "shouldAbort": true
  }
]

View file

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
# function to install infctl
install_infctl() {
  echo "Installing infctl..."
  curl -L https://codeberg.org/headshed/infctl-cli/raw/branch/main/install.sh | bash
}
if ! command -v infctl &> /dev/null
then
  echo "infctl could not be found, installing..."
  install_infctl
fi
echo "infctl is installed and ready to use."

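Piping curl straight into bash is fine for a disposable lab; a more cautious variant (a sketch, not what the repo ships) downloads the installer for review and re-verifies afterwards:

curl -fsSL -o /tmp/infctl-install.sh https://codeberg.org/headshed/infctl-cli/raw/branch/main/install.sh
less /tmp/infctl-install.sh   # review before executing
bash /tmp/infctl-install.sh
command -v infctl >/dev/null || { echo "infctl still missing" >&2; exit 1; }
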
View file

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
# check to see if helm is installed
if ! command -v helm &> /dev/null; then
  echo "Helm is not installed. Installing it now ..."
  # curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
  curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
  if [ $? -ne 0 ]; then
    echo "Failed to install Helm."
    exit 1
  fi
fi
helm version

View file

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
echo
echo "vagrant longhorn installation"
echo
ssh-add ~/.ssh/vm*_key
source /home/vagrant/ansible/venv/bin/activate
# Check if there are any pods in the longhorn-system namespace
if kubectl -n longhorn-system get pods --no-headers 2>/dev/null | grep -q '^[^ ]'; then
  echo "Pods already exist in the longhorn-system namespace. Skipping installation."
  exit 0
fi
# https://github.com/longhorn/longhorn/releases
# v1.8.1 in prod; 1.9.1 is latest
LONGHORN_RELEASE="v1.8.1"
LONGHORN_RELEASE_URL="https://raw.githubusercontent.com/longhorn/longhorn/$LONGHORN_RELEASE/deploy/longhorn.yaml"
echo "Applying Longhorn release $LONGHORN_RELEASE..."
echo "Using Longhorn release URL: $LONGHORN_RELEASE_URL"
kubectl apply -f $LONGHORN_RELEASE_URL

View file

@@ -0,0 +1,65 @@
#!/usr/bin/env bash
source /vagrant/.envrc
# Check if MetalLB is already installed by looking for the controller deployment
if ! kubectl get deployment -n metallb-system controller &>/dev/null; then
  echo "Installing MetalLB..."
  kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/main/config/manifests/metallb-native.yaml
  if [ $? -ne 0 ]; then
    echo "Fatal: Failed to apply MetalLB manifest." >&2
    exit 1
  fi
  # Wait for MetalLB components to be ready
  echo "Waiting for MetalLB components to be ready..."
  kubectl wait --namespace metallb-system \
    --for=condition=ready pod \
    --selector=app=metallb \
    --timeout=90s
else
  echo "MetalLB is already installed."
fi
# Wait for the webhook service to be ready
echo "Waiting for MetalLB webhook service to be ready..."
kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=component=webhook \
  --timeout=90s
# Check if the IPAddressPool already exists
if ! kubectl get ipaddresspool -n metallb-system default &>/dev/null; then
  echo "Creating MetalLB IPAddressPool..."
  cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - ${METALLB_IP_RANGE}
EOF
else
  echo "MetalLB IPAddressPool already exists."
fi
# Check if the L2Advertisement already exists
if ! kubectl get l2advertisement -n metallb-system default &>/dev/null; then
  echo "Creating MetalLB L2Advertisement..."
  cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default
EOF
else
  echo "MetalLB L2Advertisement already exists."
fi

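The pool range comes from METALLB_IP_RANGE in /vagrant/.envrc, which this diff does not show. A hypothetical entry, chosen as a slice of the host-only subnet so the dnsmasq VIP (192.168.56.230) falls inside it:

# hypothetical /vagrant/.envrc entry; must sit inside the 192.168.56.0/24 host-only network
export METALLB_IP_RANGE="192.168.56.230-192.168.56.240"
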
View file

@@ -0,0 +1,68 @@
#!/usr/bin/env bash
# Exit immediately if a command exits with a non-zero status.
set -e
TMPFILE=$(mktemp)
trap 'rm -f "$TMPFILE"' EXIT
cat > "$TMPFILE" <<EOF
ingressClass:
  enabled: true
  isDefaultClass: true
ports:
  web:
    port: 80
  websecure:
    port: 443
  traefik:
    port: 9000
api:
  dashboard: true
  insecure: true
ingressRoute:
  dashboard:
    enabled: true
ping: true
log:
  level: INFO
service:
  enabled: true
  type: LoadBalancer
  annotations: {}
  ports:
    web:
      port: 80
      protocol: TCP
      targetPort: web
    websecure:
      port: 443
      protocol: TCP
      targetPort: websecure
EOF
if helm status traefik --namespace traefik &> /dev/null; then
  echo "Traefik is already installed in the 'traefik' namespace. Upgrading..."
  helm upgrade traefik traefik/traefik --namespace traefik -f "$TMPFILE"
else
  echo "Installing Traefik..."
  helm repo add traefik https://traefik.github.io/charts
  helm repo update
  # Using --create-namespace is good practice, though traefik will always exist.
  helm install traefik traefik/traefik --namespace traefik --create-namespace -f "$TMPFILE"
fi
# Apply the TLS store configuration
kubectl apply -f k8s/traefik-tlsstore.yaml
if [ $? -ne 0 ]; then
  echo "Failed to apply TLS store configuration."
  exit 1
fi
echo
echo "To access the dashboard:"
echo "kubectl port-forward -n traefik \$(kubectl get pods -n traefik -l \"app.kubernetes.io/name=traefik\" -o name) 9000:9000"
echo "Then visit http://localhost:9000/dashboard/ in your browser"

View file

@@ -0,0 +1,54 @@
#!/usr/bin/env bash
echo
echo "vagrant longhorn installation"
echo
ssh-add ~/.ssh/vm*_key
source /home/vagrant/ansible/venv/bin/activate
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible --inventory-file /home/vagrant/ansible/ansible_inventory.ini -m ping vm1,vm2,vm3
if [ $? -ne 0 ]; then
  echo "Ansible ping failed. Please check your Vagrant VMs and network configuration."
  exit 1
fi
echo "Ansible ping successful."
# Check if there are any pods in the longhorn-system namespace
if kubectl -n longhorn-system get pods --no-headers 2>/dev/null | grep -q '^[^ ]'; then
  echo "Pods already exist in the longhorn-system namespace. Skipping installation."
  exit 0
fi
echo "Installing Longhorn prerequisites..."
# install_longhorn_prereqs.yaml
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ~/ansible/install_longhorn_prereqs.yaml --inventory-file /home/vagrant/ansible/ansible_inventory.ini
if [ $? -ne 0 ]; then
  echo "Ansible playbook failed. Please check the playbook and your inventory."
  exit 1
fi
echo "installing Longhorn ..."
# https://github.com/longhorn/longhorn/releases
# v1.8.1 in prod; 1.9.1 is latest
LONGHORN_RELEASE="v1.8.1"
LONGHORN_RELEASE_URL="https://raw.githubusercontent.com/longhorn/longhorn/$LONGHORN_RELEASE/deploy/longhorn.yaml"
echo "Applying Longhorn release $LONGHORN_RELEASE..."
echo "Using Longhorn release URL: $LONGHORN_RELEASE_URL"
kubectl apply -f $LONGHORN_RELEASE_URL
# Wait for all pods in longhorn-system namespace to be ready
echo "Waiting for Longhorn pods to be ready..."
while true; do
  not_ready=$(kubectl -n longhorn-system get pods --no-headers 2>/dev/null | grep -vE 'Running|Completed' | wc -l)
  total=$(kubectl -n longhorn-system get pods --no-headers 2>/dev/null | wc -l)
  if [[ $total -gt 0 && $not_ready -eq 0 ]]; then
    echo "All Longhorn pods are ready."
    break
  fi
  sleep 10
done

View file

@@ -0,0 +1,32 @@
#!/usr/bin/env bash
echo
echo "vagrant longhorn prerequisites"
echo
ssh-add ~/.ssh/vm*_key
source /home/vagrant/ansible/venv/bin/activate
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible --inventory-file /home/vagrant/ansible/ansible_inventory.ini -m ping vm1,vm2,vm3
if [ $? -ne 0 ]; then
  echo "Ansible ping failed. Please check your Vagrant VMs and network configuration."
  exit 1
fi
echo "Ansible ping successful."
# Check if there are any pods in the longhorn-system namespace
if kubectl -n longhorn-system get pods --no-headers 2>/dev/null | grep -q '^[^ ]'; then
  echo "Pods already exist in the longhorn-system namespace. Skipping installation."
  exit 0
fi
echo "Installing Longhorn prerequisites..."
# install_longhorn_prereqs.yaml
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ~/ansible/install_longhorn_prereqs.yaml --inventory-file /home/vagrant/ansible/ansible_inventory.ini
if [ $? -ne 0 ]; then
  echo "Ansible playbook failed. Please check the playbook and your inventory."
  exit 1
fi

View file

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
echo
echo "wait for longhorn installation"
echo
ssh-add ~/.ssh/vm*_key
source /home/vagrant/ansible/venv/bin/activate
while true; do
  not_ready=$(kubectl -n longhorn-system get pods --no-headers 2>/dev/null | grep -vE 'Running|Completed' | wc -l)
  total=$(kubectl -n longhorn-system get pods --no-headers 2>/dev/null | wc -l)
  if [[ $total -gt 0 && $not_ready -eq 0 ]]; then
    echo "All Longhorn pods are ready."
    break
  fi
  sleep 10
done
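
As written the loop never times out, so this step's retryCount: 10 in deployment_longhorn.json never comes into play. A bounded variant (a sketch, not part of this commit) would exit non-zero on timeout and let the pipeline's retry logic drive the re-checks:

# fails with a non-zero exit if pods are not Ready within the timeout, so the caller can retry
kubectl -n longhorn-system wait --for=condition=Ready pods --all --timeout=120s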