Compare commits


13 commits

Author SHA1 Message Date
jon brookes
2f11890779 feat: add .env file check and load it in startup script 2025-10-14 18:15:32 +01:00
jon brookes
a8f25e733c Merge branch 'feat/mvk-gcloud-template' of ssh://codeberg.org/headshed/infctl-cli into feat/mvk-gcloud-template 2025-10-14 18:05:46 +01:00
jon brookes
e0906c821d fix: change to repo dir 2025-10-14 18:01:55 +01:00
jon brookes
8f19558826 feat: update Forgejo deployment
add DNS update step and complete forgejo deployment after build
2025-10-14 17:52:00 +01:00
jon brookes
02b114e0e6 feat: add scripts for pre-flight checks and user input wait in k3s pipeline 2025-10-14 17:49:59 +01:00
jon brookes
f23e1c41ff feat: add .env file existence check and load it in startup script 2025-10-14 16:50:12 +01:00
jon brookes
b4c0f17b12 feat: add script to copy .env file to k3s-vm-1 after pre-flight checks 2025-10-14 16:32:05 +01:00
jon brookes
bb4d0cc701 feat: update Forgejo deployment URL and add installation check in startup script 2025-10-14 15:58:09 +01:00
jon brookes
8faa97a8bb feat: env INSTALL_LONGHORN
Add Ansible playbooks for Longhorn, MetalLB, and Traefik installation

conditional on presence of INSTALL_LONGHORN=true
2025-10-10 13:33:11 +01:00
jon brookes
80f4e5a53b fix: Update cert-manager
improve installation script and increase max readines retries for cert-manager
2025-10-08 15:03:24 +01:00
jon brookes
9fc84486a1 test cloudflare terraform dns updates 2025-10-04 12:24:03 +01:00
jon brookes
e0891f6c09 fix: Add create_tfvars script and update pipeline configuration 2025-10-03 15:46:30 +01:00
jon brookes
2ab7872af1 Add Google Cloud K3s infrastructure support
- Add Terraform configuration for GCP instance and storage
- Add startup script for K3s installation and configuration
- Add pipeline scripts for deployment and management
- Add Forgejo deployment manifests and configuration
2025-10-02 15:41:50 +01:00
42 changed files with 1443 additions and 384 deletions

3
.env.gcloud-example Normal file

@ -0,0 +1,3 @@
PROJECT_NAME="the name of your gcp project, often referred to as the project"
EMAIL="your email address to identify yourself with letsencrypt"
APP_DOMAIN_NAME="your domain name for the app, e.g., frgdr.some-domain.com"

9
.gitignore vendored

@ -24,3 +24,12 @@ vagrant/dev/ubuntu/ansible/ansible_inventory.ini
*.cast
vagrant/dev/ubuntu/certs/
vagrant/dev/ubuntu/config-dev
.terraform*
registry*.json*
terraform.tfstate**
*history*.txt
*.tfvars
gcloud/tf/.env
gcloud/tf/k3s/forgejo/issuer.yaml
gcloud/tf/k3s/forgejo/ingress.yaml
.env

378
README.md

@ -1,351 +1,81 @@
# INFCTL CLI
A command-line tool for automated deployment and management of an [MVK (Minimal Viable Kubernetes) infrastructure](https://mvk.headshed.dev/). The CLI orchestrates Kubernetes deployments by executing shell scripts and applying Kubernetes manifests through a JSON-defined pipeline approach.
## Table of Contents
- [Overview](#overview)
- [Features](#features)
- [Design Philosophy](#design-philosophy)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Configuration](#configuration)
- [Usage](#usage)
- [Pipeline Execution](#pipeline-execution)
- [Scripts Directory](#scripts-directory)
- [Kubernetes Manifests](#kubernetes-manifests)
- [Project Structure](#project-structure)
- [Development](#development)
- [Contributing](#contributing)
- [License](#license)
## Overview
INFCTL CLI is a Go-based deployment orchestrator that automates the setup and deployment of INFCTL applications in Kubernetes environments. The tool executes a series of predefined scripts from the `scripts/` directory and applies Kubernetes manifests from the `k8s-manifests/` directory using kubectl and kustomize, all defined in a JSON pipeline file.
## Features
- **JSON-Defined Pipeline Execution**: Runs deployment scripts and manifests in an order specified in a JSON pipeline file
- **Script Orchestration**: Executes shell scripts from the `scripts/` directory for various setup tasks
- **Kustomize Integration**: Applies Kubernetes manifests using kubectl kustomize
- **Namespace Management**: Automatically creates required Kubernetes namespaces
- **Secret Management**: Automated creation of secrets for databases, SMTP, AWS, etc.
- **ConfigMap Management**: Creates and manages application configuration maps
- **Infrastructure Setup**: Installs and configures cert-manager, Traefik, Longhorn, and PostgreSQL operators
- **Retry Logic**: Built-in retry mechanism for failed operations
- **Structured Logging**: JSON-based logging with debug support
- **Integrated Testing**: Includes smoke tests using k3d for validation
## Design Philosophy
infctl-cli is built on principles derived from over 20 years of experience tackling deployment and orchestration challenges. The design is inspired by a "plugin" mentality, where each plugin is essentially a script. This approach emphasizes simplicity and modularity, allowing each script to act as an independent unit of execution.
Key principles include:
- **Script-Based Orchestration**: Each script or program, when executed, returns an exit code that indicates success or failure. This exit code is used to determine the next steps in the pipeline, enabling robust and predictable orchestration.
- **Structured Logging**: Scripts produce structured logs that can be consumed by web interfaces or stored in a database. This ensures transparency and traceability, making it easier to debug and monitor deployments.
- **Modularity and Reusability**: By treating scripts as plugins, the system encourages reusability and flexibility. New functionality can be added by simply introducing new scripts without altering the core logic.
- **UNIX Philosophy**: The design adheres to the UNIX philosophy of building small, composable tools that do one thing well. Each script is a self-contained unit that performs a specific task.
This philosophy underpins infctl's ability to orchestrate complex deployments while remaining simple and extensible.
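To make the exit-code contract concrete, here is a minimal Go sketch of this style of orchestration (illustrative only, not the actual infctl-cli source; the script paths are hypothetical):
```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runStep executes one plugin script and reports success or failure
// purely through the process exit code -- the contract described above.
func runStep(script string) error {
	cmd := exec.Command(script)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run() // non-nil error when the script exits non-zero
}

func main() {
	// Hypothetical scripts, for illustration only.
	steps := []string{"./scripts/step_one.sh", "./scripts/step_two.sh"}
	for _, s := range steps {
		if err := runStep(s); err != nil {
			fmt.Printf("step %s failed: %v; aborting pipeline\n", s, err)
			os.Exit(1)
		}
	}
	fmt.Println("pipeline complete")
}
```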
## Prerequisites
- Go 1.23.3 or later
- Bash shell environment
- For running tests: k3d installed
- Kubernetes cluster with kubectl configured (for actual deployments)
- Required Kubernetes operators (installed by the tool for K8s deployments):
- cert-manager
- Traefik ingress controller
- PostgreSQL operator (Crunchy Data)
- Longhorn storage
## Installation
### Option 1: Download Pre-built Binary
You can install by running `curl -L https://codeberg.org/headshed/infctl-cli/raw/branch/chore/code-refactor/install.sh | bash`, or manually download the pre-built binary for your platform from the [releases page](https://codeberg.org/headshed/infctl-cli/releases).
1. Download the binary for your platform:
- **Linux**:
```bash
wget https://codeberg.org/headshed/infctl-cli/releases/download/v0.0.2/infctl-linux-amd64 -O /usr/local/bin/infctl
```
- **Windows**:
Download the `.exe` file from the [releases page](https://codeberg.org/headshed/infctl-cli/releases) and place it in a directory included in your `PATH`.
- **macOS (Intel)**:
```bash
wget https://codeberg.org/headshed/infctl-cli/releases/download/v0.0.2/infctl-darwin-amd64 -O /usr/local/bin/infctl
```
- **macOS (Apple Silicon)**:
```bash
wget https://codeberg.org/headshed/infctl-cli/releases/download/v0.0.2/infctl-darwin-arm64 -O /usr/local/bin/infctl
```
2. Make the binary executable:
```bash
chmod +x /usr/local/bin/infctl
```
### Option 2: Clone Repository and Build Binary
1. Clone the repository:
```bash
git clone <repository-url>
cd infctl-cli
```
2. Build the application:
```bash
go mod download
go build -o infctl-cli .
```
### Quick start example
```bash
# Copy configuration examples
cp base.json.example base.json
cp config.json.example config.json
cp pipeline.json.example pipeline.json
# Edit configuration files as needed, using your preferred editor (vim, nano, vi, emacs, etc.)
# vim base.json
# vim config.json
# vim pipeline.json
# Run with pipeline file
./infctl-cli --deployment-file pipeline.json
# or using the short format
./infctl-cli -f pipeline.json
```
## Configuration
The infctl-cli requires three configuration files:
### Base Configuration (`base.json`)
Copy and customize the base configuration:
```bash
cp base.json.example base.json
```
Key configuration options:
- `projects_directory`: Base directory for project deployments
- `app_image`: Docker image for the INFCTL application
- `webserver_image`: Docker image for the web server
- `env`: Environment file path
- `preview_path`: Path for preview functionality
### Project Configuration (`config.json`)
Copy and customize the project-specific configuration:
```bash
cp config.json.example config.json
```
Key configuration options:
- `project`: Project name/identifier (used as Kubernetes namespace)
- `project_directory`: Project-specific directory
- `ui_url`: UI service URL
- `static_url`: Static content URL
- `port`: Service port
### Pipeline Configuration (`pipeline.json`)
Copy and customize the pipeline definition:
```bash
cp pipeline.json.example pipeline.json
```
This file defines the sequence of operations to be executed, including:
- Scripts to run
- Kubernetes manifests to apply
- Order of operations
- Specific deployment type configuration
## Usage
Run the CLI by providing a path to your pipeline JSON file:
```bash
./infctl-cli --deployment-file /path/to/pipeline.json
# or using the short format
./infctl-cli -f /path/to/pipeline.json
```
The tool will automatically:
1. Load base and project configurations
2. Initialize SQLite database for state management
3. Execute the deployment pipeline defined in your JSON file
4. Run scripts from the `scripts/` directory
5. Apply Kubernetes manifests using kustomize (for K8s deployments)
### Command Line Options
- `--deployment-file <path>` or `-f <path>`: Path to the pipeline JSON configuration file
- `--help`: Show help message and usage information
### Running from Source
You can also run directly with Go:
```bash
go run main.go --deployment-file /path/to/pipeline.json
# or using the short format
go run main.go -f /path/to/pipeline.json
```
### Running Tests
The project includes smoke tests using k3d for validation:
```bash
# Run all tests
go test ./... -v
# Run specific test
go test ./app -run TestRunPipeline
```
## Pipeline Execution
The CLI executes deployment tasks defined in your pipeline.json file, which typically includes:
### Infrastructure Setup Pipeline
Sets up the Kubernetes cluster infrastructure:
- Creates required namespaces (project, redis, traefik, metallb-system, longhorn-system, cert-manager)
- Installs Traefik ingress controller
- Sets up PostgreSQL operator (Crunchy Data)
- Configures cert-manager and Redis
### Application Deployment Pipeline
Deploys the INFCTL application:
- Creates database secrets and configurations
- Sets up SMTP secrets
- Creates application secrets and config maps
- Applies INFCTL Kubernetes manifests
- Configures ingress and networking
## Scripts Directory
The `scripts/` directory contains shell scripts executed by the CLI:
### Infrastructure Scripts
- `install_traefik.sh` - Installs Traefik ingress controller
- `install_cert-manager.sh` - Installs cert-manager
- `install_longhorn.sh` - Installs Longhorn storage
- `install_cloudnative_pg.sh` - Installs PostgreSQL operator
### Database Scripts
- `create_crunchy_operator.sh` - Sets up PostgreSQL operator
- `check_crunchy_operator.sh` - Verifies operator installation
- `create_crunchy_db.sh` - Creates PostgreSQL database
- `create_crunchy_inf_secrets.sh` - Creates database secrets
### Application Scripts
- `create_app_secret_inf.sh` - Creates application secrets
- `create_smtp_inf_secrets.sh` - Creates SMTP configuration
- `create_init_configmap_inf.sh` - Creates initialization config maps
- `create_nginx_configmap_inf.sh` - Creates Nginx configuration
- `create_php_configmap_inf.sh` - Creates PHP configuration
### Cloud Provider Scripts
- `create_aws_secrets.sh` - Creates AWS secrets
- `create_cloudflare_secret.sh` - Creates Cloudflare secrets
- `create_redis_secret.sh` - Creates Redis secrets
## Kubernetes Manifests
The `k8s-manifests/` directory contains Kubernetes resources applied via kustomize:
### INFCTL Application (`k8s-manifests/inf/`)
- `deployment.yaml` - Main application deployment
- `pvc.yaml` - Persistent volume claims
- `kustomization.yaml` - Kustomize configuration
### INFCTL Ingress (`k8s-manifests/inf-ingress/`)
- `ingress.yaml` - Ingress rules
- `service.yaml` - Service definitions
- `issuer.yaml` - Certificate issuer
- `kustomization.yaml` - Kustomize configuration
`infctl-cli` is a Go command-line tool for orchestrating deployment pipelines using shell scripts and Kubernetes manifests. The tool is configured via JSON files and executes tasks as defined in a pipeline configuration.
## Project Structure
```
infctl-cli/
├── main.go                 # Application entry point
├── go.mod                  # Go module definition
├── app/                    # Core application logic
│   ├── app.go              # Pipeline orchestration and state management
│   └── k8s.go              # Kubernetes operations (kubectl, kustomize)
├── config/                 # Configuration management
│   ├── base.go             # Base configuration handling
│   └── project.go          # Project configuration handling
├── docs/                   # Documentation
│   ├── API_REFERENCE.md    # API reference
│   └── CONFIG_SCHEMA.md    # Config schema
├── scripts/                # Shell scripts executed by the CLI
├── k8s-manifests/          # Kubernetes manifests
├── templates/              # Template files
└── files/                  # Static configuration files
```
## Development
### Building from Source
Build the CLI:
```bash
go mod download
go build -o infctl-cli .
```
### Running with Debug Logging
The CLI uses structured JSON logging. Debug logs are enabled by default and include detailed information about script execution and kustomize operations.
## Configuration Files
Three JSON files are used:
- `base.json`: Base configuration (e.g., `projects_directory`, images, environment file path)
- `config.json`: Project-specific configuration (e.g., project name, directory, URLs, port)
- `pipeline.json`: Defines the sequence of scripts and manifests to execute
Example configuration files are provided as `.example` files in the repository.
## Usage
Run the CLI with a pipeline file:
```bash
./infctl-cli --deployment-file pipeline.json
# or
./infctl-cli -f pipeline.json
```
### Adding New Scripts
1. Place shell scripts / executables in the `scripts/` directory
2. Add configuration as appropriate into `pipeline.json`
3. Re-run `infctl-cli --deployment-file pipeline.json` or `infctl-cli -f pipeline.json`
The CLI will:
1. Load base and project configuration
2. Initialize SQLite database for state management
3. Execute the pipeline defined in the JSON file
4. Run scripts from the `scripts/` directory
5. Apply Kubernetes manifests from the `k8s-manifests/` directory
### Adding New Manifests
1. Create Kubernetes YAML files in the appropriate `k8s-manifests/` subdirectory
2. Include a `kustomization.yaml` file for kustomize processing
3. Add configuration as appropriate into `pipeline.json`
4. Re-run `infctl-cli --deployment-file pipeline.json` or `infctl-cli -f pipeline.json`
## Scripts
Shell scripts in `scripts/` are executed as defined in the pipeline configuration. Scripts are responsible for infrastructure setup, secret creation, configmap generation, and other deployment tasks. The pipeline JSON determines the order and parameters for each script.
## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Kubernetes Manifests
Manifests in `k8s-manifests/` are applied using kubectl and kustomize. The pipeline configuration specifies which manifests to apply and in what order.
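As an illustration, applying a manifest directory can be as simple as shelling out to `kubectl apply -k` (a sketch under that assumption; the directory name is an example, and the real `k8s.go` may differ):
```go
package main

import (
	"os"
	"os/exec"
)

// applyKustomize shells out to kubectl to apply a kustomize directory.
func applyKustomize(dir string) error {
	cmd := exec.Command("kubectl", "apply", "-k", dir)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Example manifest directory from k8s-manifests/.
	if err := applyKustomize("k8s-manifests/inf"); err != nil {
		os.Exit(1)
	}
}
```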
## Testing
Run tests with:
```bash
go test ./... -v
```
## License
This project is licensed under the GNU General Public License v3.0. See the [LICENSE](./LICENSE) file for details.

50
docs/API_REFERENCE.md Normal file

@ -0,0 +1,50 @@
# API Reference
This document describes the API and pipeline functions available in `infctl-cli`.
## PipelineStep Structure
Each pipeline step is defined as:
- `name`: Step name (string)
- `function`: Function to call (string)
- `params`: List of parameters (array of strings)
- `retryCount`: Number of retries (integer)
- `shouldAbort`: Whether to abort on failure (boolean)
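Assuming the JSON keys above map one-to-one onto struct fields, a step could be modeled in Go roughly as follows (a sketch for illustration, not the actual type in the codebase):
```go
package main

import (
	"encoding/json"
	"fmt"
)

// PipelineStep mirrors one entry in pipeline.json. Field and tag
// names are assumptions based on the keys documented above.
type PipelineStep struct {
	Name        string   `json:"name"`
	Function    string   `json:"function"`
	Params      []string `json:"params"`
	RetryCount  int      `json:"retryCount"`
	ShouldAbort bool     `json:"shouldAbort"`
}

func main() {
	raw := `[{"name":"ensure inf namespace exists","function":"k8sNamespaceExists","params":["infctl"],"retryCount":0,"shouldAbort":true}]`
	var steps []PipelineStep
	if err := json.Unmarshal([]byte(raw), &steps); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", steps[0])
}
```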
## Available Functions
### k8sNamespaceExists
Checks if a Kubernetes namespace exists.
- Params: `[namespace]` (string)
- Returns: error if namespace does not exist
### RunCommand
Runs a shell command.
- Params: `[command]` (string)
- Returns: error if command fails
## Example Pipeline JSON
```json
[
{
"name": "ensure inf namespace exists",
"function": "k8sNamespaceExists",
"params": ["infctl"],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "create php configmap",
"function": "RunCommand",
"params": ["./scripts/create_php_configmap_ctl.sh"],
"retryCount": 0,
"shouldAbort": true
}
]
```
## Notes
- Only functions defined in the codebase are available for use in pipelines.
- The API does not expose any HTTP endpoints; all orchestration is via CLI and pipeline JSON.
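One way such a restriction is commonly enforced is a lookup table from pipeline `function` names to Go functions. The sketch below illustrates the idea with stub implementations; names and signatures are assumptions, not the actual infctl-cli code:
```go
package main

import "fmt"

// Stubs standing in for the documented functions.
func runCommand(params []string) error         { fmt.Println("would run:", params); return nil }
func k8sNamespaceExists(params []string) error { fmt.Println("would check namespace:", params); return nil }

// registry maps pipeline "function" names to implementations;
// a name missing from this table cannot be used in a pipeline.
var registry = map[string]func([]string) error{
	"RunCommand":         runCommand,
	"k8sNamespaceExists": k8sNamespaceExists,
}

func main() {
	fn, ok := registry["RunCommand"]
	if !ok {
		fmt.Println("unknown function")
		return
	}
	_ = fn([]string{"./scripts/create_php_configmap_ctl.sh"})
}
```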

51
docs/CONFIG_SCHEMA.md Normal file

@ -0,0 +1,51 @@
# Configuration Schema
This document describes the configuration schema for `infctl-cli`.
## Base Configuration (`base.json`)
Example:
```json
{
"retry_delay_seconds": 3
}
```
- `retry_delay_seconds` (integer): Delay in seconds before retrying failed steps.
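For illustration, a retry loop combining a step's `retryCount` with `retry_delay_seconds` might look like this Go sketch (an assumption about how the two values interact, not the actual implementation):
```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs step once, then up to retryCount additional times,
// sleeping retryDelaySeconds between attempts.
func retry(step func() error, retryCount, retryDelaySeconds int) error {
	for attempt := 0; ; attempt++ {
		err := step()
		if err == nil {
			return nil
		}
		if attempt >= retryCount {
			return err // out of retries
		}
		fmt.Printf("attempt %d failed (%v); retrying in %ds\n", attempt+1, err, retryDelaySeconds)
		time.Sleep(time.Duration(retryDelaySeconds) * time.Second)
	}
}

func main() {
	err := retry(func() error { return errors.New("boom") }, 2, 3)
	fmt.Println("final result:", err)
}
```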
## Project Configuration (`config.json`)
Project configuration fields are defined in the code and may include:
- Project name
- Directory paths
- URLs
- Port numbers
- Log format
Refer to the code for exact field names and types.
## Pipeline Configuration (`pipeline.json`)
Pipeline configuration is an array of steps. Each step:
- `name`: Step name (string)
- `function`: Function to call (string)
- `params`: List of parameters (array of strings)
- `retryCount`: Number of retries (integer)
- `shouldAbort`: Whether to abort on failure (boolean)
Example:
```json
[
{
"name": "ensure inf namespace exists",
"function": "k8sNamespaceExists",
"params": ["infctl"],
"retryCount": 0,
"shouldAbort": true
}
]
```
## Notes
- Example configuration files are provided as `.example` files in the repository.
- All configuration fields must match those defined in the codebase; do not add undocumented fields.

10
gcloud/tf/Dockerfile Normal file

@ -0,0 +1,10 @@
FROM python:3.12-slim
# Install dependencies
RUN pip install --no-cache-dir gunicorn httpbin
# Expose the application port
EXPOSE 80
# Launch the application
CMD ["gunicorn", "-b", "0.0.0.0:80", "httpbin:app"]

16
gcloud/tf/firewall.tf Normal file

@ -0,0 +1,16 @@
// Firewall
// ----------------------------------
resource "google_compute_firewall" "allow_http" {
name = "allow-http"
network = "default"
allow {
protocol = "tcp"
ports = [
"80", "443" // http/https
]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["web"]
}

View file

@ -0,0 +1,68 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: forgejo-deployment
namespace: forgejo
labels:
app: forgejo-app
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: forgejo-app
template:
metadata:
labels:
app: forgejo-app
spec:
terminationGracePeriodSeconds: 10
containers:
- name: forgejo
image: codeberg.org/forgejo/forgejo:11.0.6
imagePullPolicy: IfNotPresent
env:
- name: FORGEJO__repository__ENABLE_PUSH_CREATE_USER
value: "true"
- name: FORGEJO__server__ROOT_URL
value: "https://frgdr.headshed.dev/"
- name: FORGEJO__repository__DEFAULT_BRANCH
value: "main"
- name: FORGEJO__server__LFS_START_SERVER
value: "true"
- name: FORGEJO__security__INSTALL_LOCK
value: "true"
- name: FORGEJO__service__DISABLE_REGISTRATION
value: "false"
ports:
- name: http
containerPort: 3000
protocol: TCP
- name: ssh
containerPort: 22
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "500m"
tty: true
volumeMounts:
- name: forgejo-data
mountPath: /data
# - name: forgejo-timezone
# mountPath: /etc/timezone
# - name: forgejo-localtime
# mountPath: /etc/localtime
volumes:
- name: forgejo-data
persistentVolumeClaim:
claimName: forgejo-data-pvc
# - name: forgejo-timezone
# configMap:
# name: forgejo-timezone
# - name: forgejo-localtime
# configMap:
# name: forgejo-localtime

View file

@ -0,0 +1,24 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-forgejo-ingress-http
namespace: forgejo
annotations:
cert-manager.io/issuer: "le-cluster-issuer-http"
spec:
tls:
- hosts:
- ${APP_DOMAIN_NAME}
secretName: tls-frg-ingress-http
rules:
- host: ${APP_DOMAIN_NAME}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: forgejo-app-service
port:
name: web

View file

@ -0,0 +1,17 @@
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: le-cluster-issuer-http
namespace: forgejo
spec:
acme:
email: ${EMAIL}
# We use the staging server here for testing to avoid throttling.
server: https://acme-staging-v02.api.letsencrypt.org/directory
# server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: http-issuer-account-key
solvers:
- http01:
ingress:
class: traefik

View file

@ -0,0 +1,26 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: forgejo-local-pv
spec:
capacity:
storage: 3Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /mnt/disks/app-data/forgejo
storageClassName: local-path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: forgejo-data-pvc
namespace: forgejo
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
volumeName: forgejo-local-pv
storageClassName: local-path

View file

@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: forgejo-app-service
namespace: forgejo
spec:
selector:
app: forgejo-app
ports:
- name: web
protocol: TCP
port: 3000
targetPort: 3000

95
gcloud/tf/main.tf Normal file

@ -0,0 +1,95 @@
// Compute
// ----------------------------------
// The instance for K3S
resource "google_compute_instance" "k3s" {
name = "k3s-vm-1"
machine_type = "e2-small" # This instance will have 2 GB of RAM
zone = var.zone
tags = ["web"]
// Set the boot disk and the image (10 GB)
boot_disk {
initialize_params {
image = "debian-cloud/debian-12"
size = 10
}
}
// ensures that the instance is a Spot VM
// means it can be preempted, but it's cheaper
# scheduling {
# automatic_restart = false
# provisioning_model = "SPOT"
# preemptible = true
# }
// attach a disk for K3S
attached_disk {
source = google_compute_disk.k3s_disk.id
device_name = "k3s-disk"
}
// attach a disk for app data
attached_disk {
source = google_compute_disk.app_data_disk.id
device_name = "app-data-disk"
}
network_interface {
network = "default"
// enable ephemeral ip
access_config {}
}
labels = {
env = var.env
region = var.region
app = var.app_name
sensitive = "false"
}
metadata_startup_script = file("scripts/k3s-vm-startup.sh")
allow_stopping_for_update = true
}
// Storage
// ----------------------------------
// The disk attached to the instance (15 GB)
resource "google_compute_disk" "k3s_disk" {
name = "k3s-disk"
size = 15
type = "pd-standard"
zone = var.zone
}
// The disk for app data (20 GB)
resource "google_compute_disk" "app_data_disk" {
name = "app-data-disk"
size = 20
type = "pd-standard"
zone = var.zone
}
// Outputs
// ----------------------------------
data "google_project" "project" {
project_id = var.project_name # Use variable from tfvars
}
output "project_number" {
value = data.google_project.project.number
}
output "k3s_vm_public_ip" {
value = google_compute_instance.k3s.network_interface[0].access_config[0].nat_ip
description = "Ephemeral public IP of the k3s VM"
}

64
gcloud/tf/provider.tf Normal file

@ -0,0 +1,64 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 4.0"
}
# cloudflare = {
# source = "cloudflare/cloudflare"
# version = "~> 5"
# }
}
}
// Provider
// ----------------------------------
// Connect to the GCP project
provider "google" {
# Configuration options
project = var.project_name # Use variable from tfvars
region = "us-central1" # Replace with your desired region
}
# provider "google" {
# credentials = file("<my-gcp-creds>.json")
# project = var.project_name
# region = var.region
# zone = var.zone
# }
# provider "cloudflare" {
# api_token = var.cloudflare_api_token
# }
# variable "cloudflare_api_token" {
# description = "Cloudflare API token"
# sensitive = true
# }
# variable "cloudflare_account_id" {
# description = "Cloudflare Account ID"
# sensitive = true
# }
# variable "cloudflare_zone_id" {
# description = "Cloudflare Zone ID"
# sensitive = true
# }
# variable "cloudflare_domain" {
# description = "Cloudflare Domain"
# sensitive = true
# }
# resource "cloudflare_dns_record" "frgdr" {
# zone_id = var.cloudflare_zone_id
# name = "frgdr"
# content = google_compute_instance.k3s.network_interface[0].access_config[0].nat_ip
# type = "A"
# ttl = 300
# proxied = false
# comment = "Application domain record"
# }

14
gcloud/tf/registry.tf Normal file

@ -0,0 +1,14 @@
// Registry
// ----------------------------------
// The Artifact Registry repository for our app
resource "google_artifact_registry_repository" "app-repo" {
location = var.region
repository_id = "app-repo"
description = "App Docker repository"
format = "DOCKER"
docker_config {
immutable_tags = true
}
}

37
gcloud/tf/remote_state.tf Normal file

@ -0,0 +1,37 @@
// Remote state
// ----------------------------------
# variable "bucket_name" {
# type = string
# default = "your-project-name-k3s-bucket"
# description = "your-project-name k3s Bucket"
# }
# terraform {
# # Use a shared bucket (which allows collaborative work)
# backend "gcs" {
# bucket = "<my-bucket-for-states>"
# prefix = "k3s-infra"
# }
# // Set versions
# required_version = ">=1.8.0"
# required_providers {
# google = {
# source = "hashicorp/google"
# version = ">=4.0.0"
# }
# }
# }
// The bucket where you can store other data
# resource "google_storage_bucket" "k3s-storage" {
# name = var.bucket_name
# location = var.region
# labels = {
# env = var.env
# region = var.region
# app = var.app_name
# sensitive = "false"
# }
# }

View file

@ -0,0 +1,56 @@
[
{
"name": "run pre-flight checks",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/pre-flight-checks.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "list gcloud infrastructure",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/list_gloud_infra.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "create tfvars",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/create_tfvars.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "run tofu",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/run_tofu.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "wait for user input to continue",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/wait_for_user_input_dns.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "copy .env to k3s-vm-1",
"function": "RunCommand",
"params": [
"gcloud/tf/scripts/copy_env_to_first_node.sh"
],
"retryCount": 0,
"shouldAbort": true
}
]

View file

@ -0,0 +1,47 @@
[
{
"name": "run pre-flight checks",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/pre-flight-checks.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "list gcloud infrastructure",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/list_gloud_infra.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "create tfvars",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/create_tfvars.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "run tofu",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/run_tofu.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "copy .env to k3s-vm-1",
"function": "RunCommand",
"params": [
"gcloud/tf/scripts/copy_env_to_first_node.sh"
],
"retryCount": 0,
"shouldAbort": true
}
]

View file

@ -0,0 +1,34 @@
#!/usr/bin/env bash
if kubectl -n cert-manager get pods 2>/dev/null | grep -q 'Running'; then
echo "cert-manager pods already running. Skipping installation."
exit 0
fi
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.2/cert-manager.yaml
echo "Waiting for cert-manager pods to be in 'Running' state..."
MAX_RETRIES=10
RETRY=0
while [ $RETRY -lt $MAX_RETRIES ]; do
NOT_READY_PODS=$(kubectl -n cert-manager get pods --no-headers | grep -v 'Running' | wc -l)
if [ "$NOT_READY_PODS" -eq 0 ]; then
echo "All cert-manager pods are running."
break
else
echo "$NOT_READY_PODS pods are not ready yet. Waiting..."
RETRY=$((RETRY + 1))
sleep 5
fi
done
if [ "$NOT_READY_PODS" -ne 0 ]; then
echo "Failed to get all cert-manager pods running after $MAX_RETRIES attempts."
exit 1
fi

View file

@ -0,0 +1,31 @@
#!/usr/bin/env bash
source .env
for i in {1..10}; do
# Check if the instance is running
INSTANCE_STATUS=$(gcloud compute instances describe k3s-vm-1 --zone=us-central1-a --project="$PROJECT_NAME" --format='get(status)')
if [[ "$INSTANCE_STATUS" != "RUNNING" ]]; then
echo "Instance k3s-vm-1 is not running. Attempt $i/10. Waiting 5 seconds..."
sleep 5
continue
fi
# Check if the directory exists on the remote host
if gcloud compute ssh k3s-vm-1 --zone=us-central1-a --project="$PROJECT_NAME" --command="test -d /opt/src/infctl-cli/"; then
echo "/opt/src/infctl-cli/ exists on k3s-vm-1."
break
else
echo "/opt/src/infctl-cli/ does not exist yet. Attempt $i/10. Waiting 5 seconds..."
sleep 5
fi
done
# Final check after loop
if ! gcloud compute ssh k3s-vm-1 --zone=us-central1-a --project="$PROJECT_NAME" --command="test -d /opt/src/infctl-cli/"; then
echo "ERROR: /opt/src/infctl-cli/ does not exist on k3s-vm-1 after 10 attempts. Exiting."
exit 1
fi
gcloud compute scp .env k3s-vm-1:/opt/src/infctl-cli/.env --zone=us-central1-a --project="$PROJECT_NAME"

View file

@ -0,0 +1,32 @@
#!/bin/bash
set -a
# read environment variables from .env file
# for value $PROJECT_NAME
. .env
# Check if PROJECT_NAME environment variable is set
if [ -z "$PROJECT_NAME" ]; then
echo "Error: PROJECT_NAME environment variable is not set."
echo "Please set the PROJECT_NAME variable and try again."
exit 1
fi
# Get the directory where the script is located
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
cd "$SCRIPT_DIR" || { echo "Failed to change directory to $SCRIPT_DIR"; exit 1; }
# Define the template file path and output file path
TEMPLATE_FILE="../terraform.tfvars.template"
OUTPUT_FILE="../terraform.tfvars"
# Use envsubst to substitute the PROJECT_NAME variable into the template
envsubst < "$TEMPLATE_FILE" > "$OUTPUT_FILE"
if [ $? -ne 0 ]; then
echo "Error: Failed to substitute variables in the template."
exit 1
fi
echo "tfvars has been created at $OUTPUT_FILE"

View file

@ -0,0 +1,30 @@
#!/bin/bash
set -a
# read environment variables from .env file
# for value of APP_DOMAIN_NAME
. .env
if [ -z "$APP_DOMAIN_NAME" ]; then
echo "Error: APP_DOMAIN_NAME environment variable is not set. Please set it in the .env file."
exit 1
fi
# Get the directory where the script is located
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
cd "$SCRIPT_DIR" || { echo "Failed to change directory to $SCRIPT_DIR"; exit 1; }
# Define the template file path and output file path
TEMPLATE_FILE="../../k3s/forgejo/ingress.yaml.template"
OUTPUT_FILE="../../k3s/forgejo/ingress.yaml"
# Use envsubst to substitute the APP_DOMAIN_NAME variable into the template
envsubst < "$TEMPLATE_FILE" > "$OUTPUT_FILE"
if [ $? -ne 0 ]; then
echo "Error: Failed to substitute variables in the template."
exit 1
fi
echo "Ingress configuration has been created at $OUTPUT_FILE"

View file

@ -0,0 +1,33 @@
#!/bin/bash
set -a
# read environment variables from .env file
# for value of EMAIL
. .env
# Check if EMAIL environment variable is set
if [ -z "$EMAIL" ]; then
echo "Error: EMAIL environment variable is not set."
echo "Please set the EMAIL variable and try again."
exit 1
fi
# Get the directory where the script is located
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
cd "$SCRIPT_DIR" || { echo "Failed to change directory to $SCRIPT_DIR"; exit 1; }
# Define the template file path and output file path
TEMPLATE_FILE="../../k3s/forgejo/issuer.yaml.template"
OUTPUT_FILE="../../k3s/forgejo/issuer.yaml"
# Use envsubst to substitute the EMAIL variable into the template
envsubst < "$TEMPLATE_FILE" > "$OUTPUT_FILE"
if [ $? -ne 0 ]; then
echo "Error: Failed to substitute variables in the template."
exit 1
fi
echo "Issuer configuration has been created at $OUTPUT_FILE"

View file

@ -0,0 +1,45 @@
#!/bin/bash
set -e
echo "Installing Forgejo"
# Get the directory where the script is located
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
# Define namespace
NAMESPACE="forgejo"
MANIFESTS_DIR="${SCRIPT_DIR}/../../k3s/forgejo"
echo "Creating namespace..."
if ! kubectl get namespace "${NAMESPACE}" >/dev/null 2>&1; then
kubectl create namespace "${NAMESPACE}"
else
echo "Namespace '${NAMESPACE}' already exists."
fi
echo "Creating PersistentVolumeClaim..."
kubectl apply -f ${MANIFESTS_DIR}/pvc.yaml
echo "Creating Service..."
kubectl apply -f ${MANIFESTS_DIR}/service.yaml
echo "Creating Deployment..."
kubectl apply -f ${MANIFESTS_DIR}/deployment.yaml
echo "Creating Certificate Issuer..."
kubectl apply -f ${MANIFESTS_DIR}/issuer.yaml
echo "Creating Ingress..."
kubectl apply -f ${MANIFESTS_DIR}/ingress.yaml
echo "Forgejo installation complete."
echo "Verify deployment with: kubectl -n ${NAMESPACE} get pods,svc,ingress,pvc"
exit;
# Note: The ingressTCP.yaml is for a different application (galene) and should be applied separately
# echo "Note: The ingressTCP.yaml is for the galene application and has not been applied."

View file

@ -0,0 +1,47 @@
[
{
"name": "install cert-manager",
"function": "RunCommand",
"params": [
"gcloud/tf/scripts/cert-manager/install_cert-manager.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "install traefik",
"function": "RunCommand",
"params": [
"gcloud/tf/scripts/install_traefik.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "create forgejo ingress",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/forgejo/create_ingress.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "create forgejo issuer",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/forgejo/create_issuer.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "install forgejo",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/forgejo/install_forgejo.sh"
],
"retryCount": 0,
"shouldAbort": true
}
]

View file

@ -0,0 +1,64 @@
#!/usr/bin/env bash
# Exit immediately if a command exits with a non-zero status.
set -e
TMPFILE=$(mktemp /tmp/traefik-values-XXXXXX.yaml)
cat > "$TMPFILE" <<EOF
ingressClass:
enabled: true
isDefaultClass: true
ports:
web:
port: 80
hostPort: 80
websecure:
port: 443
hostPort: 443
traefik:
port: 9000
api:
dashboard: true
insecure: true
ingressRoute:
dashboard:
enabled: true
ping: true
log:
level: INFO
service:
enabled: true
type: ClusterIP
annotations: {}
ports:
web:
port: 80
protocol: TCP
targetPort: web
websecure:
port: 443
protocol: TCP
targetPort: websecure
EOF
if helm status traefik --namespace traefik &> /dev/null; then
echo "Traefik is already installed in the 'traefik' namespace. Upgrading..."
helm upgrade traefik traefik/traefik --namespace traefik -f "$TMPFILE"
else
echo "Installing Traefik..."
helm repo add traefik https://traefik.github.io/charts
helm repo update
# Using --create-namespace is good practice, in case the 'traefik' namespace does not already exist.
helm install traefik traefik/traefik --namespace traefik --create-namespace -f "$TMPFILE"
fi
# echo
# echo "To access the dashboard:"
# echo "kubectl port-forward -n traefik \$(kubectl get pods -n traefik -l \"app.kubernetes.io/name=traefik\" -o name) 9000:9000"
# echo "Then visit http://localhost:9000/dashboard/ in your browser"

View file

@ -0,0 +1,134 @@
#!/bin/bash
# Redirect all output to a log file for reliability
exec > /tmp/startup.log 2>&1
INFCTL_GIT_REPO="https://codeberg.org/headshed/infctl-cli.git"
INFCTL_GIT_REPO_BRANCH="main"
INFCTL_INSTALL_DIR="/opt/src"
# ensure only run once
if [[ -f /etc/startup_was_launched ]]; then exit 0; fi
touch /etc/startup_was_launched
# Format the k3s disk if not already formatted
# This creates an ext4 filesystem on the specified
# disk with no reserved space for root, forces the operation,
# fully initializes inode tables and the journal, and enables
# discard/TRIM for better performance on SSDs or
# thin-provisioned storage.
if ! lsblk | grep -q "/var/lib/rancher/k3s"; then
mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-k3s-disk
mkdir -p /var/lib/rancher/k3s
mount -o discard,defaults /dev/disk/by-id/google-k3s-disk /var/lib/rancher/k3s
chmod a+w /var/lib/rancher/k3s
fi
# A disk named k3s-disk in your Terraform configuration will
# appear as /dev/disk/by-id/google-k3s-disk.
# Format the app-data-disk if not already formatted
if ! lsblk | grep -q "/mnt/disks/app-data"; then
mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-app-data-disk
mkdir -p /mnt/disks/app-data
mount -o discard,defaults /dev/disk/by-id/google-app-data-disk /mnt/disks/app-data
chmod a+w /mnt/disks/app-data
fi
# Similarly, a disk named app-data-disk will appear as /dev/
# disk/by-id/google-app-data-disk.
# Add to /etc/fstab for persistence (only if not already present)
if ! grep -q "/var/lib/rancher/k3s" /etc/fstab; then
echo "/dev/disk/by-id/google-k3s-disk /var/lib/rancher/k3s ext4 defaults,discard 0 0" >> /etc/fstab
fi
if ! grep -q "/mnt/disks/app-data" /etc/fstab; then
echo "/dev/disk/by-id/google-app-data-disk /mnt/disks/app-data ext4 defaults,discard 0 0" >> /etc/fstab
fi
# apt install
apt update
apt install -y ncdu htop git curl
# helm install
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
/bin/bash get_helm.sh
# user bashrc config
rc=/home/user/.bashrc
{
echo "export KUBECONFIG=~/.kube/config"
echo "alias l='ls -lah'"
echo "alias ll='ls -lh'"
echo "alias k=kubectl"
echo "export dry='--dry-run=client'"
echo "export o='-oyaml'"
echo "alias kcd='kubectl config use-context'"
echo "source <(kubectl completion bash)"
echo "complete -F __start_kubectl k"
echo "alias k='kubectl'"
} >> $rc
# Install k3s
k3s_version="v1.32.8+k3s1"
curl -sfL https://get.k3s.io \
| \
INSTALL_K3S_VERSION="$k3s_version" sh -s - server \
--cluster-init \
--disable traefik \
--disable servicelb
# Set up kubeconfig for the 'user' user
mkdir -p /home/user/.kube
chown user:user /home/user/.kube
chmod 700 /home/user/.kube
# Copy the kubeconfig file to the user's home directory
# for easier access
cp /etc/rancher/k3s/k3s.yaml /home/user/.kube/config
chown user:user /home/user/.kube/config
# install infctl
curl -L https://codeberg.org/headshed/infctl-cli/raw/branch/main/install.sh | bash
# clone infctl repo if not already present
if [[ ! -d "$INFCTL_INSTALL_DIR" ]]; then
mkdir -p "$INFCTL_INSTALL_DIR"
cd "${INFCTL_INSTALL_DIR}" || { echo "Failed to change directory to ${INFCTL_INSTALL_DIR}"; exit 1; }
git clone --branch "$INFCTL_GIT_REPO_BRANCH" "$INFCTL_GIT_REPO" || { echo "Failed to clone $INFCTL_GIT_REPO"; exit 1; }
chown -R user:user "$INFCTL_INSTALL_DIR"
fi
for i in {1..100}; do
if [[ -f /opt/src/infctl-cli/.env ]]; then
echo ".env file found."
break
else
echo ".env file not found. Attempt $i/100. Waiting 5 seconds..."
sleep 5
fi
done
# Final check after loop
if [[ ! -f /opt/src/infctl-cli/.env ]]; then
echo "ERROR: .env file not found after 10 attempts. Exiting."
exit 1
fi
# load .env file
source /opt/src/infctl-cli/.env
cd "$INFCTL_INSTALL_DIR/infctl-cli" || { echo "Failed to change directory to $INFCTL_INSTALL_DIR/infctl-cli"; exit 1; }
# check to see if INSTALL_FORGEJO is set to "true"
if [[ "$INSTALL_FORGEJO" == "true" ]]; then
# install forgejo using infctl
# ....
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
LOG_FORMAT=none infctl -f "${INFCTL_INSTALL_DIR}/infctl-cli/gcloud/tf/scripts/install-forgejo-pipeline.json"
touch /etc/forgejo_was_installed
fi

View file

@ -0,0 +1,16 @@
#!/usr/bin/env bash
. .env
if [ -z "$PROJECT_NAME" ]; then
echo "❌ PROJECT_NAME is not set. Please add PROJECT_NAME=<your_project_name> to your .env file before running this script."
exit 1
fi
gcloud compute instances list --project="$PROJECT_NAME" && gcloud compute disks list --project="$PROJECT_NAME" && gcloud compute firewall-rules list --project="$PROJECT_NAME" && gcloud storage buckets list --project="$PROJECT_NAME"
if [ $? -ne 0 ]; then
echo "❌ gcloud is not authenticated, please run 'gcloud auth login' first"
echo
exit 1
fi

View file

@ -0,0 +1,65 @@
#!/usr/bin/env bash
echo "🧪 checking we have tofu insatalled..."
if ! command -v tofu &> /dev/null
then
echo "❌ tofu could not be found, please install it first"
echo
echo "see https://opentofu.org/docs/intro/install/standalone/"
echo
echo "and https://opentofu.org/docs/intro/install/ for more details"
echo
exit 1
fi
echo "✅ tofu is installed,..."
echo
tofu version
echo
echo "🧪 checking we have gcloud insatalled..."
if ! command -v gcloud &> /dev/null
then
echo "❌ gcloud could not be found, please install it first"
echo
echo "see https://cloud.google.com/sdk/docs/install"
echo
exit 1
fi
echo "✅ gcloud is installed,..."
echo
gcloud version
echo
echo "🧪 checking we have kubectl insatalled..."
if ! command -v kubectl &> /dev/null
then
echo "❌ kubectl could not be found, please install it first"
echo
echo "see https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/"
echo
exit 1
fi
echo "✅ kubectl is installed,..."
echo
kubectl version --client
echo
echo "🧪 checking we have envsubst insatalled..."
if ! command -v envsubst &> /dev/null
then
echo "❌ envsubst could not be found, please install it first"
echo
echo "on ubuntu you can install it with: sudo apt-get install -y gettext-base"
echo
exit 1
fi
echo "✅ envsubst is installed,..."
echo
envsubst --version
echo
echo "✅ Pre-flight checks passed. You are ready to proceed 🙂"
echo

29
gcloud/tf/scripts/run_tofu.sh Executable file

@ -0,0 +1,29 @@
#!/usr/bin/env bash
# Get the directory where the script is located
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
cd "$SCRIPT_DIR" || { echo "Failed to change directory to $SCRIPT_DIR"; exit 1; }
TF_DIR="../"
cd "$TF_DIR" || { echo "Failed to change directory to $TF_DIR"; exit 1; }
if [[ -d ".terraform" && -f ".terraform.lock.hcl" ]]; then
echo "✅ Terraform already initialized"
# tofu init
else
echo "⚠️ Initializing Terraform..."
tofu init
fi
if [[ $? -ne 0 ]]; then
echo "❌ tofu init failed, please check the output above"
exit 1
fi
# tofu apply with auto-approve to make it non-interactive
tofu apply -auto-approve
if [[ $? -ne 0 ]]; then
echo "❌ tofu apply failed, please check the output above"
exit 1
fi

View file

@ -0,0 +1,11 @@
#!/usr/bin/env bash
echo "Please configure DNS using the IP address from the previous stage."
echo "you have 120 seconds."
for i in {120..1}; do
echo -ne "Time remaining: $i seconds\r"
sleep 1
done
echo ""
exit 0

View file

@ -0,0 +1,14 @@
// Your GCP project name
// it will be referred to as the project ID
// in Google Cloud
// ----------------------------------
project_name = "<your gpc project name>"
// application name
app_name = "your-projects-k3s-cluster"
// where to deploy to
// region
region = "us-central1"
zone = "us-central1-a"

View file

@ -0,0 +1,13 @@
// Your GCP project name
// it will be referred to as the project ID
// in Google Cloud
// ----------------------------------
project_name = "${PROJECT_NAME}"
// where to deploy to
// region
region = "us-central1"
zone = "us-central1-a"
// application name
app_name = "${PROJECT_NAME}-k3s-cluster"

28
gcloud/tf/vars.tf Normal file

@ -0,0 +1,28 @@
// Env vars
// ----------------------------------
variable "project_name" {
type = string
}
variable "env" {
type = string
default = "dev"
description = "Environment"
}
variable "region" {
type = string
description = "GCP Region"
}
variable "zone" {
type = string
description = "GCP Zone"
}
variable "app_name" {
type = string
description = "Application name"
}

View file

@ -1,20 +1,29 @@
#!/usr/bin/env bash
# sleep 5
for i in {1..5}; do
echo "working ..."
sleep 0.5
done
echo "crash"
sleep 2
# sleep 1
echo "not working ..."
echo "bang"
sleep 1
# sleep 2
figlet "boom"
echo "wallop"
sleep 1
# sleep 1
figlet "bang"
echo "Houston, we have a problem"
sleep 2
echo "oh dear, oh my..."
sleep 1
figlet "Houston, we have a problem"
sleep 1

View file

@ -1,20 +1,29 @@
#!/usr/bin/env bash
# sleep 5
for i in {1..5}; do
echo "working ..."
sleep 0.5
done
echo "bish"
sleep 2
# sleep 1
echo "still working ..."
echo "bash"
sleep 1
# sleep 2
figlet "bish"
echo "bosh"
sleep 1
# sleep 1
figlet "bash"
echo "lovely jubbly"
sleep 2
figlet "bosh"
sleep 1
figlet "LOVELY JUBBLY"
sleep 1

View file

@ -117,7 +117,11 @@ Vagrant.configure("2") do |config|
vb.cpus = 1
end
ws.vm.provision "shell", path: "ansible/provision_workstation.sh"
ws.vm.provision "shell",
path: "ansible/provision_workstation.sh",
env: {
"INSTALL_LONGHORN" => ENV['INSTALL_LONGHORN'] || "false"
}
end

View file

@ -0,0 +1,16 @@
---
- name: Install longhorn using infctl
hosts: localhost
become: true
become_user: vagrant
serial: 1 # Ensure tasks are executed one host at a time
vars_files:
- vars.yaml
tasks:
- name: run infctl longhorn pipeline
ansible.builtin.command: >
bash -c 'cd /home/vagrant && LOG_FILE=/tmp/longhorn_log.txt LOG_FORMAT=basic infctl -f pipelines/vagrant-longhorn.json'
register: longhorn_result
ignore_errors: false

View file

@ -0,0 +1,16 @@
---
- name: Install metallb using infctl
hosts: localhost
become: true
become_user: vagrant
serial: 1 # Ensure tasks are executed one host at a time
vars_files:
- vars.yaml
tasks:
- name: run infctl metallb pipeline
ansible.builtin.command: >
bash -c 'cd /home/vagrant && LOG_FILE=/tmp/metallb_log.txt LOG_FORMAT=basic infctl -f ./pipelines/vagrant-metallb.json'
register: metallb_result
ignore_errors: false

View file

@ -0,0 +1,20 @@
---
- name: Install traefik using infctl
hosts: localhost
become: true
become_user: vagrant
serial: 1 # Ensure tasks are executed one host at a time
vars_files:
- vars.yaml
tasks:
- name: run infctl traefik pipeline
ansible.builtin.command: infctl -f pipelines/vagrant-ingress.json
args:
chdir: /home/vagrant
environment:
LOG_FILE: /tmp/traefik_log.txt
LOG_FORMAT: none
register: traefik_result
ignore_errors: false

View file

@ -4,6 +4,7 @@
sudo apt-get update
sudo apt-get install -y software-properties-common git vim python3.10-venv jq figlet
# shellcheck disable=SC1091
source /vagrant/.envrc
# Set up ansible environment for vagrant user
@ -24,10 +25,10 @@ sudo chmod +x /home/vagrant/pipelines/*.sh
# Copy the Vagrant private keys (these will be synced by Vagrant)
for i in {1..3}; do
sudo -u vagrant cp /vagrant/.vagrant/machines/vm$i/virtualbox/private_key /home/vagrant/.ssh/vm${i}_key
sudo -u root cp /vagrant/.vagrant/machines/vm$i/virtualbox/private_key /root/.ssh/vm${i}_key
sudo chmod 600 /home/vagrant/.ssh/vm${i}_key
sudo chmod 600 /root/.ssh/vm${i}_key
sudo -u vagrant cp "/vagrant/.vagrant/machines/vm$i/virtualbox/private_key" "/home/vagrant/.ssh/vm${i}_key"
sudo -u root cp "/vagrant/.vagrant/machines/vm$i/virtualbox/private_key" "/root/.ssh/vm${i}_key"
sudo chmod 600 "/home/vagrant/.ssh/vm${i}_key"
sudo chmod 600 "/root/.ssh/vm${i}_key"
done
# Disable host key checking for easier learning
@ -46,18 +47,17 @@ cd "$ANSIBLE_DIR" || {
if [ ! -d "venv" ]; then
echo "Creating Python virtual environment in ./venv..."
python3 -m venv venv
source "venv/bin/activate"
if [ $? -ne 0 ]; then
# shellcheck disable=SC1091
if ! source "venv/bin/activate"; then
echo "Failed to activate virtual environment. Please check your Python installation."
exit 1
fi
echo "Virtual environment created and activated."
cp /vagrant/ansible/requirements.txt .
cp "/vagrant/ansible/requirements.txt" .
if [ -f "requirements.txt" ]; then
echo "Installing dependencies from requirements.txt..."
pip install --upgrade pip
pip install -r requirements.txt
if [ $? -ne 0 ]; then
if ! pip install -r requirements.txt; then
echo "Failed to install dependencies from requirements.txt."
exit 1
fi
@ -76,7 +76,13 @@ ls -al "$ANSIBLE_VENV_DIR/bin/activate"
if [ -d "$ANSIBLE_VENV_DIR" ]; then
echo "Activating Ansible virtual environment..."
source "$ANSIBLE_VENV_DIR/bin/activate"
if [ -f "$ANSIBLE_VENV_DIR/bin/activate" ]; then
# shellcheck source=/dev/null
source "$ANSIBLE_VENV_DIR/bin/activate"
else
echo "Virtualenv activate script not found!" >&2
exit 1
fi
else
echo "Ansible virtual environment not found at $ANSIBLE_VENV_DIR. Please create it before running this script."
exit 1
@ -86,13 +92,13 @@ echo ""
ansible --version
if [ $? -ne 0 ]; then
if ! ansible --version; then
echo "Ansible is not installed or not found in the virtual environment. Please check your installation."
exit 1
fi
eval `ssh-agent -s`
eval "$(ssh-agent -s)"
ssh-add # ~/machines/*/virtualbox/private_key
BASHRC="/home/vagrant/.bashrc"
@ -103,9 +109,10 @@ if ! grep -qF "$BLOCK_START" "$BASHRC"; then
cat <<'EOF' >> "$BASHRC"
# ADDED BY infctl provisioning
eval `ssh-agent -s`
eval "$(ssh-agent -s)"
ssh-add ~/machines/*/virtualbox/private_key
ssh-add -L
# shellcheck disable=SC1091
source /vagrant/.envrc
EOF
else
@ -125,48 +132,63 @@ echo
ssh-add ~/.ssh/vm*_key
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible --inventory-file /home/vagrant/ansible/ansible_inventory.ini -m ping vm1,vm2,vm3
if [ $? -ne 0 ]; then
if ! ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible --inventory-file /home/vagrant/ansible/ansible_inventory.ini -m ping vm1,vm2,vm3; then
echo "Ansible ping failed. Please check your Vagrant VMs and network configuration."
exit 1
fi
# install_keepalived.yaml
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_keepalived.yaml --inventory-file ansible_inventory.ini
if [ $? -ne 0 ]; then
if ! ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_keepalived.yaml --inventory-file ansible_inventory.ini; then
echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
exit 1
fi
echo "Keepalived installation completed."
# install_k3s_3node.yaml
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_k3s_3node.yaml --inventory-file ansible_inventory.ini
if [ $? -ne 0 ]; then
if ! ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_k3s_3node.yaml --inventory-file ansible_inventory.ini; then
echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
exit 1
fi
# copy_k8s_config.yaml
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook copy_k8s_config.yaml --inventory-file ansible_inventory.ini
if [ $? -ne 0 ]; then
if ! ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook copy_k8s_config.yaml --inventory-file ansible_inventory.ini; then
echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
exit 1
fi
ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_dnsmasq.yaml --inventory-file ansible_inventory.ini
if [ $? -ne 0 ]; then
if ! ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_dnsmasq.yaml --inventory-file ansible_inventory.ini; then
echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
exit 1
fi
# Wait for Kubernetes API to be ready
echo "Waiting for 30 seconds for Kubernetes API to be ready..."
sleep 30
echo "done waiting for kubernetes API"
# check infctl
cd /home/vagrant
bash /home/vagrant/scripts/check_install_infctl.sh
if [ $? -ne 0 ]; then
cd /home/vagrant || exit
if ! bash /home/vagrant/scripts/check_install_infctl.sh; then
echo "infctl check failed. Please check your installation."
exit 1
fi
# Optionally install Longhorn, MetalLB, and Traefik
if [ "${INSTALL_LONGHORN}" = "true" ]; then
cd /home/vagrant/ansible || { echo "Failed to change directory to /home/vagrant/ansible"; exit 1; }
if ! ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_longhorn.yaml --inventory-file ansible_inventory.ini; then
echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
exit 1
fi
if ! ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_metallb.yaml --inventory-file ansible_inventory.ini; then
echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
exit 1
fi
if ! ANSIBLE_SUPPRESS_INTERPRETER_DISCOVERY_WARNING=1 ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook install_traefik.yaml --inventory-file ansible_inventory.ini; then
echo "Ansible playbook failed. Please check your Vagrant VMs and network configuration."
exit 1
fi
fi

View file

@ -12,24 +12,31 @@ if ! kubectl get deployment -n metallb-system controller &>/dev/null; then
exit 1
fi
# Wait for MetalLB components to be ready
echo "Waiting for MetalLB components to be ready..."
kubectl wait --namespace metallb-system \
--for=condition=ready pod \
--selector=app=metallb \
--timeout=90s
echo "Waiting for MetalLB pods to be in 'Running' state..."
MAX_RETRIES=10
RETRY=0
while [ $RETRY -lt $MAX_RETRIES ]; do
NOT_READY_PODS=$(kubectl -n metallb-system get pods --no-headers | grep -v 'Running' | wc -l)
if [ "$NOT_READY_PODS" -eq 0 ]; then
echo "All MetalLB pods are running."
break
else
echo "$NOT_READY_PODS MetalLB pods are not ready yet. Waiting..."
RETRY=$((RETRY + 1))
sleep 5
fi
done
if [ "$NOT_READY_PODS" -ne 0 ]; then
echo "Failed to get all MetalLB pods running after $MAX_RETRIES attempts."
exit 1
fi
else
echo "MetalLB is already installed."
fi
# Wait for the webhook service to be ready
echo "Waiting for MetalLB webhook service to be ready..."
kubectl wait --namespace metallb-system \
--for=condition=ready pod \
--selector=component=webhook \
--timeout=90s
# Check if the IPAddressPool already exists
if ! kubectl get ipaddresspool -n metallb-system default &>/dev/null; then
echo "Creating MetalLB IPAddressPool..."