Add Google Cloud K3s infrastructure support

- Add Terraform configuration for GCP instance and storage
- Add startup script for K3s installation and configuration
- Add pipeline scripts for deployment and management
- Add Forgejo deployment manifests and configuration
This commit is contained in:
jon brookes 2025-09-06 19:03:55 +01:00
parent 7384722305
commit 2ab7872af1
30 changed files with 1024 additions and 324 deletions

.env.gcloud-example Normal file

@@ -0,0 +1,3 @@
PROJECT_NAME="the name of your gcp project, often referred to as the project ID"
EMAIL="your email address to identify yourself with letsencrypt"
APP_DOMAIN_NAME="your domain name for the app, e.g., frgdr.some-domain.com"

.gitignore vendored

@@ -24,3 +24,12 @@ vagrant/dev/ubuntu/ansible/ansible_inventory.ini
*.cast
vagrant/dev/ubuntu/certs/
vagrant/dev/ubuntu/config-dev
.terraform*
registry*.json*
terraform.tfstate**
*history*.txt
*.tfvars
gcloud/tf/.env
gcloud/tf/k3s/forgejo/issuer.yaml
gcloud/tf/k3s/forgejo/ingress.yaml
.env

README.md

@@ -1,289 +1,7 @@
# INFCTL CLI
A command-line tool for automated deployment and management of an [MVK (Minimal Viable Kubernetes) infrastructure](https://mvk.headshed.dev/). The CLI orchestrates Kubernetes deployments by executing shell scripts and applying Kubernetes manifests through a JSON-defined pipeline approach.
# infctl-cli
## Table of Contents
- [Overview](#overview)
- [Features](#features)
- [Design Philosophy](#design-philosophy)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Configuration](#configuration)
- [Usage](#usage)
- [Pipeline Execution](#pipeline-execution)
- [Scripts Directory](#scripts-directory)
- [Kubernetes Manifests](#kubernetes-manifests)
- [Project Structure](#project-structure)
- [Development](#development)
- [Contributing](#contributing)
- [License](#license)
## Overview
INFCTL CLI is a Go-based deployment orchestrator that automates the setup and deployment of INFCTL applications in Kubernetes environments. The tool executes a series of predefined scripts from the `scripts/` directory and applies Kubernetes manifests from the `k8s-manifests/` directory using kubectl and kustomize, all defined in a JSON pipeline file.
## Features
- **JSON-Defined Pipeline Execution**: Runs deployment scripts and manifests in an order specified in a JSON pipeline file
- **Script Orchestration**: Executes shell scripts from the `scripts/` directory for various setup tasks
- **Kustomize Integration**: Applies Kubernetes manifests using kubectl kustomize
- **Namespace Management**: Automatically creates required Kubernetes namespaces
- **Secret Management**: Automated creation of secrets for databases, SMTP, AWS, etc.
- **ConfigMap Management**: Creates and manages application configuration maps
- **Infrastructure Setup**: Installs and configures cert-manager, Traefik, Longhorn, and PostgreSQL operators
- **Retry Logic**: Built-in retry mechanism for failed operations
- **Structured Logging**: JSON-based logging with debug support
- **Integrated Testing**: Includes smoke tests using k3d for validation
## Design Philosophy
infctl-cli is built on principles derived from over 20 years of experience tackling deployment and orchestration challenges. The design is inspired by a "plugin" mentality, where each plugin is essentially a script. This approach emphasizes simplicity and modularity, allowing each script to act as an independent unit of execution.
Key principles include:
- **Script-Based Orchestration**: Each script or program, when executed, returns an exit code that indicates success or failure. This exit code is used to determine the next steps in the pipeline, enabling robust and predictable orchestration.
- **Structured Logging**: Scripts produce structured logs that can be consumed by web interfaces or stored in a database. This ensures transparency and traceability, making it easier to debug and monitor deployments.
- **Modularity and Reusability**: By treating scripts as plugins, the system encourages reusability and flexibility. New functionality can be added by simply introducing new scripts without altering the core logic.
- **UNIX Philosophy**: The design adheres to the UNIX philosophy of building small, composable tools that do one thing well. Each script is a self-contained unit that performs a specific task.
This philosophy underpins infctl's ability to orchestrate complex deployments while remaining simple and extensible.
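As a rough illustration of this exit-code contract, the Go sketch below (not the actual infctl-cli code; `runStep` is a hypothetical helper) runs a command and maps its exit status to a pipeline decision:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runStep executes one pipeline "plugin" (any script or program) and
// returns its exit code; 0 means the pipeline may proceed, non-zero
// means retry or abort.
func runStep(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode()
	}
	return -1 // the command could not be started at all
}

func main() {
	// `true` and `false` are standard POSIX commands exiting 0 and 1.
	fmt.Println(runStep("true"))  // 0
	fmt.Println(runStep("false")) // 1
}
```

The caller never needs to know what a script does internally; only its exit code drives the next step.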
## Prerequisites
- Go 1.23.3 or later
- Bash shell environment
- For running tests: k3d installed
- Kubernetes cluster with kubectl configured (for actual deployments)
- Required Kubernetes operators (installed by the tool for K8s deployments):
- cert-manager
- Traefik ingress controller
- PostgreSQL operator (Crunchy Data)
- Longhorn storage
## Installation
### Option 1: Download Pre-built Binary
You can install with `curl -L https://codeberg.org/headshed/infctl-cli/raw/branch/chore/code-refactor/install.sh | bash`, or manually download the pre-built binary for your platform from the [releases page](https://codeberg.org/headshed/infctl-cli/releases).
1. Download the binary for your platform:
- **Linux**:
```bash
wget https://codeberg.org/headshed/infctl-cli/releases/download/v0.0.2/infctl-linux-amd64 -O /usr/local/bin/infctl
```
- **Windows**:
Download the `.exe` file from the [releases page](https://codeberg.org/headshed/infctl-cli/releases) and place it in a directory included in your `PATH`.
- **macOS (Intel)**:
```bash
wget https://codeberg.org/headshed/infctl-cli/releases/download/v0.0.2/infctl-darwin-amd64 -O /usr/local/bin/infctl
```
- **macOS (Apple Silicon)**:
```bash
wget https://codeberg.org/headshed/infctl-cli/releases/download/v0.0.2/infctl-darwin-arm64 -O /usr/local/bin/infctl
```
2. Make the binary executable:
```bash
chmod +x /usr/local/bin/infctl
```
### Option 2: Clone Repository and Build Binary
1. Clone the repository:
```bash
git clone <repository-url>
cd infctl-cli
```
2. Build the application:
```bash
go mod download
go build -o infctl-cli .
```
### Quick start example
```bash
# Copy configuration examples
cp base.json.example base.json
cp config.json.example config.json
cp pipeline.json.example pipeline.json
# Edit configuration files as needed, using your preferred editor (vim, nano, emacs, ...)
# vim base.json
# vim config.json
# vim pipeline.json
# Run with pipeline file
./infctl-cli --deployment-file pipeline.json
# or using the short format
./infctl-cli -f pipeline.json
```
## Configuration
The infctl-cli requires three configuration files:
### Base Configuration (`base.json`)
Copy and customize the base configuration:
```bash
cp base.json.example base.json
```
Key configuration options:
- `projects_directory`: Base directory for project deployments
- `app_image`: Docker image for the INFCTL application
- `webserver_image`: Docker image for the web server
- `env`: Environment file path
- `preview_path`: Path for preview functionality
### Project Configuration (`config.json`)
Copy and customize the project-specific configuration:
```bash
cp config.json.example config.json
```
Key configuration options:
- `project`: Project name/identifier (used as Kubernetes namespace)
- `project_directory`: Project-specific directory
- `ui_url`: UI service URL
- `static_url`: Static content URL
- `port`: Service port
### Pipeline Configuration (`pipeline.json`)
Copy and customize the pipeline definition:
```bash
cp pipeline.json.example pipeline.json
```
This file defines the sequence of operations to be executed, including:
- Scripts to run
- Kubernetes manifests to apply
- Order of operations
- Specific deployment type configuration
## Usage
Run the CLI by providing a path to your pipeline JSON file:
```bash
./infctl-cli --deployment-file /path/to/pipeline.json
# or using the short format
./infctl-cli -f /path/to/pipeline.json
```
The tool will automatically:
1. Load base and project configurations
2. Initialize SQLite database for state management
3. Execute the deployment pipeline defined in your JSON file
4. Run scripts from the `scripts/` directory
5. Apply Kubernetes manifests using kustomize (for K8s deployments)
### Command Line Options
- `--deployment-file <path>` or `-f <path>`: Path to the pipeline JSON configuration file
- `--help`: Show help message and usage information
### Running from Source
You can also run directly with Go:
```bash
go run main.go --deployment-file /path/to/pipeline.json
# or using the short format
go run main.go -f /path/to/pipeline.json
```
### Running Tests
The project includes smoke tests using k3d for validation:
```bash
# Run all tests
go test ./... -v
# Run specific test
go test ./app -run TestRunPipeline
```
## Pipeline Execution
The CLI executes deployment tasks defined in your pipeline.json file, which typically includes:
### Infrastructure Setup Pipeline
Sets up the Kubernetes cluster infrastructure:
- Creates required namespaces (project, redis, traefik, metallb-system, longhorn-system, cert-manager)
- Installs Traefik ingress controller
- Sets up PostgreSQL operator (Crunchy Data)
- Configures cert-manager and Redis
### Application Deployment Pipeline
Deploys the INFCTL application:
- Creates database secrets and configurations
- Sets up SMTP secrets
- Creates application secrets and config maps
- Applies INFCTL Kubernetes manifests
- Configures ingress and networking
## Scripts Directory
The `scripts/` directory contains shell scripts executed by the CLI:
### Infrastructure Scripts
- `install_traefik.sh` - Installs Traefik ingress controller
- `install_cert-manager.sh` - Installs cert-manager
- `install_longhorn.sh` - Installs Longhorn storage
- `install_cloudnative_pg.sh` - Installs PostgreSQL operator
### Database Scripts
- `create_crunchy_operator.sh` - Sets up PostgreSQL operator
- `check_crunchy_operator.sh` - Verifies operator installation
- `create_crunchy_db.sh` - Creates PostgreSQL database
- `create_crunchy_inf_secrets.sh` - Creates database secrets
### Application Scripts
- `create_app_secret_inf.sh` - Creates application secrets
- `create_smtp_inf_secrets.sh` - Creates SMTP configuration
- `create_init_configmap_inf.sh` - Creates initialization config maps
- `create_nginx_configmap_inf.sh` - Creates Nginx configuration
- `create_php_configmap_inf.sh` - Creates PHP configuration
### Cloud Provider Scripts
- `create_aws_secrets.sh` - Creates AWS secrets
- `create_cloudflare_secret.sh` - Creates Cloudflare secrets
- `create_redis_secret.sh` - Creates Redis secrets
## Kubernetes Manifests
The `k8s-manifests/` directory contains Kubernetes resources applied via kustomize:
### INFCTL Application (`k8s-manifests/inf/`)
- `deployment.yaml` - Main application deployment
- `pvc.yaml` - Persistent volume claims
- `kustomization.yaml` - Kustomize configuration
### INFCTL Ingress (`k8s-manifests/inf-ingress/`)
- `ingress.yaml` - Ingress rules
- `service.yaml` - Service definitions
- `issuer.yaml` - Certificate issuer
- `kustomization.yaml` - Kustomize configuration
`infctl-cli` is a Go command-line tool for orchestrating deployment pipelines using shell scripts and Kubernetes manifests. The tool is configured via JSON files and executes tasks as defined in a pipeline configuration.
## Project Structure
@@ -291,61 +9,73 @@ The `k8s-manifests/` directory contains Kubernetes resources applied via kustomi
infctl-cli/
├── main.go # Application entry point
├── go.mod # Go module definition
├── base.json.example # Base configuration template
├── config.json.example # Project configuration template
├── app/ # Core application logic
│ ├── app.go # Pipeline orchestration and state management
│ └── k8s.go # Kubernetes operations (kubectl, kustomize)
│ └── k8s.go # Kubernetes operations
├── config/ # Configuration management
│ ├── base.go # Base configuration handling
│ └── project.go # Project configuration handling
├── database/ # SQLite database operations
├── docs/ # Documentation
│ ├── API_REFERENCE.md # API reference
│ └── CONFIG_SCHEMA.md # Config schema
├── scripts/ # Shell scripts executed by the CLI
│ ├── install_*.sh # Infrastructure installation scripts
│ ├── create_*_secrets.sh # Secret creation scripts
│ └── create_*_configmap_*.sh # ConfigMap creation scripts
├── k8s-manifests/ # Kubernetes manifests applied via kustomize
│ ├── ctl/ # INFCTL application manifests
│ └── ctl-ingress/ # INFCTL ingress configuration
├── templates/ # Template files for configuration generation
├── k8s-manifests/ # Kubernetes manifests
├── templates/ # Template files
└── files/ # Static configuration files
```
## Configuration Files
Three JSON files are used:
- `base.json`: Base configuration (e.g., `projects_directory`, images, environment file path)
- `config.json`: Project-specific configuration (e.g., project name, directory, URLs, port)
- `pipeline.json`: Defines the sequence of scripts and manifests to execute
Example configuration files are provided as `.example` files in the repository.
## Development
### Building from Source
Build the CLI:
```bash
go mod download
go build -o infctl-cli .
```
### Running with Debug Logging
The CLI uses structured JSON logging. Debug logs are enabled by default and include detailed information about script execution and kustomize operations.
## Usage
Run the CLI with a pipeline file:
```bash
./infctl-cli --deployment-file pipeline.json
# or
./infctl-cli -f pipeline.json
```
The CLI will:
1. Load base and project configuration
2. Initialize SQLite database for state management
3. Execute the pipeline defined in the JSON file
4. Run scripts from the `scripts/` directory
5. Apply Kubernetes manifests from the `k8s-manifests/` directory
### Adding New Scripts
1. Place shell scripts / executables in the `scripts/` directory
2. Add configuration as appropriate into `pipeline.json`
3. Re-run `infctl-cli --deployment-file pipeline.json` or `infctl-cli -f pipeline.json`
### Adding New Manifests
1. Create Kubernetes YAML files in the appropriate `k8s-manifests/` subdirectory
2. Include a `kustomization.yaml` file for kustomize processing
3. Add configuration as appropriate into `pipeline.json`
4. Re-run `infctl-cli --deployment-file pipeline.json` or `infctl-cli -f pipeline.json`
## Scripts
Shell scripts in `scripts/` are executed as defined in the pipeline configuration. Scripts are responsible for infrastructure setup, secret creation, configmap generation, and other deployment tasks. The pipeline JSON determines the order and parameters for each script.
## Kubernetes Manifests
Manifests in `k8s-manifests/` are applied using kubectl and kustomize. The pipeline configuration specifies which manifests to apply and in what order.
## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Testing
Run tests with:
```bash
go test ./... -v
```
## License
This project is licensed under the GNU General Public License v3.0. See the [LICENSE](./LICENSE) file for details.

docs/API_REFERENCE.md Normal file

@@ -0,0 +1,50 @@
# API Reference
This document describes the API and pipeline functions available in `infctl-cli`.
## PipelineStep Structure
Each pipeline step is defined as:
- `name`: Step name (string)
- `function`: Function to call (string)
- `params`: List of parameters (array of strings)
- `retryCount`: Number of retries (integer)
- `shouldAbort`: Whether to abort on failure (boolean)
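In Go (the language the CLI is written in), a step like this could be decoded with a struct mirroring the documented fields -- a sketch only; the actual field names in the codebase may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PipelineStep mirrors the documented step fields; the JSON tags
// follow the keys shown in the example pipeline JSON.
type PipelineStep struct {
	Name        string   `json:"name"`
	Function    string   `json:"function"`
	Params      []string `json:"params"`
	RetryCount  int      `json:"retryCount"`
	ShouldAbort bool     `json:"shouldAbort"`
}

// parseSteps decodes a pipeline JSON document into a slice of steps.
func parseSteps(raw []byte) ([]PipelineStep, error) {
	var steps []PipelineStep
	err := json.Unmarshal(raw, &steps)
	return steps, err
}

func main() {
	raw := []byte(`[{"name":"ensure inf namespace exists","function":"k8sNamespaceExists","params":["infctl"],"retryCount":0,"shouldAbort":true}]`)
	steps, err := parseSteps(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(steps[0].Function) // k8sNamespaceExists
}
```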
## Available Functions
### k8sNamespaceExists
Checks if a Kubernetes namespace exists.
- Params: `[namespace]` (string)
- Returns: error if namespace does not exist
### RunCommand
Runs a shell command.
- Params: `[command]` (string)
- Returns: error if command fails
## Example Pipeline JSON
```json
[
{
"name": "ensure inf namespace exists",
"function": "k8sNamespaceExists",
"params": ["infctl"],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "create php configmap",
"function": "RunCommand",
"params": ["./scripts/create_php_configmap_ctl.sh"],
"retryCount": 0,
"shouldAbort": true
}
]
```
## Notes
- Only functions defined in the codebase are available for use in pipelines.
- The API does not expose any HTTP endpoints; all orchestration is via CLI and pipeline JSON.

docs/CONFIG_SCHEMA.md Normal file

@@ -0,0 +1,51 @@
# Configuration Schema
This document describes the configuration schema for `infctl-cli`.
## Base Configuration (`base.json`)
Example:
```json
{
"retry_delay_seconds": 3
}
```
- `retry_delay_seconds` (integer): Delay in seconds before retrying failed steps.
## Project Configuration (`config.json`)
Project configuration fields are defined in the code and may include:
- Project name
- Directory paths
- URLs
- Port numbers
- Log format
Refer to the code for exact field names and types.
## Pipeline Configuration (`pipeline.json`)
Pipeline configuration is an array of steps. Each step:
- `name`: Step name (string)
- `function`: Function to call (string)
- `params`: List of parameters (array of strings)
- `retryCount`: Number of retries (integer)
- `shouldAbort`: Whether to abort on failure (boolean)
Example:
```json
[
{
"name": "ensure inf namespace exists",
"function": "k8sNamespaceExists",
"params": ["infctl"],
"retryCount": 0,
"shouldAbort": true
}
]
```
## Notes
- Example configuration files are provided as `.example` files in the repository.
- All configuration fields must match those defined in the codebase; do not add undocumented fields.

gcloud/tf/Dockerfile Normal file

@@ -0,0 +1,10 @@
FROM python:3.12-slim
# Install dependencies
RUN pip install --no-cache-dir gunicorn httpbin
# Expose the application port
EXPOSE 80
# Launch the application
CMD ["gunicorn", "-b", "0.0.0.0:80", "httpbin:app"]

gcloud/tf/doit.tf Normal file (empty)

gcloud/tf/firewall.tf Normal file

@@ -0,0 +1,16 @@
// Firewall
// ----------------------------------
resource "google_compute_firewall" "allow_http" {
name = "allow-http"
network = "default"
allow {
protocol = "tcp"
ports = [
"80", "443" // http/https
]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["web"]
}

@@ -0,0 +1,68 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: forgejo-deployment
namespace: forgejo
labels:
app: forgejo-app
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: forgejo-app
template:
metadata:
labels:
app: forgejo-app
spec:
terminationGracePeriodSeconds: 10
containers:
- name: forgejo
image: codeberg.org/forgejo/forgejo:11.0.6
imagePullPolicy: IfNotPresent
env:
- name: FORGEJO__repository__ENABLE_PUSH_CREATE_USER
value: "true"
- name: FORGEJO__server__ROOT_URL
value: "https://frg.headshed.dev/"
- name: FORGEJO__repository__DEFAULT_BRANCH
value: "main"
- name: FORGEJO__server__LFS_START_SERVER
value: "true"
- name: FORGEJO__security__INSTALL_LOCK
value: "true"
- name: FORGEJO__service__DISABLE_REGISTRATION
value: "false"
ports:
- name: http
containerPort: 3000
protocol: TCP
- name: ssh
containerPort: 22
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "500m"
tty: true
volumeMounts:
- name: forgejo-data
mountPath: /data
# - name: forgejo-timezone
# mountPath: /etc/timezone
# - name: forgejo-localtime
# mountPath: /etc/localtime
volumes:
- name: forgejo-data
persistentVolumeClaim:
claimName: forgejo-data-pvc
# - name: forgejo-timezone
# configMap:
# name: forgejo-timezone
# - name: forgejo-localtime
# configMap:
# name: forgejo-localtime

@@ -0,0 +1,24 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-forgejo-ingress-http
namespace: forgejo
annotations:
cert-manager.io/issuer: "le-cluster-issuer-http"
spec:
tls:
- hosts:
- ${APP_DOMAIN_NAME}
secretName: tls-frg-ingress-http
rules:
- host: ${APP_DOMAIN_NAME}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: forgejo-app-service
port:
name: web

@@ -0,0 +1,17 @@
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: le-cluster-issuer-http
namespace: forgejo
spec:
acme:
email: ${EMAIL}
# We use the staging server here for testing to avoid throttling.
server: https://acme-staging-v02.api.letsencrypt.org/directory
# server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: http-issuer-account-key
solvers:
- http01:
ingress:
class: traefik

@@ -0,0 +1,26 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: forgejo-local-pv
spec:
capacity:
storage: 3Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /mnt/disks/app-data/forgejo
storageClassName: local-path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: forgejo-data-pvc
namespace: forgejo
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
volumeName: forgejo-local-pv
storageClassName: local-path

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: forgejo-app-service
namespace: forgejo
spec:
selector:
app: forgejo-app
ports:
- name: web
protocol: TCP
port: 3000
targetPort: 3000

gcloud/tf/main.tf Normal file

@@ -0,0 +1,95 @@
// Compute
// ----------------------------------
// The instance for K3S
resource "google_compute_instance" "k3s" {
name = "k3s-vm-1"
machine_type = "e2-small" # This instance will have 2 GB of RAM
zone = var.zone
tags = ["web"]
// Set the boot disk and the image (10 GB)
boot_disk {
initialize_params {
image = "debian-cloud/debian-12"
size = 10
}
}
// ensures that the instance is a Spot VM
// means it can be preempted, but it's cheaper
# scheduling {
# automatic_restart = false
# provisioning_model = "SPOT"
# preemptible = true
# }
// attach a disk for K3S
attached_disk {
source = google_compute_disk.k3s_disk.id
device_name = "k3s-disk"
}
// attach a disk for app data
attached_disk {
source = google_compute_disk.app_data_disk.id
device_name = "app-data-disk"
}
network_interface {
network = "default"
// enable ephemeral ip
access_config {}
}
labels = {
env = var.env
region = var.region
app = var.app_name
sensitive = "false"
}
metadata_startup_script = file("scripts/k3s-vm-startup.sh")
allow_stopping_for_update = true
}
// Storage
// ----------------------------------
// The disk attached to the instance (15 GB)
resource "google_compute_disk" "k3s_disk" {
name = "k3s-disk"
size = 15
type = "pd-standard"
zone = var.zone
}
// The disk for app data (20 GB)
resource "google_compute_disk" "app_data_disk" {
name = "app-data-disk"
size = 20
type = "pd-standard"
zone = var.zone
}
// Outputs
// ----------------------------------
data "google_project" "project" {
project_id = var.project_name # Use variable from tfvars
}
output "project_number" {
value = data.google_project.project.number
}
output "k3s_vm_public_ip" {
value = google_compute_instance.k3s.network_interface[0].access_config[0].nat_ip
description = "Ephemeral public IP of the k3s VM"
}

gcloud/tf/provider.tf Normal file

@@ -0,0 +1,24 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 4.0"
}
}
}
// Provider
// ----------------------------------
// Connect to the GCP project
provider "google" {
# Configuration options
project = var.project_name # Use variable from tfvars
region = "us-central1" # Replace with your desired region
}
# provider "google" {
# credentials = file("<my-gcp-creds>.json")
# project = var.project_name
# region = var.region
# zone = var.zone
# }

gcloud/tf/registry.tf Normal file

@@ -0,0 +1,14 @@
// Registry
// ----------------------------------
// The Artifact Registry repository for our app
resource "google_artifact_registry_repository" "app-repo" {
location = var.region
repository_id = "app-repo"
description = "App Docker repository"
format = "DOCKER"
docker_config {
immutable_tags = true
}
}

gcloud/tf/remote_state.tf Normal file

@@ -0,0 +1,37 @@
// Remote state
// ----------------------------------
# variable "bucket_name" {
# type = string
# default = "your-project-name-k3s-bucket"
# description = "your-project-name k3s Bucket"
# }
# terraform {
# # Use a shared bucket (which allows collaborative work)
# backend "gcs" {
# bucket = "<my-bucket-for-states>"
# prefix = "k3s-infra"
# }
# // Set versions
# required_version = ">=1.8.0"
# required_providers {
# google = {
# source = "hashicorp/google"
# version = ">=4.0.0"
# }
# }
# }
// The bucket where you can store other data
# resource "google_storage_bucket" "k3s-storage" {
# name = var.bucket_name
# location = var.region
# labels = {
# env = var.env
# region = var.region
# app = var.app_name
# sensitive = "false"
# }
# }

@@ -0,0 +1,29 @@
[
{
"name": "run pre-flight checks",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/pre-flight-checks.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "list gcloud infrastructure",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/list_gloud_infra.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "run tofu",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/run_tofu.sh"
],
"retryCount": 0,
"shouldAbort": true
}
]

@@ -0,0 +1,11 @@
#!/usr/bin/env bash
if kubectl -n cert-manager get pods 2>/dev/null | grep -q 'Running'; then
echo "cert-manager pods already running. Skipping installation."
exit 0
fi
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.2/cert-manager.yaml

@@ -0,0 +1,30 @@
#!/bin/bash
set -a
# read environment variables from .env file
# for value of APP_DOMAIN_NAME
. .env
if [ -z "$APP_DOMAIN_NAME" ]; then
echo "Error: APP_DOMAIN_NAME environment variable is not set. Please set it in the .env file."
exit 1
fi
# Get the directory where the script is located
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
cd "$SCRIPT_DIR" || { echo "Failed to change directory to $SCRIPT_DIR"; exit 1; }
# Define the template file path and output file path
TEMPLATE_FILE="../../k3s/forgejo/ingress.yaml.template"
OUTPUT_FILE="../../k3s/forgejo/ingress.yaml"
# Use envsubst to substitute the APP_DOMAIN_NAME variable into the template
envsubst < "$TEMPLATE_FILE" > "$OUTPUT_FILE"
if [ $? -ne 0 ]; then
echo "Error: Failed to substitute variables in the template."
exit 1
fi
echo "Ingress configuration has been created at $OUTPUT_FILE"

@@ -0,0 +1,33 @@
#!/bin/bash
set -a
# read environment variables from .env file
# for value of EMAIL
. .env
# Check if EMAIL environment variable is set
if [ -z "$EMAIL" ]; then
echo "Error: EMAIL environment variable is not set."
echo "Please set the EMAIL variable and try again."
exit 1
fi
# Get the directory where the script is located
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
cd "$SCRIPT_DIR" || { echo "Failed to change directory to $SCRIPT_DIR"; exit 1; }
# Define the template file path and output file path
TEMPLATE_FILE="../../k3s/forgejo/issuer.yaml.template"
OUTPUT_FILE="../../k3s/forgejo/issuer.yaml"
# Use envsubst to substitute the EMAIL variable into the template
envsubst < "$TEMPLATE_FILE" > "$OUTPUT_FILE"
if [ $? -ne 0 ]; then
echo "Error: Failed to substitute variables in the template."
exit 1
fi
echo "Issuer configuration has been created at $OUTPUT_FILE"

@@ -0,0 +1,45 @@
#!/bin/bash
set -e
echo "Installing Forgejo"
# Get the directory where the script is located
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
# Define namespace
NAMESPACE="forgejo"
MANIFESTS_DIR="${SCRIPT_DIR}/../../k3s/forgejo"
echo "Creating namespace..."
if ! kubectl get namespace "${NAMESPACE}" >/dev/null 2>&1; then
kubectl create namespace "${NAMESPACE}"
else
echo "Namespace '${NAMESPACE}' already exists."
fi
echo "Creating PersistentVolumeClaim..."
kubectl apply -f ${MANIFESTS_DIR}/pvc.yaml
echo "Creating Service..."
kubectl apply -f ${MANIFESTS_DIR}/service.yaml
echo "Creating Deployment..."
kubectl apply -f ${MANIFESTS_DIR}/deployment.yaml
echo "Creating Certificate Issuer..."
kubectl apply -f ${MANIFESTS_DIR}/issuer.yaml
echo "Creating Ingress..."
kubectl apply -f ${MANIFESTS_DIR}/ingress.yaml
echo "Forgejo installation complete."
echo "Verify deployment with: kubectl -n ${NAMESPACE} get pods,svc,ingress,pvc"
exit;
# Note: The ingressTCP.yaml is for a different application (galene) and should be applied separately
# echo "Note: The ingressTCP.yaml is for the galene application and has not been applied."

@@ -0,0 +1,47 @@
[
{
"name": "install cert-manager",
"function": "RunCommand",
"params": [
"gcloud/tf/scripts/cert-manager/install_cert-manager.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "install traefik",
"function": "RunCommand",
"params": [
"gcloud/tf/scripts/install_traefik.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "create forgejo ingress",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/forgejo/create_ingress.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "create forgejo issuer",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/forgejo/create_issuer.sh"
],
"retryCount": 0,
"shouldAbort": true
},
{
"name": "install forgejo",
"function": "RunCommand",
"params": [
"./gcloud/tf/scripts/forgejo/install_forgejo.sh"
],
"retryCount": 0,
"shouldAbort": true
}
]

@@ -0,0 +1,64 @@
#!/usr/bin/env bash
# Exit immediately if a command exits with a non-zero status.
set -e
TMPFILE=$(mktemp /tmp/traefik-values-XXXXXX.yaml)
cat > "$TMPFILE" <<EOF
ingressClass:
enabled: true
isDefaultClass: true
ports:
web:
port: 80
hostPort: 80
websecure:
port: 443
hostPort: 443
traefik:
port: 9000
api:
dashboard: true
insecure: true
ingressRoute:
dashboard:
enabled: true
ping: true
log:
level: INFO
service:
enabled: true
type: ClusterIP
annotations: {}
ports:
web:
port: 80
protocol: TCP
targetPort: web
websecure:
port: 443
protocol: TCP
targetPort: websecure
EOF
if helm status traefik --namespace traefik &> /dev/null; then
echo "Traefik is already installed in the 'traefik' namespace. Upgrading..."
helm upgrade traefik traefik/traefik --namespace traefik -f "$TMPFILE"
else
echo "Installing Traefik..."
helm repo add traefik https://traefik.github.io/charts
helm repo update
# --create-namespace creates the 'traefik' namespace if it does not already exist.
helm install traefik traefik/traefik --namespace traefik --create-namespace -f "$TMPFILE"
fi
# echo
# echo "To access the dashboard:"
# echo "kubectl port-forward -n traefik \$(kubectl get pods -n traefik -l \"app.kubernetes.io/name=traefik\" -o name) 9000:9000"
# echo "Then visit http://localhost:9000/dashboard/ in your browser"

@@ -0,0 +1,102 @@
#!/bin/bash
INFCTL_GIT_REPO="https://codeberg.org/headshed/infctl-cli.git"
INFCTL_GIT_REPO_BRANCH="feature/gcloud-k3s"
INFCTL_INSTALL_DIR="/opt/src"
# ensure only run once
if [[ -f /etc/startup_was_launched ]]; then exit 0; fi
touch /etc/startup_was_launched
# Format and mount the k3s disk if it is not already mounted
# This creates an ext4 filesystem on the specified
# disk with no reserved space for root, forces the operation,
# fully initializes inode tables and the journal, and enables
# discard/TRIM for better performance on SSDs or
# thin-provisioned storage.
if ! lsblk | grep -q "/var/lib/rancher/k3s"; then
mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-k3s-disk
mkdir -p /var/lib/rancher/k3s
mount -o discard,defaults /dev/disk/by-id/google-k3s-disk /var/lib/rancher/k3s
chmod a+w /var/lib/rancher/k3s
fi
# A disk named k3s-disk in your Terraform configuration will
# appear as /dev/disk/by-id/google-k3s-disk.
# Format and mount the app-data-disk if it is not already mounted
if ! lsblk | grep -q "/mnt/disks/app-data"; then
mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-app-data-disk
mkdir -p /mnt/disks/app-data
mount -o discard,defaults /dev/disk/by-id/google-app-data-disk /mnt/disks/app-data
chmod a+w /mnt/disks/app-data
fi
# Similarly, a disk named app-data-disk will appear as /dev/
# disk/by-id/google-app-data-disk.
# Add to /etc/fstab for persistence (only if not already present)
if ! grep -q "/var/lib/rancher/k3s" /etc/fstab; then
echo "/dev/disk/by-id/google-k3s-disk /var/lib/rancher/k3s ext4 defaults,discard 0 0" >> /etc/fstab
fi
if ! grep -q "/mnt/disks/app-data" /etc/fstab; then
echo "/dev/disk/by-id/google-app-data-disk /mnt/disks/app-data ext4 defaults,discard 0 0" >> /etc/fstab
fi
# apt install
apt update
apt install -y ncdu htop git curl
# helm install
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
/bin/bash get_helm.sh
# user bashrc config
rc=/home/user/.bashrc
{
echo "export KUBECONFIG=~/.kube/config"
echo "alias l='ls -lah'"
echo "alias ll='ls -lh'"
echo "alias k=kubectl"
echo "export dry='--dry-run=client'"
echo "export o='-oyaml'"
echo "alias kcd='kubectl config use-context'"
echo "source <(kubectl completion bash)"
echo "complete -F __start_kubectl k"
} >> "$rc"
# Install k3s
k3s_version="v1.32.8+k3s1"
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="$k3s_version" sh -s - server \
    --cluster-init \
    --disable traefik \
    --disable servicelb
# Set up kubeconfig for the 'user' user
mkdir -p /home/user/.kube
chown user:user /home/user/.kube
chmod 700 /home/user/.kube
# Copy the kubeconfig file to the user's home directory
# for easier access
cp /etc/rancher/k3s/k3s.yaml /home/user/.kube/config
chown user:user /home/user/.kube/config
# install infctl
curl -L https://codeberg.org/headshed/infctl-cli/raw/branch/main/install.sh | bash
# clone infctl repo if not already present
if [[ ! -d "$INFCTL_INSTALL_DIR" ]]; then
mkdir -p "$INFCTL_INSTALL_DIR"
cd "$INFCTL_INSTALL_DIR" || { echo "Failed to change directory to $INFCTL_INSTALL_DIR"; exit 1; }
git clone --branch "$INFCTL_GIT_REPO_BRANCH" "$INFCTL_GIT_REPO" || { echo "Failed to clone $INFCTL_GIT_REPO"; exit 1; }
chown -R user:user "$INFCTL_INSTALL_DIR"
fi
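The `/dev/disk/by-id/google-k3s-disk` and `google-app-data-disk` paths used above come from the Terraform side: GCE exposes an attached disk under `/dev/disk/by-id/google-<device_name>`. A minimal sketch with assumed resource names (the actual configuration lives under `gcloud/tf/`):

```hcl
// Hypothetical sketch: a disk attached with device_name = "k3s-disk"
// surfaces on the guest as /dev/disk/by-id/google-k3s-disk.
resource "google_compute_disk" "k3s_disk" {
  name = "k3s-disk"
  type = "pd-ssd"
  zone = var.zone
  size = 50
}

resource "google_compute_attached_disk" "k3s" {
  disk        = google_compute_disk.k3s_disk.id
  instance    = google_compute_instance.k3s.id
  device_name = "k3s-disk"
}
```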


@ -0,0 +1,16 @@
#!/usr/bin/env bash
. .env
if [ -z "$PROJECT_NAME" ]; then
echo "❌ PROJECT_NAME is not set. Please add PROJECT_NAME=<your_project_name> to your .env file before running this script."
exit 1
fi
if ! { gcloud compute instances list --project="$PROJECT_NAME" \
    && gcloud compute disks list --project="$PROJECT_NAME" \
    && gcloud compute firewall-rules list --project="$PROJECT_NAME" \
    && gcloud storage buckets list --project="$PROJECT_NAME"; }; then
    echo "❌ gcloud is not authenticated, please run 'gcloud auth login' first"
    echo
    exit 1
fi
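The `. .env` line above sources the variables into the current shell; when they also need to reach child processes such as `gcloud` or `tofu`, wrapping the source in `set -a` exports everything the file assigns. A self-contained sketch using a temporary file in place of the real `.env`:

```shell
# Sketch of the .env sourcing pattern; the temp file stands in for
# the real .env, and set -a exports every variable it assigns.
envfile="$(mktemp)"
cat > "$envfile" <<'EOF'
PROJECT_NAME="demo-project"
EOF

set -a
. "$envfile"
set +a
rm -f "$envfile"

if [ -z "$PROJECT_NAME" ]; then
    echo "❌ PROJECT_NAME is not set"
    exit 1
fi
echo "PROJECT_NAME is $PROJECT_NAME"
```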


@ -0,0 +1,65 @@
#!/usr/bin/env bash
echo "🧪 checking we have tofu installed..."
if ! command -v tofu &> /dev/null
then
echo "❌ tofu could not be found, please install it first"
echo
echo "see https://opentofu.org/docs/intro/install/standalone/"
echo
echo "and https://opentofu.org/docs/intro/install/ for more details"
echo
exit 1
fi
echo "✅ tofu is installed."
echo
tofu version
echo
echo "🧪 checking we have gcloud installed..."
if ! command -v gcloud &> /dev/null
then
echo "❌ gcloud could not be found, please install it first"
echo
echo "see https://cloud.google.com/sdk/docs/install"
echo
exit 1
fi
echo "✅ gcloud is installed."
echo
gcloud version
echo
echo "🧪 checking we have kubectl installed..."
if ! command -v kubectl &> /dev/null
then
echo "❌ kubectl could not be found, please install it first"
echo
echo "see https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/"
echo
exit 1
fi
echo "✅ kubectl is installed."
echo
kubectl version --client
echo
echo "🧪 checking we have envsubst installed..."
if ! command -v envsubst &> /dev/null
then
echo "❌ envsubst could not be found, please install it first"
echo
echo "on ubuntu you can install it with: sudo apt-get install -y gettext-base"
echo
exit 1
fi
echo "✅ envsubst is installed."
echo
envsubst --version
echo
echo "✅ Pre-flight checks passed. You are ready to proceed 🙂"
echo

gcloud/tf/scripts/run_tofu.sh Executable file

@ -0,0 +1,29 @@
#!/usr/bin/env bash
# Get the directory where the script is located
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
cd "$SCRIPT_DIR" || { echo "Failed to change directory to $SCRIPT_DIR"; exit 1; }
TF_DIR="../"
cd "$TF_DIR" || { echo "Failed to change directory to $TF_DIR"; exit 1; }
if [[ -d ".terraform" && -f ".terraform.lock.hcl" ]]; then
    echo "✅ Terraform already initialized"
else
    echo "⚠️ Initializing Terraform..."
    if ! tofu init; then
        echo "❌ tofu init failed, please check the output above"
        exit 1
    fi
fi
# tofu apply with auto-approve to make it non-interactive
if ! tofu apply -auto-approve; then
    echo "❌ tofu apply failed, please check the output above"
    exit 1
fi


@ -0,0 +1,14 @@
// Your GCP project name
// it will be referred to as the project ID
// in Google Cloud
// ----------------------------------
project_name = "<your gcp project name>"
// application name
app_name = "your-projects-k3s-cluster"
// where to deploy to
// region
region = "us-central1"
zone = "us-central1-a"

gcloud/tf/vars.tf Normal file

@ -0,0 +1,28 @@
// Env vars
// ----------------------------------
variable "project_name" {
type = string
}
variable "env" {
type = string
default = "dev"
description = "Environment"
}
variable "region" {
type = string
description = "GCP Region"
}
variable "zone" {
type = string
description = "GCP Zone"
}
variable "app_name" {
type = string
description = "Application name"
}
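These variables are typically wired into the provider and resource blocks; a minimal sketch with a hypothetical instance resource (not the actual configuration):

```hcl
// Provider picks up the project/region/zone from the tfvars file.
provider "google" {
  project = var.project_name
  region  = var.region
  zone    = var.zone
}

// Hypothetical resource showing how app_name and env compose a name.
resource "google_compute_instance" "k3s" {
  name         = "${var.app_name}-${var.env}"
  machine_type = "e2-standard-2"
  zone         = var.zone
  // ...
}
```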