Terraform Infrastructure as Code

Automate infrastructure provisioning and configuration using Terraform. Manage Vault, KVM VMs, k3s applications, and Cloudflare from code.

Executive Summary

Problem: Manual infrastructure provisioning is slow, error-prone, and undocumented.

Solution: Terraform IaC with version-controlled state and reproducible deployments.

Target: All infrastructure: VMs, Vault config, k3s apps, Cloudflare.

Architecture Overview

Terraform IaC Architecture

Provider Summary

Provider     Source                  Status              Manages
Vault        hashicorp/vault         Official            PKI roles, SSH CA, policies, auth methods
libvirt      dmacvicar/libvirt       Community (active)  KVM virtual machines on kvm-01, kvm-02
Kubernetes   hashicorp/kubernetes    Official            k3s namespaces, helm releases, secrets
Cloudflare   cloudflare/cloudflare   Official            DNS records, Pages, Access policies

Repository Structure

Figure 1. Repository Structure

Workflow

Terraform Workflow

Standard Commands

# Initialize providers
terraform init

# Preview changes
terraform plan

# Apply changes (after review)
terraform apply

# Destroy resources (careful!)
terraform destroy

Prerequisites

1. Install Terraform

# Arch Linux
sudo pacman -S terraform

# Or use tfenv for version management
git clone https://github.com/tfutils/tfenv.git ~/.tfenv
echo 'export PATH="$HOME/.tfenv/bin:$PATH"' >> ~/.zshrc
tfenv install latest
tfenv use latest
terraform version

2. Configure State Backend

State is stored on NAS-01 over NFS for persistence and sharing.

# Verify NFS mount
mount | grep nas-01

If not mounted:

sudo mkdir -p /mnt/nas-01/terraform
sudo mount -t nfs nas-01.inside.domusdigitalis.dev:/volume1/terraform /mnt/nas-01/terraform
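
To make the mount survive reboots, a possible /etc/fstab entry (the mount options are illustrative; adjust them to your NFS setup):

```
nas-01.inside.domusdigitalis.dev:/volume1/terraform  /mnt/nas-01/terraform  nfs  defaults,_netdev  0  0
```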

3. Provider Authentication

Vault Provider

# Use existing Vault token
export VAULT_ADDR="https://vault-01.inside.domusdigitalis.dev:8200"
export VAULT_TOKEN="$(cat ~/.vault-token)"

# Or authenticate
vault login -method=token

libvirt Provider

# SSH key authentication to KVM hosts
# Vault SSH CA cert should already work
ssh kvm-01 "virsh list"
ssh kvm-02 "virsh list"

Kubernetes Provider

# Use existing kubeconfig
export KUBECONFIG=~/.kube/config

# Verify
kubectl get nodes

Cloudflare Provider

# Load from secrets (CF_DNS_TOKEN is in dev/app)
dsource d000 dev/app
export CLOUDFLARE_API_TOKEN="$CF_DNS_TOKEN"
echo "${CLOUDFLARE_API_TOKEN:+token set}"  # confirm without printing the secret

Add CLOUDFLARE_API_TOKEN=$CF_DNS_TOKEN to d000/dev/app to avoid the export step.

Phase 1: Repository Setup

1.1 Create Repository

mkdir -p ~/atelier/_projects/personal/domus-terraform
cd ~/atelier/_projects/personal/domus-terraform
git init

1.2 Create versions.tf

terraform {
  required_version = ">= 1.5.0"

  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "~> 4.0"
    }
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "~> 0.8.0"  # 0.9.x has breaking schema changes
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.25"
    }
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 4.0"
    }
  }
}

1.3 Create backend.tf

terraform {
  backend "local" {
    path = "/mnt/nas-01/terraform/domus.tfstate"
  }
}

For team collaboration, consider migrating to Terraform Cloud or S3-compatible backend later.
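
If that migration happens, a minimal sketch of an S3-compatible backend block, assuming Terraform 1.6+ and a hypothetical MinIO endpoint (the bucket name and URL are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket = "domus-terraform-state"     # placeholder bucket
    key    = "domus.tfstate"
    region = "us-east-1"                 # ignored by MinIO but required by the backend
    endpoints = {
      s3 = "https://minio.example.com"   # hypothetical endpoint
    }
    use_path_style              = true
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
  }
}
```

Running `terraform init -migrate-state` after switching backends moves the existing state file over.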

1.4 Create providers.tf

# Vault Provider
provider "vault" {
  address = "https://vault-01.inside.domusdigitalis.dev:8200"
  # Token from VAULT_TOKEN env var
}

# libvirt Provider - kvm-01
provider "libvirt" {
  alias = "kvm01"
  uri   = "qemu+ssh://root@kvm-01.inside.domusdigitalis.dev/system"
}

# libvirt Provider - kvm-02
provider "libvirt" {
  alias = "kvm02"
  uri   = "qemu+ssh://root@kvm-02.inside.domusdigitalis.dev/system"
}

# Kubernetes Provider
provider "kubernetes" {
  config_path = "~/.kube/config"
}

# Cloudflare Provider
provider "cloudflare" {
  # Token from CLOUDFLARE_API_TOKEN env var
}

Phase 2: Vault Configuration

Codify existing Vault configuration.

2.1 environments/prod/vault/main.tf

# Existing PKI mount (imported, not created)
data "vault_mount" "pki_int" {
  path = "pki_int"
}

# Existing SSH mount (imported, not created)
data "vault_mount" "ssh" {
  path = "ssh"
}

2.2 environments/prod/vault/pki.tf

# PKI Role: domus-server (for server certificates)
resource "vault_pki_secret_backend_role" "domus_server" {
  backend          = data.vault_mount.pki_int.path
  name             = "domus-server"
  ttl              = 31536000  # 1 year
  max_ttl          = 31536000
  allow_localhost  = true
  allowed_domains  = ["inside.domusdigitalis.dev"]
  allow_subdomains = true
  key_type         = "rsa"
  key_bits         = 2048
}

# PKI Role: domus-client (for EAP-TLS client certs)
resource "vault_pki_secret_backend_role" "domus_client" {
  backend          = data.vault_mount.pki_int.path
  name             = "domus-client"
  ttl              = 31536000
  max_ttl          = 31536000
  allowed_domains  = ["inside.domusdigitalis.dev"]
  allow_subdomains = true
  key_type         = "rsa"
  key_bits         = 2048
  key_usage        = ["DigitalSignature", "KeyEncipherment"]
  ext_key_usage    = ["ClientAuth"]
}

2.3 environments/prod/vault/ssh.tf

# SSH CA Role: domus-client
resource "vault_ssh_secret_backend_role" "domus_client" {
  backend                 = data.vault_mount.ssh.path
  name                    = "domus-client"
  key_type                = "ca"
  ttl                     = "8h"
  max_ttl                 = "24h"
  allowed_users           = "*"
  default_user            = "ansible"
  allowed_extensions      = "permit-pty,permit-agent-forwarding"
  default_extensions = {
    permit-pty = ""
  }
  allow_user_certificates = true
}
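
The Vault provider also covers policies and auth methods (per the provider summary). A hedged sketch of a policy resource; the policy name and KV path are hypothetical, not taken from the existing Vault config:

```hcl
resource "vault_policy" "dev_app_read" {
  name   = "dev-app-read"  # hypothetical policy name
  policy = <<-EOT
    # Read-only access to a KV v2 secret (hypothetical path)
    path "dev/data/app" {
      capabilities = ["read"]
    }
  EOT
}
```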

Phase 3: KVM VM Provisioning

Provision k3s and Vault nodes on kvm-02.

3.1 modules/vm/main.tf

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

variable "name" {
  type        = string
  description = "VM name"
}

variable "memory" {
  type        = number
  default     = 4096
  description = "Memory in MB"
}

variable "vcpu" {
  type        = number
  default     = 2
  description = "Number of vCPUs"
}

variable "disk_size" {
  type        = number
  default     = 40
  description = "Disk size in GB"
}

variable "base_image" {
  type        = string
  description = "Path to base cloud image"
}

variable "network_bridge" {
  type        = string
  default     = "virbr0"
  description = "Network bridge"
}

variable "cloud_init_config" {
  type        = string
  description = "Cloud-init user-data"
}

# Cloud-init disk
resource "libvirt_cloudinit_disk" "cloudinit" {
  name      = "${var.name}-cloudinit.iso"
  user_data = var.cloud_init_config
}

# VM disk (clone from base)
resource "libvirt_volume" "disk" {
  name           = "${var.name}.qcow2"
  base_volume_id = var.base_image
  size           = var.disk_size * 1024 * 1024 * 1024
}

# VM definition
resource "libvirt_domain" "vm" {
  name   = var.name
  memory = var.memory
  vcpu   = var.vcpu

  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  disk {
    volume_id = libvirt_volume.disk.id
  }

  network_interface {
    bridge = var.network_bridge
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type        = "vnc"
    listen_type = "address"
  }
}

output "name" {
  value = libvirt_domain.vm.name
}
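
If downstream configuration needs the VM's address, the module could also export the addresses libvirt reports for the first interface. A sketch; the list stays empty until libvirt can observe the guest's leases (e.g. via qemu-guest-agent or a libvirt-managed DHCP network):

```hcl
output "ip_addresses" {
  # Empty until libvirt can observe the guest's addresses
  value = libvirt_domain.vm.network_interface[0].addresses
}
```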

3.2 environments/prod/kvm/k3s-nodes.tf

locals {
  k3s_nodes = {
    "k3s-master-02" = {
      ip     = "10.50.1.121"
      memory = 8192
      vcpu   = 4
    }
    "k3s-master-03" = {
      ip     = "10.50.1.122"
      memory = 8192
      vcpu   = 4
    }
  }

  cloud_init_template = <<-EOF
    #cloud-config
    hostname: %s
    users:
      - name: evanusmodestus
        groups: wheel
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - ${trimspace(file(pathexpand("~/.ssh/id_ed25519_vault.pub")))}
    runcmd:
      - nmcli conn add con-name eth0 type ethernet ifname eth0 ipv4.method manual ipv4.addresses %s/24 ipv4.gateway 10.50.1.1 ipv4.dns "10.50.1.1,10.50.1.90"
  EOF
}

module "k3s_nodes" {
  for_each = local.k3s_nodes
  source   = "../../modules/vm"

  providers = {
    libvirt = libvirt.kvm02
  }

  name              = each.key
  memory            = each.value.memory
  vcpu              = each.value.vcpu
  disk_size         = 50
  base_image        = "/mnt/onboard-ssd/libvirt/images/Rocky-9-GenericCloud.latest.x86_64.qcow2"
  cloud_init_config = format(local.cloud_init_template, each.key, each.value.ip)
}
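
To surface the per-node module results at the root level, an aggregate output over the for_each instances could look like:

```hcl
output "k3s_node_names" {
  description = "VM names managed by the k3s_nodes module on kvm-02"
  value       = { for key, node in module.k3s_nodes : key => node.name }
}
```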

Phase 4: Kubernetes Resources

Manage k3s applications via Terraform.

4.1 environments/prod/k3s/namespaces.tf

resource "kubernetes_namespace" "monitoring" {
  metadata {
    name = "monitoring"
    labels = {
      "app.kubernetes.io/managed-by" = "terraform"
    }
  }
}

resource "kubernetes_namespace" "wazuh" {
  metadata {
    name = "wazuh"
    labels = {
      "app.kubernetes.io/managed-by" = "terraform"
    }
  }
}
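
The provider summary also lists secrets under the Kubernetes provider. A hedged sketch of a secret in the monitoring namespace; the secret name, keys, and variable are hypothetical:

```hcl
resource "kubernetes_secret" "grafana_admin" {
  metadata {
    name      = "grafana-admin"  # hypothetical secret name
    namespace = kubernetes_namespace.monitoring.metadata[0].name
  }
  data = {
    # The provider base64-encodes values in `data` automatically
    admin-user     = "admin"
    admin-password = var.grafana_admin_password  # hypothetical variable
  }
}
```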

Phase 5: Cloudflare

Manage DNS records and Pages via Terraform.

5.1 environments/prod/cloudflare/dns.tf

data "cloudflare_zone" "domusdigitalis" {
  name = "domusdigitalis.dev"
}

resource "cloudflare_record" "wazuh" {
  zone_id = data.cloudflare_zone.domusdigitalis.id
  name    = "wazuh.inside"
  value   = "10.50.1.134"
  type    = "A"
  proxied = false
  comment = "Wazuh Manager VIP"
}
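
As internal records accumulate, a for_each map keeps them declarative in one place. A sketch with hypothetical hostnames and IPs:

```hcl
locals {
  internal_records = {
    # Hypothetical examples; replace with real hosts
    "grafana.inside" = "10.50.1.130"
    "vault.inside"   = "10.50.1.110"
  }
}

resource "cloudflare_record" "internal" {
  for_each = local.internal_records
  zone_id  = data.cloudflare_zone.domusdigitalis.id
  name     = each.key
  value    = each.value
  type     = "A"
  proxied  = false
}
```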

Demonstration Guide

For the team demonstration:

Quick Demo (10 min)

# 1. Show the code
cd ~/atelier/_projects/personal/domus-terraform
tree -L 2

# 2. Initialize
terraform init

# 3. Show providers
terraform providers

# 4. Plan (dry run)
terraform plan

# 5. Show state
terraform state list

Key Talking Points

  1. Infrastructure as Code - All config in Git, version controlled

  2. Reproducibility - Same code = same infrastructure

  3. Self-documenting - Code IS the documentation

  4. Collaboration - PR reviews for infrastructure changes

  5. Drift detection - terraform plan shows changes

Import Existing Resources

For resources already created manually:

# Import existing Vault PKI role
terraform import vault_pki_secret_backend_role.domus_server pki_int/roles/domus-server

# Import an existing VM (its key must first be present in local.k3s_nodes;
# quote the address so the shell does not interpret the brackets)
terraform import 'module.k3s_nodes["k3s-master-01"].libvirt_domain.vm' k3s-master-01

Troubleshooting

Provider Issues

# Clear provider cache
rm -rf .terraform
terraform init

State Lock

# Force unlock (use with caution)
terraform force-unlock <lock-id>

libvirt Connection

# Test SSH to KVM host
ssh root@kvm-01 "virsh list --all"

# Check libvirt socket
ssh root@kvm-01 "systemctl status libvirtd"