
Proxmox with Terraform: The Ultimate Automation Guide

January 18, 2026
Timo Wevelsiep

A common misconception when switching from AWS to Proxmox is the fear of "Click-Ops". DevOps teams worry about losing their modern GitOps workflows and having to manually install servers like it's 2010.

The reality: Proxmox is fully automatable. But the path is a bit rockier than with AWS because you need to choose the right Terraform provider and master Cloud-Init.

In this technical deep-dive, we show how OutaCloud automates highly scalable clusters – and how to securely store your Terraform state on Hetzner.

Table of Contents

  • The Provider Choice: Telmate vs. BPG
  • The Secret Weapon: Cloud-Init Templates
  • The Code: main.tf Example (Telmate)
  • State Management: Where to Store the tfstate?
  • Conclusion: Same Workflow, Better Hardware

The Provider Choice: Telmate vs. BPG

Unlike AWS, where there's an official provider from HashiCorp, Proxmox relies on community providers. There are two major players, and your choice determines your capabilities.

A) The Incumbent: Telmate (telmate/proxmox)

This is the most widely used provider. It's stable, battle-tested, and perfectly suited for 90% of all "VM deployments".

  • Strength: Excellent documentation, huge community, stable for standard VMs and LXC containers.
  • Weakness: Sometimes lags behind on new Proxmox features (like SDN).

B) The Challenger: BPG (bpg/proxmox)

This provider is more modern and closer to the Proxmox API. It's often preferred when you need deep network configuration control.

  • Strength: Native support for Proxmox SDN (Software Defined Network), better handling of ISO uploads and complex ACME/certificate topics.
  • Weakness: Configuration is sometimes more complex (some operations, such as file or snippet uploads, additionally require SSH access to the Proxmox node).

Our recommendation for getting started: Use Telmate for compute resources. It feels most similar to the AWS provider.
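
If you do pick BPG later (e.g. for SDN), its provider block looks a bit different from the Telmate config shown below. A minimal sketch, with placeholder host, token, and version constraint (check the bpg documentation for your release):

terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "~> 0.60" # placeholder, pin a current release
    }
  }
}

provider "proxmox" {
  endpoint  = "https://your-proxmox-host:8006/"
  api_token = var.pm_api_token # e.g. "terraform@pve!mytoken=<secret>"
  insecure  = true             # only if you use self-signed certificates

  # Some operations (e.g. snippet/file uploads) need SSH access to the node
  ssh {
    agent    = true
    username = "root"
  }
}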

The Secret Weapon: Cloud-Init Templates

Terraform can create VMs, but it can't configure the operating system inside them (users, SSH keys, IP addresses). On AWS, "User Data" handles this. On Proxmox, we use Cloud-Init.

Before you can run Terraform, you need a "Golden Image" (template) in Proxmox that has Cloud-Init installed.

How to create the template (one-time on the Proxmox host):

# 1. Download Ubuntu 22.04 Image
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

# 2. Create VM & import disk
qm create 9000 --name "ubuntu-cloud-template" --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm

# 3. Attach the imported disk and add the Cloud-Init drive (IMPORTANT!)
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit

# 4. Boot order & serial console (cloud images expect a serial console)
qm set 9000 --boot c --bootdisk scsi0
qm set 9000 --serial0 socket --vga serial0

# 5. Convert to template
qm template 9000

Now you have a template (ID 9000) that Terraform can clone.

The Code: main.tf Example (Telmate)

Here's a production example that deploys three web VMs on a Hetzner Proxmox host, assigns them static IPs, and configures your SSH key.

Provider Config:

terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
      version = "3.0.1-rc1"
    }
  }
}

provider "proxmox" {
  pm_api_url  = "https://your-proxmox-host:8006/api2/json"
  pm_user     = "terraform-prov@pve"
  pm_password = var.pm_password
}
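
The var.pm_password reference needs a matching variable declaration. Telmate also supports Proxmox API tokens instead of a password, which is usually the better fit for CI pipelines. A small sketch (the token ID is a placeholder):

variable "pm_password" {
  description = "Password for the terraform-prov@pve user"
  type        = string
  sensitive   = true
}

# Alternative: token-based authentication instead of pm_user/pm_password
# provider "proxmox" {
#   pm_api_url          = "https://your-proxmox-host:8006/api2/json"
#   pm_api_token_id     = "terraform-prov@pve!terraform"
#   pm_api_token_secret = var.pm_api_token_secret
# }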

Resource Config (The VM):

resource "proxmox_vm_qemu" "web_server" {
  count       = 3
  name        = "web-${count.index + 1}"
  target_node = "pve-01"

  # Clone the template from above
  clone       = "ubuntu-cloud-template"
  full_clone  = true

  # Hardware specs (roughly a c6i.xlarge, but dedicated)
  cores       = 4
  sockets     = 1
  cpu         = "host" # Important for performance!
  memory      = 8192
  scsihw      = "virtio-scsi-pci"

  # Cloud-Init magic (User Data)
  ciuser      = "deploy"
  sshkeys     = <<-EOF
    ssh-rsa AAAAB3NzaC1yc2E... your-public-key ...
  EOF

  # Network (Static IP via Cloud-Init)
  ipconfig0 = "ip=10.0.0.1${count.index + 1}/24,gw=10.0.0.1"

  # Disk specs (NVMe)
  disk {
    storage = "local-lvm"
    size    = "50G"
    type    = "scsi"
    ssd     = 1
    discard = "on"
  }
}
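
After terraform apply you'll want the addresses of the new VMs at hand, e.g. for an Ansible inventory. A minimal output sketch that simply echoes the Cloud-Init network config defined above:

output "web_servers" {
  description = "Name and Cloud-Init network config of each web VM"
  value = {
    for vm in proxmox_vm_qemu.web_server : vm.name => vm.ipconfig0
  }
}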

State Management: Where to Store the tfstate?

In the AWS world, you use S3 for Terraform state. A local tfstate file is a no-go for teams. When you leave AWS, where do you store the state securely and centrally?

The Solution: Hetzner Object Storage

We use Hetzner Object Storage buckets as the backend. They're S3-compatible, sit in the same network as your servers (low latency), and cost almost nothing.

How to configure the backend:

terraform {
  backend "s3" {
    bucket = "terraform-state-bucket"
    key    = "prod/infrastructure.tfstate"

    # Hetzner Endpoint (Falkenstein)
    region   = "us-east-1" # Dummy region (required for plugin)
    endpoint = "https://fsn1.your-objectstorage.com"

    # Credentials (best passed via environment variables!)
    # access_key = "..."
    # secret_key = "..."

    # Important for S3-compatible backends that aren't AWS:
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_metadata_api_check     = true
    force_path_style            = true
  }
}
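
Note: the argument names above match older Terraform releases. From Terraform 1.6 onward, several of them were renamed; depending on your version, the backend block looks roughly like this sketch (check the S3 backend docs for your release):

terraform {
  backend "s3" {
    bucket = "terraform-state-bucket"
    key    = "prod/infrastructure.tfstate"
    region = "us-east-1" # still a dummy value

    endpoints = {
      s3 = "https://fsn1.your-objectstorage.com"
    }

    skip_credentials_validation = true
    skip_region_validation      = true
    skip_metadata_api_check     = true
    skip_requesting_account_id  = true
    skip_s3_checksum            = true
    use_path_style              = true # replaces force_path_style
  }
}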

Tip: Use export AWS_ACCESS_KEY_ID=... and AWS_SECRET_ACCESS_KEY=... in your CI/CD pipeline to keep credentials out of your code.

Conclusion: Same Workflow, Better Hardware

Switching to bare metal doesn't mean abandoning your DevOps principles.

AWS: You write code -> AWS API -> VM starts.

OutaCloud: You write code -> Proxmox API -> VM starts.

The difference? The code for Proxmox deploys servers that give you dedicated performance, incur no egress fees, and are 100% GDPR-compliant. Your developers won't notice any difference in their daily work – but your accounting department will.


Want to Migrate Your Terraform Modules?

We analyze your existing HCL files and help refactor them for the Proxmox provider.

Contact us directly for a free DevOps analysis.
