> 💡 **IP Discovery**: This deployer uses qemu-guest-agent for VM IP discovery (direct communication with the VM's guest agent). Ensure qemu-guest-agent is installed and enabled in your Talos template for reliable IP detection. Visit https://factory.talos.dev/ to generate your own image, or use mine.
An HTTP service that automates Talos Linux VM deployment on Proxmox VE and seamlessly registers them with existing Talos Kubernetes clusters.
This tool streamlines the process of creating and managing Talos Linux VMs by:
- Automated VM Creation: Clones from templates with configurable CPU, memory, and disk settings
- Smart Node Selection: Uses weighted algorithms to distribute VMs across Proxmox nodes
- NUMA & CPU Affinity: Optimizes performance with proper NUMA topology and core pinning
- Cluster Integration: Automatically registers new nodes with existing Talos clusters
- IP Discovery: Uses qemu-guest-agent for reliable VM IP detection
- Bulk Operations: Create multiple VMs in a single API call
- Kubernetes-ready: Deploy this application directly to your Kubernetes cluster, so you can deploy Kubernetes using Kubernetes (yo-dawg)
- Proxmox VE cluster with Talos Linux template (with qemu-guest-agent enabled)
- Existing Talos cluster or control plane
- talosctl installed (quick install: `curl -sL https://talos.dev/install | sh`)
```bash
docker run -p 8080:8080 \
  -e LISTEN_ADDR="0.0.0.0" \
  -e LISTEN_PORT="8080" \
  -e CONFIG_PATH="/app/config.yaml" \
  -e PROXMOX_BASE_ADDR="https://your-proxmox.example.com:8006/api2/json" \
  -e PROXMOX_TOKEN="user@pve!token=your-token-here" \
  -e AUTH_TOKEN="your-api-auth-token" \
  -e SENTRY_DSN="http://foobar@127.0.0.1:1234/1" \
  -e TALOS_CONTROLPLANE_ENDPOINT="https://your-controlplane:6443" \
  -e TALOS_MACHINE_TEMPLATE="/app/talos-machine-config.yaml" \
  -e TALOS_VM_INTERFACE="eth0" \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -v $(pwd)/talos-machine-config.yaml:/app/talos-machine-config.yaml \
  ghcr.io/d13410n3/proxmox-talos-vm-deployer:latest
```
```bash
curl -X POST http://localhost:8080/api/v1/create \
  -H "X-Auth-Token: your-auth-token" \
  -d "base_template=talos-template" \
  -d "vm_template=talos-worker-small"
```
- `PROXMOX_BASE_ADDR`: Proxmox VE API base URL (e.g., `https://proxmox.example.com:8006/api2/json`)
- `PROXMOX_TOKEN`: Proxmox VE API token (format: `user@realm!tokenname=token-value`)
- `AUTH_TOKEN`: Authentication token for API access
- `TALOS_MACHINE_TEMPLATE`: Path to Talos machine configuration template
- `TALOS_CONTROLPLANE_ENDPOINT`: Talos control plane endpoint
- `SENTRY_DSN`: Sentry DSN for error tracking (use `http://foobar@127.0.0.1:1234/1` if no Sentry)
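For a local (non-Docker) run, the same required settings can be exported in the shell first; a minimal sketch with placeholder values:

```bash
# Required settings for a local run (placeholder values - substitute your own)
export PROXMOX_BASE_ADDR="https://your-proxmox.example.com:8006/api2/json"
export PROXMOX_TOKEN="user@pve!token=your-token-here"
export AUTH_TOKEN="your-api-auth-token"
export TALOS_MACHINE_TEMPLATE="/app/talos-machine-config.yaml"
export TALOS_CONTROLPLANE_ENDPOINT="https://your-controlplane:6443"
export SENTRY_DSN="http://foobar@127.0.0.1:1234/1"  # dummy DSN if you don't use Sentry
```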
- `LISTEN_ADDR`: HTTP server listen address (default: `0.0.0.0`)
- `LISTEN_PORT`: HTTP server listen port (default: `8080`)
- `CONFIG_PATH`: Path to YAML configuration file (default: `config.yaml`)
- `TALOS_VM_INTERFACE`: Network interface to check for IP address (default: `eth0`)
- `DEBUG`: Enable debug mode (default: `false`)
- `LOG_LEVEL`: Log level - 0: Debug, 1: Info, 2: Error (default: `1`)
- `VERIFY_SSL`: Verify SSL certificates (default: `true`)
The service uses a YAML configuration file to define nodes and VM templates:
```yaml
nodes:
  - name: proxmox-node1   # Name of the node
    weight: 10            # Weight of the node (higher weight means more VMs)
    suffix: "1"           # Suffix for VM names, i.e. talos-worker-small-<suffix>-<....>
    ht: true              # Whether hyperthreading is enabled (used for CPU allocation)
    hugepages: false      # Whether hugepages are enabled (used for memory allocation)
    numa:                 # Host NUMA topology; check yours with numactl --hardware
      - id: 0             # NUMA node id
        cores:
          phy: 0-15       # Physical cores of NUMA node 0
          ht: 32-47       # Hyperthreaded cores of NUMA node 0
      - id: 1
        cores:
          phy: 16-31
          ht: 48-63

base_templates:           # Proxmox templates to use for VM creation
  - name: talos-template  # Name (used in the "base_template" parameter on deploy)
    id: 1901              # Actual template VM id

vm_templates:
  - name: talos-worker-small  # Template name (used in the "vm_template" parameter on deploy)
    cpu: 4                    # Number of cores
    memory: 8192              # Memory in MB
    disk: 20                  # Boot disk size in GB
    cpu_model: kvm64          # CPU model to set
    role: worker              # Role, used in the machine configuration template
  - name: talos-worker-medium
    cpu: 8
    memory: 16384
    disk: 50
    cpu_model: kvm64
    role: worker
  - name: talos-controlplane
    cpu: 4
    memory: 8192
    disk: 30
    cpu_model: kvm64
    role: controlplane
```
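The `phy`/`ht` core ranges must mirror the host's actual topology. They can be read off the Proxmox host like this (the IDs in your config will vary per machine):

```bash
# Show NUMA nodes and the CPUs that belong to each (run on the Proxmox host)
numactl --hardware

# Alternatively, list each logical CPU with its core and NUMA node;
# on a hyperthreaded host, two logical CPUs sharing the same CORE value
# are a physical core plus its HT sibling.
lscpu --extended=CPU,CORE,NODE
```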
Create a Talos machine configuration template with placeholders that will be automatically replaced during VM creation:
```yaml
version: v1alpha1
debug: false
persist: true
machine:
  type: {role}
  token: your-machine-token-here
  ca:
    crt: LS0tLS1CRUdJTi0tLS0t # Your cluster CA certificate
  network:
    hostname: {vm_name}
  install:
    disk: /dev/vda
    image: factory.talos.dev/installer/your-installer-id:v1.10.6
  sysctls:
    net.core.somaxconn: 65535
    net.core.netdev_max_backlog: 4096
  nodeLabels:
    talos.dev/worker: ""
    node-role.kubernetes.io/worker: ""
    topology.kubernetes.io/zone: home-{suffix}
cluster:
  id: your-cluster-id-here
  secret: your-cluster-secret-here
  controlPlane:
    endpoint: https://your-controlplane:6443
  clusterName: your-cluster-name
  network:
    dnsDomain: cluster.local
    podSubnets:
      - 10.244.0.0/16
    serviceSubnets:
      - 10.96.0.0/12
  token: your-cluster-token-here
  ca:
    crt: LS0tLS1CRUdJTi0tLS0t # Your cluster CA certificate
```
| Placeholder | Description | Example |
|---|---|---|
| `{role}` | VM template role | `worker`, `controlplane` |
| `{vm_name}` | Generated or specified VM name | `talos-worker-small-1-abc123` |
| `{node}` | Proxmox node name | `proxmox-node1` |
| `{vm_template}` | VM template name | `talos-worker-small` |
| `{cpu}` | CPU model | `kvm64`, `host` |
| `{cpu_cores}` | Number of CPU cores | `4`, `8` |
| `{memory}` | Memory in MB | `8192` |
| `{disk}` | Disk size in GB | `20`, `50` |
| `{suffix}` | Node suffix from config | `1`, `2` |
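The deployer performs this substitution internally when it generates a machine config. To preview a rendered config by hand, a rough `sed` equivalent might look like this (the values and output filename are illustrative, and only a few placeholders are shown):

```bash
# Hypothetical manual rendering of the template - the deployer does this for you
sed -e 's/{role}/worker/g' \
    -e 's/{vm_name}/talos-worker-small-1-abc123/g' \
    -e 's/{suffix}/1/g' \
    talos-machine-config.yaml > rendered-config.yaml
```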
POST /api/v1/create
Creates and configures a new Talos VM, then registers it with the cluster.
Headers:
- `X-Auth-Token`: Your authentication token

Parameters:
- `base_template` (required): Proxmox template name to clone from
- `vm_template` (required): VM configuration template name
- `name` (optional): Custom VM name (auto-generated if not provided)
- `node` (optional): Target Proxmox node (auto-selected by weight if not provided)
- `count` (optional): Number of VMs to create for bulk operations
- `reset` (optional): Reset the VM after creation (`"1"` to enable)
Advanced CPU/NUMA Options:
- `numa` (optional): Specific NUMA node ID
- `phy` (optional): Physical cores to pin (e.g., `"0-3,8-11"`)
- `ht` (optional): Hyperthreaded cores to pin (e.g., `"32-35,40-43"`)
- `phy_only` (optional): Use only physical cores (`"1"` to enable)
- `ht_only` (optional): Use only hyperthreaded cores (`"1"` to enable)
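For instance, a request that restricts a VM to only the hyperthreaded siblings of NUMA node 1 might look like this (parameter values are illustrative):

```bash
curl -X POST http://localhost:8080/api/v1/create \
  -H "X-Auth-Token: your-auth-token" \
  -d "base_template=talos-template" \
  -d "vm_template=talos-worker-small" \
  -d "numa=1" \
  -d "ht_only=1"
```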
Response:
```json
{
  "vm_id": 12345,
  "node": "proxmox-node1",
  "name": "talos-worker-small-1-12345-abc123",
  "ip": "192.168.88.175",
  "role": "worker",
  "reset": false,
  "duration_seconds": 127.45
}
```
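Because the response is JSON, individual fields are easy to extract in scripts, for example with `jq` (assuming it is installed):

```bash
# Create a VM and capture the IP it was assigned
IP=$(curl -s -X POST http://localhost:8080/api/v1/create \
  -H "X-Auth-Token: your-auth-token" \
  -d "base_template=talos-template" \
  -d "vm_template=talos-worker-small" | jq -r '.ip')
echo "New node IP: $IP"
```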
POST /api/v1/delete
Headers:
- `X-Auth-Token`: Your authentication token

Parameters:
- `vm_name` (optional): VM name to delete
- `node` + `vm_id` (optional): Alternative to `vm_name`
- `stop_method` (optional): `"shutdown"` or `"stop"` (default: `"shutdown"`)
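For example, deleting by node and VM ID with a hard stop instead of a graceful shutdown:

```bash
curl -X POST http://localhost:8080/api/v1/delete \
  -H "X-Auth-Token: your-auth-token" \
  -d "node=proxmox-node1" \
  -d "vm_id=12345" \
  -d "stop_method=stop"
```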
- `GET /health-check` - Service health status
- `GET /metrics` - Prometheus metrics
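Both can be exercised with plain `curl`:

```bash
# Service health status
curl http://localhost:8080/health-check

# Prometheus metrics
curl http://localhost:8080/metrics
```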
```bash
# Create a worker node
curl -X POST http://localhost:8080/api/v1/create \
  -H "X-Auth-Token: your-auth-token" \
  -d "base_template=talos-template" \
  -d "vm_template=talos-worker-small"

# Create a control plane node
curl -X POST http://localhost:8080/api/v1/create \
  -H "X-Auth-Token: your-auth-token" \
  -d "base_template=talos-template" \
  -d "vm_template=talos-controlplane"

# Create 3 worker nodes at once
curl -X POST http://localhost:8080/api/v1/create \
  -H "X-Auth-Token: your-auth-token" \
  -d "base_template=talos-template" \
  -d "vm_template=talos-worker-medium" \
  -d "count=3"

# Pin to specific physical cores
curl -X POST http://localhost:8080/api/v1/create \
  -H "X-Auth-Token: your-auth-token" \
  -d "base_template=talos-template" \
  -d "vm_template=talos-worker-small" \
  -d "phy=0-3,8-11" \
  -d "numa=0"

# Delete by VM name
curl -X POST http://localhost:8080/api/v1/delete \
  -H "X-Auth-Token: your-auth-token" \
  -d "vm_name=talos-worker-small-1-12345-abc123"
```
```bash
docker run -p 8080:8080 \
  -e LISTEN_ADDR="0.0.0.0" \
  -e LISTEN_PORT="8080" \
  -e CONFIG_PATH="/app/config.yaml" \
  -e PROXMOX_BASE_ADDR="https://your-proxmox.example.com:8006/api2/json" \
  -e PROXMOX_TOKEN="user@pve!token=your-token-here" \
  -e AUTH_TOKEN="your-api-auth-token" \
  -e SENTRY_DSN="your-sentry-dsn" \
  -e TALOS_CONTROLPLANE_ENDPOINT="https://your-controlplane:6443" \
  -e TALOS_MACHINE_TEMPLATE="/app/talos-machine-config.yaml" \
  -e TALOS_VM_INTERFACE="eth0" \
  -e DEBUG="true" \
  -e LOG_LEVEL="0" \
  -e VERIFY_SSL="false" \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -v $(pwd)/talos-machine-config.yaml:/app/talos-machine-config.yaml \
  ghcr.io/d13410n3/proxmox-talos-vm-deployer:latest
```
```bash
docker build -t proxmox-talos-vm-deployer .

docker run -p 8080:8080 \
  -e PROXMOX_BASE_ADDR="https://your-proxmox.example.com:8006/api2/json" \
  # ... other environment variables ...
  -v $(pwd)/config.yaml:/app/config.yaml \
  -v $(pwd)/talos-machine-config.yaml:/app/talos-machine-config.yaml \
  proxmox-talos-vm-deployer
```
Requirements: Go 1.21+, talosctl
```bash
go mod tidy
go build -o proxmox-talos-vm-deployer
./proxmox-talos-vm-deployer
```
Create a Talos Linux template in Proxmox:
- Download the Talos ISO from [Talos releases](https://github.com/siderolabs/talos/releases)
- Create VM with desired specs (will be used as template)
- Install Talos using the ISO
- Enable QEMU Guest Agent in VM options
- Configure network for DHCP
- Convert to template in Proxmox
Note: QEMU Guest Agent is required for IP discovery. Ensure it's enabled in your template.
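To double-check the agent setting from the Proxmox host, something like this should work (VM ID 1901 is just the example template ID from the config above):

```bash
# Enable the QEMU guest agent option on the template/VM (run on the Proxmox host)
qm set 1901 --agent enabled=1

# With the VM running, confirm the agent responds
qm guest cmd 1901 ping
```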
- Existing cluster or control plane must be running
- Install talosctl: `curl -sL https://talos.dev/install | sh`
- Machine config template with proper cluster credentials
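A quick sanity check of these prerequisites from your workstation (the node address is a placeholder; reaching it requires a valid talosconfig):

```bash
# Verify the talosctl client is installed
talosctl version --client

# Verify the control plane answers
talosctl --nodes your-controlplane-ip version
```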
```mermaid
graph TD
    A[API Request] --> B[Select Proxmox Node]
    B --> C[Clone VM from Template]
    C --> D[Configure CPU/Memory/NUMA]
    D --> E[Start VM]
    E --> F[Query Guest Agent for IP]
    F --> G[Generate Talos Config]
    G --> H[Apply Config to VM]
    H --> I[Register with Cluster]
    I --> J[VM Ready]
```
Process Details:
- Node Selection - Weighted algorithm chooses optimal Proxmox node
- VM Cloning - Creates new VM from base Talos template
- Resource Configuration - Sets CPU, memory, disk, NUMA topology
- Network Bootstrap - Starts VM and waits for network initialization
- IP Discovery - Uses qemu-guest-agent to get VM IP address
- Talos Configuration - Generates machine config with replaced placeholders
- Cluster Integration - Applies config and registers node with cluster
Available at the `/metrics` endpoint:

| Metric | Description | Labels |
|---|---|---|
| `vm_deployer_vms_created_total` | Total VMs created | `node`, `base_template`, `vm_template` |
| `vm_deployer_vms_deleted_total` | Total VMs deleted | `node` |
| `vm_deployer_handler_errors_total` | Total handler errors | `handler` |
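To eyeball these counters without a Prometheus server:

```bash
# Scrape the endpoint and show only the deployer's own metrics
curl -s http://localhost:8080/metrics | grep '^vm_deployer_'
```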
- `LOG_LEVEL=0`: Debug (verbose output)
- `LOG_LEVEL=1`: Info (default)
- `LOG_LEVEL=2`: Error only
Sentry integration for error reporting and monitoring.
| Issue | Solution |
|---|---|
| SSL Certificate Errors | Set `VERIFY_SSL=false` for self-signed certificates |
| Proxmox API Permissions | Ensure the token has VM management permissions |
| Network Connectivity | Verify VMs can reach the Talos control plane |
| IP Discovery Fails | Check that qemu-guest-agent is enabled and running in the VM |
| Talos Registration Fails | Validate the machine config template and cluster credentials |
- Enable debug logging:

  ```bash
  export LOG_LEVEL=0
  export DEBUG=true
  ```

- Test the guest agent:

  ```bash
  # From the Proxmox node
  qm guest cmd <vmid> network-get-interfaces
  ```

- Verify the Talos config:

  ```bash
  # talosctl validate requires a mode; metal matches VM installs
  talosctl validate --config your-machine-config.yaml --mode metal
  ```
- Logs: Check service logs with `LOG_LEVEL=0`
- Metrics: Monitor the `/metrics` endpoint for errors
- Sentry: Review the error tracking dashboard
- ✅ Automated VM Deployment - Clone and configure VMs from templates
- ✅ Smart Node Selection - Weighted distribution across Proxmox nodes
- ✅ NUMA Optimization - Proper topology configuration for performance
- ✅ CPU Affinity - Pin VMs to specific physical/hyperthreaded cores
- ✅ Bulk Operations - Create multiple VMs in single API call
- ✅ Template Placeholders - Dynamic configuration replacement
- ✅ IP Discovery - QEMU Guest Agent integration
- ✅ Cluster Integration - Automatic Talos cluster registration
- ✅ Monitoring - Prometheus metrics and health checks
- ✅ Error Tracking - Sentry integration for observability
MIT License - see LICENSE file for details.