Conversation

@TheAifam5 (Contributor) commented:

Closes: #81 (Support cilium ingress)

@TheAifam5 (Contributor, Author) commented on Oct 16, 2025:

I created this as a draft first since I had no chance to test whether NGINX still works. It adds additional options, so it also requires an update to the README.

It would be nice if someone could test the NGINX part :)

I have tested the NodePort and LoadBalancer approaches; here is my config:

variable "hcloud_token" {
  type = string
  description = "Hetzner Cloud API Token"
  sensitive = true
}

module "kubernetes" {
  source  = "./terraform-hcloud-kubernetes"
  # version = "~> 3.7"

  cluster_name = "XXXXXXXXXXXXX"
  hcloud_token = var.hcloud_token

  # Export configs for Talos and Kube API access
  cluster_kubeconfig_path  = "kubeconfig"
  cluster_talosconfig_path = "talosconfig"

  # Configure firewall
  firewall_use_current_ipv4 = false
  firewall_use_current_ipv6 = false
  firewall_api_source       = ["0.0.0.0/0", "::/0"]

  # Network configuration
  network_native_routing_ipv4_cidr = "10.0.0.0/8"

  # Define control plane node pools
  control_plane_nodepools = [
    { name = "control", type = "cax21", location = "fsn1", count = 3 }
  ]

  control_plane_public_vip_ipv4_enabled = true

  # KubeAPI configuration
  kube_api_hostname = "api.XXXXXXXXXXXXX.XXXXXXXXXXXXX.XXXXXXXXXXXXX"

  # Configure cluster autoscaler
  cluster_autoscaler_nodepools = [
    {
      name     = "autoscaler"
      type     = "cax11"
      location = "fsn1"
      min      = 0
      max      = 3
      labels   = { "autoscaler-node" = "true" }
    }
  ]
  cluster_autoscaler_helm_values = {
    extraArgs = {
      enforce-node-group-min-size   = true
      scale-down-delay-after-add    = "45m"
      scale-down-delay-after-delete = "4m"
      scale-down-unneeded-time      = "5m"
    }
  }
  cluster_autoscaler_discovery_enabled = true

  # Configure Cert-Manager
  cert_manager_enabled = true

  # Configure Longhorn
  longhorn_enabled               = true
  longhorn_default_storage_class = true

  # Configure backup
  talos_backup_s3_enabled = false

  # Configure ingress
  ingress_load_balancer_pools = [
    {
      name          = "lb-fsn"
      location      = "fsn1"
      type          = "lb11"
      local_traffic = true
    }
  ]

  # Configure ingress controller
  ingress_controller_enabled  = true
  ingress_controller_provider = "cilium"

  # Configure Cilium
  cilium_bpf_datapath_mode       = "netkit-l2"
  cilium_routing_mode            = "native"
  cilium_hubble_enabled          = true
  cilium_hubble_relay_enabled    = true
  cilium_hubble_ui_enabled       = true
  cilium_service_monitor_enabled = true

  # Extra routes for Talos nodes
  talos_extra_routes = ["10.0.0.0/8"]
}
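
(Aside: since hcloud_token is marked sensitive, it is best supplied out of band rather than hardcoded, e.g. through a terraform.tfvars file as sketched below, or via the TF_VAR_hcloud_token environment variable.)

# terraform.tfvars (keep out of version control)
hcloud_token = "XXXXXXXXXXXXX"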

This configuration uses NodePorts. Just remove ingress_load_balancer_pools to use the LoadBalancer approach instead.

To get the health checks passing on the Hetzner side, you need to deploy a service:

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/kubernetes/servicemesh/envoy/client-helloworld.yaml

and then add an Ingress rule :) I haven't found a workaround, and I'm not sure what this looks like on the NGINX side. A sketch of such an Ingress follows.
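
For illustration, a minimal Ingress might look like the sketch below. This is not part of the PR: the ingressClassName "cilium" is the class registered by Cilium's ingress controller, while the Service name helloworld and port 5000 are assumptions about the example manifest above and should be verified against it.

# Hypothetical sketch; check the Service name and port against
# the client-helloworld.yaml manifest before applying.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld-ingress
spec:
  ingressClassName: cilium
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: helloworld # assumed Service name
                port:
                  number: 5000 # assumed port

Once an Ingress rule is programmed, Cilium's Envoy answers on the ingress ports, which appears to be what brings the Hetzner health checks up.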

TheAifam5 marked this pull request as ready for review October 16, 2025 16:48
Signed-off-by: Mateusz Paluszkiewicz <theaifam5@gmail.com>
