Add support for cilium ingress #214
base: main
Conversation
I created this as a draft first since I had no chance to test whether NGINX still works. This adds additional options, so it also requires an update to the README. It would be nice if someone could test the NGINX part :) I have tested the NodePort and LoadBalancer approach; here is my config:

```hcl
variable "hcloud_token" {
  type        = string
  description = "Hetzner Cloud API Token"
  sensitive   = true
}

module "kubernetes" {
  source = "./terraform-hcloud-kubernetes"
  # version = "~> 3.7"

  cluster_name = "XXXXXXXXXXXXX"
  hcloud_token = var.hcloud_token

  # Export configs for Talos and Kube API access
  cluster_kubeconfig_path  = "kubeconfig"
  cluster_talosconfig_path = "talosconfig"

  # Configure firewall
  firewall_use_current_ipv4 = false
  firewall_use_current_ipv6 = false
  firewall_api_source       = ["0.0.0.0/0", "::/0"]

  # Network configuration
  network_native_routing_ipv4_cidr = "10.0.0.0/8"

  # Define control plane node pools
  control_plane_nodepools = [
    { name = "control", type = "cax21", location = "fsn1", count = 3 }
  ]
  control_plane_public_vip_ipv4_enabled = true

  # KubeAPI configuration
  kube_api_hostname = "api.XXXXXXXXXXXXX.XXXXXXXXXXXXX.XXXXXXXXXXXXX"

  # Configure cluster autoscaler
  cluster_autoscaler_nodepools = [
    {
      name     = "autoscaler"
      type     = "cax11"
      location = "fsn1"
      min      = 0
      max      = 3
      labels   = { "autoscaler-node" = "true" }
    }
  ]
  cluster_autoscaler_helm_values = {
    extraArgs = {
      enforce-node-group-min-size   = true
      scale-down-delay-after-add    = "45m"
      scale-down-delay-after-delete = "4m"
      scale-down-unneeded-time      = "5m"
    }
  }
  cluster_autoscaler_discovery_enabled = true

  # Configure Cert-Manager
  cert_manager_enabled = true

  # Configure Longhorn
  longhorn_enabled               = true
  longhorn_default_storage_class = true

  # Configure backup
  talos_backup_s3_enabled = false

  # Configure ingress
  ingress_load_balancer_pools = [
    {
      name          = "lb-fsn"
      location      = "fsn1"
      type          = "lb11"
      local_traffic = true
    }
  ]

  # Configure ingress controller
  ingress_controller_enabled  = true
  ingress_controller_provider = "cilium"

  # Configure Cilium
  cilium_bpf_datapath_mode       = "netkit-l2"
  cilium_routing_mode            = "native"
  cilium_hubble_enabled          = true
  cilium_hubble_relay_enabled    = true
  cilium_hubble_ui_enabled       = true
  cilium_service_monitor_enabled = true

  # Extra routes for Talos nodes
  talos_extra_routes = ["10.0.0.0/8"]
}
```

This configuration uses … In order to get health checks up on the Hetzner side, you need to deploy a Service and then add an Ingress rule :) I haven't found a workaround; not sure what it looks like on the NGINX side.
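The Service and Ingress manifests referenced above were not captured in this page. As a rough sketch only — the names, namespace, port, and path below are my own assumptions, not the author's manifests — a dummy backend Service plus an Ingress rule that gives the Hetzner Load Balancer health check something to hit might look like:

```yaml
# Hypothetical example: a placeholder Service and an Ingress rule so the
# Hetzner LB health check against the Cilium ingress gets a routable target.
# All names, the port, and the path are assumptions, not from this PR.
apiVersion: v1
kind: Service
metadata:
  name: healthcheck
  namespace: default
spec:
  selector:
    app: healthcheck
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: healthcheck
  namespace: default
spec:
  ingressClassName: cilium
  rules:
    - http:
        paths:
          - path: /healthz
            pathType: Prefix
            backend:
              service:
                name: healthcheck
                port:
                  number: 80
```

A real backend pod matching the `app: healthcheck` selector would still be needed to return a 2xx response.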
Force-pushed from a0aa12a to 8bb78be
Signed-off-by: Mateusz Paluszkiewicz <theaifam5@gmail.com>
Force-pushed from eaf959d to d4cd456
Closes: #81