
"HCloud mode" install fails with unhelpful errors when (probably?) misconfigured #6

@0xf1e

Description

Hello dear kubeaid devs!

This morning I decided to play around a little with KubeAid's native HCloud support, but I stumbled into some issues that I'd like to share.

First of all, one small issue I encountered at the start: because I copied my secrets.yaml file from a different project, it lacked the hetzner.apiToken value. An error is to be expected here, of course.
However, not providing this value caused the program to crash with a segmentation fault instead of a readable error:

❯ kubeaid-cli cluster bootstrap
(12:27) INFO : Parsing and validating config files
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x5c34dbd]

goroutine 1 [running]:
github.com/Obmondo/kubeaid-bootstrap-script/pkg/cloud/hetzner.NewHetznerCloudProvider()
	/github/workspace/pkg/cloud/hetzner/hetzner.go:23 +0x1d
github.com/Obmondo/kubeaid-bootstrap-script/pkg/config/parser.setCloudProvider()
	/github/workspace/pkg/config/parser/parse.go:223 +0xe7
github.com/Obmondo/kubeaid-bootstrap-script/pkg/config/parser.ParseConfigFiles({0xa68fd60, 0xe07eda0}, {0xe07eda0?, 0x0?})
	/github/workspace/pkg/config/parser/parse.go:102 +0x4aa
main.proxyRun(0xde01060, {0x969f193?, 0x4?, 0x969f197?})
	/github/workspace/cmd/kubeaid-cli/main.go:93 +0xac
github.com/spf13/cobra.(*Command).execute(0xde01060, {0xe07eda0, 0x0, 0x0})
	/go/pkg/mod/github.com/spf13/cobra@v1.9.1/command.go:993 +0x949
github.com/spf13/cobra.(*Command).ExecuteC(0xde00820)
	/go/pkg/mod/github.com/spf13/cobra@v1.9.1/command.go:1148 +0x465
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/cobra@v1.9.1/command.go:1071
main.main()
	/github/workspace/cmd/kubeaid-cli/main.go:45 +0x25

This led me to waste some time debugging my Docker install, which I initially thought was the cause.
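
Adding the token back to secrets.yaml made the panic go away. For reference, here is a minimal sketch of the entry I was missing (the exact key layout is my assumption, based on the hetzner.apiToken value named above):

# secrets.yaml (sketch; key path assumed from the hetzner.apiToken value)
hetzner:
  # HCloud API token. Leaving this out is what triggered the panic above.
  apiToken: <your-hcloud-api-token>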

More importantly, however, after resolving that issue my install still isn't succeeding, this time with the following error:

❯ kubeaid-cli cluster bootstrap
(12:14) INFO : Parsing and validating config files
(12:14) INFO : Fetching latest stable K8s version URL=https://dl.k8s.io/release/stable.txt
(12:14) ERROR : HCloud specific details not provided

This is really confusing to me, because I tried to keep the general.yaml configuration as close to the default as possible:

forkURLs:
  # KubeAid repository URL (in HTTPs syntax).
  # Defaults to Obmondo's KubeAid repository.
  kubeaid: https://github.com/Obmondo/KubeAid

  # Your KubeAid config repository URL (in HTTPs/SSH syntax).
  kubeaidConfig: https://github.com/0xf1e/kubeaid-config

# Kubernetes cluster and control-plane specific configurations.
cluster:
  # Kubernetes cluster name.
  name: kubeaid-hardened

  # Kubernetes version to use.
  k8sVersion: v1.31.0

  # Kubeaid version to use.
  kubeaidVersion: 19.1.0

  # Kubernetes API server specific configurations.
  # REFER : https://github.com/kubernetes-sigs/cluster-api/blob/main/controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanes.yaml.
  #
  # NOTE : Generally, refer to the KubeadmControlPlane CRD instead of the corresponding GoLang
  #        source types linked below.
  #        There are some configuration options which appear in the corresponding GoLang source type,
  #        but not in the CRD. If you set those fields, then they get removed by the Kubeadm
  #        control-plane provider. This causes the capi-cluster ArgoCD App to always be in an
  #        OutOfSync state, resulting in the KubeAid Bootstrap Script not making any progress!
  # apiServer:
  #
  #   extraArgs: {}
  #
  #   # REFER : "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1".HostPathMount
  #   #
  #   # NOTE : If you want a mount to be read-only, then set extraVolume.readOnly to true.
  #   #        Otherwise, omit setting that field. It gets removed by the Kubeadm control-plane
  #   #        provider component, which results in the capi-cluster ArgoCD App always being in
  #   #        OutOfSync state.
  #   extraVolumes: []
  #
  #   # REFER : "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1".File
  #   files: []

  # Uncomment, if you just want audit-logging to work out of the box! KubeAid Bootstrap Script will
  # set necessary configuration options in cluster.apiServer.
  # enableAuditLogging: True

  # Any additional users you want to be setup for each Kubernetes node.
  # additionalUsers:
  #  - name: <username>
  #    sshPublicKey: xxxxxxxxxx

cloud:
  hetzner:
    mode: hcloud

    # You can view all valid Hetzner zones and regions here :
    # https://docs.hetzner.com/cloud/general/locations/.
    zone: eu-central

    hcloudSSHKeyPairName: home

    rescueHCloudSSHKeyPair:
      name: kubeaid
      publicKeyFilePath: ./outputs/configs/ssh-key
      privateKeyFilePath: ./outputs/configs/ssh-key.pub

    # If true, creates a Hetzner private network and spins up the HCloud servers there.
    # The servers can then communicate to each other directly, using private IPs.
    networkEnabled: true

    imageName: ubuntu-24.04

    controlPlane:
      # HCloud machine type to be used for control-plane nodes.
      machineType: cax11

      # Number of control-plane nodes you want.
      replicas: 3

      # Servers will be spread across these HCloud regions.
      regions:
        - fsn1
        - nbg1
        - hel1

      loadBalancer:
        # Whether you want a loadbalancer which loadbalances traffic across the control-plane
        # node(s).
        # If you want a single control-plane node, you can disable this.
        enabled: true

        # HCloud region where the loadbalancer will be created.
        region: hel1

    nodeGroups:
      hcloud:
        - name: bootstrapper
          machineType: cax11
          minSize: 1
          maxSize: 3

          # A label should meet one of the following criteria to propagate to each of the nodes :
          #
          # (1) Has node-role.kubernetes.io as prefix.
          # (2) Belongs to node-restriction.kubernetes.io domain.
          # (3) Belongs to node.cluster.x-k8s.io domain.
          #
          # REFER : https://cluster-api.sigs.k8s.io/developer/architecture/controllers/metadata-propagation#machine
          labels:
            node-role.kubernetes.io/bootstrapper: ""
            node.cluster.x-k8s.io/nodegroup: bootstrapper
          taints: []

git:
  # Use SSH Agent authentication to authenticate against the Git platform.
  useSSHAgentAuth: false

  # Use SSH private key authentication to authenticate against the Git platform.
  # If set to true, ensure privateKeyFilePath under this git section is set to the path of the private key file on the host machine.
  useSSHPrivateKeyAuth: false

  # Path to the SSH private key file on the host machine.
  # privateKeyFilePath: 'path/to/private/key'

argocd:
  # Use SSH private key authentication to authenticate against the Git platform.
  # If set to true, ensure privateKeyFilePath under git section is set to the path of the private key file on the host machine.
  # If set to false, then HTTPS authentication will be used.
  useSSHPrivateKeyAuth: false

  # # ArgoCD kubeaid repository URL (in HTTPs/SSH syntax).
  # # If not set, defaults to forkURLs.kubeaid
  kubeaidURL: https://github.com/Obmondo/KubeAid

  # # ArgoCD KubeAid config repository URL (in HTTPs/SSH syntax).
  # # If not set, defaults to forkURLs.kubeaidConfig
  kubeaidConfigURL: https://github.com/0xf1e/kubeaid-config

I'm unsure how to move forward from here, because as far as I can tell, I provided all the "HCloud specific details" that were asked of me.

Maybe it has something to do with the way I am providing SSH keys? The documentation says:

Ensure that you don't already have an HCloud SSH KeyPair with the SSH key-pair you'll be using. Otherwise, ClusterAPI Provider Hetzner (CAPH) will error out.

So I removed all pre-existing SSH keys from my Hetzner project. That said, I encountered the same error irrespective of whether the SSH keys were there or not.

Have a nice rest of your day!

  • Fie
