This works fine if I'm using a kubeaid-config repo that's hosted on github, but if it is on codeberg, this happens:
❯ kubeaid-cli cluster bootstrap --skip-pr-workflow
(16:43) INFO : Proxying command execution to KubeAid Core container
v0.15.1: Pulling from obmondo/kubeaid-core
Digest: sha256:388b6a90a9408a09b065b21dcde213aee4ebbac0177d3e197e13fd40be5ecf9a
Status: Image is up to date for ghcr.io/obmondo/kubeaid-core:v0.15.1
(16:43) INFO : Spinning up KubeAid Core container
(14:43) INFO : Fetching latest stable K8s version URL=https://dl.k8s.io/release/stable.txt
(14:43) INFO : Created temp dir path=/tmp/kubeaid-core
(14:43) INFO : Determining git auth method
(14:43) INFO : Using username and password
(14:43) INFO : Determining git auth method
(14:43) INFO : Using username and password
(14:43) INFO : Creating the K3D management cluster cluster-name=kubeaid-bootstrapper
INFO[0000] Using config file outputs/k3d.config.yaml (k3d.io/v1alpha5#simple)
INFO[0000] Prep: Network
INFO[0000] Re-using existing network 'k3d-kubeaid-bootstrapper' (f36389731e619f25d0914d7f088719a751802ef312a62bd9eb6c763304ca7ed9)
INFO[0000] Created image volume k3d-kubeaid-bootstrapper-images
INFO[0000] Creating node 'kubeaid-bootstrapper'
INFO[0000] Successfully created registry 'kubeaid-bootstrapper'
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-kubeaid-bootstrapper-tools'
INFO[0001] Creating node 'k3d-kubeaid-bootstrapper-server-0'
INFO[0001] Creating node 'k3d-kubeaid-bootstrapper-agent-0'
INFO[0001] Creating LoadBalancer 'k3d-kubeaid-bootstrapper-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 192.168.80.1 address
INFO[0001] Starting cluster 'kubeaid-bootstrapper'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-kubeaid-bootstrapper-server-0'
INFO[0009] Starting agents...
INFO[0009] Starting node 'k3d-kubeaid-bootstrapper-agent-0'
INFO[0013] Starting helpers...
INFO[0013] Starting node 'kubeaid-bootstrapper'
INFO[0013] Starting node 'k3d-kubeaid-bootstrapper-serverlb'
INFO[0020] Injecting records for hostAliases (incl. host.k3d.internal) and for 5 network members into CoreDNS configmap...
INFO[0023] Cluster 'kubeaid-bootstrapper' created successfully!
INFO[0023] You can now use it like this:
kubectl cluster-info
(14:44) INFO : Executing command command=cp outputs/kubeconfigs/clusters/management/host.yaml outputs/kubeconfigs/clusters/management/container.yaml && KUBECONFIG=outputs/kubeconfigs/clusters/management/container.yaml kubectl config set-cluster k3d-kubeaid-bootstrapper --server=https://k3d-kubeaid-bootstrapper-server-0:6443
Cluster "k3d-kubeaid-bootstrapper" set.
(14:44) INFO : Executing command command=
master_nodes=$(kubectl get nodes -l node-role.kubernetes.io/control-plane=true -o name)
for node in $master_nodes; do
kubectl label $node node-role.kubernetes.io/control-plane-
kubectl label $node node-role.kubernetes.io/control-plane=""
done
node/k3d-kubeaid-bootstrapper-server-0 unlabeled
node/k3d-kubeaid-bootstrapper-server-0 labeled
(14:44) INFO : Cloning repo repo=https://codeberg.org/0xf1e/kubeaid-on-debian-config dir=/tmp/kubeaid-core/kubeaid-config
(14:44) INFO : Setting up cluster.... cluster-type=management
(14:44) INFO : Cloning repo repo=https://github.com/Obmondo/KubeAid dir=/tmp/kubeaid-core/KubeAid
(14:44) INFO : Hard resetting repo to tag tag=18.0.0
(14:44) INFO : Installing Helm chart.... release-name=sealed-secrets
(14:44) INFO : Setting up KubeAid config repo
(14:44) INFO : Detected default branch name branch=main
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/kubeaid-on-debian-vars.jsonnet
(14:44) INFO : Running KubePrometheus build script...
(14:44) INFO : Executing command command=/tmp/kubeaid-core/KubeAid/build/kube-prometheus/build.sh /tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian
INFO: compiling jsonnet files into '/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/kube-prometheus' from sources at /tmp/kubeaid-core/KubeAid/build/kube-prometheus/libraries/v0.15.0/vendor
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/kubeaid-bootstrap-script.general.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/templates/argocd.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/values-argocd.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/Chart.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/templates/root.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/templates/cert-manager.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/values-cert-manager.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/templates/sealed-secrets.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/values-sealed-secrets.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/templates/secrets.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/kubeone/kubeone-cluster.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/templates/cilium.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/values-cilium.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/templates/localpv-provisioner.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/values-localpv-provisioner.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/templates/kube-prometheus.yaml
(14:44) INFO : Created file in KubeAid config fork path=/tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/sealed-secrets/argocd/kubeaid-config.yaml
(14:44) INFO : Determined git status git-status=M k8s/kubeaid-on-debian/sealed-secrets/argocd/kubeaid-config.yaml
(14:44) INFO : Added, committed and pushed changes commit-hash=6994f06a403b2ae7d2143d0f80e2791c84afceaf
(14:44) INFO : Detected default branch name branch=main
(14:44) INFO : Installing and setting up ArgoCD
(14:44) INFO : Executing command command=
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/refs/heads/master/manifests/crds/appproject-crd.yaml
kubectl label crd appprojects.argoproj.io app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd appprojects.argoproj.io meta.helm.sh/release-name=argocd --overwrite
kubectl annotate crd appprojects.argoproj.io meta.helm.sh/release-namespace=argocd --overwrite
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io labeled
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io annotated
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io annotated
(14:44) INFO : Installing Helm chart.... release-name=argocd
(14:45) INFO : Creating ArgoCD client
(14:45) INFO : Executing command command=kubectl apply -f /tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/sealed-secrets/argocd/kubeaid-config.yaml
sealedsecret.bitnami.com/kubeaid-config created
(14:45) INFO : Executing command command=kubectl apply -f /tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/argocd-apps/templates/root.yaml
application.argoproj.io/root created
(14:45) INFO : Created root ArgoCD app
(14:45) INFO : Syncing ArgoCD application app-name=root
(14:45) INFO : Sleeping for 10 seconds, waiting for the child ArgoCD Apps to be created app-name=root
(14:45) INFO : Syncing ArgoCD application app-name=cert-manager
(14:45) INFO : Waiting for ArgoCD App to be synced app-name=cert-manager
(14:45) INFO : Syncing ArgoCD application app-name=secrets
(14:45) INFO : Waiting for ArgoCD App to be synced app-name=secrets
Finished setting up management cluster.
To access the ArgoCD admin dashboard :
(1) In your host machine's terminal, navigate to the directory from where you executed the
script (you'll notice the outputs/ directory there). Do :
export KUBECONFIG=outputs/kubeconfigs/clusters/management/host.yaml
(2) Retrieve the ArgoCD admin password :
echo "ArgoCD admin password : "
kubectl get secret argocd-initial-admin-secret --namespace argocd \
-o jsonpath="{.data.password}" | base64 -d
(3) Port forward ArgoCD server :
kubectl port-forward svc/argocd-server --namespace argocd 8080:443
(4) Visit https://localhost:8080 in a browser and login to ArgoCD as admin.
(14:46) INFO : Provisioning main cluster using Kubermatic KubeOne
INFO[14:46:02 UTC] Determine hostname...
INFO[14:46:04 UTC] Determine operating system...
INFO[14:46:04 UTC] Running host probes...
The following actions will be taken:
Run with --verbose flag for more information.
+ initialize control plane node "1-controlplane" (46.62.151.197) using 1.31.13
+ join static worker node "1-worker" (37.27.191.224)
+ apply embedded addons
INFO[14:46:05 UTC] Determine hostname...
INFO[14:46:05 UTC] Determine operating system...
INFO[14:46:05 UTC] Running host probes...
INFO[14:46:06 UTC] Installing prerequisites...
INFO[14:46:06 UTC] Creating environment file... node=46.62.151.197 os=debian
INFO[14:46:06 UTC] Configuring proxy... node=46.62.151.197 os=debian
INFO[14:46:06 UTC] Installing kubeadm... node=46.62.151.197 os=debian
INFO[14:47:01 UTC] Creating environment file... node=37.27.191.224 os=debian
INFO[14:47:01 UTC] Configuring proxy... node=37.27.191.224 os=debian
INFO[14:47:01 UTC] Installing kubeadm... node=37.27.191.224 os=debian
INFO[14:47:49 UTC] Generating kubeadm config file...
INFO[14:47:49 UTC] Determining Kubernetes pause image...
INFO[14:47:51 UTC] Uploading config files... node=46.62.151.197
INFO[14:47:54 UTC] Uploading config files... node=37.27.191.224
INFO[14:47:55 UTC] Running kubeadm preflight checks...
INFO[14:47:55 UTC] preflight... node=46.62.151.197
INFO[14:48:11 UTC] Pre-pull images node=46.62.151.197
INFO[14:48:12 UTC] Configuring certs and etcd on control plane node...
INFO[14:48:12 UTC] Ensuring Certificates... node=46.62.151.197
INFO[14:48:15 UTC] Downloading PKI...
INFO[14:48:15 UTC] Creating local backup... node=46.62.151.197
INFO[14:48:15 UTC] Uploading PKI...
INFO[14:48:15 UTC] Configuring certs and etcd on consecutive control plane node...
INFO[14:48:15 UTC] Initializing Kubernetes on leader...
INFO[14:48:15 UTC] Running kubeadm... node=46.62.151.197
INFO[14:48:26 UTC] Building Kubernetes clientset...
INFO[14:48:26 UTC] Waiting 20s for CSRs to approve... node=46.62.151.197
INFO[14:48:46 UTC] Approve pending CSR "csr-2fsq9" for username "system:node:1-controlplane" node=46.62.151.197
INFO[14:48:46 UTC] Approve pending CSR "csr-qfnmz" for username "system:node:1-controlplane" node=46.62.151.197
INFO[14:48:46 UTC] Check if cluster needs any repairs...
INFO[14:48:47 UTC] Joining controlplane node...
INFO[14:48:47 UTC] Restarting unhealthy API servers if needed...
INFO[14:48:47 UTC] Determining Kubernetes pause image...
INFO[14:48:48 UTC] Patching static pods...
INFO[14:48:49 UTC] Downloading kubeconfig...
INFO[14:48:49 UTC] Removing /etc/kubernetes/super-admin.conf...
INFO[14:48:49 UTC] Downloading PKI...
INFO[14:48:50 UTC] Creating local backup... node=46.62.151.197
INFO[14:48:50 UTC] Activating additional features...
INFO[14:48:50 UTC] Patching CoreDNS...
INFO[14:48:50 UTC] Skipping creating credentials secret because cloud provider is none.
INFO[14:48:50 UTC] Labeling nodes...
INFO[14:48:50 UTC] Annotating nodes...
INFO[14:48:50 UTC] Cleanup stale objects...
INFO[14:48:50 UTC] CSI driver for "none" not yet supported, skipping
INFO[14:48:51 UTC] CSI driver for "none" not yet supported, skipping
INFO[14:48:51 UTC] Applying addon coredns-pdb...
INFO[14:48:53 UTC] CSI driver for "none" not yet supported, skipping
INFO[14:48:53 UTC] Applying addon metrics-server...
INFO[14:48:55 UTC] CSI driver for "none" not yet supported, skipping
INFO[14:48:56 UTC] Applying addon nodelocaldns...
INFO[14:48:58 UTC] CSI driver for "none" not yet supported, skipping
INFO[14:48:58 UTC] Applying user provided addons...
INFO[14:48:59 UTC] Deploying helm chart cilium as release cilium
INFO[14:49:21 UTC] Joining worker node node=37.27.191.224
INFO[14:49:23 UTC] Waiting 20s for CSRs to approve... node=37.27.191.224
INFO[14:49:43 UTC] Approve pending CSR "csr-v7bjn" for username "system:node:1-worker" node=37.27.191.224
INFO[14:49:43 UTC] Labeling nodes...
INFO[14:49:44 UTC] Annotating nodes...
INFO[14:49:44 UTC] Fixing permissions of the kubernetes system files...
(14:49) INFO : Main cluster has been provisioned successfully 🎉🎉 ! kubeconfig=outputs/kubeconfigs/clusters/main.yaml
(14:49) INFO : Waiting for the provisioned cluster's Kubernetes API server to be reachable and atleast 1 worker node to be initialized....
(14:49) INFO : Setting up cluster.... cluster-type=main
(14:49) ERROR : Failed listing refs for 'origin' remote error=authentication required: {"auth_status":"auth_error","body":"Invalid username or token. Password authentication is not supported for Git operations."}
repo=https://github.com/Obmondo/KubeAid dir=/tmp/kubeaid-core/KubeAid
I.e. the cluster bootstrap works fine, and it'll gladly interact with my repo up to that point, but once the ArgoCD apps are supposed to be installed, it gives me an authentication error.
Here are some experiments that might help triangulate the issue:
- As mentioned, running the exact same configuration, but with GitHub as the backend for the kubeaid-config repo, works fine.
- I can see the kubeaid-cli script successfully make commits on my Codeberg repo during setup.
- Rerunning the script after kubeadm setup has succeeded results in the same error.
- If I delete `/tmp/kubeaid-core` before re-running the script, kubeaid-cli re-runs part of the kubeone install before quitting with the same 'authentication required' error above.
- If I delete the `/tmp/kubeaid-core` folder, then run the script again, and then delete the `/tmp/kubeaid-core` folder again during the kubeaid install process, the script (unsurprisingly, I guess) complains about missing files at the end of the kubeaid install step:

  ```
  INFO[15:03:03 UTC] Fixing permissions of the kubernetes system files...
  (15:03) ERROR : Failed deleting main cluster's PKI infrastructure backup error=remove /tmp/kubeaid-core/kubeaid-config/k8s/kubeaid-on-debian/kubeone/kubeaid-on-debian.tar.gz: no such file or directory
  ```
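For context, the log above shows kubeaid-core picking "Using username and password" as its git auth method, while Codeberg's error says password authentication is not supported for Git operations, i.e. an access token has to be sent in the password field of the HTTPS URL. A minimal sketch for verifying the credentials against the remote outside the bootstrap script (the `build_auth_url` helper and the `MY_TOKEN` credential are hypothetical placeholders, not part of kubeaid-cli):

```shell
#!/bin/sh
# Hypothetical helper: embed user/token credentials in an HTTPS clone URL,
# the shape that HTTPS "username and password" git auth implies.
build_auth_url() {
  user="$1"; token="$2"; repo="$3"
  # Strip any https:// prefix, then rebuild the URL with credentials.
  echo "https://${user}:${token}@${repo#https://}"
}

# Placeholder credentials -- substitute a real Codeberg access token.
url=$(build_auth_url 0xf1e MY_TOKEN codeberg.org/0xf1e/kubeaid-on-debian-config)
echo "$url"

# To test the credentials directly (network call, so commented out here):
#   git ls-remote "$url" HEAD
```

If `git ls-remote` succeeds against the Codeberg URL but the bootstrap still fails at the same step, that would suggest the credentials are being applied to the wrong remote — note the error above is raised while listing refs for https://github.com/Obmondo/KubeAid, not the Codeberg repo.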