Operation cannot be fulfilled on nodes: the object has been modified; #4559

killua-eu opened this issue Jul 1, 2024 · 1 comment

Summary

During normal operation of a two-month-old, three-machine cluster installed on Ubuntu 24.04, Portainer suddenly went offline. On inspection, one of the nodes was not responding, possibly due to an unrelated (thermal) issue. After a reboot, the cluster was out of sync and all services were offline. To narrow it down, the servers were stopped and then started again one by one. Eventually, I got to:

microk8s kubectl get nodes
NAME                            STATUS   ROLES    AGE   VERSION
potato01.kai.senbonzakura.net   Ready    <none>   57d   v1.30.1
potato02.kai.senbonzakura.net   Ready    <none>   57d   v1.30.1
potato03.kai.senbonzakura.net   Ready    <none>   57d   v1.30.1

microk8s kubectl get services
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
fallback-service         ClusterIP   10.152.183.77    <none>        80/TCP     14d
keycloak                 ClusterIP   10.152.183.191   <none>        80/TCP     37d
keycloak-headless        ClusterIP   None             <none>        8080/TCP   37d
keycloak-postgresql      ClusterIP   10.152.183.25    <none>        5432/TCP   37d
keycloak-postgresql-hl   ClusterIP   None             <none>        5432/TCP   37d
kubernetes               ClusterIP   10.152.183.1     <none>        443/TCP    57d

microk8s kubectl get pods
NAME                                   READY   STATUS        RESTARTS         AGE
fallback-deployment-7c8f6894c4-pk44f   0/1     Terminating   66 (3m17s ago)   14d
keycloak-postgresql-0                  1/1     Terminating   1 (177m ago)     37d

But even so, Portainer wouldn't work.
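
Not something tried in this report, but as a hedged sketch: pods stuck in Terminating, like the two above, can sometimes be cleared by force-deleting them so their controllers can recreate them. The pod names below are taken from the output above, and the force delete skips graceful shutdown, so it assumes losing their in-flight state is acceptable.

# sketch only: force-remove the stuck Terminating pods (skips graceful shutdown)
microk8s kubectl delete pod fallback-deployment-7c8f6894c4-pk44f --grace-period=0 --force
microk8s kubectl delete pod keycloak-postgresql-0 --grace-period=0 --force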

root@potato02:/home/killua# microk8s inspect
Inspecting system
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-kubelite is running
  Service snap.microk8s.daemon-k8s-dqlite is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy openSSL information to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy asnycio usage and limits to the final report tarball
  Copy inotify max_user_instances and max_user_watches to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting dqlite
  Inspect dqlite
cp: cannot stat '/var/snap/microk8s/6876/var/kubernetes/backend/localnode.yaml': No such file or directory

Building the report tarball
  Report tarball is at /var/snap/microk8s/6876/inspection-report-20240701_211849.tar.gz

The localnode.yaml file isn't present on any of the nodes, so I'm not sure whether this is a false positive. #4361 pointed me to:

$ sudo journalctl -f -u snap.microk8s.daemon-kubelite | grep err
Journal file /var/log/journal/79a14e6e83674086ad2c264df4a022ad/user-1000@fc3dd5b7f65c4546a8cee4f8ec4b0683-0000000000000000-0000000000000000.journal corrupted, ignoring file.
Jul 01 21:56:31 potato02.kai.senbonzakura.net microk8s.daemon-kubelite[70661]: E0701 21:56:31.638145   70661 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jul 01 21:56:40 potato02.kai.senbonzakura.net microk8s.daemon-kubelite[70661]: E0701 21:56:40.600431   70661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-ingress-microk8s\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-ingress-microk8s pod=nginx-ingress-microk8s-controller-r4577_ingress(9f512f46-93cb-46c0-be51-b61f310a616a)\"" pod="ingress/nginx-ingress-microk8s-controller-r4577" podUID="9f512f46-93cb-46c0-be51-b61f310a616a"
Jul 01 21:56:48 potato02.kai.senbonzakura.net microk8s.daemon-kubelite[70661]: E0701 21:56:48.541676   70661 node_lifecycle_controller.go:973] "Error updating node" err="Operation cannot be fulfilled on nodes \"potato03.kai.senbonzakura.net\": the object has been modified; please apply your changes to the latest version and try again" node="potato03.kai.senbonzakura.net"
Jul 01 21:56:48 potato02.kai.senbonzakura.net microk8s.daemon-kubelite[70661]: E0701 21:56:48.541689   70661 node_lifecycle_controller.go:973] "Error updating node" err="Operation cannot be fulfilled on nodes \"potato01.kai.senbonzakura.net\": the object has been modified; please apply your changes to the latest version and try again" node="potato01.kai.senbonzakura.net"
Jul 01 21:56:55 potato02.kai.senbonzakura.net microk8s.daemon-kubelite[70661]: E0701 21:56:55.599907   70661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-ingress-microk8s\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-ingress-microk8s pod=nginx-ingress-microk8s-controller-r4577_ingress(9f512f46-93cb-46c0-be51-b61f310a616a)\"" pod="ingress/nginx-ingress-microk8s-controller-r4577" podUID="9f512f46-93cb-46c0-be51-b61f310a616a"
Jul 01 21:56:57 potato02.kai.senbonzakura.net microk8s.daemon-kubelite[70661]: I0701 21:56:57.720357   70661 garbagecollector.go:826] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
Jul 01 21:57:09 potato02.kai.senbonzakura.net microk8s.daemon-kubelite[70661]: E0701 21:57:09.600313   70661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx-ingress-microk8s\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=nginx-ingress-microk8s pod=nginx-ingress-microk8s-controller-r4577_ingress(9f512f46-93cb-46c0-be51-b61f310a616a)\"" pod="ingress/nginx-ingress-microk8s-controller-r4577" podUID="9f512f46-93cb-46c0-be51-b61f310a616a"
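
The repeated "Operation cannot be fulfilled on nodes ...: the object has been modified" lines are optimistic-concurrency (409 Conflict) errors: the node lifecycle controller tried to update a Node object using a stale resourceVersion because something else (typically the kubelet posting node status) had updated it in the meantime. As a hedged sketch for checking how quickly the Node objects are being rewritten (node name taken from the output above):

# sketch: if resourceVersion keeps jumping between runs, writers are racing on node updates
microk8s kubectl get node potato03.kai.senbonzakura.net -o jsonpath='{.metadata.resourceVersion}{"\n"}'
microk8s kubectl get nodes --watch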

kubectl get all -A
NAMESPACE      NAME                                          READY   STATUS             RESTARTS         AGE
cert-manager   pod/cert-manager-cainjector-dc95f9d66-dlkxz   0/1     Pending            0                139m
cert-manager   pod/cert-manager-d5fcf78bc-lc42z              1/1     Terminating        2 (3h2m ago)     14d
cert-manager   pod/cert-manager-webhook-74f996695-5fqqv      0/1     Terminating        40 (5m16s ago)   41d
default        pod/fallback-deployment-7c8f6894c4-pk44f      0/1     Terminating        70 (22s ago)     14d
default        pod/keycloak-postgresql-0                     1/1     Terminating        1 (3h5m ago)     37d
ingress        pod/nginx-ingress-microk8s-controller-r4577   0/1     Running            64 (12s ago)     57d
ingress        pod/nginx-ingress-microk8s-controller-vvgtc   0/1     CrashLoopBackOff   62 (2m ago)      57d
ingress        pod/nginx-ingress-microk8s-controller-wnnqv   0/1     CrashLoopBackOff   64 (22s ago)     57d
kube-system    pod/calico-kube-controllers-796fb75cc-rzfwk   0/1     Pending            0                139m
kube-system    pod/calico-node-67cw2                         1/1     Running            1 (3h5m ago)     32d
kube-system    pod/calico-node-s2ctj                         1/1     Running            2 (3h5m ago)     33d
kube-system    pod/calico-node-xrc2g                         1/1     Running            2 (138m ago)     33d
kube-system    pod/coredns-5986966c54-s874n                  0/1     Pending            0                139m
kube-system    pod/hostpath-provisioner-7c8bdf94b8-qgsmp     0/1     Pending            0                139m
kube-system    pod/metrics-server-7cff7889bd-pmds9           0/1     Terminating        45 (59s ago)     57d
portainer      pod/portainer-58667b7b8f-dppgm                0/1     Pending            0                139m

NAMESPACE                NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                     AGE
cert-manager             service/cert-manager             ClusterIP   10.152.183.125   <none>        9402/TCP                                                                                                    52d
cert-manager             service/cert-manager-webhook     ClusterIP   10.152.183.85    <none>        443/TCP                                                                                                     52d
default                  service/fallback-service         ClusterIP   10.152.183.77    <none>        80/TCP                                                                                                      14d
default                  service/keycloak                 ClusterIP   10.152.183.191   <none>        80/TCP                                                                                                      37d
default                  service/keycloak-headless        ClusterIP   None             <none>        8080/TCP                                                                                                    37d
default                  service/keycloak-postgresql      ClusterIP   10.152.183.25    <none>        5432/TCP                                                                                                    37d
default                  service/keycloak-postgresql-hl   ClusterIP   None             <none>        5432/TCP                                                                                                    37d
default                  service/kubernetes               ClusterIP   10.152.183.1     <none>        443/TCP                                                                                                     57d
kube-system              service/kube-dns                 ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP                                                                                      57d
kube-system              service/metrics-server           ClusterIP   10.152.183.209   <none>        443/TCP                                                                                                     57d
nfs-server-provisioner   service/nfs-server-provisioner   ClusterIP   10.152.183.208   <none>        2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP   57d
portainer                service/portainer                NodePort    10.152.183.201   <none>        9000:30777/TCP,9443:30779/TCP,30776:30776/TCP                                                               57d

NAMESPACE     NAME                                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
ingress       daemonset.apps/nginx-ingress-microk8s-controller   2         2         0       2            0           <none>                   57d
kube-system   daemonset.apps/calico-node                         3         3         2       3            2           kubernetes.io/os=linux   57d

NAMESPACE      NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
cert-manager   deployment.apps/cert-manager              1/1     1            1           52d
cert-manager   deployment.apps/cert-manager-cainjector   1/1     1            1           52d
cert-manager   deployment.apps/cert-manager-webhook      1/1     1            1           52d
default        deployment.apps/fallback-deployment       1/1     1            1           14d
kube-system    deployment.apps/calico-kube-controllers   1/1     1            1           57d
kube-system    deployment.apps/coredns                   1/1     1            1           57d
kube-system    deployment.apps/hostpath-provisioner      1/1     1            1           57d
kube-system    deployment.apps/metrics-server            1/1     1            1           57d
portainer      deployment.apps/portainer                 0/1     1            0           57d

NAMESPACE      NAME                                                DESIRED   CURRENT   READY   AGE
cert-manager   replicaset.apps/cert-manager-cainjector-dc95f9d66   1         0         0       52d
cert-manager   replicaset.apps/cert-manager-d5fcf78bc              1         1         1       52d
cert-manager   replicaset.apps/cert-manager-webhook-74f996695      1         1         0       52d
default        replicaset.apps/fallback-deployment-7c8f6894c4      1         1         0       14d
kube-system    replicaset.apps/calico-kube-controllers-796fb75cc   1         0         0       57d
kube-system    replicaset.apps/coredns-5986966c54                  1         0         0       57d
kube-system    replicaset.apps/hostpath-provisioner-7c8bdf94b8     1         0         0       57d
kube-system    replicaset.apps/metrics-server-7cff7889bd           1         1         0       57d
portainer      replicaset.apps/portainer-58667b7b8f                1         0         0       57d

NAMESPACE                NAME                                      READY   AGE
default                  statefulset.apps/keycloak                 0/1     37d
default                  statefulset.apps/keycloak-postgresql      1/1     37d
nfs-server-provisioner   statefulset.apps/nfs-server-provisioner   1/1     57d
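
One more observation from the output above: several replacement pods (coredns, calico-kube-controllers, hostpath-provisioner, portainer) are stuck Pending while the old cert-manager, metrics-server, and keycloak-postgresql pods are stuck Terminating. As a hedged sketch for seeing why a Pending pod isn't being scheduled (pod and node names taken from the output above):

# sketch: the Events section usually explains why a Pending pod is not scheduling
microk8s kubectl describe pod portainer-58667b7b8f-dppgm -n portainer
# and whether the node is actually schedulable
microk8s kubectl describe node potato03.kai.senbonzakura.net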

Any ideas on how to get the cluster back into a working state?

What Should Happen Instead?

Reproduction Steps

Unfortunately, I have no idea how to reproduce this.

Introspection Report

inspection-report-20240701_211849.tar.gz

Can you suggest a fix?

Really no idea.

Are you interested in contributing with a fix?

Yes.

berjanb commented Aug 27, 2024

@killua-eu did you fix this issue?
