Description
Sealos Version
5.1.1
How to reproduce the bug?
- Install Ubuntu 24.04 under WSL on Windows.
- Run "sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.30.14 registry.cn-shanghai.aliyuncs.com/labring/helm:v4.0.1 registry.cn-shanghai.aliyuncs.com/labring/calico:v3.27.4 --single"
- The install fails with: Error: failed to init masters: init master0 failed, error: signal: killed. Please clean and reinstall (the cleanup/re-check sequence I use between attempts is sketched right after this list).
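For reference, this is the re-check and cleanup sequence I run between attempts; it is only a sketch. The systemd check reflects my assumption that the sealos-managed units (registry, containerd, image-cri-shim, kubelet) need systemd enabled inside WSL, and sealos reset is what I use to clean the failed cluster before retrying.
$ cat /etc/wsl.conf               # expecting "[boot]" with "systemd=true" on WSL Ubuntu 24.04
$ systemctl is-system-running     # should not report "offline"
$ sealos reset                    # clean up the failed cluster before the next attempt
$ sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.30.14 registry.cn-shanghai.aliyuncs.com/labring/helm:v4.0.1 registry.cn-shanghai.aliyuncs.com/labring/calico:v3.27.4 --single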
What is the expected behavior?
Kubernetes installs successfully.
What do you see instead?
- The kube-apiserver times out connecting to etcd.
Operating environment
- Sealos version:
buildDate: "2025-11-17T04:16:18Z"
compiler: gc
gitCommit: 1e312ad2c
gitVersion: 5.1.1
goVersion: go1.23.12
platform: linux/amd64
- Operating system:
WSL Ubuntu 24.04
Additional information
install logs
$ sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.30.14 registry.cn-shanghai.aliyuncs.com/labring/helm:v4.0.1 registry.cn-shanghai.aliyuncs.com/labring/calico:v3.27.4 --single
Flag --single has been deprecated, it defaults to running cluster in single mode when there are no master and node
2026-01-24T16:11:22 info Start to create a new cluster: master [10.255.255.254], worker [], registry 10.255.255.254
2026-01-24T16:11:22 info Executing pipeline Check in CreateProcessor.
2026-01-24T16:11:22 info checker:hostname [10.255.255.254:22]
2026-01-24T16:11:22 info checker:timeSync [10.255.255.254:22]
2026-01-24T16:11:22 info checker:containerd [10.255.255.254:22]
2026-01-24T16:11:22 info Executing pipeline PreProcess in CreateProcessor.
2026-01-24T16:11:22 info Executing pipeline RunConfig in CreateProcessor.
2026-01-24T16:11:22 info Executing pipeline MountRootfs in CreateProcessor.
2026-01-24T16:11:23 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
2026-01-24T16:11:23 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
2026-01-24T16:11:23 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
2026-01-24T16:11:23 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
2026-01-24T16:11:23 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
2026-01-24T16:11:23 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
2026-01-24T16:11:23 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
2026-01-24T16:11:23 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
2026-01-24T16:11:23 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
2026-01-24T16:11:23 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
2026-01-24T16:11:23 info Executing pipeline MirrorRegistry in CreateProcessor.
2026-01-24T16:11:23 info trying default http mode to sync images to hosts [10.255.255.254:22]
2026-01-24T16:11:12 info Executing pipeline Bootstrap in CreateProcessor
INFO [2026-01-24 16:11:12] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
INFO [2026-01-24 16:11:12] >> check root,port,cri success
2026-01-24T16:11:12 info domain sealos.hub:10.255.255.254 append success
Created symlink /etc/systemd/system/multi-user.target.wants/registry.service → /etc/systemd/system/registry.service.
INFO [2026-01-24 16:11:13] >> Health check registry!
INFO [2026-01-24 16:11:13] >> registry is running
INFO [2026-01-24 16:11:13] >> init registry success
2026-01-24T16:11:13 info domain apiserver.cluster.local:10.255.255.254 append success
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
INFO [2026-01-24 16:11:14] >> Health check containerd!
INFO [2026-01-24 16:11:14] >> containerd is running
INFO [2026-01-24 16:11:14] >> init containerd success
Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
INFO [2026-01-24 16:11:36] >> Health check image-cri-shim!
INFO [2026-01-24 16:11:36] >> image-cri-shim is running
INFO [2026-01-24 16:11:36] >> init shim success
127.0.0.1 localhost
::1 ip6-localhost ip6-loopback
- Applying /usr/lib/sysctl.d/10-apparmor.conf ...
- Applying /etc/sysctl.d/10-bufferbloat.conf ...
- Applying /etc/sysctl.d/10-console-messages.conf ...
- Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
- Applying /etc/sysctl.d/10-kernel-hardening.conf ...
- Applying /etc/sysctl.d/10-magic-sysrq.conf ...
- Applying /etc/sysctl.d/10-map-count.conf ...
- Applying /etc/sysctl.d/10-network-security.conf ...
- Applying /etc/sysctl.d/10-ptrace.conf ...
- Applying /etc/sysctl.d/10-zeropage.conf ...
- Applying /usr/lib/sysctl.d/50-pid-max.conf ...
- Applying /usr/lib/sysctl.d/99-protect-links.conf ...
- Applying /etc/sysctl.d/99-sysctl.conf ...
- Applying /run/sysctl.d/wsl-networking.conf ...
- Applying /etc/sysctl.conf ...
net.core.default_qdisc = fq_codel
kernel.printk = 4 4 1 7
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
kernel.kptr_restrict = 1
kernel.sysrq = 176
vm.max_map_count = 1048576
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
kernel.yama.ptrace_scope = 1
vm.mmap_min_addr = 65536
kernel.pid_max = 4194304
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
fs.file-max = 1048576 # sealos
net.bridge.bridge-nf-call-ip6tables = 1 # sealos
net.bridge.bridge-nf-call-iptables = 1 # sealos
net.core.somaxconn = 65535 # sealos
net.ipv4.conf.all.rp_filter = 0 # sealos
net.ipv4.ip_forward = 1 # sealos
net.ipv4.ip_local_port_range = 1024 65535 # sealos
net.ipv4.tcp_keepalive_intvl = 30 # sealos
net.ipv4.tcp_keepalive_time = 600 # sealos
net.ipv4.vs.conn_reuse_mode = 0 # sealos
net.ipv4.vs.conntrack = 1 # sealos
net.ipv6.conf.all.forwarding = 1 # sealos
vm.max_map_count = 2147483642 # sealos
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.loopback0.rp_filter = 0
fs.file-max = 1048576 # sealos
net.bridge.bridge-nf-call-ip6tables = 1 # sealos
net.bridge.bridge-nf-call-iptables = 1 # sealos
net.core.somaxconn = 65535 # sealos
net.ipv4.conf.all.rp_filter = 0 # sealos
net.ipv4.ip_forward = 1 # sealos
net.ipv4.ip_local_port_range = 1024 65535 # sealos
net.ipv4.tcp_keepalive_intvl = 30 # sealos
net.ipv4.tcp_keepalive_time = 600 # sealos
net.ipv4.vs.conn_reuse_mode = 0 # sealos
net.ipv4.vs.conntrack = 1 # sealos
net.ipv6.conf.all.forwarding = 1 # sealos
vm.max_map_count = 2147483642 # sealos
INFO [2026-01-24 16:11:15] >> pull pause image sealos.hub:5000/pause:3.9
Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
INFO [2026-01-24 16:11:15] >> init kubelet success
INFO [2026-01-24 16:11:15] >> init rootfs success
2026-01-24T16:11:15 info Executing pipeline Init in CreateProcessor.
2026-01-24T16:11:15 info using v1beta3 kubeadm config
2026-01-24T16:11:15 info Copying kubeadm config to master0
2026-01-24T16:11:15 info start to generate cert and kubeConfig...
2026-01-24T16:11:15 info start to generate and copy certs to masters...
2026-01-24T16:11:15 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost pc-202505171610:pc-202505171610] map[10.103.97.2:10.103.97.2 10.255.255.254:10.255.255.254 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1]}
2026-01-24T16:11:15 info Etcd altnames : {map[localhost:localhost pc-202505171610:pc-202505171610] map[10.255.255.254:10.255.255.254 127.0.0.1:127.0.0.1 ::1:::1]}, commonName : pc-202505171610
2026-01-24T16:11:16 info start to copy etc pki files to masters
2026-01-24T16:11:16 info start to create kubeconfig...
2026-01-24T16:11:16 info start to copy kubeconfig files to masters
2026-01-24T16:11:16 info start to copy static files to masters
2026-01-24T16:11:16 info start to init master0...
2026-01-24T16:11:17 info start to pull images: sealos.hub:5000/kube-apiserver:v1.30.14, sealos.hub:5000/kube-controller-manager:v1.30.14, sealos.hub:5000/kube-scheduler:v1.30.14, sealos.hub:5000/kube-proxy:v1.30.14, sealos.hub:5000/coredns/coredns:v1.11.3, sealos.hub:5000/pause:3.9, sealos.hub:5000/etcd:3.5.15-0
Image is up to date for sha256:07d562355fedaef7fbd58b2e1d0cf7dc430b5c8e0e6acbc091c06498351ad3ca
2026-01-24T16:11:18 info succeeded in pulling image sealos.hub:5000/kube-apiserver:v1.30.14
Image is up to date for sha256:097a9f9514c71f7044f80ff67aae20eb04a0a056cfbf6e350741a8d27a840566
2026-01-24T16:11:19 info succeeded in pulling image sealos.hub:5000/kube-controller-manager:v1.30.14
Image is up to date for sha256:c1f0d1cc8af40c1938bf6c289a84c0ffd0adbd57bfcc582113956d3d8550fa71
2026-01-24T16:11:41 info succeeded in pulling image sealos.hub:5000/kube-scheduler:v1.30.14
Image is up to date for sha256:709bcab73020c733016d8633356c3d7a38db0c83d66a7199709edb94c6469d60
2026-01-24T16:11:42 info succeeded in pulling image sealos.hub:5000/kube-proxy:v1.30.14
Image is up to date for sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
2026-01-24T16:11:43 info succeeded in pulling image sealos.hub:5000/coredns/coredns:v1.11.3
Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
2026-01-24T16:11:43 info succeeded in pulling image sealos.hub:5000/pause:3.9
Image is up to date for sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
2026-01-24T16:11:45 info succeeded in pulling image sealos.hub:5000/etcd:3.5.15-0
W0124 16:11:45.753902 1470 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
[init] Using Kubernetes version: v1.30.14
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0124 16:11:46.217558 1470 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://10.255.255.254:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0124 16:11:46.246352 1470 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://10.255.255.254:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://0.0.0.0:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 2m30.009294713s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
2026-01-24T16:16:45 error Applied to cluster error: failed to init masters: init master0 failed, error: signal: killed. Please clean and reinstall
Error: failed to init masters: init master0 failed, error: signal: killed. Please clean and reinstall
crictl
$ crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
21392b2f6e586 2e96e5913fc06 3 minutes ago Running etcd 32 6292e389937fc etcd-pc-202505171610
850bf0579c6f6 07d562355feda 4 minutes ago Exited kube-apiserver 49 5db9bb6fdd4bf kube-apiserver-pc-202505171610
31f2b2773596a 2e96e5913fc06 9 minutes ago Exited etcd 31 6292e389937fc etcd-pc-202505171610
482cd90cdd9b8 c1f0d1cc8af40 12 minutes ago Running kube-scheduler 4 f065484a76a7e kube-scheduler-pc-202505171610
f48bb24876558 097a9f9514c71 12 minutes ago Running kube-controller-manager 4 648b9befe47c7 kube-controller-manager-pc-202505171610
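The restart counts above (etcd on attempt 32, kube-apiserver on attempt 49) are what led me to pull the logs below. If more context is needed, these are the follow-up commands I would run (a sketch; container IDs come from the table above, and the kubelet unit name matches the kubelet.service rendered by sealos earlier):
$ crictl logs <container-id>                       # container IDs from the crictl ps -a output above
$ journalctl -u kubelet --no-pager | tail -n 100   # kubelet messages around the restarts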
etcd logs
$ crictl logs 31f2b2773596a
{"level":"warn","ts":"2026-01-24T08:14:51.387856Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2026-01-24T08:14:51.387917Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://10.255.255.254:2379","--cert-file=/etc/kubernetes/pki/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://10.255.255.254:2380","--initial-cluster=pc-202505171610=https://10.255.255.254:2380","--key-file=/etc/kubernetes/pki/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://10.255.255.254:2379","--listen-metrics-urls=http://0.0.0.0:2381","--listen-peer-urls=https://10.255.255.254:2380","--name=pc-202505171610","--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/etc/kubernetes/pki/etcd/peer.key","--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"]}
{"level":"info","ts":"2026-01-24T08:14:51.387971Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"}
{"level":"warn","ts":"2026-01-24T08:14:51.387987Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2026-01-24T08:14:51.387996Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://10.255.255.254:2380"]}
{"level":"info","ts":"2026-01-24T08:14:51.388178Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2026-01-24T08:14:51.398735Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://10.255.255.254:2379","https://127.0.0.1:2379"]}
{"level":"info","ts":"2026-01-24T08:14:51.420359Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":20,"max-cpu-available":20,"member-initialized":true,"name":"pc-202505171610","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://10.255.255.254:2380"],"listen-peer-urls":["https://10.255.255.254:2380"],"advertise-client-urls":["https://10.255.255.254:2379"],"listen-client-urls":["https://10.255.255.254:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://0.0.0.0:2381"],"cors":[""],"host-whitelist":[""],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2026-01-24T08:14:51.425949Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"5.43191ms"}
{"level":"info","ts":"2026-01-24T08:14:51.426253Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
{"level":"info","ts":"2026-01-24T08:14:51.426689Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"abb28b7077aa2c62","local-member-id":"9d779b810e25f38e","commit-index":4}
{"level":"info","ts":"2026-01-24T08:14:51.426737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d779b810e25f38e switched to configuration voters=()"}
{"level":"info","ts":"2026-01-24T08:14:51.426764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d779b810e25f38e became follower at term 2"}
{"level":"info","ts":"2026-01-24T08:14:51.426780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9d779b810e25f38e [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]"}
{"level":"warn","ts":"2026-01-24T08:14:51.430168Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2026-01-24T08:14:51.433392Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2026-01-24T08:14:51.437138Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2026-01-24T08:14:51.441699Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"9d779b810e25f38e","timeout":"7s"}
{"level":"info","ts":"2026-01-24T08:14:51.441727Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"9d779b810e25f38e"}
{"level":"info","ts":"2026-01-24T08:14:51.441742Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"9d779b810e25f38e","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
{"level":"info","ts":"2026-01-24T08:14:51.441883Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2026-01-24T08:14:51.441926Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2026-01-24T08:14:51.441968Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2026-01-24T08:14:51.441983Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2026-01-24T08:14:51.441996Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2026-01-24T08:14:51.442020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d779b810e25f38e switched to configuration voters=(11346708764773708686)"}
{"level":"info","ts":"2026-01-24T08:14:51.442064Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"abb28b7077aa2c62","local-member-id":"9d779b810e25f38e","added-peer-id":"9d779b810e25f38e","added-peer-peer-urls":["https://10.255.255.254:2380"]}
{"level":"info","ts":"2026-01-24T08:14:51.442374Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"abb28b7077aa2c62","local-member-id":"9d779b810e25f38e","cluster-version":"3.5"}
{"level":"info","ts":"2026-01-24T08:14:51.442431Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2026-01-24T08:14:51.443347Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2026-01-24T08:14:51.443421Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.255.255.254:2380"}
{"level":"info","ts":"2026-01-24T08:14:51.443438Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.255.255.254:2380"}
{"level":"info","ts":"2026-01-24T08:14:51.464974Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9d779b810e25f38e","initial-advertise-peer-urls":["https://10.255.255.254:2380"],"listen-peer-urls":["https://10.255.255.254:2380"],"advertise-client-urls":["https://10.255.255.254:2379"],"listen-client-urls":["https://10.255.255.254:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://0.0.0.0:2381"]}
{"level":"info","ts":"2026-01-24T08:14:51.465007Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://0.0.0.0:2381"}
{"level":"info","ts":"2026-01-24T08:14:52.726882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d779b810e25f38e is starting a new election at term 2"}
{"level":"info","ts":"2026-01-24T08:14:52.726926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d779b810e25f38e became pre-candidate at term 2"}
{"level":"info","ts":"2026-01-24T08:14:52.726957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d779b810e25f38e received MsgPreVoteResp from 9d779b810e25f38e at term 2"}
{"level":"info","ts":"2026-01-24T08:14:52.726964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d779b810e25f38e became candidate at term 3"}
{"level":"info","ts":"2026-01-24T08:14:52.726967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d779b810e25f38e received MsgVoteResp from 9d779b810e25f38e at term 3"}
{"level":"info","ts":"2026-01-24T08:14:52.726971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d779b810e25f38e became leader at term 3"}
{"level":"info","ts":"2026-01-24T08:14:52.726974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9d779b810e25f38e elected leader 9d779b810e25f38e at term 3"}
{"level":"info","ts":"2026-01-24T08:14:52.734162Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9d779b810e25f38e","local-member-attributes":"{Name:pc-202505171610 ClientURLs:[https://10.255.255.254:2379]}","request-path":"/0/members/9d779b810e25f38e/attributes","cluster-id":"abb28b7077aa2c62","publish-timeout":"7s"}
{"level":"info","ts":"2026-01-24T08:14:52.734190Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2026-01-24T08:14:52.734252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2026-01-24T08:14:52.734417Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2026-01-24T08:14:52.734430Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2026-01-24T08:14:52.734627Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2026-01-24T08:14:52.734671Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2026-01-24T08:14:52.735047Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2026-01-24T08:14:52.735058Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.255.255.254:2379"}
{"level":"info","ts":"2026-01-24T08:21:01.175053Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2026-01-24T08:21:01.175117Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pc-202505171610","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://10.255.255.254:2380"],"advertise-client-urls":["https://10.255.255.254:2379"]}
{"level":"warn","ts":"2026-01-24T08:21:01.175232Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 10.255.255.254:2379: use of closed network connection"}
{"level":"warn","ts":"2026-01-24T08:21:01.175266Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 10.255.255.254:2379: use of closed network connection"}
{"level":"warn","ts":"2026-01-24T08:21:01.183263Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2026-01-24T08:21:01.183286Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"info","ts":"2026-01-24T08:21:01.183325Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9d779b810e25f38e","current-leader-member-id":"9d779b810e25f38e"}
{"level":"info","ts":"2026-01-24T08:21:01.190368Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"10.255.255.254:2380"}
{"level":"info","ts":"2026-01-24T08:21:01.190496Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"10.255.255.254:2380"}
{"level":"info","ts":"2026-01-24T08:21:01.190516Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pc-202505171610","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://10.255.255.254:2380"],"advertise-client-urls":["https://10.255.255.254:2379"]}
api-server logs
$ crictl logs fdf5e9058f07e
I0124 16:24:51.126233 1 options.go:221] external host was not specified, using 10.255.255.254
I0124 16:24:51.126678 1 server.go:148] Version: v1.30.14
I0124 16:24:51.126703 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0124 16:24:51.360965 1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
I0124 16:24:51.363593 1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0124 16:24:51.365174 1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0124 16:24:51.365192 1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
I0124 16:24:51.365267 1 instance.go:299] Using reconciler: lease
W0124 16:25:11.361835 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: i/o timeout"
W0124 16:25:11.362146 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
F0124 16:25:11.365637 1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
W0124 16:25:11.365710 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: i/o timeout"
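The kube-apiserver exits because every dial to 127.0.0.1:2379 times out. To narrow down whether anything is actually listening on the etcd client port at that moment (the crictl ps output above shows a newer etcd container Running while an older one has Exited), a quick check sketch:
$ ss -lntp | grep 2379        # is any process listening on the etcd client port?
$ crictl ps -a | grep etcd    # is the current etcd container Running or Exited?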