Description:
We are running a 3-pod etcd cluster without persistent storage, relying on emptyDir. However, after the recent change introduced in commit 1aff4e2, the etcd upgrade is failing.
Steps to Reproduce:
Deploy a 3-pod etcd cluster with persistent storage disabled (emptyDir used instead).
Attempt to perform a rolling upgrade.
Observe that the upgrade fails.
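For reference, here is a minimal reproduction sketch using the bitnami/etcd Helm chart. The release name is arbitrary and the chart parameters are assumptions based on that chart's documented values, so adjust them to your environment:

```bash
# Deploy a 3-pod etcd cluster backed by emptyDir (persistence disabled).
helm install my-etcd oci://registry-1.docker.io/bitnamicharts/etcd \
  --set replicaCount=3 \
  --set persistence.enabled=false

# Trigger a rolling upgrade, for example by changing the image tag
# (<newer-etcd-tag> is a placeholder; any change that restarts the pods will do).
helm upgrade my-etcd oci://registry-1.docker.io/bitnamicharts/etcd \
  --set replicaCount=3 \
  --set persistence.enabled=false \
  --set image.tag=<newer-etcd-tag>
```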
Root Cause:
The issue lies in the is_new_etcd_cluster function. To determine if the cluster is new or existing, this function executes:
is_new_etcd_cluster() {
    local -a extra_flags
    read -r -a extra_flags <<<"$(etcdctl_auth_flags)"
    is_boolean_yes "$ETCD_ON_K8S" && extra_flags+=("--endpoints=$(etcdctl_get_endpoints)")
    # The cluster is treated as "new" when the status check against the
    # configured endpoints fails.
    ! debug_execute etcdctl endpoint status --cluster "${extra_flags[@]}"
}
During a rolling upgrade, not all endpoints are responsive. When the list returned by etcdctl_get_endpoints includes the pod's own endpoint, which is not serving yet at that point, the command
! debug_execute etcdctl endpoint status --cluster "${extra_flags[@]}"
fails. Since is_new_etcd_cluster negates this check, an existing cluster is wrongly reported as new, which breaks the upgrade.
Proposed Fix:
Modify etcdctl_get_endpoints to exclude the pod's own endpoint before executing etcdctl endpoint status --cluster. This will prevent failures when checking the cluster status during a rolling upgrade.
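One possible shape for the fix is sketched below. This is only an illustration under assumptions: the helper name, the use of the pod hostname to identify the local endpoint, and the comma-separated format returned by etcdctl_get_endpoints are all hypothetical, not the actual Bitnami implementation.

```bash
# Hypothetical helper: return the endpoint list without the current pod's own
# endpoint, so the cluster status check only targets peers that may be up.
etcdctl_get_endpoints_excluding_self() {
    local -a endpoints filtered=()
    local ep
    # Split the comma-separated endpoint list into an array.
    IFS=',' read -r -a endpoints <<<"$(etcdctl_get_endpoints)"
    for ep in "${endpoints[@]}"; do
        # Keep only endpoints that do not reference this pod's hostname.
        [[ "$ep" != *"$(hostname -s)"* ]] && filtered+=("$ep")
    done
    # Join the remaining endpoints back into a comma-separated string.
    (IFS=','; printf '%s\n' "${filtered[*]}")
}
```

is_new_etcd_cluster could then build its --endpoints flag from this filtered list, so a restarting pod no longer fails the check merely because its own endpoint is unreachable.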
Expected Behavior:
The etcd cluster should successfully upgrade even when persistent storage is disabled, allowing rolling upgrades to complete without failure.
Thank you for bringing this issue to our attention. We appreciate your involvement! If you're interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.
Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.
Name and Version
bitnami/etcd:3.5.18
What architecture are you using?
amd64
What steps will reproduce the bug?
Deploy a 3-pod etcd cluster with persistent storage disabled (emptyDir used instead), then attempt a rolling upgrade after the change introduced in commit 1aff4e2. The upgrade fails; see the description above for the root-cause analysis in is_new_etcd_cluster and the proposed fix to etcdctl_get_endpoints.
What is the expected behavior?
The etcd cluster should successfully upgrade even when persistent storage is disabled, allowing rolling upgrades to complete without failure.
What do you see instead?
The rolling upgrade fails when the cluster uses emptyDir for ephemeral etcd storage.