Failure domain re-assignment on Cloudstack machine deploy failures #352

Open
vignesh-goutham opened this issue Mar 22, 2024 · 2 comments · May be fixed by #353
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. release:must-have

Comments

@vignesh-goutham
Contributor

/kind feature

Describe the solution you'd like
CAPC chooses a random failure domain to deploy worker machines. When the VM deploy fails, irrespective of the type of error, CAPC keeps re-attempting the deploy until CAPI replaces the owner machine.

The proposed feature is to inspect the error when a VM deploy fails and, if it is identified as a terminal error (one that is not transient and will not go away with a retry), choose another failure domain and deploy there instead. The new failure domain could be chosen at random, as it is today, or based on available resources such as free IPs.

This applies only to worker machines, since control-plane failure domain assignment is handled by KCP.
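A minimal Go sketch of the proposed flow, purely for illustration: the types, helper names, matched error strings, and the "prefer the domain with the most free IPs" policy are all assumptions made up for this example, not CAPC's actual API or error classification.

```go
// Hypothetical sketch only; not CAPC's actual types or error handling.
package main

import (
	"errors"
	"fmt"
	"strings"
)

// failureDomain is a simplified stand-in for a CAPC failure domain.
type failureDomain struct {
	name    string
	freeIPs int
}

// isTerminalDeployError guesses whether a CloudStack VM deploy error is
// unlikely to be fixed by retrying in the same failure domain. The matched
// substrings are example classifications, not an authoritative list.
func isTerminalDeployError(err error) bool {
	if err == nil {
		return false
	}
	msg := strings.ToLower(err.Error())
	for _, s := range []string{"insufficient capacity", "quota", "unable to create a deployment"} {
		if strings.Contains(msg, s) {
			return true
		}
	}
	return false
}

// pickNextFailureDomain drops the domain that just failed and returns another
// candidate, preferring the one with the most free IPs as an example of a
// resource-aware policy (a random pick, as CAPC does today, would also work).
func pickNextFailureDomain(candidates []failureDomain, failed string) (failureDomain, error) {
	var best *failureDomain
	for i := range candidates {
		fd := &candidates[i]
		if fd.name == failed {
			continue
		}
		if best == nil || fd.freeIPs > best.freeIPs {
			best = fd
		}
	}
	if best == nil {
		return failureDomain{}, errors.New("no alternative failure domain available")
	}
	return *best, nil
}

func main() {
	domains := []failureDomain{{"fd-1", 3}, {"fd-2", 40}, {"fd-3", 12}}
	deployErr := errors.New("CloudStack API error: insufficient capacity in zone")

	if isTerminalDeployError(deployErr) {
		next, err := pickNextFailureDomain(domains, "fd-1")
		if err != nil {
			fmt.Println("giving up, let CAPI replace the machine:", err)
			return
		}
		fmt.Printf("re-assigning worker machine to failure domain %q\n", next.name)
	} else {
		fmt.Println("transient error, retry in the same failure domain")
	}
}
```

The only design point the sketch tries to capture is the split between transient errors (retry in place, as today) and terminal errors (move to a different failure domain before CAPI has to replace the machine).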

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 29, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 28, 2024