
feat(cloud-provider): add single node known issue #667

Open
wants to merge 1 commit into main

Conversation

FrankYang0529 (Member)

Signed-off-by: PoAn Yang <poan.yang@suse.com>

Name | Link
🔨 Latest commit | 4b05aca
😎 Deploy Preview | https://6732ca6989a1ed25dbc9e09f--harvester-preview.netlify.app

@starbops (Member) left a comment:

LGTM, thanks.

@jillian-maroket (Contributor)

@asettle The new content is for Prime customers. How do you want to handle this?

@asettle (Contributor) commented Nov 12, 2024

> @asettle The new content is for Prime customers. How do you want to handle this?

This release note should only go into the Prime docs. As a community user, you shouldn't be worried about upgrading from community to Prime. However, Prime users will be, as they are already looking to purchase and migrate to Prime. This should be kept separate, and what we want to say is: "When upgrading your Rancher version from the community version to the SUSE-supported Prime version, the harvester-cloud-provider pod won't be ready in a single-node cluster."

@jillian-maroket (Contributor)

@FrankYang0529 Below is the edited version. Please check if the technical details are correct. I will add the content to the correct repo tomorrow.

You can upgrade Rancher from the community version to the Rancher Prime version in a single-node SUSE Virtualization cluster. However, the harvester-cloud-provider pod will not be ready after the upgrade is completed.

The default container registry used for images (system-default-registry) changes during the upgrade. All pods in the guest cluster receive the updated value for system-default-registry, but the harvester-cloud-provider pod (in single-node clusters) is unable to reach the ready state. This occurs because the rolling update strategy (.spec.strategy.rollingUpdate.maxUnavailable) is set to 25%, which Kubernetes rounds down to 0 when the deployment has only one replica. The old pod is therefore kept running until a replacement becomes ready, and in a single-node cluster the replacement never does, so the deployment update becomes stuck.
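For illustration, here is a minimal sketch of the Deployment fields involved (the replica count and the maxSurge value are assumptions for a single-node cluster, not copied from the actual chart):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harvester-cloud-provider
spec:
  replicas: 1                # assumption: one replica in a single-node cluster
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%    # rounds down to 0 when replicas: 1
      maxSurge: 25%          # assumption: the Kubernetes default, rounds up to 1
```

With maxUnavailable effectively 0, Kubernetes will not terminate the old pod before the replacement is ready, so the rollout cannot progress if the replacement never becomes ready on the lone node.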

The issue was fixed in Rancher 2.8.x (103.0.3+up0.2.6) and Rancher 2.9.x (104.0.2+up0.2.6). In earlier versions, the workaround is to manually remove the old harvester-cloud-provider pod.
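A sketch of the manual workaround for those earlier versions (the namespace and pod name below are placeholders; confirm them in your cluster before deleting anything):

```bash
# Locate the harvester-cloud-provider pods (the namespace varies by setup).
kubectl get pods -A | grep harvester-cloud-provider

# Delete the old pod so the replacement can start and become ready.
kubectl delete pod -n <namespace> <old-pod-name>
```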

cc: @bk201

@FrankYang0529 (Member, Author)

Hi @jillian-maroket, the content LGTM. Thanks for updating it 👍.
