
[backport 2.8] Add vGPU allocatable warning banner #11600

Merged
merged 1 commit into from
Oct 9, 2024

Conversation

torchiaf
Member

@torchiaf torchiaf commented Aug 7, 2024

This is a backport for #11017

Fixes #11009

@torchiaf torchiaf requested a review from a110605 August 7, 2024 16:32
@torchiaf torchiaf added this to the v2.8.next1 milestone Aug 7, 2024
@@ -1279,6 +1279,8 @@ cluster:
macAddress: Mac Address
macFormat: 'Invalid MAC address format.'
vGpus:
warnings:
minimumAllocatable: It's highly recommended to select a vGPU with a number of allocatable devices greater then the number of nodes (Machine Count) to avoid "un-schedulable" errors after cluster updates.
Member


Similar to feedback on the 2.10 & 2.9 related PRs:

  • then should be than
  • vGPU should be used with that capitalization for consistency

@@ -1279,6 +1279,8 @@ cluster:
macAddress: Mac Address
macFormat: 'Invalid MAC address format.'
vGpus:
warnings:
minimumAllocatable: It's highly recommended to select a vGPU with a number of allocatable devices greater then the number of nodes (Machine Count) to avoid "un-schedulable" errors after cluster updates.
errors:
notAllocatable: '"VGPUs" not allocatable. There are not enough [{vGpus}] devices to be allocated to each node in machine pool [{pool}]'
Member


vGPU not VGPU
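
Taken together, the two review comments suggest translation entries along these lines (a sketch of the feedback applied, not the exact merged diff; the final wording in en-us.yaml may differ):

```yaml
vGpus:
  warnings:
    # "then" corrected to "than", per the review comment
    minimumAllocatable: It's highly recommended to select a vGPU with a number of allocatable devices greater than the number of nodes (Machine Count) to avoid "un-schedulable" errors after cluster updates.
  errors:
    # "VGPUs" capitalized consistently as "vGPUs", per the review comment
    notAllocatable: '"vGPUs" not allocatable. There are not enough [{vGpus}] devices to be allocated to each node in machine pool [{pool}]'
```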

@gaktive
Member

gaktive commented Aug 14, 2024

@a110605 let us know if there's anything else for you to look at here since with Francesco out, we'll need someone else to merge. If you feel everything's OK based on your own concerns, then you can approve & merge.

@a110605
Contributor

a110605 commented Aug 15, 2024

@a110605 let us know if there's anything else for you to look at here since with Francesco out, we'll need someone else to merge. If you feel everything's OK based on your own concerns, then you can approve & merge.

Hi @gaktive, the vGPU allocatable number in the dropdown was introduced in PR #11399.

Without #11399, the banner in this PR is meaningless.

As I synced with Francesco last time, this PR should wait for #11399 to merge first. (Correct me if I'm wrong.)

@nwmac nwmac modified the milestones: v2.8.7, v2.8.next1, v2.8.next2 Aug 15, 2024
@nwmac
Copy link
Member

nwmac commented Aug 15, 2024

@gaktive I spoke with @a110605 and this is awaiting backend changes; moving to 2.8.next1

@nwmac nwmac modified the milestones: v2.8.next2, v2.8.next1 Aug 15, 2024
@gaktive gaktive modified the milestones: v2.8.8, v2.8.9 Sep 23, 2024
@torchiaf torchiaf changed the title [backport v2.8.next1] Add vGPU allocatable warning banner [backport] Add vGPU allocatable warning banner Oct 1, 2024
@torchiaf torchiaf changed the title [backport] Add vGPU allocatable warning banner [backport 2.8] Add vGPU allocatable warning banner Oct 4, 2024
Contributor

@a110605 a110605 left a comment


Need rebase after #12136 merge

- Add vGPU allocatable warning banner
- Update shell/assets/translations/en-us.yaml
- Fix wording
- Show allocation warning only when allocation info are available

Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
@torchiaf torchiaf merged commit 645d6b5 into rancher:release-2.8 Oct 9, 2024
18 of 19 checks passed