# Extend hypervisor CRD for cortex filtering #217
## Conversation
## Background

For virtual machines spawned on the KVM hypervisor, we want to no longer use nova and placement as the source of truth. Instead, filters should use the hypervisor CRD exposed by the [hypervisor operator](github.com/cobaltcore-dev/openstack-hypervisor-operator) and populated by the [node agent](https://github.com/cobaltcore-dev/kvm-node-agent). This contribution replaces the implementation of all filters that were originally ported from nova accordingly. Afterward, we can disable filters in nova one-by-one, moving the compute placement logic over to cortex.

> [!TIP]
> You can use the newly added [mirror tool](93fdcc0) to mirror hypervisor resources from our compute cluster over to the local cluster.

## Completion

- [x] ~internal/scheduling/decisions/nova/plugins/filters/filter_compute_capabilities.go~ (REMOVED)
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_capabilities.go (NEW)
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_correct_az.go
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_external_customer.go
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_has_accelerators.go
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_has_enough_capacity.go
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_has_requested_traits.go
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_host_instructions.go
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_maintenance.go (NEW)
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_packed_virtqueue.go
- [x] ~internal/scheduling/decisions/nova/plugins/filters/filter_project_aggregates.go~ (REMOVED)
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_allowed_projects.go (NEW)
- [x] ~internal/scheduling/decisions/nova/plugins/filters/filter_disabled.go~ (REMOVED)
- [x] internal/scheduling/decisions/nova/plugins/filters/filter_status_conditions.go (NEW)

## Dependencies

> [!NOTE]
> The scope of this PR is to establish a minimum viable scheduling pipeline with the current state. Extensive refactorings, for example of the filter for requested traits, are out of scope.

Hypervisor operator PR: cobaltcore-dev/openstack-hypervisor-operator#217
KVM node agent PR: cobaltcore-dev/kvm-node-agent#40
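For orientation, here is a minimal, self-contained sketch of how CRD-backed filters of this kind can look. The `Hypervisor` struct and function names below are hypothetical and do not reflect cortex's actual plugin interface or the operator's real CRD schema; they only illustrate filtering on capacity and supported features read from the CRD status instead of from placement.

```go
package main

import (
	"fmt"
	"slices"
)

// Hypervisor mirrors a small, hypothetical subset of the hypervisor CRD
// status. The real schema lives in the hypervisor operator repository.
type Hypervisor struct {
	Name               string
	SupportedFeatures  []string
	AllocatedVCPUs     int64
	TotalVCPUs         int64
	AllocatedMemoryMiB int64
	TotalMemoryMiB     int64
}

// Request carries the demands of the VM that should be placed.
type Request struct {
	VCPUs     int64
	MemoryMiB int64
	Feature   string
}

// filterHasEnoughCapacity keeps hypervisors whose free capacity, computed
// from the CRD status rather than from placement, covers the request.
func filterHasEnoughCapacity(hosts []Hypervisor, req Request) []Hypervisor {
	var out []Hypervisor
	for _, h := range hosts {
		freeVCPUs := h.TotalVCPUs - h.AllocatedVCPUs
		freeMem := h.TotalMemoryMiB - h.AllocatedMemoryMiB
		if freeVCPUs >= req.VCPUs && freeMem >= req.MemoryMiB {
			out = append(out, h)
		}
	}
	return out
}

// filterHasRequestedFeature keeps hypervisors that report the requested
// feature in their CRD status.
func filterHasRequestedFeature(hosts []Hypervisor, feature string) []Hypervisor {
	var out []Hypervisor
	for _, h := range hosts {
		if slices.Contains(h.SupportedFeatures, feature) {
			out = append(out, h)
		}
	}
	return out
}

func main() {
	hosts := []Hypervisor{
		{Name: "node001", TotalVCPUs: 64, AllocatedVCPUs: 60, TotalMemoryMiB: 262144, AllocatedMemoryMiB: 229376, SupportedFeatures: []string{"packed-virtqueue"}},
		{Name: "node002", TotalVCPUs: 64, AllocatedVCPUs: 8, TotalMemoryMiB: 262144, AllocatedMemoryMiB: 32768, SupportedFeatures: []string{"packed-virtqueue"}},
	}
	req := Request{VCPUs: 8, MemoryMiB: 16384, Feature: "packed-virtqueue"}

	// Chain the filters, as a scheduling pipeline would.
	candidates := filterHasRequestedFeature(filterHasEnoughCapacity(hosts, req), req.Feature)
	for _, h := range candidates {
		fmt.Println("candidate:", h.Name)
	}
}
```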
notandy left a comment:
Can you remove annotations like `// +kubebuilder:validation:Optional` from the status fields? I don't think there is anything that enforces them, and kubectl cannot set status fields anyway.
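To make the request concrete, here is a minimal sketch of the suggested change. The field name is hypothetical and not the operator's actual status schema:

```go
package v1

// HypervisorStatus is an illustrative fragment, not the real CRD status.
type HypervisorStatus struct {
	// Before: carried a marker that nothing enforces on status fields,
	//
	//   // +kubebuilder:validation:Optional
	//   HypervisorType string `json:"hypervisorType,omitempty"`
	//
	// After: the omitempty tag alone is sufficient, since status is written
	// by the controller via the status subresource, not by users.
	HypervisorType string `json:"hypervisorType,omitempty"`
}
```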
Merging this branch will not change overall coverage.

Coverage by file: changed files (no unit tests)

Please note that the "Total", "Covered", and "Missed" counts refer to code statements instead of lines of code. The value in brackets refers to the test coverage of that file in the old version of the code.
I would like to stay backwards compatible and not introduce breaking changes.
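One common way to read this in practice, sketched below with hypothetical field names: extend the CRD only by adding optional fields and keep superseded fields in place (marked as deprecated) rather than removing or renaming them.

```go
package v1

// HypervisorSpecFragment illustrates a backwards compatible extension of
// a CRD: existing fields are untouched, new fields are optional additions.
type HypervisorSpecFragment struct {
	// Deprecated: kept so that existing consumers continue to work.
	HypervisorType string `json:"hypervisorType,omitempty"`

	// New optional field; objects created before the change stay valid.
	// +optional
	CustomTraits []string `json:"customTraits,omitempty"`
}
```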
In [this pull request](cobaltcore-dev/cortex#441) we implemented a cortex filtering pipeline for KVM. This pipeline uses the hypervisor CRD as single source of truth to find out on which hypervisors a VM can be scheduled. To complete this implementation, we extended the hypervisor CRD in [this pull request](cobaltcore-dev/openstack-hypervisor-operator#217). The hypervisor CRD pull request added additional fields and removed outdated ones, which need to be autodiscovered in the KVM node agent. The following fields are now populated:

Support filtering based on hypervisor type and other capabilities:
- [x] Export the hypervisor type, architecture, supported devices, supported CPU modes, and supported features

Capacity filtering:
- [x] Aggregate the allocated and total available capacity and populate the corresponding fields

(Bonus)
- [x] Add NUMA cell capacity & allocation information so we can implement NUMA sensitive initial placement

When done:
- [x] Test with ssh-forwarded libvirt socket

> [!NOTE]
> The scope of this PR is to establish a minimum viable scheduling pipeline in cortex, with the least amount of changes possible. Refactorings of the hypervisor CRD spec can follow if needed.
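As a rough illustration of the autodiscovery part, the sketch below connects to libvirt and reads the host architecture and the supported guest domain types from the capabilities XML. It uses the libvirt.org/go/libvirt bindings for brevity and is not the actual kvm-node-agent implementation; the connection URI and the parsed fields are assumptions.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"log"

	"libvirt.org/go/libvirt"
)

// capsXML models only the small part of the libvirt capabilities document
// used here: the host CPU architecture and the supported guest domains.
type capsXML struct {
	Host struct {
		CPU struct {
			Arch string `xml:"arch"`
		} `xml:"cpu"`
	} `xml:"host"`
	Guests []struct {
		Arch struct {
			Name    string `xml:"name,attr"`
			Domains []struct {
				Type string `xml:"type,attr"`
			} `xml:"domain"`
		} `xml:"arch"`
	} `xml:"guest"`
}

func main() {
	// For local testing against an ssh-forwarded socket, a URI such as
	// "qemu+ssh://user@compute-node/system" can be used instead.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	raw, err := conn.GetCapabilities()
	if err != nil {
		log.Fatalf("fetch capabilities: %v", err)
	}

	var caps capsXML
	if err := xml.Unmarshal([]byte(raw), &caps); err != nil {
		log.Fatalf("parse capabilities: %v", err)
	}

	fmt.Println("host architecture:", caps.Host.CPU.Arch)
	for _, g := range caps.Guests {
		for _, d := range g.Arch.Domains {
			fmt.Printf("guest arch %s supports domain type %s\n", g.Arch.Name, d.Type)
		}
	}
}
```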
## Background
In [this pull request](cobaltcore-dev/cortex#441) we implemented a cortex filtering pipeline for KVM. This pipeline uses the hypervisor CRD as the single source of truth to find out on which hypervisors a VM can be scheduled. To complete this implementation, we need to extend the hypervisor CRD.
## Tasks
Support filtering based on hypervisor type and other capabilities:
Capacity filtering:
Pinned projects:
(Bonus)
When finished:
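To make the task list above more concrete, here is a rough sketch of what the additions could look like as Go types. All field names below are hypothetical placeholders, not the schema this PR actually introduces.

```go
// Package v1 sketches hypothetical additions to the hypervisor CRD.
package v1

// HypervisorCapabilities would carry static facts discovered from libvirt.
type HypervisorCapabilities struct {
	HypervisorType    string   `json:"hypervisorType,omitempty"` // e.g. "QEMU"
	Architecture      string   `json:"architecture,omitempty"`   // e.g. "x86_64"
	SupportedDevices  []string `json:"supportedDevices,omitempty"`
	SupportedCPUModes []string `json:"supportedCPUModes,omitempty"`
	SupportedFeatures []string `json:"supportedFeatures,omitempty"`
}

// NUMACell would expose per-cell capacity so that NUMA-sensitive initial
// placement can be implemented on top of the CRD.
type NUMACell struct {
	ID                 int32 `json:"id"`
	TotalCPUs          int32 `json:"totalCPUs,omitempty"`
	AllocatedCPUs      int32 `json:"allocatedCPUs,omitempty"`
	TotalMemoryMiB     int64 `json:"totalMemoryMiB,omitempty"`
	AllocatedMemoryMiB int64 `json:"allocatedMemoryMiB,omitempty"`
}

// HypervisorStatusExtras groups the fields the tasks above call for.
// No validation markers here: the status is written by the node agent only.
type HypervisorStatusExtras struct {
	Capabilities       HypervisorCapabilities `json:"capabilities,omitempty"`
	AllocatedVCPUs     int64                  `json:"allocatedVCPUs,omitempty"`
	TotalVCPUs         int64                  `json:"totalVCPUs,omitempty"`
	AllocatedMemoryMiB int64                  `json:"allocatedMemoryMiB,omitempty"`
	TotalMemoryMiB     int64                  `json:"totalMemoryMiB,omitempty"`
	NUMACells          []NUMACell             `json:"numaCells,omitempty"`
	// Pinned projects: project IDs this hypervisor is reserved for.
	PinnedProjects []string `json:"pinnedProjects,omitempty"`
}
```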
## Dependencies
> [!NOTE]
> The scope of this PR is to establish a minimum viable scheduling pipeline in cortex, with the least amount of changes possible. Refactorings of the hypervisor CRD spec can follow if needed.
KVM node agent PR: cobaltcore-dev/kvm-node-agent#40