Add 'Parallel SR-IOV configuration' design document #479

Closed

Conversation

@e0ne (Collaborator) commented Jul 19, 2023

No description provided.

@github-actions

Thanks for your PR,
To run vendors CIs use one of:

  • /test-all: To run all tests for all vendors.
  • /test-e2e-all: To run all E2E tests for all vendors.
  • /test-e2e-nvidia-all: To run all E2E tests for NVIDIA vendor.

To skip the vendors CIs use one of:

  • /skip-all: To skip all tests for all vendors.
  • /skip-e2e-all: To skip all E2E tests for all vendors.
  • /skip-e2e-nvidia-all: To skip all E2E tests for NVIDIA vendor.
Best regards.

@coveralls commented Jul 19, 2023

Pull Request Test Coverage Report for Build 7611451473

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 24.394%

Totals Coverage Status
Change from base Build 7570704762: 0.0%
Covered Lines: 2426
Relevant Lines: 9945

💛 - Coveralls


### API Extensions

#### Option 1: extend existing CR SriovNetworkPoolConfig
Collaborator:

+1 on this option from me.

Collaborator:

@e0ne if you are OK with it, let's remove the "Option 1: " string here

and place options 2 and 3 under a new subsection: "Alternative APIs",

unless you feel that we should go with a different approach in this design doc.

Also, others' feedback on which approach is preferable is appreciated.
I think @SchSeba also prefers this option.

Collaborator (Author):

Agreed


@SchSeba (Collaborator) left a comment:

Nice to see this work!
I am really looking forward to this feature.

```golang
AnnoDraining = "Draining_Complete"
```

The Drain controller will watch for node annotation changes as well as `SriovNetworkPoolConfig` and `SriovNetworkNodeState` changes:
Collaborator:

+1
I think we should stop using the node object; this way we can also reduce the RBAC for the config-daemon to only watch sriovNetworkNodeState.

doc/design/parallel-node-config.md Outdated Show resolved Hide resolved
doc/design/parallel-node-config.md Show resolved Hide resolved
doc/design/parallel-node-config.md Show resolved Hide resolved
```golang
	// OvsHardwareOffloadConfig describes the OVS HWOL configuration for selected Nodes
	OvsHardwareOffloadConfig OvsHardwareOffloadConfig `json:"ovsHardwareOffloadConfig,omitempty"`
	// NodeSelectorTerms is a list of node selectors to apply SriovNetworkPoolConfig
	NodeSelectorTerms *v1.NodeSelector `json:"nodeSelectorTerms,omitempty"`
```
Collaborator:

maybe just

```yaml
nodeSelector:
  matchLabels:
    node-role.kubernetes.io/worker: ""
```

without the Terms

Collaborator (Author):

updated

* introduce new API changes to support a pool of nodes being drained in parallel:
in this phase we introduce a new API from one of the proposed options above and modify the Drain controller to watch
the specified CRs and proceed with draining in parallel per node pool configuration
* drop `NodeDrainAnnotation` usage and move these annotation values into the `SriovNetworkNodeState` object
Collaborator:

Agree, we should move this as part of the first change.

We must check how this will work in case the operator upgrade is done in the middle of the configuration.

All phases should be implemented one by one in separate PRs, in the order above.

### Upgrade & Downgrade considerations
This feature introduces changes to CRDs, which means updated CRDs should be applied during upgrade or downgrade
Collaborator:

I think we should have support for this, as we have users that run the sriov operator.
We need to be sure that the case where the operator gets updated in the middle of a drain is something we support and are able to recover from.

@SchSeba (Collaborator) commented Jul 27, 2023

One more comment: we need to see how the machine config will work with parallel node draining for OpenShift.


@e0ne e0ne mentioned this pull request Aug 9, 2023

## Proposal

Introduce nodes pool drain configuration to meet the following requirements:
Collaborator:

I really prefer not to copy the goals into the proposal.

Maybe something like:

Introduce a nodes pool drain configuration and controller to meet the Goals targets.


@adrianchiris (Collaborator) left a comment:

@e0ne, I gave this one another look.

Once my remaining comments are addressed, consider it LGTM from my side.


@e0ne e0ne requested a review from SchSeba September 12, 2023 11:41
@SchSeba (Collaborator) left a comment:

Great work! I am really waiting for this feature to go in!

I left some small comments

```golang
// +kubebuilder:rbac:groups="",resources=SriovNetworkPoolConfig,verbs=get;list;watch;update;patch
// +kubebuilder:rbac:groups="",resources=SriovNetworkNodeState,verbs=get;list;watch;update;patch
```

Collaborator:

+1 to have it enumerated.

`SriovNetworkNodeStates` update.

The config daemon will be responsible only for setting `SriovNetworkNodeState.DrainStatus=Drain_Required` and
`SriovNetworkNodeState.DrainStatus=DrainComplete`. This will simplify its implementation.
Collaborator:

I don't think the DrainComplete will be set by the config-daemon

Collaborator (Author):

How will the controller know that the drain finished? It's less complicated to do it in the config daemon.

```golang
type SriovNetworkNodeStateStatus struct {
	Interfaces InterfaceExts `json:"interfaces,omitempty"`
	// +kubebuilder:validation:Enum=Idle;Draining;Draining_MCP_Paused;Succeeded;Failed;InProgress
	DrainStatus string `json:"drainStatus,omitempty"`
```
Collaborator:

Can we have this as a dedicated type rather than a bare string, just for easier handling?
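
For illustration only, one way such a dedicated type could look (a minimal sketch; the names are hypothetical, not the final API):

```golang
// Hypothetical sketch: a named type (rather than a bare string) for the drain status.
type DrainState string

const (
	DrainIdle     DrainState = "Idle"
	DrainRequired DrainState = "Drain_Required"
	Draining      DrainState = "Draining"
	DrainComplete DrainState = "DrainComplete"
)

// The status struct would then reference the named type, e.g.:
//   DrainStatus DrainState `json:"drainStatus,omitempty"`
```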

Collaborator (Author):

done

```golang
	// OvsHardwareOffloadConfig describes the OVS HWOL configuration for selected Nodes
	OvsHardwareOffloadConfig OvsHardwareOffloadConfig `json:"ovsHardwareOffloadConfig,omitempty"`
	// NodeSelectorTerms is a list of node selectors to apply SriovNetworkPoolConfig
	NodeSelectorTerms *v1.NodeSelectorTerm `json:"nodeSelectorTerms,omitempty"`
```
Collaborator (Author):

IMO, NodeSelectorTerm is more flexible.

Collaborator:

+1 to sticking with standard k8s NodeSelectorTerms.
These can be translated to a MachineConfig selector if needed.

WDYT?

```golang
	OvsHardwareOffloadConfig OvsHardwareOffloadConfig `json:"ovsHardwareOffloadConfig,omitempty"`
	// NodeSelectorTerms is a list of node selectors to apply SriovNetworkPoolConfig
	NodeSelectorTerms *v1.NodeSelectorTerm `json:"nodeSelectorTerms,omitempty"`
	DrainConfig DrainConfigSpec `json:"drainConfig,omitempty"`
```
Collaborator:

I don't think we need another struct. Can we have something like:

maxUnavailable (integer-or-string): specifies the percentage or constant number of machines that can be updating at any given time. The default is 1.

Collaborator (Author):

It's introduced in case we decide to add more drain-related config options (e.g. force=true), so we won't need to change the API in the future.

Collaborator:

+1 on having this as a struct for future extensibility.

Collaborator:

Can you please change it to something like what we have in the PR?

```golang
// SriovNetworkPoolConfigSpec defines the desired state of SriovNetworkPoolConfig
type SriovNetworkPoolConfigSpec struct {
	// OvsHardwareOffloadConfig describes the OVS HWOL configuration for selected Nodes
	OvsHardwareOffloadConfig OvsHardwareOffloadConfig `json:"ovsHardwareOffloadConfig,omitempty"`

	// nodeSelector specifies a label selector for Nodes
	NodeSelector *metav1.LabelSelector `json:"nodeSelector,omitempty"`

	// maxUnavailable defines either an integer number or percentage
	// of nodes in the pool that can go Unavailable during an update.
	//
	// A value larger than 1 will mean multiple nodes going unavailable during
	// the update, which may affect your workload stress on the remaining nodes.
	// Drain will respect Pod Disruption Budgets (PDBs) such as etcd quorum guards,
	// even if maxUnavailable is greater than one.
	MaxUnavailable *intstr.IntOrString `json:"maxUnavailable,omitempty"`
}
```
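
For illustration, with a spec like the one above, `maxUnavailable` could be set as either an absolute count or a percentage (the label and values here are examples only, not part of the proposal):

```yaml
spec:
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
  maxUnavailable: 2        # at most two matching nodes draining at once
  # maxUnavailable: "10%"  # alternatively, a percentage of the pool
```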

Operator. To not introduce breaking changes, we have to split this effort into several phases:
* implement a new Drain controller without user-facing API changes:
it will process only one node configuration at a time and doesn't require API changes
* drop `SriovNetworkNodeState` usage and move this annotation values into the `SriovNetworkNodeState` object
Collaborator:

I think the intent here is to drop the annotation from the node object.

Collaborator (Author):

fixed


operator: "Exists"
drainConfig:
maxUnavailable: 5
ovsHardwareOffloadConfig: {}
Collaborator:

the indentation here is not right I think

Member:

Also, drainConfig is already defined for this resource. Right?

Collaborator:

Yeah, I believe this needs to be removed.

Collaborator (Author):

fixed

name: default
namespace: network-operator
spec:
priority: 0
Collaborator:

the indentation here is not right I think

Collaborator (Author):

fixed

@zeeke (Member) commented Oct 20, 2023

@e0ne , @SchSeba, I was thinking of the following scenario:

Given a cluster with 5 worker nodes (A, B, C, D, E)
And two SriovNetworkPoolConfigs:
  pool1 that targets A,B,C, priority 1, maxParallelNodeConfiguration 1
  pool2 that targets C,D,E, priority 99, maxParallelNodeConfiguration 2

When the user creates a policy that applies to C,D,E
Then C, D, and E start configuring immediately, because:
- C belongs to pool1 (as it has the highest priority), where there is no other node configuring ATM
- D and E belong to pool2, and it has maxParallelNodeConfiguration = 2

If my considerations are right, please add this scenario to an "Examples" section.
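
For illustration, the two pools in this scenario could look roughly like the following. The field names follow the examples quoted elsewhere in this design doc (with `maxParallelNodeConfiguration` mapped to `drainConfig.maxUnavailable`); the apiVersion, node names, and label keys are assumptions, not part of the proposal:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkPoolConfig
metadata:
  name: pool1
  namespace: network-operator
spec:
  priority: 1
  nodeSelectorTerms:
    matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values: ["node-a", "node-b", "node-c"]
  drainConfig:
    maxUnavailable: 1
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkPoolConfig
metadata:
  name: pool2
  namespace: network-operator
spec:
  priority: 99
  nodeSelectorTerms:
    matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values: ["node-c", "node-d", "node-e"]
  drainConfig:
    maxUnavailable: 2
```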

@adrianchiris (Collaborator) commented:
> (quoting the scenario above)

+1 on emphasizing that a node may belong to at most a single pool (the one with the highest priority).

@SchSeba (Collaborator) commented Oct 22, 2023

yep great example!

@SchSeba (Collaborator) commented Oct 29, 2023

Also, another point: if the node selector matches 2 pools and the priority is equal, we will select the first pool in alphabetical order.
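
A minimal, self-contained sketch of that selection rule (hypothetical helper and type names; it assumes, as in the scenario above, that a lower priority value means higher priority):

```golang
package main

import (
	"fmt"
	"sort"
)

// poolRef is a hypothetical, simplified stand-in for SriovNetworkPoolConfig,
// used only to illustrate the tie-break rule discussed above.
type poolRef struct {
	Name     string
	Priority int
}

// selectPool picks the pool for a node that matches several pools:
// highest priority first (lower Priority value wins), then alphabetical
// order of the pool name as a tie-breaker.
func selectPool(matching []poolRef) poolRef {
	sort.Slice(matching, func(i, j int) bool {
		if matching[i].Priority != matching[j].Priority {
			return matching[i].Priority < matching[j].Priority
		}
		return matching[i].Name < matching[j].Name
	})
	return matching[0]
}

func main() {
	fmt.Println(selectPool([]poolRef{{"pool2", 99}, {"pool1", 1}}))    // pool1 wins on priority
	fmt.Println(selectPool([]poolRef{{"b-pool", 10}, {"a-pool", 10}})) // a-pool wins alphabetically
}
```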

@SchSeba SchSeba mentioned this pull request Nov 25, 2023
@SchSeba SchSeba mentioned this pull request Dec 5, 2023
## Motivation
SR-IOV Network Operator configures SR-IOV one node at a time and one NIC at a time. That means we'll need to wait
hours or even days to configure all NICs on large cluster deployments. With multi-NIC deployments the Operator configures
NICs one by one on each node, which leads to a lot of unnecessary node drain calls during SR-IOV configuration. Also
@adrianchiris (Collaborator) commented Jan 22, 2024:

Are we really saving on drain calls?

Even currently, sriov-network-config-daemon will configure all relevant NICs on the node; it just does it serially after draining.

Collaborator (Author):

fixed


#### Extend existing CR SriovNetworkPoolConfig
SriovNetworkPoolConfig is used only for OpenShift to provide configuration for
OVS Hardware Offloading. We can extend it to add configuration for the drain
Collaborator:

Can you add some info that SriovNetworkPoolConfig will now be relevant for all cluster types (Kubernetes and OpenShift)?

Collaborator (Author):

done


### Upgrade & Downgrade considerations
After operator upgrade we have to support migration of the `sriovnetwork.openshift.io/state` node annotation to
`SriovNetworkNodeState.Status.DrainStatus`. This logic will be implemented in a Drain controller and should be
@adrianchiris (Collaborator) commented Jan 22, 2024:

Shouldn't we implement the logic in the config daemon, as it is the one that triggers the drain flow?
Just so we don't have a race condition where the config daemon updates the drain status AND the controller tries to migrate the annotation.
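
Whichever component ends up owning it, the migration itself could be a small idempotent step along these lines (a hypothetical sketch with simplified stand-in types; it assumes the `DrainStatus` field proposed in this document):

```golang
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real API objects, used only to
// illustrate the annotation-to-status migration discussed above.
type nodeStateStatus struct {
	DrainStatus string
}

type nodeState struct {
	Status nodeStateStatus
}

// migrateDrainAnnotation copies the legacy node annotation value into the node
// state's DrainStatus if the latter has not been populated yet. It returns true
// when a migration happened and the object needs to be updated on the API server.
func migrateDrainAnnotation(nodeAnnotations map[string]string, state *nodeState) bool {
	legacy, ok := nodeAnnotations["sriovnetwork.openshift.io/state"]
	if !ok || state.Status.DrainStatus != "" {
		return false
	}
	state.Status.DrainStatus = legacy
	return true
}

func main() {
	s := &nodeState{}
	changed := migrateDrainAnnotation(map[string]string{"sriovnetwork.openshift.io/state": "Idle"}, s)
	fmt.Println(changed, s.Status.DrainStatus) // true Idle
}
```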


drain and use `drain lock` in config daemon anymore. The overall drain process will be covered by the following states:

```golang
DrainIdle = "Idle"
```
Collaborator:

Hi @e0ne can you please change this to what we have in the PR?

should be

```golang
	NodeDrainAnnotation             = "sriovnetwork.openshift.io/state"
	NodeStateDrainAnnotation        = "sriovnetwork.openshift.io/desired-state"
	NodeStateDrainAnnotationCurrent = "sriovnetwork.openshift.io/current-state"
	DrainIdle                       = "Idle"
	DrainRequired                   = "Drain_Required"
	RebootRequired                  = "Reboot_Required"
	Draining                        = "Draining"
	DrainComplete                   = "DrainComplete"
```

```golang
	OvsHardwareOffloadConfig OvsHardwareOffloadConfig `json:"ovsHardwareOffloadConfig,omitempty"`
	// NodeSelectorTerms is a list of node selectors to apply SriovNetworkPoolConfig
	NodeSelectorTerms *v1.NodeSelectorTerm `json:"nodeSelectorTerms,omitempty"`
	DrainConfig DrainConfigSpec `json:"drainConfig,omitempty"`
```
Collaborator:

Can you please change it to something like what we have in the PR?

```golang
// SriovNetworkPoolConfigSpec defines the desired state of SriovNetworkPoolConfig
type SriovNetworkPoolConfigSpec struct {
	// OvsHardwareOffloadConfig describes the OVS HWOL configuration for selected Nodes
	OvsHardwareOffloadConfig OvsHardwareOffloadConfig `json:"ovsHardwareOffloadConfig,omitempty"`

	// nodeSelector specifies a label selector for Nodes
	NodeSelector *metav1.LabelSelector `json:"nodeSelector,omitempty"`

	// maxUnavailable defines either an integer number or percentage
	// of nodes in the pool that can go Unavailable during an update.
	//
	// A value larger than 1 will mean multiple nodes going unavailable during
	// the update, which may affect your workload stress on the remaining nodes.
	// Drain will respect Pod Disruption Budgets (PDBs) such as etcd quorum guards,
	// even if maxUnavailable is greater than one.
	MaxUnavailable *intstr.IntOrString `json:"maxUnavailable,omitempty"`
}
```

@SchSeba (Collaborator) commented Feb 7, 2024

I spoke with @e0ne; I am taking over the work on the design doc for the parallel drain, so please continue the collaboration in #626.

@SchSeba SchSeba closed this Feb 7, 2024