No ability to set dynamic value for kubeReserved cpu #6665
Comments
This probably should've been opened under karpenter-core instead, so I've created kubernetes-sigs/karpenter#1518 and can close whichever one makes most sense to keep.
Reopening, as this may be doable within the aws provider. This function and/or the logic above it could be adjusted to consider each resource. It currently just checks whether kubelet.kubeReserved is non-null and requires everything to be set; if it instead checked each resource individually, that could give the desired result. Looking further, that seems to be happening already, so maybe we just need to explicitly pass each kubeReserved flag?
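The per-resource behavior described above could be sketched as a merge between user-supplied values and provider-computed defaults. This is a minimal illustration, not Karpenter's actual code; the function and variable names are hypothetical:

```go
package main

import "fmt"

// mergeKubeReserved is a hypothetical sketch: start from the provider's
// computed defaults and let any explicitly set user field win, instead of
// treating kubeReserved as all-or-nothing.
func mergeKubeReserved(user, computed map[string]string) map[string]string {
	merged := map[string]string{}
	for k, v := range computed {
		merged[k] = v // provider-computed default for each resource
	}
	for k, v := range user {
		merged[k] = v // user-specified value overrides the default
	}
	return merged
}

func main() {
	computed := map[string]string{"cpu": "80m", "memory": "1465Mi"}
	user := map[string]string{"memory": "2Gi"} // partial kubeReserved
	fmt.Println(mergeKubeReserved(user, computed))
}
```

With this shape, a user who sets only `memory` still gets the computed `cpu` value, which is the behavior being requested.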
As discussed in the upstream issue, I'd like to discuss this in just the provider. Can you give more details on why you want to override one part of the kube-reserved resources, but not all? Please correct me if that's not your request.
As you mentioned in kubernetes-sigs/karpenter#1518 (comment), memory scales with pod density, and that's what I'd like to change: the default kubelet.kubeReserved.memory value isn't derived from kubelet.maxPods but is instead applied by bootstrap.sh, which always uses the max pods value from /etc/eks/eni-max-pods.txt. Ideally, what I'm asking for is for setting maxPods to lead to a properly evaluated value for kubeReserved.memory. I think the easiest way for this to be implemented would be for Karpenter to always provide the computed values itself.
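For context on how memory scales with pod density: as I understand it, the EKS AMI's bootstrap.sh reserves memory for the kubelet as a flat base plus a per-pod increment (255 MiB + 11 MiB per pod). A minimal sketch, with a hypothetical function name:

```go
package main

import "fmt"

// kubeReservedMemoryMiB sketches the heuristic the EKS AMI's bootstrap.sh
// uses (as I understand it) to size kube-reserved memory from the pod count:
// 255 MiB base plus 11 MiB per pod. Name is illustrative, not a real API.
func kubeReservedMemoryMiB(maxPods int) int {
	return 11*maxPods + 255
}

func main() {
	// e.g. with maxPods=110 the reservation works out to 1465 MiB
	fmt.Println(kubeReservedMemoryMiB(110))
}
```

This is why applying the reservation from /etc/eks/eni-max-pods.txt rather than from the user's kubelet.maxPods produces a mismatched value.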
I think this is a duplicate of #1803.
Slightly different, but depending on the solution it could solve both. Technically my issue is that in its current form the kube-reserved config is all or nothing, and I want Karpenter to provide computed defaults for any empty fields. I don't think solving this through userdata or changes in bootstrap.sh is ideal, as there should be a common way to configure this across cloud providers, or even when using custom AMIs without bootstrap.sh. Additionally, having it computed by Karpenter is preferable for node scheduling, to make sure the perceived allocatable resources match the actual ones. That's a separate discussion, but I think the VM overhead calculation logic could be tweaked as well. Karpenter already has the logic for calculating kube-reserved, so I think it should simply always provide these to the kubelet args.
@jukie kube-reserved is a kubelet setting, so you have to integrate it via user data.
Correct, and when a user provides kube-reserved values, Karpenter already handles them that way. What I'm suggesting is that when partial fields are provided, the empty ones should have defaults injected, or, taking it further, Karpenter should always provide them.
Description
What problem are you trying to solve?
I'd like the ability to set a common kubelet configuration value for maxPods and kubeReserved.memory while retaining the dynamic CPU configuration that's used in the EKS bootstrap.sh. This is because setting kubelet.maxPods alone doesn't lead to any changes in kubelet.kubeReserved; those values will still be generated based on /etc/eks/eni-max-pods.txt.
If I only set kubeReserved.memory, kubelet will fail to start due to an invalid config, and if I override the value in bootstrap.sh during user data execution, Karpenter is then unaware of the actual values when making scheduling decisions.
Is there a way to configure a NodePool that allows setting a static value for kubeReserved.memory while still allowing the dynamic CPU configuration from bootstrap.sh?
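For concreteness, the all-or-nothing limitation described above means a partial config like the following does not work today; this fragment only illustrates the desired outcome, and the exact field placement in the NodePool/NodeClass spec is not shown:

```yaml
# Illustrative fragment only -- not currently valid, since kubeReserved
# is all-or-nothing today.
kubelet:
  maxPods: 110
  kubeReserved:
    memory: 2Gi
    # cpu intentionally omitted: the request is for Karpenter to fill in
    # the dynamic value that bootstrap.sh would otherwise compute
```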
How important is this feature to you?
Very important, and I would be willing to contribute this feature.