What problem are you facing?
Each provider release adds more coverage and more CRDs, which increases the memory footprint of the kube-apiserver.
It's not obvious what impact a new or upgraded provider will have on an existing apiserver, since we don't publish memory footprints.
How could Upjet help solve your problem?
When new providers are released they should include a captured kube-apiserver memory footprint (e.g., from kubectl top pods). This came out of an issue seen with upbound/provider-aws, where changing provider versions exhausted the memory on an undersized apiserver.
With upbound/provider-aws v0.27.0 installed:
kubectl top pod kube-apiserver-kubecontroller-01 -n kube-system
NAME CPU(cores) MEMORY(bytes)
kube-apiserver-kubecontroller-01 70m 3120Mi
With upbound/provider-aws v0.17.0 installed:
kubectl top pod kube-apiserver-kubecontroller-01 -n kube-system
NAME CPU(cores) MEMORY(bytes)
kube-apiserver-kubecontroller-01 89m 1668Mi
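A simple way to capture such a footprint at release time is to record the CRD count and apiserver usage before and after installing the provider. A rough sketch, assuming a kubeadm-style cluster where the apiserver pods carry the component=kube-apiserver label and that the provider's CRDs live under an aws.upbound.io API group suffix (adjust both for your environment):

# count CRDs overall, and those contributed by this provider
kubectl get crds -o name | wc -l
kubectl get crds -o name | grep -c 'aws.upbound.io'

# snapshot kube-apiserver CPU/memory usage (requires metrics-server)
kubectl top pod -n kube-system -l component=kube-apiserver

Running the same commands before and after installing or upgrading the provider gives the delta that could be published alongside each release.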
Hey @plumbis, we are facing something similar when trying to run multiple Crossplane pods in vcluster environments. Is there a way we can get visibility into why the memory consumption is so high? Do let me know your thoughts. Thanks.
The use of the large monolithic provider packages for AWS, GCP, and Azure is discouraged, as the Kubernetes API server does not yet handle such a large number of CRDs efficiently.
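One way to avoid the monolithic packages is to install only the service-scoped provider you need. A minimal sketch using the standard Crossplane Provider object, assuming an Upbound family provider for S3 (the package name and version tag here are illustrative; substitute the service and version you actually need):

cat <<EOF | kubectl apply -f -
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws-s3
spec:
  # illustrative package reference; pick the service-scoped package and version you need
  package: xpkg.upbound.io/upbound/provider-aws-s3:v1.1.0
EOF

Each service-scoped package registers only the CRDs for that service, which should keep the apiserver's CRD count (and memory footprint) far lower than the monolithic provider-aws package.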