APIService configured incorrectly #495
Comments
Is this a Helm issue or a KEDA deployment issue itself? Always happy to review PRs.
I think it's a KEDA issue, as the Kubernetes manifest specifies the name wrong, which is then picked up by Helm in my case. I'll create a PR; I just want to be very sure that this is an issue for someone other than me.
Yes, but the main question is whether we need to update the Helm chart and/or KEDA core, which generates the manifests. @zroubalik can you take a look? I keep forgetting where we annotate this.
I am having this same issue too.
Hello, could you share your values so I can test them? @tomkerkhove, this can only happen with Helm, because otherwise the e2e tests wouldn't pass, as the metrics server wouldn't have been reachable, so I'm moving this issue to the charts repo.
Hi @JorTurFer, I use Terraform to deploy a helm_release, but it uses all default values besides the ones below.
Recapping:
That's correct, with the added context that we're also switching from vanilla manifests to Helm via Terraform.
It'd be nice ❤️ I'll try to reproduce your scenario exactly.
Below is what I'm running to delete all the required resources, then label and annotate the CRDs so that the Helm deploy can take them over. Side note: the only reason I'm doing it this way is that I will have several ScaledObjects running in production and I don't want to affect / have to re-deploy those (which I think happens if I just run k delete -f keda-manifest). Is this accurate?
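A rough sketch of the labelling/annotating step being described, not the poster's actual commands; the release name keda, the namespace keda, and the CRD used as an example are all assumptions:

```sh
# Hypothetical sketch: mark an existing, kubectl-applied resource as owned by a
# Helm release so a later helm upgrade / helm_release apply can adopt it.
# Release name "keda", namespace "keda" and the chosen CRD are assumptions.
kubectl label crd scaledobjects.keda.sh app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd scaledobjects.keda.sh meta.helm.sh/release-name=keda --overwrite
kubectl annotate crd scaledobjects.keda.sh meta.helm.sh/release-namespace=keda --overwrite
```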
I'm going to test it soon (today or tomorrow at the latest), but I have a question in the meantime: why didn't you just upgrade the chart?
That would be ideal, but currently we're also moving management of Helm to Terraform, using the helm_release provider.
AFAIK, that provider supports upgrading out of the box, so you just need to change the version.
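For reference, the in-place upgrade being suggested would look roughly like the sketch below; with the Terraform helm_release resource the equivalent is just bumping the chart version attribute. The release name keda, namespace keda, and the kedacore repo alias are assumptions:

```sh
# Hedged sketch: upgrade the existing release in place instead of delete + re-apply.
helm repo update
helm upgrade keda kedacore/keda --namespace keda --version <chart-version>
```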
Hmm, that might work. I tried that previously but it wouldn't work, as it was missing all the labels and annotations. Maybe I can just adjust my process: I won't delete any resources, but I'll do all the labelling/annotating and attempt to use Terraform to deploy. I'll add a comment once I try that. Thanks for your help so far @JorTurFer!
Experienced a similar issue: when running kubectl port-forward $(kubectl get pods -l app=keda-operator-metrics-apiserver -n keda -o name) 8080:8080 -n keda, the connection crashes. With the values below, all seems to be working properly:

image:
  metricsApiServer:
    tag: "2.10.1"
Hi @ArieLevs, in any case I think the problems here are different, because the APIService uses port 6443.
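A quick way to check which Service (and port) the APIService actually targets; a sketch, where only the APIService name comes from this thread:

```sh
# Print the namespace/name and port of the Service the external metrics APIService points at.
kubectl get apiservice v1beta1.external.metrics.k8s.io \
  -o jsonpath='{.spec.service.namespace}/{.spec.service.name} port {.spec.service.port}{"\n"}'
```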
Issue
The APIService v1beta1.external.metrics.k8s.io points at a Service named keda-metrics-apiserver, but the Helm chart names its resources keda-operator*, which in this case includes keda-operator-metrics-apiserver. The manifest ./keda/config/metrics-server/api_service.yaml contains spec.service.name set to keda-metrics-apiserver. ^ Issue
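One way to see the mismatch on a chart-based install; a sketch, assuming KEDA runs in the keda namespace:

```sh
# List the Services the chart actually created; on an affected cluster there is a
# keda-operator-metrics-apiserver Service but no keda-metrics-apiserver Service,
# so the second command is expected to return NotFound. Namespace "keda" is an assumption.
kubectl get svc -n keda
kubectl get svc keda-metrics-apiserver -n keda
```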
Fix
Point the APIService at keda-operator-metrics-apiserver: run kubectl edit apiservice v1beta1.external.metrics.k8s.io and change keda-metrics-apiserver --> keda-operator-metrics-apiserver.
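A non-interactive equivalent of the edit above, as a hedged sketch (same change, applied with kubectl patch):

```sh
# Re-point the APIService at the Helm-named metrics Service without opening an editor.
kubectl patch apiservice v1beta1.external.metrics.k8s.io --type merge \
  -p '{"spec":{"service":{"name":"keda-operator-metrics-apiserver"}}}'
```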
Steps to reproduce
kubectl get pods returns the below error (also returns pods).

Notes
Originally posted by @dmcstravick7 in kedacore/keda#4769