add scaler for temporal #6191
base: main
Conversation
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
See comment at temporalio/temporal#33 (comment). Temporal's KEDA approach may differ slightly. We'll have the engineers review, but we may suggest slight differences.
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
Any suggestions/comments @cretz?
We're currently discussing which use cases we would like the scaler to support; we'll be in a position to give some feedback/direction on Friday the 4th.
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
	DeleteKubernetesResources(t, testNamespace, data, templates)
}

func testActivation(t *testing.T, kc *kubernetes.Clientset, data templateData) {
I will work on some test cases here, but I suspect that the Temporal scaler is not going to be able to provide accurate activation (specifically 1->0): it cannot see activity levels that are too low to cause a backlog. We may need to couple the Temporal scaler with the Prometheus scaler for the activation aspect for now.
Do we need to add any other test cases? @robholland
Yeah, we'll need a build ID test, where a new deployment comes up with a new build ID and the two deployments are shown to scale separately (the older one down as tasks on the old version complete, the newer one up as they start on that queue). I'm also not convinced that activation based on backlog is safe when scaling to 0; it won't account for tasks that are in flight, or for throughput that is too low to cause a backlog. Scaling to 0 and then having to scale back up to clear the queue when things get killed and re-scheduled isn't ideal. I'm not sure yet of the best way to test that, or whether we should just remove the activation altogether and recommend users couple the Temporal scaler with a Prometheus scaler, where Prometheus handles the activation and Temporal the scaling. Thoughts?
Regarding activation based on backlog, I believe we should keep this option rather than removing it, as it can be partially controlled by customising scale-down behaviour in the HPA. This feature would be useful for workloads with very short execution times. @robholland
Is there an option in omes to specify a build ID for the test cases? @robholland
When using the new worker versioning system you don't update a deployment to a new build ID, rather you create a new deployment for the new build ID. The two deployments will both be present until the workflows on the older build ID have completed, then you can tear the old one down. So we'd have a scaled object for each deployment/build id.
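For illustration, here is a rough sketch of what a per-deployment/per-build-ID ScaledObject could look like, written as a template constant in the style this PR's e2e test already uses. The metadata keys (endpoint, namespace, taskQueue, buildId) and template fields are assumptions based on this discussion, not the scaler's confirmed parameter names:

```go
// Hypothetical e2e-style template: one ScaledObject per worker deployment/build ID.
// All names and metadata keys below are illustrative placeholders.
const scaledObjectPerBuildIDTemplate = `
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{.DeploymentName}}-{{.BuildID}}
  namespace: {{.TestNamespace}}
spec:
  scaleTargetRef:
    name: {{.DeploymentName}}-{{.BuildID}}
  minReplicaCount: 0
  maxReplicaCount: 5
  triggers:
    - type: temporal
      metadata:
        endpoint: {{.TemporalEndpoint}}
        namespace: default
        taskQueue: {{.TaskQueue}}
        buildId: "{{.BuildID}}"
`
```

When the old build ID's workflows drain, its ScaledObject and deployment can be torn down together.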
@jhecking this is the use case we are currently exploring. I'll be investigating Prometheus metrics we might be able to use to tide us over until we can provide some equivalent of tasks_dispatch_rate that covers in-flight tasks (if that's going to be possible).
@robholland I added a test case for BuildID. Can you please verify?
> I'll be investigating Prometheus metrics we might be able to use to tide us over until we can provide some equivalent of tasks_dispatch_rate that covers in-flight tasks (if that's going to be possible).

I know you've been out for a week, but I was wondering whether you've had any time to look into this? Maybe I'm missing something, but wouldn't tasks_dispatch_rate also only be useful for 1->0 scaling, and not for 0->1 scaling? I.e. if all the workers have been shut down, then no new tasks can be dispatched, so the task dispatch rate cannot go back above zero.
I still think a variant of the approximate_backlog_count metric that includes both pending and in-flight tasks would be best suited for 1->0 and 0->1 scaling.
I'm new to KEDA myself. Is it possible to use different metrics for 1->0/0->1 and 1->n/n->1 scaling decisions? Maybe by using two separate triggers?
Yes, the task dispatch rate would only be useful for de-activation (and not reliably, due to its caveats as discussed). approximate_backlog_count should be usable for activation. You can mix different triggers together; I believe you'd use https://keda.sh/docs/2.14/reference/scaledobject-spec/#scalingmodifiers for that.
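For illustration, a rough sketch of how that combination could be wired up (not something this PR implements): two named triggers composed with a scalingModifiers formula, written here as a template constant in the e2e-test style. The trigger names, the Prometheus query, and the Temporal metadata keys are placeholders:

```go
// Hypothetical sketch: combine a Prometheus trigger (activation signal) with the
// Temporal trigger (scaling signal) via scalingModifiers. Everything below that
// is specific to this PR's scaler (metadata keys, query) is a placeholder.
const combinedScaledObjectTemplate = `
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{.DeploymentName}}
  namespace: {{.TestNamespace}}
spec:
  scaleTargetRef:
    name: {{.DeploymentName}}
  minReplicaCount: 0
  advanced:
    scalingModifiers:
      # Scale on the Temporal backlog, but only report a non-zero value
      # (and therefore activate) while Prometheus still sees task activity.
      formula: "prom_activity > 0 ? temporal_backlog : 0"
      target: "1"
      activationTarget: "0"
      metricType: AverageValue
  triggers:
    - type: prometheus
      name: prom_activity
      metadata:
        serverAddress: {{.PrometheusServerURL}}
        query: sum(rate(task_activity_total[1m]))   # placeholder query
    - type: temporal
      name: temporal_backlog
      metadata:
        endpoint: {{.TemporalEndpoint}}
        namespace: default
        taskQueue: {{.TaskQueue}}
`
```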
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
The issue with the high CPU utilization only happens when Keda is connecting to the Temporal server via the Consul sidecar proxy. In another cluster, without Consul, the Temporal scaler is running fine and Keda is using minimal CPU. When the Consul sidecar is injected, the proxy itself shows minimal CPU usage; only the keda-operator container itself uses up all its allocated CPU resources. Neither the Consul logs nor the Keda logs show any connection errors. Also, Keda and the Temporal scaler are able to connect to the Temporal server and are activating/deactivating the targeted pod based on those metrics. But one other thing I could observe is that at times the Temporal scaler's
Before the
I sniffed the network traffic between the Keda operator and the Consul proxy today, and that finally offered a clue as to the cause of the high CPU utilisation, and – more importantly – a solution.
So in order to route the queries from the Temporal scaler to the Temporal server via the Consul proxy, I had configured the
Somehow, and I don't quite understand this yet, this is causing Keda to try to connect to the proxy on port 7233 using both 127.0.0.1 as well as using the IPv6 address
Changing the
pkg/scalers/temporal.go (outdated)

	APIKey    string `keda:"name=apiKey, order=authParams;resolvedEnv;triggerMetadata, optional"`

	UnsafeSsl bool   `keda:"name=unsafeSsl, order=triggerMetadata, optional"`
	Cert      string `keda:"name=cert, order=authParams;resolvedEnv, optional"`
I think it's more standard in Temporal deployments to have env vars that point to files for the key/cert rather than have them present in the environment directly. It would be good to have CertPath/KeyPath/CaPath or similar.
@robholland That's currently not possible with KEDA. KEDA can only read from environment variables or secrets. Ideally, in k8s, certificates and keys are stored in secrets, so this can be easily managed using TriggerAuthentication. @JorTurFer
The file paths are normally pointed to certs that are mounted into the pods from secrets, so presumably users can just point at the secrets directly for KEDA use.
Yes, it can be done with TriggerAuthentication.
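For example, a minimal sketch of a TriggerAuthentication that maps client certificate material from a Kubernetes Secret, again written as an e2e-style template constant. The Secret name is made up, and any parameter beyond cert is an assumption about this scaler's authParams:

```go
// Hypothetical TriggerAuthentication wiring the scaler's TLS authParams to a
// Kubernetes Secret; the Secret name and the "key" parameter are placeholders.
const temporalTriggerAuthTemplate = `
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: temporal-trigger-auth
  namespace: {{.TestNamespace}}
spec:
  secretTargetRef:
    - parameter: cert
      name: temporal-client-tls
      key: tls.crt
    - parameter: key
      name: temporal-client-tls
      key: tls.key
`
```

The temporal trigger in the ScaledObject would then point at it via authenticationRef, and KEDA resolves the values at runtime without needing any files on disk.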
Yeah, we must read them from the Kubernetes API and not from files. This is because we can't restart KEDA to include new certs. Take into account that KEDA provides a self-service approach where admins can set up KEDA and users can deploy their own resources, so assuming that certs are part of KEDA's filesystem isn't an option. If the SDK only supports reading them from files, the scaler must pull the secret and store it in a temporary file within the container.
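A minimal sketch of that fallback (illustrative only, not code from this PR): persist PEM bytes that KEDA resolved from a Secret into a temporary file, so an SDK that only accepts file paths can still load them.

```go
package main // illustrative snippet, not part of the PR

import "os"

// writeTempPEM stores PEM material resolved from a Kubernetes Secret in a
// temporary file inside the container, for SDKs that only load certs from disk.
// The caller should os.Remove the returned path when the scaler is closed.
func writeTempPEM(prefix string, pemData []byte) (string, error) {
	f, err := os.CreateTemp("", prefix+"-*.pem")
	if err != nil {
		return "", err
	}
	defer f.Close()

	if _, err := f.Write(pemData); err != nil {
		os.Remove(f.Name()) // don't leave a half-written file behind
		return "", err
	}
	return f.Name(), nil
}
```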
Our mTLS client certificates and keys live on the filesystem and auto-rotate at least every hour. Our worker code handles the rotation without restarting.
I think any code using mTLS to connect to Temporal needs to be able to handle certificates and keys that are rotated frequently. Getting the certificates and keys from the filesystem seems natural and normal; if there is a way for KEDA to support that, it would be good.
We use a sidecar to write/rotate the certificates and keys. But there are other Kubernetes systems for mTLS certificate management that also use the filesystem rather than Kubernetes Secret resources, like cert-manager's csi-driver-spiffe.
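One common way to cope with rotation without restarts, sketched here under the assumption that the scaler builds its own tls.Config (the file paths are placeholders for wherever a sidecar writes the material): reload the key pair from disk on every handshake instead of caching it.

```go
package main // illustrative snippet, not part of the PR

import "crypto/tls"

// newRotatingTLSConfig returns a tls.Config that re-reads the client
// certificate and key from disk on every handshake, so sidecar-rotated
// certs (e.g. hourly) are picked up without restarting the process.
func newRotatingTLSConfig(certPath, keyPath string) *tls.Config {
	return &tls.Config{
		GetClientCertificate: func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
			cert, err := tls.LoadX509KeyPair(certPath, keyPath)
			if err != nil {
				return nil, err
			}
			return &cert, nil
		},
	}
}
```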
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
/run-e2e temporal
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es> Signed-off-by: Prajith <prajithpalakkuda@gmail.com>
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es> Signed-off-by: Prajith <prajithpalakkuda@gmail.com>
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es> Signed-off-by: Prajith <prajithpalakkuda@gmail.com>
@JorTurFer build is failing. Should I include the vendor directory?
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
One question: is this PR ready for review? I still see development activity on it; could we move it to draft?
I would move it to draft; I owe some Temporal feedback but I won't have a chance for that till next week.
@robholland Can you please review?
On this again today, please can you |
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
Done. @robholland @JorTurFer, can you please trigger CI?
	t.Log("--- testing worker versioning ---")

	data.BuildID = "1.1.1"
	KubectlApplyWithTemplate(t, data, "jobUpdateBuildID", jobUpdateBuildIDTemplate)
Maybe wrap these in a helper so the intent is clearer?
You mean the upgrade step?
Yeah, to reduce cognitive load for something which isn't part of what we want to test.
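Something along these lines, for example (a sketch that reuses the helpers and types already in this test file; the helper name is made up):

```go
// upgradeWorkerBuildID wraps the existing template helpers so the test body
// only states the intent: move the workers to a new build ID.
func upgradeWorkerBuildID(t *testing.T, data templateData, buildID string) {
	t.Helper()
	data.BuildID = buildID
	KubectlApplyWithTemplate(t, data, "jobUpdateBuildID", jobUpdateBuildIDTemplate)
}
```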
Co-authored-by: Rob Holland <rob.holland@gmail.com> Signed-off-by: Prajith <prajithpalakkuda@gmail.com>
Co-authored-by: Rob Holland <rob.holland@gmail.com> Signed-off-by: Prajith <prajithpalakkuda@gmail.com>
The scale-out test fails for me locally:
Waiting to see how it fares on CI.
/run-e2e temporal
I marked this PR as draft; please change this once it is ready for a review from KEDA maintainers.
Thanks for the great collaboration everyone!
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
@robholland are we good with this? Can we change the status to "Ready for Review"?
Signed-off-by: Prajithp <prajithpalakkuda@gmail.com>
The tests are failing on CI; those should be fixed before it's marked ready for review.
It should be fine now.
Cool, @Prajithp wanna mark as ready for review now?
Yes. Can you please review? @jhecking @zroubalik
Implement a temporal scaler
Checklist
Fixes #4724