
KafkaNamespaced for KafkaSources #4118

Open
Zurvarian opened this issue Sep 25, 2024 · 6 comments
@Zurvarian

Problem
Right now we have different namespaces that represent different usages, and each of them may have different needs when it comes to configuring the Kafka consumer.

Persona:
Which persona is this feature for?
System Operator

Exit Criteria
I would like to be able to configure my KafkaSource CRDs so that the control plane creates an independent data plane per namespace, with per-namespace overrides of the Kafka consumer config.
Ideally I should also be able to configure the memory/CPU requests and limits of these extra instances.
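To make the exit criteria concrete, here is a sketch of what such a manifest could look like. The top of the spec uses the existing KafkaSource fields; everything marked hypothetical below is the requested feature and does not exist in the current API:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: orders-source
  namespace: team-a
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topics:
    - orders
  consumerGroup: team-a-orders
  sink:
    ref:
      apiVersion: v1
      kind: Service
      name: event-display
  # --- Everything below is hypothetical (the feature requested here) ---
  # Per-namespace Kafka consumer config overrides:
  # config:
  #   fetch.min.bytes: "1048576"
  #   max.poll.records: "200"
  # Resources for the dedicated per-namespace data plane instance:
  # deployment:
  #   resources:
  #     requests: {cpu: 500m, memory: 512Mi}
  #     limits: {cpu: "1", memory: 1Gi}
```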

Time Estimate (optional):
How many developer-days do you think this may take to resolve?
I can't say, but given that the data-plane source code is shared between the Broker and KafkaSource, the implementation should not be that costly.

Additional context (optional)
Nothing else to add.

@pierDipi
Member

pierDipi commented Sep 25, 2024

@Zurvarian would the different solution outlined in #1412 work for you? It would still keep the shared data plane, but allow configuring more consumer and producer settings.

@Zurvarian
Author

It would for the time being, but at some point later we plan to have independent NodePools per client, and then we will want to have different StatefulSets using the resources of those NodePools only, so that we can isolate resource consumption between clients.

@pierDipi
Member

Can you describe in more detail why that level of isolation is necessary in your use case?

@Zurvarian
Author

I can't give many details because the client list is private, but the overall idea is that clients could be charged based on how many resources they use in our system. The final goal is to split CPU/memory usage across different NodePools so that the Kubernetes resources used per client are easily identifiable.

A valid alternative for us would be to deploy an independent Knative Eventing installation in each of the NodePools/namespaces; then sharing the control-plane resources is no longer an issue, and it would just require more maintenance to keep all versions aligned.
Though, AFAIK, it is not possible to restrict which KafkaSource definitions are handled by the controller, is it? Something like a list of namespaces to watch would do.

@pierDipi
Member

I think cost modeling can be done on a shared data plane if we export the right metrics and build a cost model on top of them (though we might lose the ability to attribute the shared overhead).

Currently, we expose the detailed Consumer and Producer metrics and we tag them with the Knative resources:

In addition to the Eventing-specific metrics (Knative Kafka names are a little different) https://knative.dev/docs/eventing/observability/metrics/eventing-metrics/
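As a sketch of the cost model idea, per-client usage could be approximated from tagged event metrics. The query below assumes the `event_count` metric and `namespace_name` label from the linked Eventing metrics docs; as noted above, the Knative Kafka metric names differ slightly, so treat this as illustrative:

```promql
# Hypothetical: each namespace's share of total event traffic over 1h,
# usable as a weight for splitting shared data-plane CPU/memory cost.
sum by (namespace_name) (rate(event_count[1h]))
/ on () group_left ()
sum (rate(event_count[1h]))
```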

We've been re-evaluating the Namespaced approach, as it has proven harder to maintain than we anticipated.

@Zurvarian
Author

Hi @pierDipi,

I understand your point about complexity.

There is a simpler alternative that could work for us: allow filtering which namespaces or labelled objects a given control plane should handle. That way we could run multiple additional installations of Knative Eventing in our system when required.
It should not be that difficult, as in the end it amounts to providing an entry point for a list of namespaces/labels that the control plane then uses internally when querying KafkaSources and other objects.
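The filtering being proposed boils down to a small predicate evaluated before reconciling each object. The helper below is a minimal sketch of that idea; the function name, parameters, and the flags that would feed them are hypothetical, not an existing Knative Eventing option:

```python
def should_reconcile(namespace, labels, allowed_namespaces=None, required_labels=None):
    """Decide whether this control-plane instance should handle an object.

    Hypothetical sketch: `allowed_namespaces` and `required_labels` would be
    populated from per-installation configuration (e.g. CLI flags or a
    ConfigMap), none of which exists today.
    """
    # Namespace allow-list: an empty/None list means "watch everything".
    if allowed_namespaces and namespace not in allowed_namespaces:
        return False
    # Label selector: every required key/value must match the object's labels.
    required_labels = required_labels or {}
    return all(labels.get(k) == v for k, v in required_labels.items())
```

Each Knative Eventing installation would then be started with its own allow-list, so two control planes in the same cluster never fight over the same KafkaSource.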

WDYT?
