Support for multiple k8s clusters #95

Open
9strands opened this issue May 29, 2024 · 4 comments

Comments

@9strands

So far the initial service looks very interesting and I'm tinkering with it a bit.

However, I think most users are going to wind up needing to monitor multiple clusters: a dev one, a test one, and probably two production clusters. I can't figure out how that can currently work, since the service needs a specific KUBERNETES_MASTER env var defined to operate.

My guess is that this would also need some changes to the SQL schema, to support multiple agents running on different nodes, each polling a different cluster but updating the same backend DB.
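
To illustrate what that might look like (a purely hypothetical sketch; the table and column names are assumptions, not the actual icinga-kubernetes schema), each resource table could carry a cluster identifier so that rows written by different agents stay separated:

```sql
-- Hypothetical sketch only: table and column names are assumptions,
-- not the real icinga-kubernetes schema.

-- One row per monitored cluster.
CREATE TABLE cluster (
  uuid BINARY(16) NOT NULL,     -- stable identifier generated by each agent
  name VARCHAR(255) NOT NULL,   -- human-readable name, e.g. "prod-eu-1"
  PRIMARY KEY (uuid)
);

-- Resource tables (pods, nodes, ...) would then reference their source cluster,
-- so several agents can write to the same backend DB without clashing.
ALTER TABLE pod
  ADD COLUMN cluster_uuid BINARY(16) NOT NULL,
  ADD CONSTRAINT fk_pod_cluster
    FOREIGN KEY (cluster_uuid) REFERENCES cluster (uuid);
```

The web module would then need a way to filter or group its views by that cluster identifier.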

@ngoeddel-openi

Oh, I was just browsing through the issues and saw this one.
We also want to have one IcingaWeb and multiple Kubernetes clusters included in the UI. It seems there is currently no way to work around that.
I hope this can be added soon; I would really appreciate it.

@9strands (Author) commented Aug 2, 2024

I run about 18 K8s clusters and would love to use Icinga-k8s to implement the monitoring (mostly using these: https://github.com/redhat-cop/rhdp-monitoring-scripts/tree/main/openshift/).

Right now I have custom scripts reaching out from Icinga to each cluster, but having a full and proper monitoring solution for each cluster would make my life infinitely easier.

@PeterLustig1337

I also have issues running icinga-kubernetes to monitor a cluster.
The systemd service somehow never completes its start-up and always times out.
The API requests are all being made and I get all the cluster information in the web module,
but when the service start timeout is reached, it just exits.
And every time this happens the database is somehow corrupted and I need to drop and recreate it.
Otherwise I get a lot of "connection reset by peer" errors on the next start.
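
As a stopgap while debugging this (a sketch only: it assumes the unit is named icinga-kubernetes.service and does not address the underlying database corruption), the systemd start timeout can be raised with a drop-in:

```ini
# /etc/systemd/system/icinga-kubernetes.service.d/override.conf
# Assumed unit name; created via: systemctl edit icinga-kubernetes.service
[Service]
# Give the initial cluster sync more time before systemd aborts the start-up.
TimeoutStartSec=600
```

After adding the drop-in, run `systemctl daemon-reload` and restart the service to see whether the start-up can finish on its own.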

@lokidaibel

> I also got Issues running icinga-kubernetes to monitor a cluster. The systemd service is unable to complete start somehow and it always times out. API request are all being made and I got all cluster information on the web module, but when it reaches the timeout for service start it just exits. And everytime this happens the database is somehow corrupt and I need to drop the database and recreate it. Otherwise i get a lot of "connection reset by peer" errors on the next start.

Did you try v0.2? I had the same issues with v0.1.

I would also like multi-cluster support! PUSH!
