Prometheus High Availability proxy
Even though the application is pretty stable, keep in mind that it is still in active development. We use it in production, but that doesn't necessarily mean it is ready for you.
A normal Prometheus upgrade procedure looks like this:
- Switch traffic to the fallback server (automatic with the HA proxy, or manually)
- Upgrade the primary server
- Wait for it to become available
- Repeat steps 1-3 for the secondary server
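The "wait for it to become available" step can be automated against Prometheus's `/-/ready` readiness endpoint. A minimal polling sketch (the helper name, retry parameters, and injectable `probe` are our own invention, not part of this project):

```python
import time
import urllib.request


def wait_until_ready(url, probe=None, timeout=120, interval=2):
    """Poll a Prometheus readiness endpoint (e.g. http://server1:9090/-/ready)
    until it answers HTTP 200 or the timeout expires.

    `probe` is injectable for testing; by default it performs a real HTTP GET.
    """
    if probe is None:
        def probe(u):
            try:
                with urllib.request.urlopen(u, timeout=5) as resp:
                    return resp.status == 200
            except OSError:
                return False

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe(url):
            return True
        time.sleep(interval)
    return False
```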
Even if you did everything right, that doesn't mean you won't see gaps in Grafana.
To prevent gaps in Grafana, we send the same query to several servers. The resulting responses are merged into one unified response using pretty naive JSON merging techniques.
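The naive merge can be pictured roughly like this: take the first successful answer as the base and concatenate the `result` arrays of the rest. This is only an illustration of the idea under our reading of the README, not the proxy's actual code:

```python
def merge_responses(responses):
    """Naively merge Prometheus /api/v1/query_range responses from several
    servers by concatenating their `result` arrays. No deduplication."""
    merged = None
    for resp in responses:
        if resp.get("status") != "success":
            continue  # skip servers that errored or were restarting
        if merged is None:
            # copy the result list so we do not mutate the caller's dict
            merged = {"status": "success",
                      "data": {"resultType": resp["data"]["resultType"],
                               "result": list(resp["data"]["result"])}}
        else:
            merged["data"]["result"].extend(resp["data"]["result"])
    return merged
```

With one server restarting (an error response), the healthy server's data still comes through, which is what closes the Grafana gap.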
$ docker run -d -p 9090:9090 anchorfree/prometheus-ha-proxy:master http://server1:9090 http://server2:9090
In Kubernetes using Helm:
$ helm repo add afcharts https://anchorfree.github.io/helm-charts
$ helm install afcharts/prometheus-ha-proxy --name prom-ha-proxy --set=extraArgs='{http://server1:9090,http://server2:9090}'
This project is a simple and pretty dumb Prometheus output merger. With this project you can:
- Use one endpoint in Grafana instead of several servers
- Have a truly highly available Prometheus setup for Grafana
- Start up this proxy and do upgrades/restarts without any problems

What it doesn't do:
- It doesn't do any kind of value aggregation over datasets; e.g. you cannot calculate total bandwidth for all of your datacenters, you need a federation server for that.
- It doesn't solve any HA problems for Alertmanager or any querying issues.
We wanted to build a proof of concept and share our answer to the Grafana gaps problem we had with Prometheus at Anchorfree.
The evolution of this tool would be to:
- Integrate the Prometheus web GUI, in order to make it look and feel like real Prometheus (currently only Grafana and curl queries are possible)
- Use remote reads + local TSDB storage to provide a single point of contact for our Ops team
- If you count() something over a specific timeframe, and that timeframe happens to include the restart of one Prometheus server, you will get incorrect calculations (a gap in the data). To prevent this, you can use a federation server and then run count() over the aggregated data; this may not be applicable to every case.
- topk can show more than the expected number of values.
- vector output is merged naively without deduplication, which means double the results.
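The topk and deduplication caveats follow directly from the naive merge: when both servers are healthy and answer the same instant-vector query, concatenating their results doubles every series. A toy demonstration (sample data invented):

```python
# Two healthy servers answer the same instant-vector query, e.g. topk(1, up).
server1 = [{"metric": {"instance": "a:9100"}, "value": [1520000000, "1"]}]
server2 = [{"metric": {"instance": "a:9100"}, "value": [1520000000, "1"]}]

# Naive merging concatenates the result arrays without deduplication, so the
# same series appears twice: topk(1) effectively returns two values.
merged = server1 + server2
assert len(merged) == 2
assert merged[0] == merged[1]
```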
- Fork it!
- Create your feature branch: `git checkout -b my-new-feature`
- Commit your changes: `git commit -m 'Add some feature'`
- Push to the branch: `git push origin my-new-feature`
- Submit a pull request
- Make sure tests are passing