feat: add documentation for using TLS on the metric endpoint #1376
Conversation
I've tried it out, but I'm unsure about how easy it is going to be to set up Prometheus with the CA certificate it needs to connect to the TLS metrics endpoint.
Let's talk about it more in the standup tomorrow.
```
- cert-manager-metrics
- cert-manager-metrics.cert-manager
- cert-manager-metrics.cert-manager.svc
```
This worked, but how should I configure my client with the CA certificate?
The problem is that the Secret resource contains both the CA certificate and the private key, so I don't really want to mount that Secret into my Prometheus client Pod.
I could implement some other script that copies the CA certificate to another ConfigMap, but the CA is only valid for 1 year, so I'd need to schedule the script to run when the CA certificate gets renewed.
Also, I'm unsure whether these DNS names are useful.
We recommend that users run two replicas of the controller, and I suppose that my Prometheus client needs to collect metrics from each of the replicas.
These names would resolve to the load-balancer Service IP, which would round-robin between the Pods.
It seems like the IP addresses of the Pods should also be added to the serving certificate, so that the Prometheus client can connect to each Pod by IP address.
Is that how Prometheus will operate if using a PodMonitor resource (as recommended in the section above this)?
If you are using Prometheus operator, you can reference a Secret as part of a PodMonitor or ServiceMonitor object: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.TLSConfig
The way Prometheus operator implements this is to copy the referenced Secret keys into the Prometheus secret. Since only the specified keys are copied, the private key would not be present inside the Prometheus Pod without a misconfiguration (see https://github.com/prometheus-operator/prometheus-operator/blob/547fea903e3df86272573e20c3b1479d0557a1a3/pkg/assets/store.go for the implementation).
PodMonitors also let you set the serverName you wish to use when validating, which means you don't strictly need the IP address, but I can also see the benefit of including it in the cert.
I've added an example of how a PodMonitor can be used to trust the CA and validate the serverName.
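A sketch of what such a PodMonitor stanza might look like; the Secret name and serverName follow the ServiceMonitor patch shown further down, and may differ from the example actually added to the docs:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: cert-manager
      app.kubernetes.io/component: "controller"
  podMetricsEndpoints:
  - port: http-metrics   # port name exposed by the Helm chart (see discussion below)
    path: /metrics
    tlsConfig:
      # Only the CA key of the Secret is referenced, so the private key is
      # never copied into the Prometheus Pod.
      ca:
        secret:
          name: cert-manager-metrics-ca   # assumed Secret name
          key: tls.crt
      serverName: cert-manager-metrics
```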
Thanks for adding that example, and thanks for helping me get Prometheus set up to test this.
I finally got it working and can see the cert-manager targets in the Prometheus Web UI, using HTTPS URLs.
In my case I patched the ServiceMonitor resource which is installed by the Helm chart:
```
$ git diff
diff --git a/service-monitor.live.yaml b/service-monitor.live.yaml
index b77ffb52f..1aa6a3992 100644
--- a/service-monitor.live.yaml
+++ b/service-monitor.live.yaml
@@ -22,5 +22,12 @@ spec:
   - targetPort: 9402
     path: /metrics
     interval: 60s
+    scheme: https
     scrapeTimeout: 30s
     honorLabels: false
+    tlsConfig:
+      ca:
+        secret:
+          key: tls.crt
+          name: cert-manager-metrics-ca
+      serverName: cert-manager-metrics
```
```
filesystem:
  certFile: "/path/to/cert.pem"
  keyFile: "/path/to/key.pem"
```
I haven't tried this yet, but how would you get these files into the Pod filesystem? Using the `.volumes` and `.mounts` Helm values?
And if so, how would the user manage those certificates and keys?
It seems like the answer should be to use cert-manager itself to create a Secret and mount that, but there will be a bootstrapping problem.
So maybe we should explain that here in the docs.
You can indeed pre-create them and use volumes and mounts to add them to the filesystem.
I also worked out that you can use cert-manager itself without having a bootstrap problem (kind of: the metrics endpoint will not work for a short while).
In Kubernetes you can mark a Secret volume as optional, which means the Pod is not blocked from starting if the Secret is missing. cert-manager can then be used to provision the Secret; the kubelet would eventually notice that the Secret now exists and mount it in the correct place, where cert-manager would pick it up.
Not sure we want to recommend that approach though.
I can certainly add a comment saying you would need the certs mounted via a volume, with a "see the `.volumes` and `.mounts` Helm values for information on how to do this" line.
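For example, something along these lines, assuming the chart exposes `volumes` and `volumeMounts` values (the `.volumes` / `.mounts` referred to above) and using placeholder names:

```yaml
# Helm values sketch: mount a serving-certificate Secret that may not exist yet.
volumes:
- name: metrics-tls
  secret:
    secretName: cert-manager-metrics-tls  # placeholder; e.g. a Secret issued by cert-manager
    optional: true                        # the Pod can start before the Secret exists
volumeMounts:
- name: metrics-tls
  mountPath: /var/run/metrics-tls         # placeholder path
  readOnly: true
# certFile/keyFile in the metrics config would then point at
# /var/run/metrics-tls/tls.crt and /var/run/metrics-tls/tls.key.
```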
I've added a note that the certificate and key must be mounted into the pod
Please also add a note about this new feature to the release notes, linking to this new section for more information.
```
      app.kubernetes.io/instance: cert-manager
      app.kubernetes.io/component: "controller"
  podMetricsEndpoints:
  - port: http
```
I think this port name should be "http-metrics", here and in the example above.
```
$ kubectl -n cert-manager get deploy cert-manager -o yaml | grep -A6 ports
        ports:
        - containerPort: 9402
          name: http-metrics
          protocol: TCP
        - containerPort: 9403
          name: http-healthz
          protocol: TCP
```
Yeah, that's what the Helm chart names the port. I've updated the docs to reflect this.
```
      app.kubernetes.io/component: "controller"
  podMetricsEndpoints:
  - port: http
    honorLabels: true
```
I also added `scheme: https` because the API docs say that the default is `http`... but perhaps the default changes based on the existence of a `tlsConfig` stanza?
Ah yeah, added that to the docs as it's probably required.
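For illustration, the endpoint entry would then presumably end up looking something like this (a sketch combining the points above; the Secret name is taken from the earlier patch):

```yaml
podMetricsEndpoints:
- port: http-metrics
  honorLabels: true
  scheme: https   # explicit, since the API docs give http as the default
  tlsConfig:
    ca:
      secret:
        name: cert-manager-metrics-ca   # assumed Secret name
        key: tls.crt
    serverName: cert-manager-metrics
```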
/lgtm
/approve
I'll add the release note when I do the next alpha.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: wallrj.
Should this not be named `https-metrics` in the docs? :)