[Processor/k8sattributes] Pod/Node attributes are not attaching with metrics generated by Kubeletstats receiver #34075
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself. |
Having the same issue, pasting here a minimal configuration.
|
The most common reason this happens is that the Pod IP the k8sattributesprocessor gets from the API doesn't match the incoming request's IP. If these two values don't match, the IP cannot be used as the association source. |
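As an illustration of that association mechanism (a sketch, not a verified fix): metrics produced by the kubeletstats receiver are generated inside the collector itself, so there is usually no per-pod connection IP to match on, while the k8s.pod.uid resource attribute the receiver sets can still be matched:

    processors:
      k8sattributes:
        pod_association:
          # kubeletstats stamps k8s.pod.uid on its resources, so UID matching
          # works even when there is no per-pod connection IP to compare against
          - sources:
              - from: resource_attribute
                name: k8s.pod.uid
          - sources:
              - from: resource_attribute
                name: k8s.pod.ip
          - sources:
              - from: connection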
Thank you for chiming in @TylerHelmuth. |
Hey @marcoboi, you can increase the verbosity of the debug exporter:
debug:
  verbosity: detailed
In general I would suggest consulting what the official Helm chart defines for this processor: https://github.com/open-telemetry/opentelemetry-helm-charts/blob/main/charts/opentelemetry-collector/templates/_config.tpl#L195 Since this Helm chart is widely used, I assume its config would cover your case as well. |
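A minimal sketch of how that debug exporter can be wired into the metrics pipeline so the detailed output actually shows up (the other component names here are placeholders for whatever the pipeline already uses):

    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        metrics:
          receivers: [kubeletstats]
          processors: [k8sattributes]
          exporters: [debug]   # add next to the existing exporter to inspect resource attributes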
Thank you @ChrsMark for the guidance. As for the configuration, I took a look and my configuration seems reasonable. An excerpt of the logs is below.
|
@marcoboi could you try defining a more complete configuration?
The configuration you have provided is the following:
k8sattributes:
  filter:
    node_from_env_var: KUBE_NODE_NAME
  auth_type: serviceAccount
  passthrough: false
While the one I suggested from the Helm chart is:
k8sattributes:
  filter:
    node_from_env_var: K8S_NODE_NAME
  passthrough: false
  pod_association:
    - sources:
        - from: resource_attribute
          name: k8s.pod.ip
    - sources:
        - from: resource_attribute
          name: k8s.pod.uid
    - sources:
        - from: connection
  extract:
    metadata:
      - "k8s.namespace.name"
      - "k8s.deployment.name"
      - "k8s.statefulset.name"
      - "k8s.daemonset.name"
      - "k8s.cronjob.name"
      - "k8s.job.name"
      - "k8s.node.name"
      - "k8s.pod.name"
      - "k8s.pod.uid"
      - "k8s.pod.start_time"
    labels:
      - tag_name: $$1
        key_regex: (.*)
        from: pod
    annotations:
      - tag_name: $$1
        key_regex: (.*)
        from: pod
Also, could you define what you expect to see here and what you actually see?
|
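A side note for anyone reproducing the Helm-chart-style config above: node_from_env_var only works if the variable is actually injected into the collector pod, which the chart does via the downward API. A sketch of the relevant pod-spec excerpt, assuming the K8S_NODE_NAME variable name used above:

    # collector DaemonSet pod spec excerpt
    env:
      - name: K8S_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName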
Hi @ChrsMark, you're right, let me clearly state the problem.
Setup
Expectation
Current status
Thank you again for your help and please let me know if I can provide any further details. |
Following up on my last message, after adding the
So it seems (correct me if I'm wrong there) that the attributes are sourced but not attached to the metrics. |
It seems that not even the attributes already provided with
It seems there's an open issue attended by @TylerHelmuth that points in the same direction. |
Thanks @marcoboi. I'm not 100% sure how you could verify whether that's an issue with the exporter or whether something else happens at ingest time on the back-end's side. What I would try here is to send the data to another collector and export it to the console using the debug exporter. |
Thanks @ChrsMark, I'll try exporting the data and capturing it again as you suggest. |
As suggested by @ChrsMark, I've tried chaining two collectors (a source and a target collector) to:
The attributes in the source collector are passed on to the target collector:
source collector
target collector
Considerations
I'm attaching the configuration used for this experiment.
Source collector
Target collector
|
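The attached configurations did not survive in this thread; as a rough sketch of what such a source-to-target chain can look like (endpoints, ports and component choices are assumptions):

    # source collector: enrich with k8sattributes and forward over OTLP
    exporters:
      otlp:
        endpoint: target-collector:4317
        tls:
          insecure: true
    service:
      pipelines:
        metrics:
          receivers: [kubeletstats]
          processors: [k8sattributes]
          exporters: [otlp]

    # target collector: receive OTLP and print exactly what arrived
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          exporters: [debug]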
Am I maybe missing something about the relationship between attributes and labels? |
I think that's a question for the Prometheus project, to figure out the recommended way to ship data from a Collector to Prometheus and what is supported there. You can create a separate issue if needed to avoid overloading the current one (the issue is unrelated to the k8sattributes processor :)). |
Thanks, I think it's clear we're not dealing with an issue related to the k8sattributes processor. |
For the record, I opened an issue on the Prometheus repo to understand this better. |
I finally managed to get those attributes to attach to the metrics and show as labels in Prometheus and Grafana.
From what I understand, while it is true that attributes on OTel metrics will end up as labels in Prometheus, some restrictions exist on what those attribute names can look like. I'm pasting here the full manifest of a working Collector that:
I'd be happy to contribute to the documentation to clarify this point. |
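The manifest itself did not make it into this thread. Purely as an illustration of one way to get the k8sattributes-added resource attributes to show up as Prometheus labels (not necessarily what that manifest did), here is a sketch; the remote-write endpoint and the trimmed extract list are assumptions:

    receivers:
      kubeletstats:
        auth_type: serviceAccount
        collection_interval: 30s
        endpoint: https://${env:K8S_NODE_NAME}:10250
        insecure_skip_verify: true
    processors:
      k8sattributes:
        filter:
          node_from_env_var: K8S_NODE_NAME
        passthrough: false
        pod_association:
          - sources:
              - from: resource_attribute
                name: k8s.pod.uid
        extract:
          metadata:
            - k8s.namespace.name
            - k8s.pod.name
            - k8s.node.name
    exporters:
      prometheusremotewrite:
        endpoint: http://prometheus-server/api/v1/write
        resource_to_telemetry_conversion:
          enabled: true   # resource attributes become labels; dots are sanitized (k8s.pod.name -> k8s_pod_name)
    service:
      pipelines:
        metrics:
          receivers: [kubeletstats]
          processors: [k8sattributes]
          exporters: [prometheusremotewrite]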
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself. |
Hey @dmitryax @rmfitzpatrick @fatsheep9146 @TylerHelmuth, sorry to trouble y'all, but I wanted to share that I think this issue is still relevant. I've hit the same issue in a recent deployment. |
As I've continued working on this deployment, the only thing that has worked for me is enabling resource_to_telemetry_conversion on the exporter. |
For example (source):
exporters:
  prometheusremotewrite:
    endpoint: "https://my-cortex:7900/api/v1/push"
    resource_to_telemetry_conversion:
      enabled: true # Convert resource attributes to metric labels
With that configuration in place, I get similar behavior to using the
I'm sure there are scenarios this solution doesn't cover, but for my setup, this is a silver bullet. @lazyboson, your example was using the |
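For setups where Prometheus scrapes the collector instead of receiving remote writes, the pull-based prometheus exporter exposes the same option (a sketch; the listen address is an assumption):

    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889
        resource_to_telemetry_conversion:
          enabled: true   # promote resource attributes (e.g. k8s.pod.name) to metric labels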
Component(s)
processor/k8sattributes
What happened?
Description
I am running the OTel Collector on the k8s cluster as a DaemonSet. I am generating node and pod metrics using the kubeletstatsreceiver. I can't see any labels attached to the metrics in Grafana.
Steps to Reproduce
Expected Result
Actual Result
Collector version
0.104.0
Environment information
Environment
OS: Ubuntu
Compiler (if manually compiled): NA
OpenTelemetry Collector configuration
Log output
Additional context
I have also tried with -
k8sattributes:
and
k8sattributes:
k8sattributes/2:
All RBAC permissions are granted and I can't see any permission issues in the collector logs.