
Beginner Issues #113

Open
mgg1010 opened this issue Dec 30, 2022 · 3 comments
mgg1010 commented Dec 30, 2022

Hi,

First, I'm seeing a lot of Go-related metrics in the output, e.g.:

go_gc_duration_seconds{quantile="0"} 3.2688e-05

I'm new to Prometheus, but doesn't spamming it with hundreds of extra metrics cause some memory use? Or should I configure Prometheus to ignore anything starting with 'go_'?

Am I missing something?

Second, I'm sending messages like this:

Topic: watchdog/esp-garage
Payload: {"message": "Alive"}

I get this on the output:

received_messages{status="storeError",topic="watchdog/esp-garage"} 34

Previously I sent non-JSON messages and it was happier, but I still didn't see a metric for my message.

My configuration is below - any thoughts?

topic_path: watchdog/#
device_id_regex: "(.*/)?(?P<deviceid>.*)"
qos: 0
cache:
  timeout: 24h
json_parsing:
  - prom_name: watchdog
    mqtt_name: message
    help: Watchdog message
    type: gauge

Thanks

Martin Green

wmoss commented Jan 1, 2023

Regarding the error, I think you want to put the specific message configuration under metrics: rather than json_parsing:. It would be helpful if you posted the logs from the service, though, as they would have more information.

Regarding the go_... metrics, I was actually noticing the same thing, @hikhvar. Thoughts on adding a configuration flag to remove them?

hikhvar (Owner) commented Jan 2, 2023

Hey,
it's quite normal in the Prometheus ecosystem to have those runtime-specific go_... metrics, and I don't see any problem in having them. The canonical mechanism to drop them is relabeling in the Prometheus server: https://medium.com/quiq-blog/prometheus-relabeling-tricks-6ae62c56cbda
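For example, a sketch of such a rule in the Prometheus scrape config (assuming a job already scraping the exporter on its default port) that drops every Go runtime series at scrape time:

```yaml
scrape_configs:
  - job_name: mqtt2prometheus
    static_configs:
      - targets: ["localhost:9641"]
    metric_relabel_configs:
      # Drop every series whose metric name starts with go_
      - source_labels: ["__name__"]
        regex: "go_.*"
        action: drop
```

metric_relabel_configs runs after the scrape, so the exporter itself needs no changes.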

Regarding your initial problem:
The payload you presented doesn't have a number as its value. By default, mqtt2prometheus assumes that values are numbers, because Prometheus metrics can only have numeric values. If you want to map a string like this one:

{"message": "Alive"}

you must define a mapping. Example:

topic_path: watchdog/#
device_id_regex: "(.*/)?(?P<deviceid>.*)"
qos: 0
cache:
  timeout: 24h
json_parsing:
  - prom_name: watchdog
    mqtt_name: message
    help: Watchdog message
    type: gauge
    string_value_mapping:
      map:
        Alive: 1
        Dead: 0
      # Metric value to use if a match cannot be found in the map above.
      # If not specified, a parsing error will occur.
      error_value: -1
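With that mapping in place, the string payload should be converted to a numeric gauge, so a scrape of the exporter would show something roughly like (labels elided):

```
watchdog{...} 1
```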

wmoss commented Jan 4, 2023

Yeah, I ended up doing something similar in my Prometheus config, and I guess that's just as good a solution as adding a flag, probably easier. If it's helpful, my config looks like this:

scrape_configs:
  - job_name: mqtt2prometheus
    scrape_interval: 10s
    static_configs:
      - targets:
        - "localhost:9641"
    metric_relabel_configs:
      - source_labels: ["__name__"]
        regex: <metrics to keep here>
        action: keep
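For instance (the metric names below are hypothetical; substitute whatever the exporter actually emits for you), the keep rule could be filled in as:

```yaml
metric_relabel_configs:
  - source_labels: ["__name__"]
    # Keep only the listed metric names; all other series are dropped.
    regex: "watchdog|received_messages"
    action: keep
```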

@mgg1010 it seems like this wouldn't be a terrible entry for the FAQ in the main README, if you wanted to add the answer to your question there.
