
Errors in the collector when configuring Prometheus as the receiver and Data Prepper as the exporter. #4299

Open
sky9700 opened this issue Mar 19, 2024 · 4 comments
Labels
question Further information is requested

Comments

sky9700 commented Mar 19, 2024

I configured the OpenTelemetry Collector with Prometheus as the receiver and Data Prepper as the exporter. However, there are no errors in the collector, yet no data is showing up in the OpenSearch dashboard. What should I do? What might I have done wrong?
I'll attach the Data Prepper configmap YAML and the OpenTelemetry Collector configmap YAML below.

[data prepper configmap yaml]

    otel-metrics-pipeline-2:
      #      workers: 5
      delay: 10
      source:
        http_source:
          ssl: false
          port: 21891
      buffer:
        bounded_blocking:
          buffer_size: 12800
          batch_size: 1024
      sink:
        - opensearch:
            hosts: ["https://opensearch-cluster-master.opensearch.svc:9200"]
            insecure: true
            username: admin
            password: admin
            index_type: custom
            index: proms-%{yyyy.MM.dd}

[otel collector configmap yaml]

data:
  relay: |
    exporters:
      debug: {}
      logging: {}
      otlp/metrics:
        endpoint: data-prepper-headless.opensearch.svc.cluster.local:21891
        tls:
          insecure: true
    extensions:
      health_check:
        endpoint: ${env:K8S_POD_IP}:13133
    processors:
      memory_limiter:
        check_interval: 5s
        limit_percentage: 80
        spike_limit_percentage: 25
    receivers:
      prometheus/internal:
        config:
          scrape_configs:
          - job_name: apps
            kubernetes_sd_configs:
            - role: pod
              selectors:
              - role: pod
                # only scrape data from pods running on the same node as collector
                field: "spec.nodeName=${NODE_NAME}"
            relabel_configs:
            # scrape pods annotated with "prometheus.io/scrape: true"
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              regex: "true"
              action: keep
              # read the port from "prometheus.io/port: <port>" annotation and update scraping address accordingly
            - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
              action: replace
              target_label: __address__
              regex: ([^:]+)(?::\d+)?;(\d+)
              # escaped $1:$2
              replacement: $$1:$$2
            - source_labels: [__meta_kubernetes_namespace]
              action: replace
              target_label: kubernetes_namespace
            - source_labels: [__meta_kubernetes_pod_name]
              action: replace
              target_label: kubernetes_pod_name
          - job_name: 'otel-collector'
            scrape_interval: 5s
            static_configs:
            - targets: ['prometheus-k8s.monitoring.svc:9090']
            metric_relabel_configs:
              - source_labels: [ __name__ ]
                regex: '.*grpc_io.*'
                action: drop
      otlp:
        protocols:
          grpc:
            endpoint: ${env:K8S_POD_IP}:4317
          http:
            endpoint: ${env:K8S_POD_IP}:4318
    service:
      extensions:
      - health_check
      pipelines:
        metrics/internal:
          exporters:
          - debug
          - logging
          - otlp/metrics
          processors:
          - memory_limiter
@wbeckler

@opensearch-project/admin please redirect this to https://github.com/opensearch-project/data-prepper

bbarani transferred this issue from opensearch-project/OpenSearch-Dashboards Mar 19, 2024
@dlvenable
Member

@sky9700 ,

Your Data Prepper configuration uses the http source. You will need to use the otel_metrics_source instead. See https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sources/otel-metrics-source/ for the documentation.
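
For example, the source block of the pipeline above would become something like the following (keeping the same port as your original config; a rough sketch, not a verified configuration):

      source:
        otel_metrics_source:
          ssl: false
          port: 21891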

dlvenable added the question (Further information is requested) label and removed the untriaged label Mar 19, 2024
@kkondaka
Collaborator

Here is an example config

otel-metric-pipeline:
  source:
    otel_metrics_source:
      ssl: false
  processor:
    - otel_metrics:
  sink:
    - opensearch:
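
Combining that skeleton with the buffer and sink settings from the configmap posted above gives a full pipeline along these lines (port and OpenSearch settings copied from that config; an untested sketch):

otel-metrics-pipeline-2:
  delay: 10
  source:
    otel_metrics_source:
      ssl: false
      port: 21891
  buffer:
    bounded_blocking:
      buffer_size: 12800
      batch_size: 1024
  processor:
    - otel_metrics:
  sink:
    - opensearch:
        hosts: ["https://opensearch-cluster-master.opensearch.svc:9200"]
        insecure: true
        username: admin
        password: admin
        index_type: custom
        index: proms-%{yyyy.MM.dd}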

sky9700 (Author) commented Mar 20, 2024

Thank you for your response. Thanks to you, data is now showing up in the OpenSearch dashboard.

Currently, all of the data is grouped under the name field when I query it, but I need the metrics split into distinct fields so I can build dashboards.
What should I configure in the otel collector for this?
I would appreciate your advice. I'm also attaching the current otel collector configmap YAML. Thank you so much.

[otel collector configmap yaml]
data:
  relay: |
    exporters:
      otlp/metrics:
        endpoint: data-prepper-headless.opensearch.svc:21891
        tls:
          insecure: true
    extensions:
      health_check:
        endpoint: ${env:K8S_POD_IP}:13133
    processors:
      memory_limiter:
        check_interval: 5s
        limit_percentage: 80
        spike_limit_percentage: 25
    receivers:
      prometheus/internal:
        config:
          scrape_configs:
          - job_name: apps
            kubernetes_sd_configs:
            - role: pod
              selectors:
              - role: pod
                # only scrape data from pods running on the same node as collector
                field: "spec.nodeName=${NODE_NAME}"
            relabel_configs:
            # scrape pods annotated with "prometheus.io/scrape: true"
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              regex: "true"
              action: keep
              # read the port from "prometheus.io/port: <port>" annotation and update scraping address accordingly
            - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
              action: replace
              target_label: __address__
              regex: ([^:]+)(?::\d+)?;(\d+)
              # escaped $1:$2
              replacement: $$1:$$2
            - source_labels: [__meta_kubernetes_namespace]
              action: replace
              target_label: kubernetes_namespace
            - source_labels: [__meta_kubernetes_pod_name]
              action: replace
              target_label: kubernetes_pod_name
          - job_name: 'otel-collector'
            scrape_interval: 5s
            static_configs:
            - targets: ['prometheus-k8s.monitoring.svc:9090']
            metric_relabel_configs:
              - source_labels: [ __name__ ]
                regex: '.*grpc_io.*'
                action: drop
      otlp:
        protocols:
          grpc:
            endpoint: ${env:K8S_POD_IP}:4317
          http:
            endpoint: ${env:K8S_POD_IP}:4318
    service:
      extensions:
      - health_check
      pipelines:
        metrics/internal:
          exporters:
          - otlp/metrics
          processors:
          - memory_limiter
          receivers:
          - prometheus/internal
          - otlp
