Need some help with SigNoz and OTel collector setup #6750

Open
dhairya137 opened this issue Jan 4, 2025 · 2 comments
Labels
community-edition-setup Issues related to installing & setting up community edition

Comments

@dhairya137

Current situation:

  • I have a local OTel collector running on the server
  • On the same server, I have set up self-hosted SigNoz with Docker
  • Currently I'm getting errors and no data is flowing into SigNoz

Here's my local OTel collector config:

extensions:
  health_check: {}

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true

processors:
  batch:
    send_batch_size: 500
    send_batch_max_size: 600
    timeout: 10s
  memory_limiter:
    check_interval: 1s
    limit_percentage: 75
    spike_limit_percentage: 15
  resourcedetection:
    detectors: [env, system]
    timeout: 2s

exporters:
  otlp:
    endpoint: 0.0.0.0:9000
    tls:
      insecure: true
  clickhouse:
    endpoint: tcp://localhost:9000
    database: signoz_traces
    username: default
    password: default123
    timeout: 10s

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
    logs:
      level: info

  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp, clickhouse]
    metrics:
      receivers: [otlp, hostmetrics]
      processors: [memory_limiter, resourcedetection, batch]
      exporters: [otlp, clickhouse]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp, clickhouse]

Main questions:

  1. What's the correct data flow path? Should it be:
    Option A: App → Local OTel collector → SigNoz
    OR
    Option B: App → Local OTel collector → SigNoz's Docker OTel collector → SigNoz

  2. If Option A is correct, what changes do I need in my local OTel collector config to send data directly to SigNoz's ClickHouse?

  3. If Option B is recommended, how should I configure the collectors to avoid port conflicts and ensure proper data flow?

Current errors I'm seeing:

       Exporting failed. Will retry the request after interval.        {"kind": "exporter", "data_type": "metrics", "name": "otlp", "error": "rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: http2: frame too large\"", "interval": "3.79851812s"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: panic: runtime error: invalid memory address or nil pointer dereference
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x625b9613ca3c]
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: goroutine 185 [running]:
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: go.opentelemetry.io/collector/pdata/pcommon.Value.Type(...)
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]:         go.opentelemetry.io/collector/pdata@v1.22.0/pcommon/value.go:183
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: go.opentelemetry.io/collector/pdata/pcommon.Value.AsString({0x0?, 0xc002684380?})
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]:         go.opentelemetry.io/collector/pdata@v1.22.0/pcommon/value.go:370 +0x1c
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter/internal.(*sumMetrics).insert.func1(0x0?)
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]:         github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter@v0.116.0/internal/sum_metrics.go:129 +0x365
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter/internal.doWithTx({0x625ba27ea808?, 0xc0006cb8f0?}, 0x0?, 0xc0017baf40)
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]:         github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter@v0.116.0/internal/metrics_model.go:210 +0xcd
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter/internal.(*sumMetrics).insert(0xc0014a3b60, {0x625ba27ea808, 0xc0006cb8f0}, 0xc001655520)
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]:         github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter@v0.116.0/internal/sum_metrics.go:105 +0xc9
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter/internal.InsertMetrics.func1({0x625ba2797e98?, 0xc0014a3b60?}, 0xc002684390)
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]:         github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter@v0.116.0/internal/metrics_model.go:100 +0x43
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: created by github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter/internal.InsertMetrics in goroutine 193
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]:         github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter@v0.116.0/internal/metrics_model.go:99 +0xcc
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC systemd[1]: otelcol-contrib.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC systemd[1]: otelcol-contrib.service: Failed with result 'exit-code'.
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC systemd[1]: otelcol-contrib.service: Scheduled restart job, restart counter is at 2.
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC systemd[1]: Started otelcol-contrib.service - OpenTelemetry Collector.
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.290+0530        info        service@v0.116.0/service.go:164        Setting up own telemetry...
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.290+0530        warn        service@v0.116.0/service.go:213        service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.290+0530        info        telemetry/metrics.go:70        Serving metrics        {"address": "0.0.0.0:8888", "metrics level": "Normal"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.291+0530        info        memorylimiter@v0.116.0/memorylimiter.go:151        Using percentage memory limiter        {"kind": "processor", "name": "memory_limiter", "pipeline": "logs", "total_memory_mib": 31459, "limit_percentage": 75, "spike_limit_percentage": 15}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.291+0530        info        memorylimiter@v0.116.0/memorylimiter.go:75        Memory limiter configured        {"kind": "processor", "name": "memory_limiter", "pipeline": "logs", "limit_mib": 23594, "spike_limit_mib": 4718, "check_interval": 1}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.293+0530        info        service@v0.116.0/service.go:230        Starting otelcol-contrib...        {"Version": "0.116.0", "NumCPU": 16}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.293+0530        info        extensions/extensions.go:39        Starting extensions...
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.295+0530        warn        grpc@v1.68.1/clientconn.go:1384        [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "0.0.0.0:9000", ServerName: "0.0.0.0:9000", }. Err: connection error: desc = "error reading server preface: http2: frame too large"        {"grpc_log": true}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.309+0530        info        internal/resourcedetection.go:126        began detecting resource information        {"kind": "processor", "name": "resourcedetection", "pipeline": "metrics"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.310+0530        info        internal/resourcedetection.go:140        detected resource information        {"kind": "processor", "name": "resourcedetection", "pipeline": "metrics", "resource": {"host.name":"abhiyanta-HP-285-Pro-G6-Microtower-PC","os.type":"linux"}}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.310+0530        warn        internal@v0.116.0/warning.go:40        Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks.        {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.310+0530        info        otlpreceiver@v0.116.0/otlp.go:112        Starting GRPC server        {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.310+0530        warn        internal@v0.116.0/warning.go:40        Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks.        {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.310+0530        info        otlpreceiver@v0.116.0/otlp.go:169        Starting HTTP server        {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.316+0530        warn        grpc@v1.68.1/clientconn.go:1384        [core] [Channel #6 SubChannel #7]grpc: addrConn.createTransport failed to connect to {Addr: "0.0.0.0:9000", ServerName: "0.0.0.0:9000", }. Err: connection error: desc = "error reading server preface: http2: frame too large"        {"grpc_log": true}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.317+0530        warn        grpc@v1.68.1/clientconn.go:1384        [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "0.0.0.0:9000", ServerName: "0.0.0.0:9000", }. Err: connection error: desc = "error reading server preface: http2: frame too large"        {"grpc_log": true}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.323+0530        info        service@v0.116.0/service.go:253        Everything is ready. Begin running and processing data.

Any guidance on the correct approach would be really helpful! 🙏


welcome bot commented Jan 4, 2025

Thanks for opening this issue. A team member should give feedback soon. In the meantime, feel free to check out the contributing guidelines.

@grandwizard28 added the community-edition-setup label on Jan 6, 2025

wanderer056 commented Jan 10, 2025

If your app and SigNoz are on the same machine, I recommend not running another local OTel collector. Why would you need one? Wouldn't this approach be simpler:
App (OTLP) → SigNoz OTel Collector

You just need to send your telemetry data to the SigNoz OTel Collector at the endpoint 0.0.0.0:4317, assuming you are using OTLP over gRPC. Everything else, such as writing to ClickHouse so the data can be queried, is handled by the SigNoz OTel collector. Just by doing this you can see the data being collected and view it in the SigNoz dashboard.
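For example, if your app runs under Docker Compose, it could be pointed at the collector with the standard OpenTelemetry SDK environment variables. This is only a sketch: the service name and image are placeholders, and from inside a container you will likely need the host's address rather than localhost:

services:
  my-app:                   # placeholder service name
    image: my-app:latest    # placeholder image
    environment:
      # Standard OpenTelemetry SDK variables; 4317 is the SigNoz
      # collector's OTLP gRPC receiver port. From inside a container,
      # replace localhost with the host's address.
      OTEL_EXPORTER_OTLP_ENDPOINT: http://localhost:4317
      OTEL_EXPORTER_OTLP_PROTOCOL: grpc
      OTEL_SERVICE_NAME: my-app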
Also, if you want to add things like the hostmetrics process scrapers given in the config above, you can add them to the SigNoz OTel collector config and restart it. The same goes for anything else you would otherwise put in a separate collector: do it in the SigNoz OTel collector itself, along the lines of the sketch below.
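A minimal sketch of that change, merged into the SigNoz collector's own config (the file name and the exact receiver/processor/exporter lists vary by SigNoz version, so treat this as an addition to what is already there, not a replacement):

receivers:
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu: {}
      memory: {}
      disk: {}
      filesystem: {}
      load: {}
      network: {}

service:
  pipelines:
    metrics:
      # add hostmetrics next to whatever receivers are already listed
      receivers: [otlp, hostmetrics]
      processors: [batch]                 # keep your existing processors
      exporters: [clickhousemetricswrite] # keep your existing exporters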

But if you still want a separate local OTel collector for some reason, then per your question the correct data flow path is:
Option B: App → Local OTel collector → SigNoz's Docker OTel collector → SigNoz (the final hop into storage is handled by the SigNoz collector itself)

To avoid the port conflict, I think you can change the ports of the OTLP gRPC and HTTP receivers. The config would look something like this:

extensions:
  health_check: {}

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4545 #Changed to avoid conflict
      http:
        endpoint: 0.0.0.0:4546 #Changed to avoid conflict
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true

processors:
  batch:
    send_batch_size: 500
    send_batch_max_size: 600
    timeout: 10s
  memory_limiter:
    check_interval: 1s
    limit_percentage: 75
    spike_limit_percentage: 15
  resourcedetection:
    detectors: [env, system]
    timeout: 2s

exporters:
  otlp:
    endpoint: 0.0.0.0:4317
    tls:
      insecure: true

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
    logs:
      level: info

  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp, hostmetrics]
      processors: [memory_limiter, resourcedetection, batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp]

For the receiver, the OTLP gRPC and HTTP ports are changed to 4545 and 4546 (remember to send telemetry data from your application to these new endpoints).
To export the data collected by the local collector, the otlp exporter endpoint is changed to 0.0.0.0:4317, which is the OTLP gRPC receiver endpoint of the SigNoz OTel collector.
In the pipelines we only need to forward telemetry to the SigNoz OTel collector over OTLP, so the clickhouse exporter is removed; it isn't needed. This also matches the errors in your log: your otlp (gRPC) exporter pointed at 0.0.0.0:9000, which is ClickHouse's native TCP port rather than a gRPC endpoint, hence "error reading server preface: http2: frame too large", and the panic in the stack trace comes from the contrib clickhouse exporter itself. Note also that the contrib clickhouse exporter writes its own table schema, which the SigNoz UI does not read, so exporting straight to ClickHouse would not surface data in SigNoz anyway. As a general rule, any exporter referenced in a service pipeline must first be defined in the top-level exporters section.
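If data still doesn't appear after these changes, a quick way to confirm that telemetry is reaching the local collector at all is to temporarily wire in the debug exporter (bundled with otelcol-contrib) and watch the collector's own logs; a sketch for the traces pipeline:

exporters:
  debug:
    verbosity: detailed    # prints every received span to the collector log

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp, debug]   # drop debug again once data flows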
