With this (quite basic) values file:
---
cluster:
  # -- The name for this cluster.
  # @section -- Cluster
  name: "{{ requiredEnv "CLUSTER_NAME" }}"

#
# Global settings
#
global:
  # -- The specific platform for this cluster. Will enable compatibility for some platforms. Supported options: (empty) or "openshift".
  # @section -- Global Settings
  platform: ""

  # -- How frequently to scrape metrics.
  # @section -- Global Settings
  scrapeInterval: 60s

  # -- Sets the max_cache_size for every prometheus.relabel component. ([docs](https://grafana.com/docs/alloy/latest/reference/components/prometheus.relabel/#arguments))
  # This should be at least 2x-5x your largest scrape target or samples appended rate.
  # @section -- Global Settings
  maxCacheSize: 100000

#
# Destinations
#
# -- The list of destinations where telemetry data will be sent.
# See the [destinations documentation](https://github.com/grafana/k8s-monitoring-helm/blob/main/charts/k8s-monitoring/docs/destinations/README.md) for more information.
# @section -- Destinations
destinations:
  - name: GrafanaCloudMetrics
    type: prometheus
    url: "{{ requiredEnv "GRAFANA_CLOUD_MIMIR_HOST" }}/api/prom/push"
    auth:
      type: basic
      username: "{{ requiredEnv "GRAFANA_CLOUD_MIMIR_USER" }}"
      password: "{{ requiredEnv "GRAFANA_CLOUD_TOKEN" }}"

  - name: GrafanaCloudLogs
    type: loki
    url: "{{ requiredEnv "GRAFANA_CLOUD_LOKI_HOST" }}/loki/api/v1/push"
    auth:
      type: basic
      username: "{{ requiredEnv "GRAFANA_CLOUD_LOKI_USER" }}"
      password: "{{ requiredEnv "GRAFANA_CLOUD_TOKEN" }}"
    #tenantId: "{{ requiredEnv "GRAFANA_CLOUD_INSTANCE_ID" }}"

  - name: GrafanaCloudOTLP
    type: otlp
    protocol: http
    url: "{{ requiredEnv "GRAFANA_CLOUD_OTLP_HOST" }}/otlp"
    tenantId: "{{ requiredEnv "GRAFANA_CLOUD_INSTANCE_ID" }}"
    authMode: basic # see https://github.com/grafana/k8s-monitoring-helm/issues/844
    auth:
      #type: bearerToken # or none or basic
      #bearerToken: "{{ requiredEnv "GRAFANA_CLOUD_TOKEN" }}"
      type: basic
      username: "{{ requiredEnv "GRAFANA_CLOUD_INSTANCE_ID" }}"
      password: "{{ requiredEnv "GRAFANA_CLOUD_TOKEN" }}"
    metrics:
      enabled: true
    logs:
      enabled: true
    traces:
      enabled: true

#
# Features
#
# -- Cluster Monitoring enables observability and monitoring for your Kubernetes Cluster itself.
# Requires a destination that supports metrics.
# To see the valid options, please see the [Cluster Monitoring feature documentation](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/feature-cluster-metrics).
# @default -- Disabled
# @section -- Features - Cluster Metrics
clusterMetrics:
  # -- Enable gathering Kubernetes Cluster metrics.
  # @section -- Features - Cluster Metrics
  enabled: true

  # -- The destinations where cluster metrics will be sent. If empty, all metrics-capable destinations will be used.
  # @section -- Features - Cluster Metrics
  destinations: []

  # -- Which collector to assign this feature to. Do not change this unless you are sure of what you are doing.
  # @section -- Features - Cluster Metrics
  # @ignored
  collector: alloy-metrics

# -- Cluster events.
# Requires a destination that supports logs.
# To see the valid options, please see the [Cluster Events feature documentation](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/feature-cluster-events).
# @default -- Disabled
# @section -- Features - Cluster Events
clusterEvents:
  # -- Enable gathering Kubernetes Cluster events.
  # @section -- Features - Cluster Events
  enabled: true

  # -- The destinations where cluster events will be sent. If empty, all logs-capable destinations will be used.
  # @section -- Features - Cluster Events
  destinations: []

  # -- Which collector to assign this feature to. Do not change this unless you are sure of what you are doing.
  # @section -- Features - Cluster Events
  # @ignored
  collector: alloy-singleton

# -- Pod logs.
# Requires a destination that supports logs.
# To see the valid options, please see the [Pod Logs feature documentation](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/feature-pod-logs).
# @default -- Disabled
# @section -- Features - Pod Logs
podLogs:
  # -- Enable gathering Kubernetes Pod logs.
  # @section -- Features - Pod Logs
  enabled: true

  # -- The destinations where logs will be sent. If empty, all logs-capable destinations will be used.
  # @section -- Features - Pod Logs
  destinations: []

  collector: alloy-logs

# -- Application Observability.
# Requires destinations that supports metrics, logs, and traces.
# To see the valid options, please see the [Application Observability feature documentation](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/feature-application-observability).
# @default -- Disabled
# @section -- Features - Application Observability
applicationObservability:
  # -- Enable gathering Kubernetes Pod logs.
  # @section -- Features - Application Observability
  enabled: true

  receivers:
    http:
      enabled: true

  # -- The destinations where application data will be sent. If empty, all capable destinations will be used.
  # @section -- Features - Application Observability
  destinations: []

  # -- Which collector to assign this feature to. Do not change this unless you are sure of what you are doing.
  # @section -- Features - Application Observability
  # @ignored
  collector: alloy-receiver

# -- Annotation Autodiscovery enables gathering metrics from Kubernetes Pods and Services discovered by special annotations.
# Requires a destination that supports metrics.
# To see the valid options, please see the [Annotation Autodiscovery feature documentation](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/feature-annotation-autodiscovery).
# @default -- Disabled
# @section -- Features - Annotation Autodiscovery
annotationAutodiscovery:
  # -- Enable gathering metrics from Kubernetes Pods and Services discovered by special annotations.
  # @section -- Features - Annotation Autodiscovery
  enabled: true

  # -- The destinations where cluster metrics will be sent. If empty, all metrics-capable destinations will be used.
  # @section -- Features - Annotation Autodiscovery
  destinations: []

  # -- Which collector to assign this feature to. Do not change this unless you are sure of what you are doing.
  # @section -- Features - Annotation Autodiscovery
  # @ignored
  collector: alloy-metrics

# -- Prometheus Operator Objects enables the gathering of metrics from objects like Probes, PodMonitors, and
# ServiceMonitors. Requires a destination that supports metrics.
# To see the valid options, please see the
# [Prometheus Operator Objects feature documentation](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/feature-prometheus-operator-objects).
# @default -- Disabled
# @section -- Features - Prometheus Operator Objects
prometheusOperatorObjects:
  # -- Enable gathering metrics from Prometheus Operator Objects.
  # @section -- Features - Prometheus Operator Objects
  enabled: false

  # -- The destinations where metrics will be sent. If empty, all metrics-capable destinations will be used.
  # @section -- Features - Prometheus Operator Objects
  destinations: []

  # -- Which collector to assign this feature to. Do not change this unless you are sure of what you are doing.
  # @section -- Features - Prometheus Operator Objects
  # @ignored
  collector: alloy-metrics

# -- Profiling enables gathering profiles from applications.
# Requires a destination that supports profiles.
# To see the valid options, please see the [Profiling feature documentation](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/feature-profiling).
# @default -- Disabled
# @section -- Features - Profiling
profiling:
  # -- Enable gathering profiles from applications.
  # @section -- Features - Profiling
  enabled: false

  # -- The destinations where profiles will be sent. If empty, all profiles-capable destinations will be used.
  # @section -- Features - Profiling
  destinations: []

  # -- Which collector to assign this feature to. Do not change this unless you are sure of what you are doing.
  # @section -- Features - Profiling
  # @ignored
  collector: alloy-profiles

# -- Service Integrations enables gathering telemetry data for common services and applications deployed to Kubernetes.
# To see the valid options, please see the [Service Integrations documentation](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/feature-integrations).
# @default -- No integrations enabled
# @section -- Features - Service Integrations
integrations:
  # -- Enable Service Integrations.
  # @section -- Features - Service Integrations
  enabled: true

  # -- The destinations where cluster events will be sent. If empty, all logs-capable destinations will be used.
  # @section -- Features - Service Integrations
  destinations: []

  alloy:
    instances:
      - name: alloy-receivers
        labelSelectors:
          app.kubernetes.io/name: alloy-receiver

  # -- Which collectors to assign this feature to. Do not change this unless you are sure of what you are doing.
  # @section -- Features - Service Integrations
  # @ignored
  collector: alloy-metrics

# -- Self-reporting creates a single metric and log that reports anonymized information about how this Helm chart was
# configured. It reports features enabled, destinations types used, and alloy instances enabled. It does not report any
# actual telemetry data, credentials or configuration, or send any data to any destination other than the ones
# configured above.
# @section -- Features - Self-reporting
selfReporting:
  # -- Enable Self-reporting.
  # @section -- Features - Self-reporting
  enabled: true

  # -- How frequently to generate self-report metrics. This does utilize the global scrapeInterval setting.
  # @section -- Features - Self-reporting
  scrapeInterval: 5m

#
# Collectors (Alloy instances)
#
# An Alloy instance for collecting metrics.
alloy-metrics:
  # -- Deploy the Alloy instance for collecting metrics.
  # @section -- Collectors - Alloy Metrics
  enabled: true

  # -- Extra Alloy configuration to be added to the configuration file.
  # @section -- Collectors - Alloy Metrics
  extraConfig: ""

  # Remote configuration from a remote config server.
  remoteConfig:
    # -- Enable fetching configuration from a remote config server.
    # @section -- Collectors - Alloy Metrics
    enabled: false

    # -- The URL of the remote config server.
    # @section -- Collectors - Alloy Metrics
    url: ""

    auth:
      # -- The type of authentication to use for the remote config server.
      # @section -- Collectors - Alloy Metrics
      type: "none"

      # -- The username to use for the remote config server.
      # @section -- Collectors - Alloy Metrics
      username: ""

      # -- The key for storing the username in the secret.
      # @section -- Collectors - Alloy Metrics
      usernameKey: "username"

      # -- Raw config for accessing the password.
      # @section -- Collectors - Alloy Metrics
      usernameFrom: ""

      # -- The password to use for the remote config server.
      # @section -- Collectors - Alloy Metrics
      password: ""

      # -- The key for storing the password in the secret.
      # @section -- Collectors - Alloy Metrics
      passwordKey: "password"

      # -- Raw config for accessing the password.
      # @section -- Collectors - Alloy Metrics
      passwordFrom: ""

    secret:
      # -- Whether to create a secret for the remote config server.
      # @section -- Collectors - Alloy Metrics
      create: true

      # -- If true, skip secret creation and embed the credentials directly into the configuration.
      # @section -- Collectors - Alloy Metrics
      embed: false

      # -- The name of the secret to create.
      # @section -- Collectors - Alloy Metrics
      name: ""

      # -- The namespace for the secret.
      # @section -- Collectors - Alloy Metrics
      namespace: ""

    # -- (string) The unique identifier for this Alloy instance.
    # @default -- `<cluster>-<namespace>-<pod-name>`
    # @section -- Collectors - Alloy Metrics
    id: ""

    # -- The frequency at which to poll the remote config server for updates.
    # @section -- Collectors - Alloy Metrics
    pollFrequency: 5m

    # -- Attributes to be added to this collector when requesting configuration.
    # @section -- Collectors - Alloy Metrics
    extraAttributes: {}

  logging:
    # -- Level at which Alloy log lines should be written.
    # @section -- Collectors - Alloy Metrics
    level: info

    # -- Format to use for writing Alloy log lines.
    # @section -- Collectors - Alloy Metrics
    format: logfmt

  liveDebugging:
    # -- Enable live debugging for the Alloy instance.
    # Requires stability level to be set to "experimental".
    # @section -- Collectors - Alloy Metrics
    enabled: false

  # @ignored
  alloy:
    configMap: {create: false}

    # Enable clustering to ensure that scraping is distributed across all instances.
    # @ignored
    clustering:
      name: alloy-metrics
      enabled: true

    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
        add: ["CHOWN", "DAC_OVERRIDE", "FOWNER", "FSETID", "KILL", "SETGID", "SETUID", "SETPCAP", "NET_BIND_SERVICE", "NET_RAW", "SYS_CHROOT", "MKNOD", "AUDIT_WRITE", "SETFCAP"]
      seccompProfile:
        type: "RuntimeDefault"

  controller:
    # -- The type of controller to use for the Alloy Metrics instance.
    # @section -- Collectors - Alloy Metrics
    type: statefulset

    # -- The number of replicas for the Alloy Metrics instance.
    # @section -- Collectors - Alloy Metrics
    replicas: 1

    # @ignored
    nodeSelector:
      kubernetes.io/os: linux

    # @ignored
    podAnnotations:
      k8s.grafana.com/logs.job: integrations/alloy

  # Skip installation of the Grafana Alloy CRDs, since we don't use them in this chart
  # @ignored
  crds: {create: false}

# An Alloy instance for data sources required to be deployed on a single replica.
alloy-singleton:
  # -- Deploy the Alloy instance for data sources required to be deployed on a single replica.
  # @section -- Collectors - Alloy Singleton
  enabled: true

  # -- Extra Alloy configuration to be added to the configuration file.
  # @section -- Collectors - Alloy Singleton
  extraConfig: ""

  # Remote configuration from a remote config server.
  remoteConfig:
    # -- Enable fetching configuration from a remote config server.
    # @section -- Collectors - Alloy Singleton
    enabled: false

    # -- The URL of the remote config server.
    # @section -- Collectors - Alloy Singleton
    url: ""

    auth:
      # -- The type of authentication to use for the remote config server.
      # @section -- Collectors - Alloy Singleton
      type: "none"

      # -- The username to use for the remote config server.
      # @section -- Collectors - Alloy Singleton
      username: ""

      # -- The key for storing the username in the secret.
      # @section -- Collectors - Alloy Singleton
      usernameKey: "username"

      # -- Raw config for accessing the username.
      # @section -- Collectors - Alloy Singleton
      usernameFrom: ""

      # -- The password to use for the remote config server.
      # @section -- Collectors - Alloy Singleton
      password: ""

      # -- The key for storing the password in the secret.
      # @section -- Collectors - Alloy Singleton
      passwordKey: "password"

      # -- Raw config for accessing the password.
      # @section -- Collectors - Alloy Singleton
      passwordFrom: ""

    secret:
      # -- Whether to create a secret for the remote config server.
      # @section -- Collectors - Alloy Singleton
      create: true

      # -- If true, skip secret creation and embed the credentials directly into the configuration.
      # @section -- Collectors - Alloy Singleton
      embed: false

      # -- The name of the secret to create.
      # @section -- Collectors - Alloy Singleton
      name: ""

      # -- The namespace for the secret.
      # @section -- Collectors - Alloy Singleton
      namespace: ""

  logging:
    # -- Level at which Alloy log lines should be written.
    # @section -- Collectors - Alloy Singleton
    level: info

    # -- Format to use for writing Alloy log lines.
    # @section -- Collectors - Alloy Singleton
    format: logfmt

  liveDebugging:
    # -- Enable live debugging for the Alloy instance.
    # Requires stability level to be set to "experimental".
    # @section -- Collectors - Alloy Singleton
    enabled: false

  # @ignored
  alloy:
    # This chart is creating the configuration, so the alloy chart does not need to.
    configMap: {create: false}

    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
        add: ["CHOWN", "DAC_OVERRIDE", "FOWNER", "FSETID", "KILL", "SETGID", "SETUID", "SETPCAP", "NET_BIND_SERVICE", "NET_RAW", "SYS_CHROOT", "MKNOD", "AUDIT_WRITE", "SETFCAP"]
      seccompProfile:
        type: "RuntimeDefault"

  controller:
    # -- The type of controller to use for the Alloy Singleton instance.
    # @section -- Collectors - Alloy Singleton
    type: deployment

    # -- The number of replicas for the Alloy Singleton instance.
    # This should remain a single instance to avoid duplicate data.
    # @section -- Collectors - Alloy Singleton
    replicas: 1

    # @ignored
    nodeSelector:
      kubernetes.io/os: linux

    # @ignored
    podAnnotations:
      k8s.grafana.com/logs.job: integrations/alloy

  # Skip installation of the Grafana Alloy CRDs, since we don't use them in this chart
  # @ignored
  crds: {create: false}

# An Alloy instance for collecting log data.
alloy-logs:
  # -- Deploy the Alloy instance for collecting log data.
  # @section -- Collectors - Alloy Logs
  enabled: true

  # -- Extra Alloy configuration to be added to the configuration file.
  # @section -- Collectors - Alloy Logs
  extraConfig: ""

  # Remote configuration from a remote config server.
  remoteConfig:
    # -- Enable fetching configuration from a remote config server.
    # @section -- Collectors - Alloy Logs
    enabled: false

    # -- The URL of the remote config server.
    # @section -- Collectors - Alloy Logs
    url: ""

    auth:
      # -- The type of authentication to use for the remote config server.
      # @section -- Collectors - Alloy Logs
      type: "none"

      # -- The username to use for the remote config server.
      # @section -- Collectors - Alloy Logs
      username: ""

      # -- The key for storing the username in the secret.
      # @section -- Collectors - Alloy Logs
      usernameKey: "username"

      # -- Raw config for accessing the username.
      # @section -- Collectors - Alloy Logs
      usernameFrom: ""

      # -- The password to use for the remote config server.
      # @section -- Collectors - Alloy Logs
      password: ""

      # -- The key for storing the username in the secret.
      # @section -- Collectors - Alloy Logs
      passwordKey: "password"

      # -- Raw config for accessing the password.
      # @section -- Collectors - Alloy Logs
      passwordFrom: ""

    secret:
      # -- Whether to create a secret for the remote config server.
      # @section -- Collectors - Alloy Logs
      create: true

      # -- If true, skip secret creation and embed the credentials directly into the configuration.
      # @section -- Collectors - Alloy Logs
      embed: false

      # -- The name of the secret to create.
      # @section -- Collectors - Alloy Logs
      name: ""

      # -- The namespace for the secret.
      # @section -- Collectors - Alloy Logs
      namespace: ""

  logging:
    # -- Level at which Alloy log lines should be written.
    # @section -- Collectors - Alloy Logs
    level: info

    # -- Format to use for writing Alloy log lines.
    # @section -- Collectors - Alloy Logs
    format: logfmt

  liveDebugging:
    # -- Enable live debugging for the Alloy instance.
    # Requires stability level to be set to "experimental".
    # @section -- Collectors - Alloy Logs
    enabled: false

  # @ignored
  alloy:
    # This chart is creating the configuration, so the alloy chart does not need to.
    configMap: {create: false}

    # Disabling clustering by default, because the default log gathering format does not require clusters.
    clustering: {enabled: false}

    # @ignored
    mounts:
      # Mount /var/log from the host into the container for log collection.
      varlog: true
      # Mount /var/lib/docker/containers from the host into the container for log
      # collection. Set to true if your cluster puts log files inside this directory.
      dockercontainers: true

    # @ignored
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
        add: ["CHOWN", "DAC_OVERRIDE", "FOWNER", "FSETID", "KILL", "SETGID", "SETUID", "SETPCAP", "NET_BIND_SERVICE", "NET_RAW", "SYS_CHROOT", "MKNOD", "AUDIT_WRITE", "SETFCAP"]
      seccompProfile:
        type: "RuntimeDefault"

  controller:
    # -- The type of controller to use for the Alloy Logs instance.
    # @section -- Collectors - Alloy Logs
    type: daemonset

    # @ignored
    nodeSelector:
      kubernetes.io/os: linux

# An Alloy instance for opening receivers to collect application data.
alloy-receiver:
  # -- Deploy the Alloy instance for opening receivers to collect application data.
  # @section -- Collectors - Alloy Receiver
  enabled: true

  # -- Extra Alloy configuration to be added to the configuration file.
  # @section -- Collectors - Alloy Receiver
  extraConfig: ""

  # Remote configuration from a remote config server.
  remoteConfig:
    # -- Enable fetching configuration from a remote config server.
    # @section -- Collectors - Alloy Receiver
    enabled: false

    # -- The URL of the remote config server.
    # @section -- Collectors - Alloy Receiver
    url: ""

    auth:
      # -- The type of authentication to use for the remote config server.
      # @section -- Collectors - Alloy Receiver
      type: "none"

      # -- The username to use for the remote config server.
      # @section -- Collectors - Alloy Receiver
      username: ""

      # -- The key for storing the username in the secret.
      # @section -- Collectors - Alloy Receiver
      usernameKey: "username"

      # -- Raw config for accessing the username.
      # @section -- Collectors - Alloy Receiver
      usernameFrom: ""

      # -- The password to use for the remote config server.
      # @section -- Collectors - Alloy Receiver
      password: ""

      # -- The key for storing the password in the secret.
      # @section -- Collectors - Alloy Receiver
      passwordKey: "password"

      # -- Raw config for accessing the password.
      # @section -- Collectors - Alloy Receiver
      passwordFrom: ""

    secret:
      # -- Whether to create a secret for the remote config server.
      # @section -- Collectors - Alloy Receiver
      create: true

      # -- If true, skip secret creation and embed the credentials directly into the configuration.
      # @section -- Collectors - Alloy Receiver
      embed: false

      # -- The name of the secret to create.
      # @section -- Collectors - Alloy Receiver
      name: ""

      # -- The namespace for the secret.
      # @section -- Collectors - Alloy Receiver
      namespace: ""

  logging:
    # -- Level at which Alloy log lines should be written.
    # @section -- Collectors - Alloy Receiver
    level: info

    # -- Format to use for writing Alloy log lines.
    # @section -- Collectors - Alloy Receiver
    format: logfmt

  liveDebugging:
    # -- Enable live debugging for the Alloy instance.
    # Requires stability level to be set to "experimental".
    # @section -- Collectors - Alloy Receiver
    enabled: true

  alloy:
    stabilityLevel: experimental

    # -- The ports to expose for the Alloy receiver.
    # @section -- Collectors - Alloy Receiver
    extraPorts:
      - name: otlp-http
        port: 4318
        targetPort: 4318
        protocol: TCP

    # This chart is creating the configuration, so the alloy chart does not need to.
    # @ignored
    configMap: {create: false}

    # @ignored
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
        add: ["CHOWN", "DAC_OVERRIDE", "FOWNER", "FSETID", "KILL", "SETGID", "SETUID", "SETPCAP", "NET_BIND_SERVICE", "NET_RAW", "SYS_CHROOT", "MKNOD", "AUDIT_WRITE", "SETFCAP"]
      seccompProfile:
        type: "RuntimeDefault"

  controller:
    # -- The type of controller to use for the Alloy Receiver instance.
    # @section -- Collectors - Alloy Receiver
    type: daemonset

    # @ignored
    nodeSelector:
      kubernetes.io/os: linux

# An Alloy instance for gathering profiles.
alloy-profiles:
  # -- Deploy the Alloy instance for gathering profiles.
  # @section -- Collectors - Alloy Profiles
  enabled: false

  # -- Extra Alloy configuration to be added to the configuration file.
  # @section -- Collectors - Alloy Profiles
  extraConfig: ""

  # Remote configuration from a remote config server.
  remoteConfig:
    # -- Enable fetching configuration from a remote config server.
    # @section -- Collectors - Alloy Profiles
    enabled: false

    # -- The URL of the remote config server.
    # @section -- Collectors - Alloy Profiles
    url: ""

    auth:
      # -- The type of authentication to use for the remote config server.
      # @section -- Collectors - Alloy Profiles
      type: "none"

      # -- The username to use for the remote config server.
      # @section -- Collectors - Alloy Profiles
      username: ""

      # -- The key for storing the username in the secret.
      # @section -- Collectors - Alloy Profiles
      usernameKey: "username"

      # -- Raw config for accessing the username.
      # @section -- Collectors - Alloy Profiles
      usernameFrom: ""

      # -- The password to use for the remote config server.
      # @section -- Collectors - Alloy Profiles
      password: ""

      # -- The key for storing the password in the secret.
      # @section -- Collectors - Alloy Profiles
      passwordKey: "password"

      # -- Raw config for accessing the password.
      # @section -- Collectors - Alloy Profiles
      passwordFrom: ""

    secret:
      # -- Whether to create a secret for the remote config server.
      # @section -- Collectors - Alloy Profiles
      create: true

      # -- If true, skip secret creation and embed the credentials directly into the configuration.
      # @section -- Collectors - Alloy Profiles
      embed: false

      # -- The name of the secret to create.
      # @section -- Collectors - Alloy Profiles
      name: ""

      # -- The namespace for the secret.
      # @section -- Collectors - Alloy Profiles
      namespace: ""

  logging:
    # -- Level at which Alloy log lines should be written.
    # @section -- Collectors - Alloy Profiles
    level: info

    # -- Format to use for writing Alloy log lines.
    # @section -- Collectors - Alloy Profiles
    format: logfmt

  liveDebugging:
    # -- Enable live debugging for the Alloy instance.
    # Requires stability level to be set to "experimental".
    # @section -- Collectors - Alloy Profiles
    enabled: false

  # @ignored
  alloy:
    # Pyroscope components are currently in public preview
    stabilityLevel: public-preview

    # This chart is creating the configuration, so the alloy chart does not need to.
    configMap: {create: false}

    # Disabling clustering because each instance will gather profiles for the workloads on the same node.
    clustering:
      name: alloy-profiles
      enabled: false

    securityContext:
      privileged: true
      runAsGroup: 0
      runAsUser: 0

  controller:
    # -- The type of controller to use for the Alloy Profiles instance.
    # @section -- Collectors - Alloy Profiles
    type: daemonset

    # @ignored
    hostPID: true

    # @ignored
    nodeSelector:
      kubernetes.io/os: linux

    # @ignored
    tolerations:
      - effect: NoSchedule
        operator: Exists

  # Skip installation of the Grafana Alloy CRDs, since we don't use them in this chart
  # @ignored
  crds: {create: false}

# -- Deploy additional manifest objects
extraObjects: []
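For context, the requiredEnv calls above are helmfile templating, so this file is rendered by helmfile before being handed to Helm. A minimal helmfile.yaml sketch of how it would be consumed (the release name, namespace, and values file name are placeholders, not part of the actual setup):

repositories:
  - name: grafana
    url: https://grafana.github.io/helm-charts

releases:
  - name: k8s-monitoring
    namespace: monitoring
    chart: grafana/k8s-monitoring
    # helmfile only evaluates template functions such as requiredEnv in
    # values files that use the .gotmpl extension
    values:
      - values.yaml.gotmpl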
I get a singleton pod that doesn't start:
alloy Error: /etc/alloy/config.alloy:136:5: component "otelcol.receiver.prometheus.grafanacloudotlp.receiver" does not exist or is out of scope
alloy
alloy 135 | forward_to = [
alloy 136 | otelcol.receiver.prometheus.grafanacloudotlp.receiver,
alloy | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
alloy 137 | ]
alloy Error: could not perform the initial load successfully
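For context, otelcol.receiver.prometheus is the Alloy component that converts Prometheus-style metrics into OTLP so they can be forwarded to an OTLP destination. The generated config references it in forward_to but apparently never declares it in this instance. A declaration roughly like the following is what the reference would need to resolve against (a sketch only; the component labels are taken from the error message, not from the chart's actual generated config):

// Converts Prometheus metrics into OTLP and hands them to the OTLP exporter
// for the GrafanaCloudOTLP destination.
otelcol.receiver.prometheus "grafanacloudotlp" {
  output {
    metrics = [otelcol.exporter.otlphttp.grafanacloudotlp.input]
  }
}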
Removing the prometheus destination makes the config valid.
The expectation is to be able to use all of the destinations for every kind of telemetry data: even if some data ends up duplicated, the generated config should never be invalid.
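In other words (illustrative only, given the three destinations defined above), the empty destinations lists should behave as if every capable destination were listed explicitly, e.g.:

clusterMetrics:
  destinations: [GrafanaCloudMetrics, GrafanaCloudOTLP]  # all metrics-capable
clusterEvents:
  destinations: [GrafanaCloudLogs, GrafanaCloudOTLP]      # all logs-capable
podLogs:
  destinations: [GrafanaCloudLogs, GrafanaCloudOTLP]
applicationObservability:
  destinations: [GrafanaCloudMetrics, GrafanaCloudLogs, GrafanaCloudOTLP]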