From 478695466bf8099ddace85536f4041ddbabdc761 Mon Sep 17 00:00:00 2001 From: Clayton Cornell Date: Wed, 28 Feb 2024 13:25:42 -0800 Subject: [PATCH] Update tutorials with new prod name and structure --- docs/sources/data-collection.md | 2 +- docs/sources/release-notes.md | 2 +- docs/sources/stability.md | 2 +- docs/sources/tutorials/_index.md | 9 +- docs/sources/tutorials/chaining.md | 29 ++--- .../collecting-prometheus-metrics.md | 45 +++----- docs/sources/tutorials/filtering-metrics.md | 23 ++-- .../tutorials/flow-by-example/_index.md | 9 +- .../first-components-and-stdlib/index.md | 108 +++++++++++------- .../tutorials/flow-by-example/get-started.md | 40 ++++--- .../logs-and-relabeling-basics/index.md | 82 +++++++------ .../flow-by-example/processing-logs/index.md | 50 ++++---- 12 files changed, 215 insertions(+), 186 deletions(-) diff --git a/docs/sources/data-collection.md b/docs/sources/data-collection.md index e90d9e63c0..a6b07c6f9d 100644 --- a/docs/sources/data-collection.md +++ b/docs/sources/data-collection.md @@ -1,7 +1,7 @@ --- aliases: - ./data-collection/ -canonical: https://grafana.com/docs/latest/data-collection/ +canonical: https://grafana.com/docs/alloy/latest/data-collection/ description: Grafana Alloy data collection menuTitle: Data collection title: Grafana Alloy data collection diff --git a/docs/sources/release-notes.md b/docs/sources/release-notes.md index 6491ec2e47..0665587298 100644 --- a/docs/sources/release-notes.md +++ b/docs/sources/release-notes.md @@ -1,7 +1,7 @@ --- aliases: - ./release-notes/ -canonical: https://grafana.com/docs/agent/latest/release-notes/ +canonical: https://grafana.com/docs/alloy/latest/release-notes/ description: Release notes for Grafana Alloy menuTitle: Release notes title: Release notes for Grafana Alloy diff --git a/docs/sources/stability.md b/docs/sources/stability.md index a038ea0eba..e96dc83ec2 100644 --- a/docs/sources/stability.md +++ b/docs/sources/stability.md @@ -1,6 +1,6 @@ --- aliases: -- 
/stability/ +- ./stability/ canonical: https://grafana.com/docs/alloy/latest/stability/ description: Grafana Alloy features fall into one of three stability categories, experimental, beta, or stable title: Stability diff --git a/docs/sources/tutorials/_index.md b/docs/sources/tutorials/_index.md index d695d7fb13..b0685e64b7 100644 --- a/docs/sources/tutorials/_index.md +++ b/docs/sources/tutorials/_index.md @@ -1,11 +1,8 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/ -canonical: https://grafana.com/docs/agent/latest/flow/tutorials/ -description: Learn how to use Grafana Agent Flow +- ./tutorials/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/ +description: Learn how to use Grafana Alloy title: Tutorials weight: 300 --- diff --git a/docs/sources/tutorials/chaining.md b/docs/sources/tutorials/chaining.md index 9be20dbc3a..3578ec0ef3 100644 --- a/docs/sources/tutorials/chaining.md +++ b/docs/sources/tutorials/chaining.md @@ -1,11 +1,7 @@ --- aliases: -- ./chaining/ -- /docs/grafana-cloud/agent/flow/tutorials/chaining/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/chaining/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/chaining/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/chaining/ -canonical: https://grafana.com/docs/agent/latest/flow/tutorials/chaining/ +- ./tutorials/chaining/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/chaining/ description: Learn how to chain Prometheus components menuTitle: Chain Prometheus components title: Chain Prometheus components @@ -16,7 +12,8 @@ weight: 400 This tutorial shows how to use [multiple-inputs.river][] to send data to several different locations. This tutorial uses the same base as [Filtering metrics][]. 
-A new concept introduced in Flow is chaining components together in a composable pipeline. +A new concept introduced in {{< param "PRODUCT_NAME" >}} is chaining components together in a composable pipeline.
+This promotes the reusability of components while offering flexibility.

## Prerequisites

@@ -33,10 +30,11 @@ curl https://raw.githubusercontent.com/grafana/agent/main/docs/sources/flow/tuto

The `runt.sh` script does:

1. Downloads the configurations necessary for Mimir, Grafana, and {{< param "PRODUCT_ROOT_NAME" >}}.
-2. Downloads the docker image for {{< param "PRODUCT_ROOT_NAME" >}} explicitly.
-3. Runs the `docker-compose up` command to bring all the services up.
+1. Downloads the docker image for {{< param "PRODUCT_ROOT_NAME" >}} explicitly.
+1. Runs the `docker-compose up` command to bring all the services up.

-Allow {{< param "PRODUCT_ROOT_NAME" >}} to run for two minutes, then navigate to [Grafana][] to see {{< param "PRODUCT_ROOT_NAME" >}} scrape metrics. The [node_exporter][] metrics also show up now.
+Allow {{< param "PRODUCT_ROOT_NAME" >}} to run for two minutes, then navigate to [Grafana][] to see {{< param "PRODUCT_ROOT_NAME" >}} scrape metrics.
+The [node_exporter][] metrics also show up now.

There are two scrapes each sending metrics to one filter. Note the `job` label lists the full name of the scrape component.

@@ -74,7 +72,8 @@ prometheus.remote_write "prom" {
}
```

-In the Flow block, `prometheus.relabel.service` is being forwarded metrics from two sources `prometheus.scrape.agent` and `prometheus.exporter.unix.default`. This allows for a single relabel component to be used with any number of inputs.
+In the {{< param "PRODUCT_ROOT_NAME" >}} block, `prometheus.relabel.service` receives metrics from two sources, `prometheus.scrape.agent` and `prometheus.exporter.unix.default`.
+This allows for a single relabel component to be used with any number of inputs.
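+
+The fan-in can be sketched as follows (component labels match the surrounding text, but this is an illustrative sketch; see `multiple-inputs.river` for the complete tutorial configuration):
+
+```river
+prometheus.scrape "agent" {
+  targets    = [{"__address__" = "localhost:12345"}]
+  forward_to = [prometheus.relabel.service.receiver]
+}
+
+prometheus.scrape "node" {
+  targets    = prometheus.exporter.unix.default.targets
+  forward_to = [prometheus.relabel.service.receiver]
+}
+
+// Both scrapes feed the same relabel component, which adds a
+// static `service` label before forwarding to remote write.
+prometheus.relabel "service" {
+  forward_to = [prometheus.remote_write.prom.receiver]
+
+  rule {
+    target_label = "service"
+    replacement  = "api_server"
+  }
+}
+```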
## Adding another relabel @@ -82,11 +81,7 @@ In `multiple-input.river` add a new `prometheus.relabel` component that adds a ` ![Add a new label with the value v2](/media/docs/agent/screenshot-grafana-agent-chaining-scrape-v2.png) -[multiple-inputs.river]: https://grafana.com/docs/agent//flow/tutorials/assets/flow_configs/multiple-inputs.river +[multiple-inputs.river]: ../assets/flow_configs/multiple-inputs.river +[Filtering metrics]: ../filtering-metrics/ [Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22agent_build_info%7B%7D%22%7D%5D [node_exporter]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22node_cpu_seconds_total%22%7D%5D - -{{% docs/reference %}} -[Filtering metrics]: "/docs/agent/ -> /docs/agent//flow/tutorials/filtering-metrics.md" -[Filtering metrics]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/tutorials/filtering-metrics.md" -{{% /docs/reference %}} diff --git a/docs/sources/tutorials/collecting-prometheus-metrics.md b/docs/sources/tutorials/collecting-prometheus-metrics.md index a665474190..55754bb178 100644 --- a/docs/sources/tutorials/collecting-prometheus-metrics.md +++ b/docs/sources/tutorials/collecting-prometheus-metrics.md @@ -1,11 +1,7 @@ --- aliases: -- ./collecting-prometheus-metrics/ -- /docs/grafana-cloud/agent/flow/tutorials/collecting-prometheus-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/collecting-prometheus-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/collecting-prometheus-metrics/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/collecting-prometheus-metrics/ -canonical: https://grafana.com/docs/agent/latest/flow/tutorials/collecting-prometheus-metrics/ +- 
./tutorials/collecting-prometheus-metrics/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/collecting-prometheus-metrics/ description: Learn how to collect Prometheus metrics menuTitle: Collect Prometheus metrics title: Collect Prometheus metrics @@ -14,7 +10,8 @@ weight: 200 # Collect Prometheus metrics -{{< param "PRODUCT_ROOT_NAME" >}} is a telemetry collector with the primary goal of moving telemetry data from one location to another. In this tutorial, you'll set up {{< param "PRODUCT_NAME" >}}. +{{< param "PRODUCT_ROOT_NAME" >}} is a telemetry collector with the primary goal of moving telemetry data from one location to another. +In this tutorial, you'll set up {{< param "PRODUCT_NAME" >}}. ## Prerequisites @@ -31,8 +28,8 @@ curl https://raw.githubusercontent.com/grafana/agent/main/docs/sources/flow/tuto The `runt.sh` script does: 1. Downloads the configurations necessary for Mimir, Grafana, and {{< param "PRODUCT_ROOT_NAME" >}}. -2. Downloads the docker image for {{< param "PRODUCT_ROOT_NAME" >}} explicitly. -3. Runs the docker-compose up command to bring all the services up. +1. Downloads the docker image for {{< param "PRODUCT_ROOT_NAME" >}} explicitly. +1. Runs the docker-compose up command to bring all the services up. Allow {{< param "PRODUCT_ROOT_NAME" >}} to run for two minutes, then navigate to [Grafana][]. @@ -44,7 +41,8 @@ Navigate to `http://localhost:12345/graph` to view the {{< param "PRODUCT_NAME" ![The User Interface](/media/docs/agent/screenshot-grafana-agent-collect-metrics-graph.png) -{{< param "PRODUCT_ROOT_NAME" >}} displays the component pipeline in a dependency graph. See [Scraping component](#scraping-component) and [Remote Write component](#remote-write-component) for details about the components used in this configuration. +{{< param "PRODUCT_ROOT_NAME" >}} displays the component pipeline in a dependency graph. 
+See [Scraping component](#scraping-component) and [Remote Write component](#remote-write-component) for details about the components used in this configuration.

Click the nodes to navigate to the associated component page. There, you can view the state, health information, and, if applicable, the debug information.

![Component information](/media/docs/agent/screenshot-grafana-agent-collect-metrics-comp-info.png)

@@ -67,11 +65,14 @@ prometheus.scrape "default" {
}
```

-The `prometheus.scrape "default"` annotation indicates the name of the component, `prometheus.scrape`, and its label, `default`. All components must have a unique combination of name and if applicable label.
+The `prometheus.scrape "default"` annotation indicates the name of the component, `prometheus.scrape`, and its label, `default`.
+All components must have a unique combination of name and, if applicable, label.

-The `targets` [attribute][] is an [argument][]. `targets` is a list of labels that specify the target via the special key `__address__`. The scraper is targeting the {{< param "PRODUCT_NAME" >}} `/metrics` endpoint. Both `http` and `/metrics` are implied but can be overridden.
+The `targets` [attribute][] is an [argument][]. `targets` is a list of labels that specify the target via the special key `__address__`.
+The scraper is targeting the {{< param "PRODUCT_NAME" >}} `/metrics` endpoint. Both `http` and `/metrics` are implied but can be overridden.

-The `forward_to` attribute is an argument that references the [export][] of the `prometheus.remote_write.prom` component. This is where the scraper will send the metrics for further processing.
+The `forward_to` attribute is an argument that references the [export][] of the `prometheus.remote_write.prom` component.
+This is where the scraper will send the metrics for further processing.
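+
+For example, the implied defaults can be spelled out explicitly with the `scheme` and `metrics_path` arguments of `prometheus.scrape` (the values shown are the defaults, not tutorial-specific settings):
+
+```river
+prometheus.scrape "default" {
+  targets    = [{"__address__" = "localhost:12345"}]
+  forward_to = [prometheus.remote_write.prom.receiver]
+
+  scheme       = "http"      // implied default
+  metrics_path = "/metrics"  // implied default
+}
+```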
## Remote Write component @@ -95,16 +96,8 @@ To try out {{< param "PRODUCT_ROOT_NAME" >}} without using Docker: [Docker]: https://www.docker.com/products/docker-desktop [Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22agent_build_info%7B%7D%22%7D%5D - -{{% docs/reference %}} -[prometheus.scrape]: "/docs/agent/ -> /docs/agent//flow/reference/components/prometheus.scrape.md" -[prometheus.scrape]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.scrape.md" -[attribute]: "/docs/agent/ -> /docs/agent//flow/concepts/config-language/#attributes" -[attribute]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/#attributes" -[argument]: "/docs/agent/ -> /docs/agent//flow/concepts/components" -[argument]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/concepts/components" -[export]: "/docs/agent/ -> /docs/agent//flow/concepts/components" -[export]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/concepts/components" -[prometheus.remote_write]: "/docs/agent/ -> /docs/agent//flow/reference/components/prometheus.remote_write.md" -[prometheus.remote_write]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.remote_write.md" -{{% /docs/reference %}} +[prometheus.scrape]: ../../reference/components/prometheus.scrape/ +[attribute]: ../../concepts/config-language/#attributes +[argument]: ../../concepts/components/ +[export]: ../../concepts/components/ +[prometheus.remote_write]: ../../reference/components/prometheus.remote_write/ diff --git a/docs/sources/tutorials/filtering-metrics.md b/docs/sources/tutorials/filtering-metrics.md index ec942124ec..ef4f01ff43 100644 --- a/docs/sources/tutorials/filtering-metrics.md +++ b/docs/sources/tutorials/filtering-metrics.md @@ -1,11 +1,7 
@@ --- aliases: -- ./filtering-metrics/ -- /docs/grafana-cloud/agent/flow/tutorials/filtering-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/filtering-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/filtering-metrics/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/filtering-metrics/ -canonical: https://grafana.com/docs/agent/latest/flow/tutorials/filtering-metrics/ +- ./tutorials/filtering-metrics/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/filtering-metrics/ description: Learn how to filter Prometheus metrics menuTitle: Filter Prometheus metrics title: Filter Prometheus metrics @@ -14,7 +10,8 @@ weight: 300 # Filter Prometheus metrics -In this tutorial, you'll add a new component [prometheus.relabel][] using [relabel.river][] to filter metrics. This tutorial uses the same base as [Collecting Prometheus metrics][]. +In this tutorial, you'll add a new component [prometheus.relabel][] using [relabel.river][] to filter metrics. +This tutorial uses the same base as [Collect Prometheus metrics][]. 
## Prerequisites

@@ -53,14 +50,8 @@ Open the `relabel.river` file that was downloaded and change the name of the ser

![Updated dashboard showing api_server_v2](/media/docs/agent/screenshot-grafana-agent-filtering-metrics-transition.png)

-
[Docker]: https://www.docker.com/products/docker-desktop
[Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22agent_build_info%7B%7D%22%7D%5D
-[relabel.river]: https://grafana.com/docs/agent//flow/tutorials/assets/flow_configs/relabel.river
-
-{{% docs/reference %}}
-[prometheus.relabel]: "/docs/agent/ -> /docs/agent//flow/reference/components/prometheus.relabel.md"
-[prometheus.relabel]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.relabel.md"
-[Collecting Prometheus metrics]: "/docs/agent/ -> /docs/agent//flow/tutorials/collecting-prometheus-metrics.md"
-[Collecting Prometheus metrics]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/tutorials/collecting-prometheus-metrics.md"
-{{% /docs/reference %}}
+[relabel.river]: ../assets/flow_configs/relabel.river
+[prometheus.relabel]: ../../reference/components/prometheus.relabel/
+[Collect Prometheus metrics]: ../collecting-prometheus-metrics/
diff --git a/docs/sources/tutorials/flow-by-example/_index.md b/docs/sources/tutorials/flow-by-example/_index.md
index d9b0373502..5a47e279e7 100644
--- a/docs/sources/tutorials/flow-by-example/_index.md
+++ b/docs/sources/tutorials/flow-by-example/_index.md
@@ -1,11 +1,8 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/
-- /docs/grafana-cloud/send-data/agent/flow/tutorials/flow-by-example/
-canonical:
https://grafana.com/docs/agent/latest/flow/tutorials/flow-by-example/ -description: Learn how to use Grafana Agent Flow +- ./tutorials/flow-by-example/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/flow-by-example/ +description: Learn how to use Grafana Alloy title: Flow by example weight: 100 --- diff --git a/docs/sources/tutorials/flow-by-example/first-components-and-stdlib/index.md b/docs/sources/tutorials/flow-by-example/first-components-and-stdlib/index.md index 59bc59c5d1..1e1aee2846 100644 --- a/docs/sources/tutorials/flow-by-example/first-components-and-stdlib/index.md +++ b/docs/sources/tutorials/flow-by-example/first-components-and-stdlib/index.md @@ -1,10 +1,7 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/first-components-and-stdlib/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/first-components-and-stdlib/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/first-components-and-stdlib/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/first-components-and-stdlib/ -canonical: https://grafana.com/docs/agent/latest/flow/tutorials/flow-by-example/first-components-and-stdlib/ +- ./tutorials/flow-by-example/first-components-and-stdlib/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/flow-by-example/first-components-and-stdlib/ description: Learn about the basics of River and the configuration language title: First components and introducing the standard library weight: 20 @@ -12,20 +9,18 @@ weight: 20 # First components and the standard library -This tutorial covers the basics of the River language and the standard library. It introduces a basic pipeline that collects metrics from the host and sends them to Prometheus. +This tutorial covers the basics of the River language and the standard library. +It introduces a basic pipeline that collects metrics from the host and sends them to Prometheus. 
## River basics

-[Configuration language]: https://grafana.com/docs/agent//flow/concepts/config-language/
-[Configuration language concepts]: https://grafana.com/docs/agent//flow/concepts/configuration_language/
-[Standard library documentation]: https://grafana.com/docs/agent//flow/reference/stdlib/
-
**Recommended reading**

- [Configuration language][]
- [Configuration language concepts][]

-[River](https://github.com/grafana/river) is an HCL-inspired configuration language used to configure {{< param "PRODUCT_NAME" >}}. A River file is comprised of three things:
+[River][] is an HCL-inspired configuration language used to configure {{< param "PRODUCT_NAME" >}}.
+A River file is composed of three things:

1. **Attributes**

@@ -37,11 +32,15 @@ This tutorial covers the basics of the River language and the standard library.

1. **Expressions**

-   Expressions are used to compute values. They can be constant values (for example, `"localhost:9090"`), or they can be more complex (for example, referencing a component's export: `prometheus.exporter.unix.targets`. They can also be a mathematical expression: `(1 + 2) * 3`, or a standard library function call: `env("HOME")`). We will use more expressions as we go along the examples. If you are curious, you can find a list of available standard library functions in the [Standard library documentation][].
+   Expressions are used to compute values.
+   They can be constant values (for example, `"localhost:9090"`), or they can be more complex, such as a reference to a component's export (`prometheus.exporter.unix.targets`), a mathematical expression (`(1 + 2) * 3`), or a standard library function call (`env("HOME")`). You will use more expressions as you work through the examples.
+   If you are curious, you can find a list of available standard library functions in the [Standard library documentation][].

1. **Blocks**

-   Blocks are used to configure components with groups of attributes or nested blocks.
The following example block can be used to configure the logging output of {{< param "PRODUCT_NAME" >}}: + Blocks are used to configure components with groups of attributes or nested blocks. + The following example block can be used to configure the logging output of {{< param "PRODUCT_NAME" >}}: ```river logging { @@ -64,11 +63,6 @@ Comments in River are prefixed with `//` and are single-line only. For example: ## Components -[Components]: https://grafana.com/docs/agent//flow/concepts/components/ -[Component controller]: https://grafana.com/docs/agent//flow/concepts/component_controller/ -[Components configuration language]: https://grafana.com/docs/agent//flow/concepts/config-language/components/ -[env]: https://grafana.com/docs/agent//flow/reference/stdlib/env/ - **Recommended reading** - [Components][] @@ -97,31 +91,34 @@ prometheus.remote_write "local_prom" { ``` {{< admonition type="note" >}} -[Component reference]: https://grafana.com/docs/agent//flow/reference/components/ +A list of all available components can be found in the [Component reference][]. +Each component has a link to its documentation, which contains a description of what the component does, its arguments, its exports, and examples. -A list of all available components can be found in the [Component reference][]. Each component has a link to its documentation, which contains a description of what the component does, its arguments, its exports, and examples. +[Component reference]: ../../../reference/components/ {{< /admonition >}} -This pipeline has two components: `local.file` and `prometheus.remote_write`. The `local.file` component is configured with a single argument, `path`, which is set by calling the [env][] standard library function to retrieve the value of the `HOME` environment variable and concatenating it with the string `"file.txt"`. The `local.file` component has a single export, `content`, which contains the contents of the file. 
+This pipeline has two components: `local.file` and `prometheus.remote_write`. +The `local.file` component is configured with a single argument, `path`, which is set by calling the [env][] standard library function to retrieve the value of the `HOME` environment variable and concatenating it with the string `"file.txt"`. +The `local.file` component has a single export, `content`, which contains the contents of the file. -The `prometheus.remote_write` component is configured with an `endpoint` block, containing the `url` attribute and a `basic_auth` block. The `url` attribute is set to the URL of the Prometheus remote write endpoint. The `basic_auth` block contains the `username` and `password` attributes, which are set to the string `"admin"` and the `content` export of the `local.file` component, respectively. The `content` export is referenced by using the syntax `local.file.example.content`, where `local.file.example` is the fully qualified name of the component (the component's type + its label) and `content` is the name of the export. +The `prometheus.remote_write` component is configured with an `endpoint` block, containing the `url` attribute and a `basic_auth` block. +The `url` attribute is set to the URL of the Prometheus remote write endpoint. +The `basic_auth` block contains the `username` and `password` attributes, which are set to the string `"admin"` and the `content` export of the `local.file` component, respectively. +The `content` export is referenced by using the syntax `local.file.example.content`, where `local.file.example` is the fully qualified name of the component (the component's type + its label) and `content` is the name of the export.

Flow of example pipeline with local.file and prometheus.remote_write components

{{< admonition type="note" >}} -The `local.file` component's label is set to `"example"`, so the fully qualified name of the component is `local.file.example`. The `prometheus.remote_write` component's label is set to `"local_prom"`, so the fully qualified name of the component is `prometheus.remote_write.local_prom`. +The `local.file` component's label is set to `"example"`, so the fully qualified name of the component is `local.file.example`. +The `prometheus.remote_write` component's label is set to `"local_prom"`, so the fully qualified name of the component is `prometheus.remote_write.local_prom`. {{< /admonition >}} This example pipeline still doesn't do anything, so let's add some more components to it. ## Shipping your first metrics -[prometheus.exporter.unix]: https://grafana.com/docs/agent//flow/reference/components/prometheus.exporter.unix/ -[prometheus.scrape]: https://grafana.com/docs/agent//flow/reference/components/prometheus.scrape/ -[prometheus.remote_write]: https://grafana.com/docs/agent//flow/reference/components/prometheus.remote_write/ - **Recommended reading** - Optional: [prometheus.exporter.unix][] @@ -158,7 +155,9 @@ Run {{< param "PRODUCT_NAME" >}} with: /path/to/agent run config.river ``` -Navigate to [http://localhost:3000/explore](http://localhost:3000/explore) in your browser. After ~15-20 seconds, you should be able to see the metrics from the `prometheus.exporter.unix` component! Try querying for `node_memory_Active_bytes` to see the active memory of your host. +Navigate to [http://localhost:3000/explore][] in your browser. +After ~15-20 seconds, you should be able to see the metrics from the `prometheus.exporter.unix` component. +Try querying for `node_memory_Active_bytes` to see the active memory of your host.

Screenshot of node_memory_Active_bytes query in Grafana @@ -175,17 +174,18 @@ The following diagram is an example pipeline: The preceding configuration defines three components: - `prometheus.scrape` - A component that scrapes metrics from components that export targets. -- `prometheus.exporter.unix` - A component that exports metrics from the host, built around [node_exporter](https://github.com/prometheus/node_exporter). +- `prometheus.exporter.unix` - A component that exports metrics from the host, built around [node_exporter][]. - `prometheus.remote_write` - A component that sends metrics to a Prometheus remote-write compatible endpoint. -The `prometheus.scrape` component references the `prometheus.exporter.unix` component's targets export, which is a list of scrape targets. The `prometheus.scrape` component then forwards the scraped metrics to the `prometheus.remote_write` component. +The `prometheus.scrape` component references the `prometheus.exporter.unix` component's targets export, which is a list of scrape targets. +The `prometheus.scrape` component then forwards the scraped metrics to the `prometheus.remote_write` component. -One rule is that components can't form a cycle. This means that a component can't reference itself directly or indirectly. This is to prevent infinite loops from forming in the pipeline. +One rule is that components can't form a cycle. +This means that a component can't reference itself directly or indirectly. +This is to prevent infinite loops from forming in the pipeline. ## Exercise for the reader -[prometheus.exporter.redis]: https://grafana.com/docs/agent//flow/reference/components/prometheus.exporter.redis/ - **Recommended Reading** - Optional: [prometheus.exporter.redis][] @@ -196,7 +196,8 @@ Let's start a container running Redis and configure {{< param "PRODUCT_NAME" >}} docker container run -d --name flow-redis -p 6379:6379 --rm redis ``` -Try modifying the pipeline to scrape metrics from the Redis exporter. 
You can refer to the [prometheus.exporter.redis][] component documentation for more information on how to configure it. +Try modifying the pipeline to scrape metrics from the Redis exporter. +You can refer to the [prometheus.exporter.redis][] component documentation for more information on how to configure it. To give a visual hint, you want to create a pipeline that looks like this: @@ -205,9 +206,9 @@ To give a visual hint, you want to create a pipeline that looks like this:

{{< admonition type="note" >}} -[concat]: https://grafana.com/docs/agent//flow/reference/stdlib/concat/ - You may find the [concat][] standard library function useful. + +[concat]: ../../../reference/stdlib/concat/ {{< /admonition >}} You can run {{< param "PRODUCT_NAME" >}} with the new configuration file by running: @@ -216,7 +217,8 @@ You can run {{< param "PRODUCT_NAME" >}} with the new configuration file by runn /path/to/agent run config.river ``` -Navigate to [http://localhost:3000/explore](http://localhost:3000/explore) in your browser. After the first scrape, you should be able to query for `redis` metrics as well as `node` metrics. +Navigate to [http://localhost:3000/explore][] in your browser. +After the first scrape, you should be able to query for `redis` metrics as well as `node` metrics. To shut down the Redis container, run: @@ -225,10 +227,11 @@ docker container stop flow-redis ``` If you get stuck, you can always view a solution here: + {{< collapse title="Solution" >}} ```river -// Configure your first components, learn about the standard library, and learn how to run Grafana Agent +// Configure your first components, learn about the standard library, and learn how to run Grafana Alloy // prometheus.exporter.redis collects information about Redis and exposes // targets for other components to use @@ -267,8 +270,27 @@ prometheus.remote_write "local_prom" { ## Finishing up and next steps -You might have noticed that running {{< param "PRODUCT_NAME" >}} with the configurations created a directory called `data-agent` in the directory you ran {{< param "PRODUCT_NAME" >}} from. This directory is where components can store data, such as the `prometheus.exporter.unix` component storing its WAL (Write Ahead Log). If you look in the directory, do you notice anything interesting? The directory for each component is the fully qualified name. 
- -If you'd like to store the data elsewhere, you can specify a different directory by supplying the `--storage.path` flag to {{< param "PRODUCT_ROOT_NAME" >}}'s run command, for example, `/path/to/agent run config.river --storage.path /etc/grafana-agent`. Generally, you can use a persistent directory for this, as some components may use the data stored in this directory to perform their function. - -In the next tutorial, you will look at how to configure {{< param "PRODUCT_NAME" >}} to collect logs from a file and send them to Loki. You will also look at using different components to process metrics and logs before sending them. +You might have noticed that running {{< param "PRODUCT_NAME" >}} with the configurations created a directory called `data-agent` in the directory you ran {{< param "PRODUCT_NAME" >}} from. +This directory is where components can store data, such as the `prometheus.exporter.unix` component storing its WAL (Write Ahead Log). +If you look in the directory, do you notice anything interesting? The directory for each component is the fully qualified name. + +If you'd like to store the data elsewhere, you can specify a different directory by supplying the `--storage.path` flag to {{< param "PRODUCT_ROOT_NAME" >}}'s run command, for example, `/path/to/agent run config.river --storage.path /etc/grafana-agent`. +Generally, you can use a persistent directory for this, as some components may use the data stored in this directory to perform their function. + +In the next tutorial, you will look at how to configure {{< param "PRODUCT_NAME" >}} to collect logs from a file and send them to Loki. +You will also look at using different components to process metrics and logs before sending them. 
+ +[Configuration language]: ../../../concepts/config-language/ +[Configuration language concepts]: ../../../concepts/configuration_language/ +[Standard library documentation]: ../../../reference/stdlib/ +[node_exporter]: https://github.com/prometheus/node_exporter +[River]: https://github.com/grafana/river +[prometheus.exporter.redis]: ../../../reference/components/prometheus.exporter.redis/ +[http://localhost:3000/explore]: http://localhost:3000/explore +[prometheus.exporter.unix]: ../../../reference/components/prometheus.exporter.unix/ +[prometheus.scrape]: ../../../reference/components/prometheus.scrape/ +[prometheus.remote_write]: ../../../reference/components/prometheus.remote_write/ +[Components]: ../../../concepts/components/ +[Component controller]: ../../../concepts/component_controller/ +[Components configuration language]: ../../../concepts/config-language/components/ +[env]: ../../../reference/stdlib/env/ diff --git a/docs/sources/tutorials/flow-by-example/get-started.md b/docs/sources/tutorials/flow-by-example/get-started.md index 5fa1bbd5b5..d470984b98 100644 --- a/docs/sources/tutorials/flow-by-example/get-started.md +++ b/docs/sources/tutorials/flow-by-example/get-started.md @@ -1,10 +1,7 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/faq/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/faq/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/faq/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/flow-by-example/faq/ -canonical: https://grafana.com/docs/agent/latest/flow/tutorials/flow-by-example/faq/ +- ./tutorials/flow-by-example/get-started/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/flow-by-example/get-started/ description: Getting started with Flow-by-Example Tutorials title: Get started weight: 10 @@ -12,23 +9,29 @@ weight: 10 ## Who is this for? 
-This set of tutorials contains a collection of examples that build on each other to demonstrate how to configure and use [{{< param "PRODUCT_NAME" >}}][flow]. It assumes you have a basic understanding of what {{< param "PRODUCT_ROOT_NAME" >}} is and telemetry collection in general. It also assumes a base level of familiarity with Prometheus and PromQL, Loki and LogQL, and basic Grafana navigation. It assumes no knowledge of {{< param "PRODUCT_NAME" >}} or River concepts.
+This set of tutorials contains a collection of examples that build on each other to demonstrate how to configure and use [{{< param "PRODUCT_NAME" >}}][alloy].
+It assumes you have a basic understanding of what {{< param "PRODUCT_ROOT_NAME" >}} is and of telemetry collection in general.
+It also assumes a base level of familiarity with Prometheus and PromQL, Loki and LogQL, and basic Grafana navigation.
+It assumes no knowledge of {{< param "PRODUCT_NAME" >}} or River concepts.

-[flow]: https://grafana.com/docs/agent/latest/flow
+## What is {{% param "PRODUCT_NAME" %}}?

-## What is Flow?
-
-Flow is a new way to configure {{< param "PRODUCT_NAME" >}}. It is a declarative configuration language that allows you to define a pipeline of telemetry collection, processing, and output. It is built on top of the [River](https://github.com/grafana/river) configuration language, which is designed to be fast, simple, and debuggable.
+{{< param "PRODUCT_NAME" >}} uses a declarative configuration language that allows you to define a pipeline of telemetry collection, processing, and output.
+It is built on top of the [River][] configuration language, which is designed to be fast, simple, and debuggable.

## What do I need to get started?

-You will need a Linux or Unix environment with Docker installed. The examples are designed to be run on a single host so that you can run them on your laptop or in a VM. 
You are encouraged to follow along with the examples using a `config.river` file and experiment with the examples yourself.
+You will need a Linux or Unix environment with Docker installed.
+The examples are designed to be run on a single host so that you can run them on your laptop or in a VM.
+You are encouraged to follow along with the examples using a `config.river` file and experiment with the examples yourself.

-To run the examples, you should have a Grafana Agent binary available. You can follow the instructions on how to [Install Grafana Agent as a Standalone Binary](https://grafana.com/docs/agent/latest/flow/setup/install/binary/#install-grafana-agent-in-flow-mode-as-a-standalone-binary) to get a binary.
+To run the examples, you should have a {{< param "PRODUCT_NAME" >}} binary available.
+You can follow the instructions on how to [Install {{< param "PRODUCT_NAME" >}} as a Standalone Binary][install] to get a binary.

## How should I follow along?

-You can use this docker-compose file to set up a local Grafana instance alongside Loki and Prometheus pre-configured as datasources. The examples are designed to be run locally, so you can follow along and experiment with them yourself.
+You can use this Docker Compose file to set up a local Grafana instance alongside Loki and Prometheus pre-configured as data sources.
+The examples are designed to be run locally, so you can follow along and experiment with them yourself.

```yaml
version: '3'
@@ -84,6 +87,13 @@ services:

After running `docker-compose up`, open [http://localhost:3000](http://localhost:3000) in your browser to view the Grafana UI.

-The tutorials are designed to be followed in order and generally build on each other. Each example explains what it does and how it works. They are designed to be run locally, so you can follow along and experiment with them yourself.
+The tutorials are designed to be followed in order and generally build on each other. 
+Each example explains what it does and how it works. +They are designed to be run locally, so you can follow along and experiment with them yourself. + +The Recommended Reading sections in each tutorial provide a list of documentation topics. +To help you understand the concepts used in the example, read the recommended topics in the order given. -The Recommended Reading sections in each tutorial provide a list of documentation topics. To help you understand the concepts used in the example, read the recommended topics in the order given. +[alloy]: https://grafana.com/docs/alloy/latest/ +[River]: https://github.com/grafana/river +[install]: ../../../setup/install/binary/#install-grafana-agent-in-flow-mode-as-a-standalone-binary diff --git a/docs/sources/tutorials/flow-by-example/logs-and-relabeling-basics/index.md b/docs/sources/tutorials/flow-by-example/logs-and-relabeling-basics/index.md index 02c7c3c138..9543a0e754 100644 --- a/docs/sources/tutorials/flow-by-example/logs-and-relabeling-basics/index.md +++ b/docs/sources/tutorials/flow-by-example/logs-and-relabeling-basics/index.md @@ -1,10 +1,7 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/logs-and-relabeling-basics/ -canonical: https://grafana.com/docs/agent/latest/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ +- ./tutorials/flow-by-example/logs-and-relabeling-basics/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/flow-by-example/logs-and-relabeling-basics/ description: Learn how to relabel metrics and collect logs title: Logs and relabeling basics weight: 30 @@ -12,17 +9,17 @@ weight: 30 # Logs and relabeling basics -This tutorial assumes you 
have completed the [First components and introducing the standard library](https://grafana.com/docs/agent//flow/tutorials/flow-by-example/first-components-and-stdlib/) tutorial, or are at least familiar with the concepts of components, attributes, and expressions and how to use them. You will cover some basic metric relabeling, followed by how to send logs to Loki. +This tutorial assumes you have completed the [First components and introducing the standard library][] tutorial, or are at least familiar with the concepts of components, attributes, and expressions and how to use them. +You will cover some basic metric relabeling, followed by how to send logs to Loki. ## Relabel metrics -[prometheus.relabel]: https://grafana.com/docs/agent//flow/reference/components/prometheus.relabel/ - **Recommended reading** - Optional: [prometheus.relabel][] -Before moving on to logs, let's look at how we can use the `prometheus.relabel` component to relabel metrics. The `prometheus.relabel` component allows you to perform Prometheus relabeling on metrics and is similar to the `relabel_configs` section of a Prometheus scrape config. +Before moving on to logs, let's look at how we can use the `prometheus.relabel` component to relabel metrics. +The `prometheus.relabel` component allows you to perform Prometheus relabeling on metrics and is similar to the `relabel_configs` section of a Prometheus scrape configuration. Let's add a `prometheus.relabel` component to a basic pipeline and see how to add labels. @@ -64,35 +61,37 @@ We have now created the following pipeline: This pipeline has a `prometheus.relabel` component that has a single rule. This rule has the `replace` action, which will replace the value of the `os` label with a special value: `constants.os`. This value is a special constant that is replaced with the OS of the host {{< param "PRODUCT_ROOT_NAME" >}} is running on. 
-You can see the other available constants in the [constants](https://grafana.com/docs/agent//flow/reference/stdlib/constants/) documentation.
+You can see the other available constants in the [constants][] documentation.

This example has one rule block, but you can have as many as you want. Each rule block is applied in order.

-If you run {{< param "PRODUCT_ROOT_NAME" >}} and navigate to [localhost:3000/explore](http://localhost:3000/explore), you can see the `os` label on the metrics. Try querying for `node_context_switches_total` and look at the labels.
+If you run {{< param "PRODUCT_ROOT_NAME" >}} and navigate to [localhost:3000/explore][], you can see the `os` label on the metrics.
+Try querying for `node_context_switches_total` and look at the labels.

-Relabeling uses the same rules as Prometheus. You can always refer to the [prometheus.relabel documentation](https://grafana.com/docs/agent//flow/reference/components/prometheus.relabel/#rule-block) for a full list of available options.
+Relabeling uses the same rules as Prometheus. You can always refer to the [prometheus.relabel rule-block][] documentation for a full list of available options.

{{< admonition type="note" >}}
You can forward multiple components to one `prometheus.relabel` component. This allows you to apply the same relabeling rules to multiple pipelines.
{{< /admonition >}}

{{< admonition type="warning" >}}
-There is an issue commonly faced when relabeling and using labels that start with `__` (double underscore). These labels are considered internal and are dropped before relabeling rules from a `prometheus.relabel` component are applied. If you would like to keep or act on these kinds of labels, use a [discovery.relabel](https://grafana.com/docs/agent//flow/reference/components/discovery.relabel/) component.
+A common issue when relabeling involves labels that start with `__` (double underscore). 
+These labels are considered internal and are dropped before relabeling rules from a `prometheus.relabel` component are applied.
+If you would like to keep or act on these kinds of labels, use a [discovery.relabel][] component.
+
+[discovery.relabel]: ../../../reference/components/discovery.relabel/
{{< /admonition >}}

## Send logs to Loki

-[local.file_match]: https://grafana.com/docs/agent//flow/reference/components/local.file_match/
-[loki.source.file]: https://grafana.com/docs/agent//flow/reference/components/loki.source.file/
-[loki.write]: https://grafana.com/docs/agent//flow/reference/components/loki.write/
-
**Recommended reading**

- Optional: [local.file_match][]
- Optional: [loki.source.file][]
- Optional: [loki.write][]

-Now that you're comfortable creating components and chaining them together, let's collect some logs and send them to Loki. We will use the `local.file_match` component to perform file discovery, the `loki.source.file` to collect the logs, and the `loki.write` component to send the logs to Loki.
+Now that you're comfortable creating components and chaining them together, let's collect some logs and send them to Loki.
+We will use the `local.file_match` component to perform file discovery, the `loki.source.file` component to collect the logs, and the `loki.write` component to send the logs to Loki.

Before doing this, we need to ensure we have a log file to scrape. We will use the `echo` command to create a file with some log content.
@@ -124,7 +123,8 @@ The rough flow of this pipeline is:

![Diagram of pipeline that collects logs from /tmp/flow-logs and writes them to a local Loki instance](/media/docs/agent/diagram-flow-by-example-logs-0.svg)

-If you navigate to [localhost:3000/explore](http://localhost:3000/explore) and switch the Datasource to `Loki`, you can query for `{filename="/tmp/flow-logs/log.log"}` and see the log line we created earlier. Try running the following command to add more logs to the file. 
+If you navigate to [localhost:3000/explore][] and switch the Datasource to `Loki`, you can query for `{filename="/tmp/flow-logs/log.log"}` and see the log line we created earlier.
+Try running the following command to add more logs to the file.

```bash
echo "This is another log line!" >> /tmp/flow-logs/log.log
@@ -134,14 +134,11 @@ If you re-execute the query, you can see the new log lines.

![Grafana Explore view of example log lines](/media/docs/agent/screenshot-flow-by-example-log-lines.png)

If you are curious how {{< param "PRODUCT_ROOT_NAME" >}} keeps track of where it is in a log file, you can look at `data-agent/loki.source.file.local_files/positions.yml`.
If you delete this file, {{< param "PRODUCT_ROOT_NAME" >}} starts reading from the beginning of the file again, which is why keeping the {{< param "PRODUCT_ROOT_NAME" >}}'s data directory in a persistent location is desirable.

## Exercise

-[loki.relabel]: https://grafana.com/docs/agent//flow/reference/components/loki.relabel/
-[loki.process]: https://grafana.com/docs/agent//flow/reference/components/loki.process/
-
**Recommended reading**

- [loki.relabel][]
@@ -149,7 +146,8 @@ If you delete this file, {{< param "PRODUCT_ROOT_NAME" >}} starts reading from t

### Add a Label to Logs

-This exercise will have two parts, building on the previous example. Let's start by adding an `os` label (just like the Prometheus example) to all of the logs we collect.
+This exercise will have two parts, building on the previous example.
+Let's start by adding an `os` label (just like the Prometheus example) to all of the logs we collect.

Modify the following snippet to add the label `os` with the value of the `os` constant. 
@@ -171,7 +169,10 @@ loki.write "local_loki" {
```

{{< admonition type="note" >}}
-You can use the [loki.relabel](https://grafana.com/docs/agent//flow/reference/components/loki.relabel) component to relabel and add labels, just like you can with the [prometheus.relabel](https://grafana.com/docs/agent//flow/reference/components/prometheus.relabel) component.
+You can use the [loki.relabel][] component to relabel and add labels, just like you can with the [prometheus.relabel][] component.
+
+[loki.relabel]: ../../../reference/components/loki.relabel/
+[prometheus.relabel]: ../../../reference/components/prometheus.relabel/
{{< /admonition >}}

Once you have your completed configuration, run {{< param "PRODUCT_ROOT_NAME" >}} and execute the following:
@@ -182,9 +183,11 @@ echo 'level=warn msg="WARN: This is a warn level log!"' >> /tmp/flow-logs/log.lo
echo 'level=debug msg="DEBUG: This is a debug level log!"' >> /tmp/flow-logs/log.log
```

-Navigate to [localhost:3000/explore](http://localhost:3000/explore) and switch the Datasource to `Loki`. Try querying for `{filename="/tmp/flow-logs/log.log"}` and see if you can find the new label!
+Navigate to [localhost:3000/explore][] and switch the Datasource to `Loki`.
+Try querying for `{filename="/tmp/flow-logs/log.log"}` and see if you can find the new label!

-Now that we have added new labels, we can also filter on them. Try querying for `{os!=""}`. You should only see the lines you added in the previous step.
+Now that we have added new labels, we can also filter on them. Try querying for `{os!=""}`.
+You should only see the lines you added in the previous step.

{{< collapse title="Solution" >}}

@@ -221,10 +224,12 @@ loki.write "local_loki" {
### Extract and add a Label from Logs

{{< admonition type="note" >}}
-This exercise is more challenging than the previous one. If you are having trouble, skip it and move to the next section, which will cover some of the concepts used here. 
You can always come back to this exercise later. +This exercise is more challenging than the previous one. +If you are having trouble, skip it and move to the next section, which will cover some of the concepts used here. +You can always come back to this exercise later. {{< /admonition >}} -This exercise will build on the previous one, though it's more involved. +This exercise will build on the previous one, though it's more involved. Let's say we want to extract the `level` from the logs and add it as a label. As a starting point, look at [loki.process][]. This component allows you to perform processing on logs, including extracting values from log contents. @@ -236,7 +241,7 @@ If needed, you can find a solution to the previous exercise at the end of the [p The `stage.logfmt` and `stage.labels` blocks for `loki.process` may be helpful. {{< /admonition >}} -Once you have your completed config, run {{< param "PRODUCT_ROOT_NAME" >}} and execute the following: +Once you have your completed configuration, run {{< param "PRODUCT_ROOT_NAME" >}} and execute the following: ```bash echo 'level=info msg="INFO: This is an info level log!"' >> /tmp/flow-logs/log.log @@ -244,7 +249,7 @@ echo 'level=warn msg="WARN: This is a warn level log!"' >> /tmp/flow-logs/log.lo echo 'level=debug msg="DEBUG: This is a debug level log!"' >> /tmp/flow-logs/log.log ``` -Navigate to [localhost:3000/explore](http://localhost:3000/explore) and switch the Datasource to `Loki`. Try querying for `{level!=""}` to see the new labels in action. +Navigate to [localhost:3000/explore][] and switch the Datasource to `Loki`. Try querying for `{level!=""}` to see the new labels in action. ![Grafana Explore view of example log lines, now with the extracted 'level' label](/media/docs/agent/screenshot-flow-by-example-log-line-levels.png) @@ -304,5 +309,16 @@ loki.write "local_loki" { ## Finishing up and next steps -You have learned the concepts of components, attributes, and expressions. 
You have also seen how to use some standard library components to collect metrics and logs. In the next tutorial, you will learn more about how to use the `loki.process` component to extract values from logs and use them. - +You have learned the concepts of components, attributes, and expressions. You have also seen how to use some standard library components to collect metrics and logs. +In the next tutorial, you will learn more about how to use the `loki.process` component to extract values from logs and use them. + +[First components and introducing the standard library]: ../first-components-and-stdlib/ +[prometheus.relabel]: ../../../reference/components/prometheus.relabel/ +[constants]: ../../../reference/stdlib/constants/ +[localhost:3000/explore]: http://localhost:3000/explore +[prometheus.relabel rule-block]: ../../../reference/components/prometheus.relabel/#rule-block +[local.file_match]: ../../../reference/components/local.file_match/ +[loki.source.file]: ../../../reference/components/loki.source.file/ +[loki.write]: ../../../reference/components/loki.write/ +[loki.relabel]: ../../../reference/components/loki.relabel/ +[loki.process]: ../../../reference/components/loki.process/ diff --git a/docs/sources/tutorials/flow-by-example/processing-logs/index.md b/docs/sources/tutorials/flow-by-example/processing-logs/index.md index 327b40716c..cb194b8d9b 100644 --- a/docs/sources/tutorials/flow-by-example/processing-logs/index.md +++ b/docs/sources/tutorials/flow-by-example/processing-logs/index.md @@ -1,10 +1,7 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/processing-logs/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/processing-logs/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/processing-logs/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/processing-logs/ -canonical: 
https://grafana.com/docs/agent/latest/flow/tutorials/flow-by-example/processing-logs/ +- ./tutorials/flow-by-example/processing-logs/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/flow-by-example/processing-logs/ description: Learn how to process logs title: Processing Logs weight: 40 @@ -19,7 +16,7 @@ It covers using `loki.source.api` to receive logs over HTTP, processing and filt **Recommended reading** -- Optional: [loki.source.api](https://grafana.com/docs/agent//flow/reference/components/loki.source.api/) +- Optional: [loki.source.api][] The `loki.source.api` component can receive logs over HTTP. It can be useful for receiving logs from other {{< param "PRODUCT_ROOT_NAME" >}}s or collectors, or directly from applications that can send logs over HTTP, and then processing them centrally. @@ -51,9 +48,9 @@ Next, you can configure the `loki.process` and `loki.write` components. **Recommended reading** -- [loki.process#stage.drop](https://grafana.com/docs/agent//flow/reference/components/loki.process/#stagedrop-block) -- [loki.process#stage.json](https://grafana.com/docs/agent//flow/reference/components/loki.process/#stagejson-block) -- [loki.process#stage.labels](https://grafana.com/docs/agent//flow/reference/components/loki.process/#stagelabels-block) +- [loki.process#stage.drop][] +- [loki.process#stage.json][] +- [loki.process#stage.labels][] ```river // Let's send and process more logs! @@ -142,7 +139,8 @@ In subsequent stages, you can use the extracted map to filter logs, add or remov `stage.*` blocks are executed in the order they appear in the component, top down. {{< /admonition >}} -Let's use an example log line to illustrate this, then go stage by stage, showing the contents of the extracted map. Here is our example log line: +Let's use an example log line to illustrate this, then go stage by stage, showing the contents of the extracted map. 
+Here is our example log line: ```json { @@ -166,10 +164,11 @@ stage.json { } ``` -This stage parses the log line as JSON, extracts two values from it, `log` and `timestamp`, and puts them into the extracted map with keys `log` and `ts`, respectively. +This stage parses the log line as JSON, extracts two values from it, `log` and `timestamp`, and puts them into the extracted map with keys `log` and `ts`, respectively. {{< admonition type="note" >}} -Supplying an empty string is shorthand for using the same key as in the input log line (so `log = ""` is the same as `log = "log"`). The _keys_ of the `expressions` object end up as the keys in the extracted map, and the _values_ are used as keys to look up in the parsed log line. +Supplying an empty string is shorthand for using the same key as in the input log line (so `log = ""` is the same as `log = "log"`). +The _keys_ of the `expressions` object end up as the keys in the extracted map, and the _values_ are used as keys to look up in the parsed log line. {{< /admonition >}} If this were Python, it would be roughly equivalent to: @@ -293,7 +292,7 @@ stage.drop { This stage acts on the `is_secret` value in the extracted map, which is a value that you extracted in the previous stage. This stage drops the log line if the value of `is_secret` is `"true"` and does not modify the extracted map. There are many other ways to filter logs, but this is a simple example. -Refer to the [loki.process#stage.drop](https://grafana.com/docs/agent//flow/reference/components/loki.process/#stagedrop-block) documentation for more information. +Refer to the [loki.process#stage.drop][] documentation for more information. ### Stage 5 @@ -320,12 +319,12 @@ stage.output { This stage uses the `log_line` value in the extracted map to set the actual log line that is forwarded to Loki. Rather than sending the entire JSON blob to Loki, you are only sending `original_log_line["log"]["message"]`, along with some labels that you attached. 
-This stage does not modify the extracted map.
+This stage doesn't modify the extracted map.

## Putting it all together

-Now that you have all of the pieces, let's run the {{< param "PRODUCT_ROOT_NAME" >}} and send some logs to it.
-Modify `config.river` with the config from the previous example and start the {{< param "PRODUCT_ROOT_NAME" >}} with:
+Now that you have all of the pieces, let's run {{< param "PRODUCT_ROOT_NAME" >}} and send some logs to it.
+Modify `config.river` with the configuration from the previous example and start {{< param "PRODUCT_ROOT_NAME" >}} with:

```bash
/path/to/agent run config.river
```
@@ -344,7 +343,7 @@ curl localhost:9999/loki/api/v1/raw -XPOST -H "Content-Type: application/json" -
```

Now that you have sent some logs, let's see how they look in Grafana.
-Navigate to [localhost:3000/explore](http://localhost:3000/explore) and switch the Datasource to `Loki`.
+Navigate to [localhost:3000/explore][] and switch the Datasource to `Loki`.
Try querying for `{source="demo-api"}` and see if you can find the logs you sent.

Try playing around with the values of `"level"`, `"message"`, `"timestamp"`, and `"is_secret"` and see how the logs change.
@@ -355,12 +354,12 @@ You can also try adding more stages to the `loki.process` component to extract m

## Exercise

Since you are already using Docker and Docker exports logs, let's get those logs into Loki.
-You can refer to the [discovery.docker](https://grafana.com/docs/agent//flow/reference/components/discovery.docker/) and [loki.source.docker](https://grafana.com/docs/agent//flow/reference/components/loki.source.docker/) documentation for more information.
+You can refer to the [discovery.docker][] and [loki.source.docker][] documentation for more information.

To ensure proper timestamps and other labels, make sure you use a `loki.process` component to process the logs before sending them to Loki. 
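+
+One possible starting point for the discovery and collection half of this exercise is sketched below. The Docker socket path and component labels are assumptions, and the `loki.process` and `loki.write` parts are still up to you.
+
+```river
+// Sketch: discover running Docker containers through the local socket.
+discovery.docker "getting_started" {
+    host = "unix:///var/run/docker.sock"
+}
+
+// Collect logs from the discovered containers. In a complete solution,
+// forward_to would point at a loki.process component before loki.write.
+loki.source.docker "getting_started" {
+    host       = "unix:///var/run/docker.sock"
+    targets    = discovery.docker.getting_started.targets
+    forward_to = [loki.write.local_loki.receiver]
+}
+```
+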
-Although you have not used it before, let's use a `discovery.relabel` component to attach the container name as a label to the logs. -You can refer to the [discovery.relabel](https://grafana.com/docs/agent//flow/reference/components/discovery.relabel/) documentation for more information. +Although you haven't used it before, let's use a `discovery.relabel` component to attach the container name as a label to the logs. +You can refer to the [discovery.relabel][] documentation for more information. The `discovery.relabel` component is very similar to the `prometheus.relabel` component, but is used to relabel discovered targets rather than metrics. {{< collapse title="Solution" >}} @@ -404,4 +403,13 @@ loki.write "local_loki" { } ``` -{{< /collapse >}} \ No newline at end of file +{{< /collapse >}} + +[loki.source.api]: ../../../reference/components/loki.source.api/ +[loki.process#stage.drop]: ../../../reference/components/loki.process/#stagedrop-block +[loki.process#stage.json]: ../../../reference/components/loki.process/#stagejson-block +[loki.process#stage.labels]: ../../../reference/components/loki.process/#stagelabels-block +[localhost:3000/explore]: http://localhost:3000/explore +[discovery.docker]: ../../../reference/components/discovery.docker/ +[loki.source.docker]: ../../../reference/components/loki.source.docker/ +[discovery.relabel]: ../../../reference/components/discovery.relabel/