diff --git a/docs/developer/release/10-update-otel.md b/docs/developer/release/10-update-otel.md index 244ec8cb64..9ff6d7b92e 100644 --- a/docs/developer/release/10-update-otel.md +++ b/docs/developer/release/10-update-otel.md @@ -1,6 +1,6 @@ # Update Open Telemetry Contrib -Grafana Agent is listed as a distribution of the OpenTelemetry Collector. If there are any new OTel components that Grafana Agent needs to be associated with, then open a PR in [OpenTelemetry Contrib](https://github.com/open-telemetry/opentelemetry-collector-contrib) and add the Agent to the list of distributions. [Example](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/653ab064bb797ed2b4ae599936a7b9cfdad18a29/receiver/kafkareceiver/README.md?plain=1#L7) +Grafana Alloy is listed as a distribution of the OpenTelemetry Collector. If there are any new OTel components that Grafana Alloy needs to be associated with, then open a PR in [OpenTelemetry Contrib](https://github.com/open-telemetry/opentelemetry-collector-contrib) and add Alloy to the list of distributions. [Example](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/653ab064bb797ed2b4ae599936a7b9cfdad18a29/receiver/kafkareceiver/README.md?plain=1#L7) ## Steps @@ -8,6 +8,6 @@ Grafana Agent is listed as a distribution of the OpenTelemetry Collector. If the 2. Create a PR in OpenTelemetry Contrib. -3. Find those OTEL components in contrib and add Grafana Agent as a distribution. +3. Find those OTEL components in contrib and add Grafana Alloy as a distribution. 4. Tag Juraci ([jpkrohling](https://github.com/jpkrohling)) on the PR. diff --git a/docs/developer/release/7-test-release.md b/docs/developer/release/7-test-release.md index 7728e49257..dbc7c7cce2 100644 --- a/docs/developer/release/7-test-release.md +++ b/docs/developer/release/7-test-release.md @@ -6,4 +6,4 @@ Validate the new version is working by running it. 1. Validate performance metrics are consistent with the prior version. -2. 
Validate Flow components are healthy. \ No newline at end of file +2. Validate components are healthy. diff --git a/docs/developer/release/9-announce-release.md b/docs/developer/release/9-announce-release.md index 4a117ecd42..18cf5bfa0d 100644 --- a/docs/developer/release/9-announce-release.md +++ b/docs/developer/release/9-announce-release.md @@ -4,12 +4,12 @@ You made it! This is the last step for any release. ## Steps -1. Announce the release in the Grafana Labs Community #agent channel. +1. Announce the release in the Grafana Labs Community `#alloy` channel. - Example RCV message: ``` - :grafana-agent: Grafana Agent RELEASE_VERSION is now available! :grafana-agent: + :alloy: Grafana Alloy RELEASE_VERSION is now available! :alloy: Release: https://github.com/grafana/alloy/releases/tag/RELEASE_VERSION Full changelog: https://github.com/grafana/alloy/blob/RELEASE_VERSION/CHANGELOG.md We'll be publishing STABLE_RELEASE_VERSION on STABLE_RELEASE_DATE if we haven't heard about any major issues. @@ -18,7 +18,7 @@ You made it! This is the last step for any release. - Example Stable Release or Patch Release message: ``` - :grafana-agent: Grafana Agent RELEASE_VERSION is now available! :grafana-agent: + :alloy: Grafana Alloy RELEASE_VERSION is now available! :alloy: Release: https://github.com/grafana/alloy/releases/tag/RELEASE_VERSION Full changelog: https://github.com/grafana/alloy/blob/RELEASE_VERSION/CHANGELOG.md ``` diff --git a/docs/developer/release/README.md b/docs/developer/release/README.md index 5486eed7f4..123eb1505d 100644 --- a/docs/developer/release/README.md +++ b/docs/developer/release/README.md @@ -2,7 +2,7 @@ This document describes the process of creating a release for the `grafana/alloy` repo. A release includes release assets for everything inside -the repository, including Grafana Agent and Grafana Agent Operator. +the repository. The processes described here are for v0.24.0 and above. 
diff --git a/docs/developer/release/concepts/version.md b/docs/developer/release/concepts/version.md index 95f8fd2744..d7054f5e25 100644 --- a/docs/developer/release/concepts/version.md +++ b/docs/developer/release/concepts/version.md @@ -1,6 +1,6 @@ # Version -Grafana Agent uses Semantic Versioning. The next version can be determined +Grafana Alloy uses Semantic Versioning. The next version can be determined by looking at the current version and incrementing it. ## Version @@ -11,7 +11,7 @@ To determine the `VERSION` for a Release Candidate, append `-rc.#` to the Semant - Examples - For example, `v0.31.0` is the Stable Release `VERSION` for the v0.31.0 release. - - For example, `v0.31.1` is the first Patch Release `VERSION` for the v0.31.0 release. + - For example, `v0.31.1` is the first Patch Release `VERSION` for the v0.31.0 release. - For example, `v0.31.0-rc.0` is the first Release Candidate `VERSION` for the v0.31.0 release. ## Version Prefix @@ -19,4 +19,4 @@ To determine the `VERSION` for a Release Candidate, append `-rc.#` to the Semant To determine the `VERSION PREFIX`, use only the major and minor version `vX.Y`. 
- Examples - - `v0.31` \ No newline at end of file + - `v0.31` diff --git a/docs/developer/updating-otel.md b/docs/developer/updating-otel.md index e0ea00f563..9af13f1191 100644 --- a/docs/developer/updating-otel.md +++ b/docs/developer/updating-otel.md @@ -1,6 +1,6 @@ # Updating OpenTelemetry Collector dependencies -The Agent depends on various OpenTelemetry (Otel) modules such as these: +Alloy depends on various OpenTelemetry (Otel) modules such as these: ``` github.com/open-telemetry/opentelemetry-collector-contrib/exporter/jaegerexporter github.com/open-telemetry/opentelemetry-collector-contrib/extension/sigv4authextension @@ -20,14 +20,14 @@ The dependencies mostly come from these repositories: Unfortunately, updating Otel dependencies is not straightforward: -* Some of the modules in `opentelemetry-collector` come from a [grafana/opentelemetry-collector](https://github.com/grafana/opentelemetry-collector) fork. - * This is mostly so that we can include metrics of Collector components with the metrics shown under the Agent's `/metrics` endpoint. -* All Collector and Collector-Contrib dependencies should be updated at the same time, because they +* Some of the modules in `opentelemetry-collector` come from a [grafana/opentelemetry-collector](https://github.com/grafana/opentelemetry-collector) fork. + * This is mostly so that we can include metrics of Collector components with the metrics shown under Alloy's `/metrics` endpoint. +* All Collector and Collector-Contrib dependencies should be updated at the same time, because they are kept in sync on the same version. * E.g. if we use `v0.85.0` of `go.opentelemetry.io/collector`, we also use `v0.85.0` of `spanmetricsprocessor`. * This is in line with how the Collector itself imports dependencies. * It helps us avoid bugs. - * It makes it easier to communicate to customers the version of Collector which we use in the Agent. 
+ * It makes it easier to communicate to customers the version of Collector which we use in Alloy. * Unfortunately, updating everything at once makes it tedious to check if any of our docs or code need updating due to changes in Collector components. A lot of these checks are manual - for example, cross checking the Otel config and Otel documentation between versions. * There are some exceptions for modules which don't follow the same versioning. For example, `collector/pdata` is usually on a different version, like `v1.0.0-rcv0013`. @@ -36,12 +36,12 @@ Unfortunately, updating Otel dependencies is not straightforward: ### Update the Grafana fork of Otel Collector 1. Create a new release branch from the [opentelemetry release branch](https://github.com/open-telemetry/opentelemetry-collector) with a `-grafana` suffix under [grafana/opentelemetry-collector](https://github.com/grafana/opentelemetry-collector). For example, if porting branch `v0.86.0`, make a branch under the fork repo called `0.86-grafana`. -2. Check which branch of the fork repo the Agent currently uses. +2. Check which branch of the fork repo Alloy currently uses. 3. See what commits were pushed onto that branch to customize it. 4. Create a PR to cherry-pick the same commits to the new branch. See the [changes to the 0.85 branch](https://github.com/grafana/opentelemetry-collector/pull/8) for an example PR. 5. Run `make` on the branch to make sure it builds and that the tests pass. -### Update the Agent's dependencies +### Update Alloy's dependencies 1. Make sure we use the same version of Collector and Collector-Contrib for all relevant modules. For example, if we use version `v0.86.0` of Collector, we should also use version `v0.86.0` for all Contrib modules. 2. Update the `replace` directives in the go.mod file to point to the latest commit of the forked release branch. 
Use a command like this: @@ -51,56 +51,50 @@ Unfortunately, updating Otel dependencies is not straightforward: Repeat this for any other modules where a replacement is necessary. For debugging purposes, you can first have the replace directive pointing to your local repo. 3. Note that sometimes Collector depends on packages with "rc" versions such as `v1.0.0-rcv0013`. This is ok, as long as the go.mod of Collector also references the same versions - for example, [pdata](https://github.com/open-telemetry/opentelemetry-collector/blob/v0.81.0/go.mod#L25) and [featuregate](https://github.com/open-telemetry/opentelemetry-collector/blob/v0.81.0/go.mod#L24). -### Update otelcol Flow components +### Update otelcol Alloy components -1. Note which Otel components are in use by the Agent. - * For every "otelcol" Flow component there is usually a corresponding Collector component. - * For example, the Otel component used by [otelcol.auth.sigv4](https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.auth.sigv4/) is [sigv4auth](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/sigv4authextension). +1. Note which Otel components are in use by Alloy. + * For every "otelcol" Alloy component there is usually a corresponding Collector component. + * For example, the Otel component used by [otelcol.auth.sigv4](https://grafana.com/docs/alloy/latest/reference/components/otelcol.auth.sigv4/) is [sigv4auth](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/sigv4authextension). * In some cases we don't use the corresponding Collector component: - * For example, [otelcol.receiver.prometheus](https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.receiver.prometheus/) and [otelcol.exporter.prometheus](https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.exporter.prometheus/). 
+ * For example, [otelcol.receiver.prometheus](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.prometheus/) and [otelcol.exporter.prometheus](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.prometheus/). * Those components usually have a note like this: > NOTE: otelcol.exporter.prometheus is a custom component unrelated to the prometheus exporter from OpenTelemetry Collector. 2. Make a list of the components which have changed since the previously used version. 1. Go through the changelogs of both [Collector](https://github.com/open-telemetry/opentelemetry-collector/releases) and [Collector-Contrib](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases). - 2. If a component which is in use by the Agent has changed, note it down. + 2. If a component which is in use by Alloy has changed, note it down. 3. For each Otel component which has changed, compare how they changed. 1. Compare the old and new version of Otel's documentation. 2. Compare the config.go file to see if new parameters were added. -4. Update the Agent's code and documentation where needed. +4. Update Alloy's code and documentation where needed. * Pay attention to stability labels: - * Never lower the stability label in the Agent. E.g. if the stability - of an Otel component is "alpha", there are cases where it might be - stable in the Agent and that is ok. Stability labels in the Agent can + * Never lower the stability label in Alloy. E.g. if the stability + of an Otel component is "alpha", there are cases where it might be + stable in Alloy and that is ok. Stability labels in Alloy can be increased, but not decreased. - * If the stability level of an Otel component has increased, consult - the rest of the team on whether the stability of the corresponding - Agent component should also be increased. 
- * Update the [documentation](https://grafana.com/docs/agent/latest/static/configuration/traces-config/)
- for Static mode's Tracing subsystem:.
- * Static mode's Tracing subsystem code should generally not updated to
- have new parameters which have been added to the Otel components recently.
- If you do think it should be updated, check with the rest of the team on
- whether it is really necessary.
- * Search the Agent repository for the old version (e.g. "0.87") to find code and
+ * If the stability level of an Otel component has increased, consult
+ the rest of the team on whether the stability of the corresponding
+ Alloy component should also be increased.
+ * Search the Alloy repository for the old version (e.g. "0.87") to find code and
documentation which also needs updating.
* Update the `OTEL_VERSION` parameter in the `docs/sources/_index.md.t` file. Then run `make generate-versioned-files`, which will update `docs/sources/_index.md`.
-5. Some Agent components reuse OpenTelemetry code, but do not import it:
- * `otelcol.extension.jaeger_remote_sampling`: a lot of this code has
- been copy-pasted from Otel and modified slightly to fit the Agent's needs.
- This component needs to be updated by copy-pasting the new Otel code
+5. Some Alloy components reuse OpenTelemetry code, but do not import it:
+ * `otelcol.extension.jaeger_remote_sampling`: a lot of this code has
+ been copy-pasted from Otel and modified slightly to fit Alloy's needs.
+ This component needs to be updated by copy-pasting the new Otel code
and modifying it again.
6. Note that we don't port every single config option which OpenTelemetry Collector exposes.
For example, Collector's [oauth2client extension](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/extension/oauth2clientauthextension) supports `client_id_file` and `client_secret_file`
- parameters.
However, Agent's [otelcol.auth.oauth2](https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.auth.oauth2/) does not support them because the idiomatic way of doing the same
- in the Agent is to use the local.file component.
+ parameters. However, Alloy's [otelcol.auth.oauth2](https://grafana.com/docs/alloy/latest/reference/components/otelcol.auth.oauth2/) does not support them because the idiomatic way of doing the same
+ in Alloy is to use the `local.file` component.
7. When updating semantic conventions, check those the changelogs of those repositories for breaking changes:
* [opentelemetry-go](https://github.com/open-telemetry/opentelemetry-go/releases)
* [semantic-conventions](https://github.com/open-telemetry/semantic-conventions/releases)
* [opentelemetry-specification](https://github.com/open-telemetry/opentelemetry-specification/releases)
-You can refer to [PR #5290](https://github.com/grafana/agent/pull/5290)
-for an example on how to update the Agent.
+You can refer to [PR grafana/agent#5290](https://github.com/grafana/agent/pull/5290)
+for an example of how to update Alloy.
## Testing
@@ -108,117 +102,22 @@ for an example on how to update the Agent.
You can use the resources in the [Tempo repository](https://github.com/grafana/tempo/tree/main/example/docker-compose/agent) to create a local source of traces using k6. You can also start your own Tempo and Grafana instances.
-1. Comment out the "agent" and "prometheus" sections in the [docker-compose](https://github.com/grafana/tempo/blob/main/example/docker-compose/agent/docker-compose.yaml). We don't need this - instead, we will start our own locally built Agent.
+1. Comment out the "agent" and "prometheus" sections in the [docker-compose](https://github.com/grafana/tempo/blob/main/example/docker-compose/agent/docker-compose.yaml). We don't need these - instead, we will start our own locally built Alloy collector.
2.
Change the "k6-tracing" endpoint to send traces on the localhost, outside of the Docker container. * For example, use `ENDPOINT=host.docker.internal:4320`. - * Then our local Agent should be configured to accept traces on `0.0.0.0:4320`. + * Then our local Alloy should be configured to accept traces on `0.0.0.0:4320`. 3. Optionally, e.g. if you prefer Grafana Cloud, comment out the "tempo" and "grafana" sections of the docker-compose file. -4. Add a second k6 instance if needed - for example, when testing a Static Agent which has 2 Traces instances. +4. Add a second k6 instance if needed - for example, if multiple receivers are configured. -### Static mode - -The [tracing subsystem](https://grafana.com/docs/agent/latest/static/configuration/traces-config/) is the only part of Static mode which uses Otel. Try to test as many features of it using a config file like this one: - -
- Example Static config - -``` -server: - log_level: debug - -logs: - positions_directory: "/Users/ExampleUser/Desktop/otel_test/test_log_pos_dir" - configs: - - name: "grafanacloud-oteltest-logs" - clients: - - url: "https://logs-prod-008.grafana.net/loki/api/v1/push" - basic_auth: - username: "USERNAME" - password: "PASSWORD" - -traces: - configs: - - name: firstConfig - receivers: - otlp: - protocols: - grpc: - endpoint: "0.0.0.0:4320" - remote_write: - - endpoint: tempo-prod-06-prod-gb-south-0.grafana.net:443 - basic_auth: - username: "USERNAME" - password: "PASSWORD" - batch: - timeout: 5s - send_batch_size: 100 - automatic_logging: - backend: "logs_instance" - logs_instance_name: "grafanacloud-oteltest-logs" - roots: true - spanmetrics: - handler_endpoint: "localhost:8899" - namespace: "otel_test_" - tail_sampling: - policies: - [ - { - name: test-policy-4, - type: probabilistic, - probabilistic: {sampling_percentage: 100} - }, - ] - service_graphs: - enabled: true - - name: secondConfig - receivers: - otlp: - protocols: - grpc: - endpoint: "0.0.0.0:4321" - remote_write: - - endpoint: tempo-prod-06-prod-gb-south-0.grafana.net:443 - basic_auth: - username: "USERNAME" - password: "PASSWORD" - batch: - timeout: 5s - send_batch_size: 100 - tail_sampling: - policies: - [ - { - name: test-policy-4, - type: probabilistic, - probabilistic: {sampling_percentage: 100} - }, - ] - service_graphs: - enabled: true - -``` - -
- -Run this file for two types of Agents - an upgraded one, and another one built using the codebase of the `main` branch. Check the following: - -* Open `localhost:12345/metrics` in your browser for both Agents. - * Are new metrics added? Mention them in the changelog. - * Are metrics missing? Did any metrics change names? If it's intended, mention them in the changelog and the upgrade guide. -* Try opening `localhost:8888/metrics` in your browser for the new Agent. 8888 is the Collector's default port for exposing metrics. Make sure this page doesn't display anything - the Agent should use port `12345` instead. -* Check the logs for errors or anything else that's suspicious. -* Check Tempo to make sure the traces were received. -* Check Loki to make sure the logs generated from traces got received. -* Check `localhost:8899/metrics` to make sure the span metrics are being generated. - -### Flow mode - -The "otelcol" [components](https://grafana.com/docs/agent/latest/flow/reference/components/) are the only part of Flow mode which uses Otel. Try to test as many of them as possible using a config file like this one: +The "otelcol" +[components](https://grafana.com/docs/alloy/latest/reference/components/) are +the only components which use OTel. Try to test as many of them as possible +using a config file like this one:
- Example Flow config
+ Example Alloy config
-```
+```alloy
otelcol.receiver.otlp "default" {
grpc {
endpoint = "0.0.0.0:4320"
@@ -278,9 +177,9 @@ otelcol.exporter.otlp "default" {
-Run this file for two types of Agents - an upgraded one, and another one built using the codebase of the `main` branch. Check the following:
+Run this file for two types of Alloy instances - an upgraded one, and another one built using the codebase of the `main` branch. Check the following:
-* Open `localhost:12345/metrics` in your browser for both Agents.
+* Open `localhost:12345/metrics` in your browser for both Alloy instances.
* Are new metrics added? Mention them in the changelog.
* Are metrics missing? Did any metrics change names? If it's intended, mention them in the changelog and the upgrade guide.
* Check the logs for errors or anything else that's suspicious.
diff --git a/docs/developer/windows/certificate_store/README.md b/docs/developer/windows/certificate_store/README.md
index b0d020c747..78e4ee368a 100644
--- a/docs/developer/windows/certificate_store/README.md
+++ b/docs/developer/windows/certificate_store/README.md
@@ -1,10 +1,10 @@
# Guide to setting up a Windows Server for Certificate Testing
-This guide is used to set up a Windows Server for Windows Store Certificate Testing. This guide assumes you have downloaded a Windows 2022 Server ISO and have installed the Windows Server onto a virtual machine. Certificate Templates can only be installed on Windows Server and it must enabled as an Enterprise Certificate Authority. This guide is NOT meant to be a guide on how to set up a Windows Server for production use, and is meant only for setting up an environment to test the Grafana Agent and certificate store.
+This guide is used to set up a Windows Server for Windows Store Certificate Testing. This guide assumes you have downloaded a Windows 2022 Server ISO and have installed the Windows Server onto a virtual machine. Certificate Templates can only be installed on Windows Server and it must be enabled as an Enterprise Certificate Authority.
This guide is NOT meant to be a guide on how to set up a Windows Server for production use, and is meant only for setting up an environment to test Grafana Alloy and the certificate store.
## Prerequisites
-* The install should be fresh with no server roles defined or installed.
+* The install should be fresh with no server roles defined or installed.
* You should be logged in via an administrator account.
## Set up as domain controller
@@ -118,9 +118,9 @@ For this setup we are using a one-node Domain Controller set up as the Enterpris
4. Under `Export File Format` ensure that `Include all certificates in the certificate path if possible`.
5. Export it to a file.
-## Setup Grafana Agent
+## Setup Grafana Alloy
-1. Open the Agent configuration file.
+1. Open the Alloy configuration file.
2. Open `Certificate Templates Console`, right-click `Certstore Template` and find the Object identifier.
![](./images/object_identifier.png)
@@ -132,7 +132,7 @@ For this setup we are using a one-node Domain Controller set up as the Enterpris
6. Configuration should look like this.
![](./images/config.png)
-7. Start Agent.
+7. Start Alloy.
## Copy certificate to browser
diff --git a/docs/developer/writing-flow-component-documentation.md b/docs/developer/writing-component-documentation.md
similarity index 96%
rename from docs/developer/writing-flow-component-documentation.md
rename to docs/developer/writing-component-documentation.md
index e147052b6d..e169a7b33f 100644
--- a/docs/developer/writing-flow-component-documentation.md
+++ b/docs/developer/writing-component-documentation.md
@@ -136,7 +136,7 @@ For example:
````markdown
## Usage
-```river
+```alloy
pyroscope.scrape "LABEL" {
targets = TARGET_LIST
forward_to = RECEIVER_LIST
@@ -228,8 +228,8 @@ relevant to the arguments.
If there is component behavior relevant to a specific
block, describe that component behavior in the documentation section for that
block instead.
-It is acceptable to provide Flow configuration snippets for the arguments -if it aids documentation. +It is acceptable to provide configuration snippets for the arguments if it aids +documentation. ### Blocks @@ -357,8 +357,8 @@ not provided, their Go-inherited defaults will not display in the component UI page. ``` -It is acceptable for block sections to provide Flow configuration snippets for -the block if it aids documentation. +It is acceptable for block sections to provide configuration snippets for the +block if it aids documentation. ### Exported fields @@ -417,7 +417,7 @@ healthy. ### Debug information The Debug information section describes debug information exposed in the -Grafana Agent Flow UI. The section starts with an `h2` header called Debug +Grafana Alloy UI. The section starts with an `h2` header called Debug information. If the component does not expose any debug information, the content of the @@ -463,8 +463,8 @@ should always prefix the metrics table. ### Examples The Examples section provides copy-and-paste Alloy pipelines which use the -Flow component. The section starts with an `h2` header called Examples. If -there is only one example, call the section Example instead. +component. The section starts with an `h2` header called Examples. If there is +only one example, call the section Example instead. If there is more than one example, each example should have an `h3` header containing a descriptive name. For example: @@ -486,7 +486,7 @@ followed by the example in a code block. For example: This example reads a JSON array of objects from an endpoint and uses them for the set of scrape targets: -```river +```alloy remote.http "targets" { url = TARGETS_URL } @@ -516,7 +516,7 @@ written in all uppercase and underscore delimited, for example: `API_URL`. Examples of the new component should avoid using placeholders and instead use realistic example values. 
For example, if documenting a `prometheus.scrape` component, use:
- ```river
+ ```alloy
remote.http "targets" {
url = "http://localhost:8080/targets"
}
@@ -543,7 +543,7 @@ source for the clarifying comment.
Clarifying comments must only be used be supplementary
information to reenforce knowledge, and not as the primary source of
information.
-Examples should be formatted using the [grafana-agent fmt](https://grafana.com/docs/agent/latest/flow/reference/cli/fmt/) command.
+Examples should be formatted using the [alloy fmt](https://grafana.com/docs/alloy/latest/reference/cli/fmt/) command.
## Exceptions
@@ -569,6 +569,6 @@ doc page, but because it contains yaml config for the Collector, users might get
how this maps to Alloy and it is better not to link to it. In the future we
could try to move this information from [transformprocessor][] to the
[OTTL Context][ottl context] doc.
-[loki.source.podlogs]: ../sources/flow/reference/components/loki.source.podlogs.md
-[otelcol.processor.transform]: ../sources/flow/reference/components/otelcol.processor.transform.md
-[ottl context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/ottl/contexts/README.md
\ No newline at end of file
+[loki.source.podlogs]: ../sources/reference/components/loki.source.podlogs.md
+[otelcol.processor.transform]: ../sources/reference/components/otelcol.processor.transform.md
+[ottl context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/ottl/contexts/README.md
diff --git a/docs/developer/writing-docs.md b/docs/developer/writing-docs.md
index b4c5be5830..a0bb0531b8 100644
--- a/docs/developer/writing-docs.md
+++ b/docs/developer/writing-docs.md
@@ -1,21 +1,23 @@
# Writing documentation
This page is a collection of guidelines and best practices for writing
-documentation for Grafana Agent.
+documentation for Grafana Alloy.
-## Flow Mode documentation organisation +## Documentation organisation -The Flow mode documentation is organized into the following sections: +The documentation is organized into the following sections: -### Get started +### Introduction The best place to start for new users who are onboarding. -We showcase the features of the Agent and help users decide when to use Flow and +We showcase the features of Alloy and help users decide when to use it and whether it's a good fit for them. -This section includes how to quickly install the agent and get hands-on -experience with a simple "hello world" configuration. +### Get started + +This section includes how to quickly install Alloy and get hands-on experience +with a simple "hello world" configuration. ### Concepts @@ -24,7 +26,7 @@ As defined in the [writer's toolkit][]: > Provides an overview and background information. Answers the question “What is > it?”. -It helps users to learn the concepts of the Agent used throughout the +It helps users to learn the concepts of Alloy used throughout the documentation. ### Tutorials @@ -46,7 +48,7 @@ As defined in the [writer's toolkit][]: > Provides numbered steps that describe how to achieve an outcome. Answers the > question “How do I?”. -However, in the Agent documentation we don't mandate the use of numbered steps. +However, in Alloy documentation we don't mandate the use of numbered steps. We do expect that tasks allow users to achieve a specific outcome by following the page step by step, but we don't require numbered steps because some tasks branch out into multiple paths, and numbering the steps would look more @@ -62,12 +64,12 @@ Instead, they should link to relevant Reference pages. ### Reference -The Reference section is a collection of pages that describe the Agent -components and their configuration options exhaustively. This is a more narrow -definition than the one found in the [writer's toolkit][]. 
+The Reference section is a collection of pages that describe Alloy components +and their configuration options exhaustively. This is a more narrow definition +than the one found in the [writer's toolkit][]. We have a dedicated page with the best practices for writing Reference -docs: [writing flow components documentation][writing-flow-docs]. +docs: [writing components documentation][writing-docs]. This is our most detailed documentation, and it should be used as a source of truth. The contents of the Reference pages should not be repeated in other parts @@ -75,8 +77,8 @@ of the documentation. ### Release notes -Release notes contain all the notable changes in the Agent. They are updated as -part of the release process. +Release notes notify users of changes in Alloy that require user action when +upgrading. They are updated as part of the release process. [writer's toolkit]: https://grafana.com/docs/writers-toolkit/structure/topic-types/ -[writing-flow-docs]: writing-flow-component-documentation.md +[writing-docs]: writing-component-documentation.md diff --git a/docs/developer/writing-exporter-flow-components.md b/docs/developer/writing-exporter-components.md similarity index 72% rename from docs/developer/writing-exporter-flow-components.md rename to docs/developer/writing-exporter-components.md index 401b21b0d5..b07da3bf33 100644 --- a/docs/developer/writing-exporter-flow-components.md +++ b/docs/developer/writing-exporter-components.md @@ -1,13 +1,6 @@ -# Create Prometheus Exporter Flow Components +# Create Prometheus Exporter Components -This guide will walk you through the process of creating a new Prometheus exporter Flow component and best practices for implementing it. - -It is required that the exporter has an existing [Agent integration](../sources/static/configuration/integrations/_index.md) in order to wrap it as a Flow component. In the future, we will drop this requirement and Flow components will expose the logic of the exporter directly. 
-
-Use the following exporters as a reference:
-- [process_exporter](../../component/prometheus/exporter/process/process.go) - [documentation](../sources/flow/reference/components/prometheus.exporter.process.md)
-- [blackbox_exporter](../../component/prometheus/exporter/blackbox/blackbox.go) - [documentation](../sources/flow/reference/components/prometheus.exporter.blackbox.md)
-- [node_exporter](../../component/prometheus/exporter/unix/unix.go) - [documentation](../sources/flow/reference/components/prometheus.exporter.unix.md)
+This guide will walk you through the process of creating a new Prometheus exporter component and best practices for implementing it.
## Arguments (Configuration)
@@ -19,11 +12,11 @@ Use the following exporters as a reference:
The config would look like this using `matcher` block multiple times:
-```river
+```alloy
prometheus.exporter.process "example" {
    track_children = false
    matcher {
-        comm = ["grafana-agent"]
+        comm = ["alloy"]
    }
    matcher {
        comm = ["firefox"]
@@ -35,7 +28,7 @@ prometheus.exporter.process "example" {
The config would look like this:
-```river
+```alloy
prometheus.exporter.blackbox "example" {
  config_file = "blackbox_modules.yml"
@@ -70,8 +63,8 @@ prometheus.exporter.blackbox "example" {
## Registering the component
-In order to make the component visible for Agent Flow, it needs to be added to [all.go](../../component/all/all.go) file.
+In order to make the component visible to Alloy configurations, it needs to be added to the [all.go](../../component/all/all.go) file.
## Documentation
-Writing the documentation for the component is very important. Please, follow the [Writing documentation for Flow components](./writing-flow-component-documentation.md) and take a look at the existing documentation for other exporters.
+Writing the documentation for the component is very important.
Please follow the [Writing documentation for components](./writing-component-documentation.md) and take a look at the existing documentation for other exporters.
diff --git a/docs/sources/get-started/run/binary.md b/docs/sources/get-started/run/binary.md
index 259232f2b3..ce1bc98769 100644
--- a/docs/sources/get-started/run/binary.md
+++ b/docs/sources/get-started/run/binary.md
@@ -1,5 +1,5 @@
 ---
-canonical: https://grafana.com/docs/alloy/latest/flow/get-started/run/binary/
+canonical: https://grafana.com/docs/alloy/latest/get-started/run/binary/
 description: Learn how to run Grafana Alloy as a standalone binary
 menuTitle: Standalone
 title: Run Grafana Alloy as a standalone binary
diff --git a/docs/sources/reference/components/discovery.kubelet.md b/docs/sources/reference/components/discovery.kubelet.md
index e2a5d76bad..915c9d87ab 100644
--- a/docs/sources/reference/components/discovery.kubelet.md
+++ b/docs/sources/reference/components/discovery.kubelet.md
@@ -21,7 +21,7 @@ discovery.kubelet "LABEL" {
 
 ## Requirements
 
-* The Kubelet must be reachable from the `grafana-agent` pod network.
+* The Kubelet must be reachable from the `alloy` pod network.
 * Follow the [Kubelet authorization][] documentation to configure authentication to the Kubelet API.
 
 [Kubelet authorization]: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization
diff --git a/docs/sources/reference/components/discovery.relabel.md b/docs/sources/reference/components/discovery.relabel.md
index b98c6fa7ed..c8f2aae2ce 100644
--- a/docs/sources/reference/components/discovery.relabel.md
+++ b/docs/sources/reference/components/discovery.relabel.md
@@ -6,7 +6,7 @@ title: discovery.relabel
 
 # discovery.relabel
 
-In Flow, targets are defined as sets of key-value pairs called _labels_.
+In {{< param "PRODUCT_NAME" >}}, targets are defined as sets of key-value pairs called _labels_.
`discovery.relabel` rewrites the label set of the input targets by applying one or more relabeling rules. If no rules are defined, then the input targets are exported as-is. diff --git a/docs/sources/reference/components/loki.source.kubernetes_events.md b/docs/sources/reference/components/loki.source.kubernetes_events.md index 9bcd18be63..c8e61a13e6 100644 --- a/docs/sources/reference/components/loki.source.kubernetes_events.md +++ b/docs/sources/reference/components/loki.source.kubernetes_events.md @@ -156,8 +156,8 @@ to store read offsets, so that if a component or {{< param "PRODUCT_NAME" >}} re The data path is inside the directory configured by the `--storage.path` [command line argument][cmd-args]. -In the Static mode's [eventhandler integration][eventhandler-integration], a `cache_path` argument is used to configure a positions file. -In Flow mode, this argument is no longer necessary. +In Grafana Agent Static's [eventhandler integration][eventhandler-integration], a `cache_path` argument is used to configure a positions file. +In {{< param "PRODUCT_NAME" >}}, this argument is no longer necessary. [cmd-args]: ../../cli/run/ [eventhandler-integration]: https://grafana.com/docs/agent/latest/static/configuration/integrations/integrations-next/eventhandler-config/ diff --git a/docs/sources/reference/components/loki.source.podlogs.md b/docs/sources/reference/components/loki.source.podlogs.md index 9ad97b2705..cb6b4636cf 100644 --- a/docs/sources/reference/components/loki.source.podlogs.md +++ b/docs/sources/reference/components/loki.source.podlogs.md @@ -15,7 +15,7 @@ the Kubernetes API, tails logs from Kubernetes containers of Pods specified by the discovered them. `loki.source.podlogs` is similar to `loki.source.kubernetes`, but uses custom -resources rather than being fed targets from another Flow component. +resources rather than being fed targets from another component. 
{{< admonition type="note" >}} Unlike `loki.source.kubernetes`, it is not possible to distribute responsibility of collecting logs across multiple {{< param "PRODUCT_NAME" >}}s. diff --git a/docs/sources/reference/components/otelcol.connector.spanlogs.md b/docs/sources/reference/components/otelcol.connector.spanlogs.md index 02c590ccef..4be79bf967 100644 --- a/docs/sources/reference/components/otelcol.connector.spanlogs.md +++ b/docs/sources/reference/components/otelcol.connector.spanlogs.md @@ -292,4 +292,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/reference/components/otelcol.exporter.loadbalancing.md b/docs/sources/reference/components/otelcol.exporter.loadbalancing.md index f2e1927f2d..1ac5053f4d 100644 --- a/docs/sources/reference/components/otelcol.exporter.loadbalancing.md +++ b/docs/sources/reference/components/otelcol.exporter.loadbalancing.md @@ -262,11 +262,11 @@ Name | Type | Description ## Choose a load balancing strategy - + Different {{< param "PRODUCT_NAME" >}} components require different load-balancing strategies. -The use of `otelcol.exporter.loadbalancing` is only necessary for [stateful Flow components][stateful-and-stateless-components]. +The use of `otelcol.exporter.loadbalancing` is only necessary for [stateful components][stateful-and-stateless-components]. [stateful-and-stateless-components]: ../../../get-started/deploy-alloy/#stateful-and-stateless-components @@ -314,7 +314,7 @@ You could differentiate the series by adding an attribute such as `"collector.id The series from different {{< param "PRODUCT_NAME" >}}s can be aggregated using PromQL queries on the backed metrics database. If the metrics are stored in Grafana Mimir, cardinality issues due to `"collector.id"` labels can be solved using [Adaptive Metrics][adaptive-metrics]. 
-A simpler, more scalable alternative to generating service graph metrics in {{< param "PRODUCT_NAME" >}} is to generate them entirely in the backend database. +A simpler, more scalable alternative to generating service graph metrics in {{< param "PRODUCT_NAME" >}} is to generate them entirely in the backend database. For example, service graphs can be [generated][tempo-servicegraphs] in Grafana Cloud by the Tempo traces database. [tempo-servicegraphs]: https://grafana.com/docs/tempo/latest/metrics-generator/service_graphs/ @@ -668,7 +668,7 @@ The following example shows a Kubernetes configuration that sets up two sets of * A pool of load-balancer {{< param "PRODUCT_NAME" >}}s: * Spans are received from instrumented applications via `otelcol.receiver.otlp` * Spans are exported via `otelcol.exporter.loadbalancing`. - * The load-balancer {{< param "PRODUCT_NAME" >}}s will get notified by the Kubernetes API any time a pod + * The load-balancer {{< param "PRODUCT_NAME" >}}s will get notified by the Kubernetes API any time a pod is added or removed from the pool of sampling {{< param "PRODUCT_NAME" >}}s. * A pool of sampling {{< param "PRODUCT_NAME" >}}s: * The sampling {{< param "PRODUCT_NAME" >}}s do not need to run behind a headless service. diff --git a/docs/sources/reference/components/otelcol.processor.span.md b/docs/sources/reference/components/otelcol.processor.span.md index 1c05759103..99e46cf58c 100644 --- a/docs/sources/reference/components/otelcol.processor.span.md +++ b/docs/sources/reference/components/otelcol.processor.span.md @@ -88,14 +88,14 @@ At least one of these 2 fields must be set. `from_attributes` represents the attribute keys to pull the values from to generate the new span name: -* All attribute keys are required in the span to rename a span. +* All attribute keys are required in the span to rename a span. If any attribute is missing from the span, no rename will occur. 
* The new span name is constructed in order of the `from_attributes` specified in the configuration. `separator` is the string used to separate attributes values in the new span name. If no value is set, no separator is used between attribute -values. `separator` is used with `from_attributes` only; +values. `separator` is used with `from_attributes` only; it is not used with [to-attributes][]. ### to_attributes block @@ -110,12 +110,12 @@ Name | Type | Description `break_after_match` | `bool` | Configures if processing of rules should stop after the first match. | `false` | no Each rule in the `rules` list is a regex pattern string. -1. The span name is checked against each regex in the list. -2. If it matches, then all named subexpressions of the regex are extracted as attributes and are added to the span. -3. Each subexpression name becomes an attribute name and the subexpression matched portion becomes the attribute value. -4. The matched portion in the span name is replaced by extracted attribute name. -5. If the attributes already exist in the span then they will be overwritten. -6. The process is repeated for all rules in the order they are specified. +1. The span name is checked against each regex in the list. +2. If it matches, then all named subexpressions of the regex are extracted as attributes and are added to the span. +3. Each subexpression name becomes an attribute name and the subexpression matched portion becomes the attribute value. +4. The matched portion in the span name is replaced by extracted attribute name. +5. If the attributes already exist in the span then they will be overwritten. +6. The process is repeated for all rules in the order they are specified. 7. Each subsequent rule works on the span name that is the output after processing the previous rule. 
`break_after_match` specifies if processing of rules should stop after the first @@ -142,7 +142,7 @@ The supported values for `code` are: ### include block -The `include` block provides an option to include data being fed into the +The `include` block provides an option to include data being fed into the [name][] and [status][] blocks based on the properties of a span. The following arguments are supported: @@ -158,12 +158,12 @@ Name | Type | Description A match occurs if at least one item in the lists matches. -One of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified +One of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified with a non-empty value for a valid configuration. ### exclude block -The `exclude` block provides an option to exclude data from being fed into the +The `exclude` block provides an option to exclude data from being fed into the [name][] and [status][] blocks based on the properties of a span. The following arguments are supported: @@ -179,7 +179,7 @@ Name | Type | Description A match occurs if at least one item in the lists matches. -One of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified +One of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified with a non-empty value for a valid configuration. ### regexp block @@ -210,7 +210,7 @@ Name | Type | Description --------|--------------------|----------------------------------------------------------------- `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. -`input` accepts `otelcol.Consumer` OTLP-formatted data for traces telemetry signals. +`input` accepts `otelcol.Consumer` OTLP-formatted data for traces telemetry signals. Logs and metrics are not supported. ## Component health @@ -228,7 +228,7 @@ information. 
### Creating a new span name from attribute values This example creates a new span name from the values of attributes `db.svc`, -`operation`, and `id`, in that order, separated by the value `::`. +`operation`, and `id`, in that order, separated by the value `::`. All attribute keys need to be specified in the span for the processor to rename it. ```alloy @@ -245,21 +245,21 @@ otelcol.processor.span "default" { ``` For a span with the following attributes key/value pairs, the above -Flow configuration will change the span name to `"location::get::1234"`: +configuration will change the span name to `"location::get::1234"`: ```json -{ - "db.svc": "location", - "operation": "get", +{ + "db.svc": "location", + "operation": "get", "id": "1234" } ``` -For a span with the following attributes key/value pairs, the above -Flow configuration will not change the span name. +For a span with the following attributes key/value pairs, the above +configuration will not change the span name. This is because the attribute key `operation` isn't set: ```json -{ - "db.svc": "location", +{ + "db.svc": "location", "id": "1234" } ``` @@ -279,18 +279,18 @@ otelcol.processor.span "default" { ``` For a span with the following attributes key/value pairs, the above -Flow configuration will change the span name to `"locationget1234"`: +configuration will change the span name to `"locationget1234"`: ```json -{ - "db.svc": "location", - "operation": "get", +{ + "db.svc": "location", + "operation": "get", "id": "1234" } ``` ### Renaming a span name and adding attributes -Example input and output using the Flow configuration below: +Example input and output using the configuration below: 1. Let's assume input span name is `/api/v1/document/12345678/update` 2. The span name will be changed to `/api/v1/document/{documentId}/update` 3. A new attribute `"documentId"="12345678"` will be added to the span. 
@@ -360,7 +360,7 @@ otelcol.processor.span "default" { ### Setting a status depending on an attribute value -This example sets the status to success only when attribute `http.status_code` +This example sets the status to success only when attribute `http.status_code` is equal to `400`. ```alloy @@ -398,4 +398,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/reference/components/prometheus.exporter.process.md b/docs/sources/reference/components/prometheus.exporter.process.md index f709ed66eb..760d2a07a8 100644 --- a/docs/sources/reference/components/prometheus.exporter.process.md +++ b/docs/sources/reference/components/prometheus.exporter.process.md @@ -99,7 +99,7 @@ prometheus.exporter.process "example" { track_children = false matcher { - comm = ["grafana-agent"] + comm = ["alloy"] } } diff --git a/docs/sources/reference/components/prometheus.exporter.unix.md b/docs/sources/reference/components/prometheus.exporter.unix.md index a96fff1ae3..141e6e039f 100644 --- a/docs/sources/reference/components/prometheus.exporter.unix.md +++ b/docs/sources/reference/components/prometheus.exporter.unix.md @@ -281,8 +281,8 @@ debug metrics. The following table lists the available collectors that `node_exporter` brings bundled in. Some collectors only work on specific operating systems; enabling a -collector that is not supported by the host OS where Flow is running -is a no-op. +collector that is not supported by the host OS where {{< param "PRODUCT_NAME" >}} +is running is a no-op. Users can choose to enable a subset of collectors to limit the amount of metrics exposed by the `prometheus.exporter.unix` component, @@ -365,9 +365,9 @@ or disable collectors that are expensive to run. 
## Running on Docker/Kubernetes -When running Flow in a Docker container, you need to bind mount the filesystem, -procfs, and sysfs from the host machine, as well as set the corresponding -arguments for the component to work. +When running {{< param "PRODUCT_NAME" >}} in a Docker container, you need to +bind mount the filesystem, procfs, and sysfs from the host machine, as well as +set the corresponding arguments for the component to work. You may also need to add capabilities such as `SYS_TIME` and make sure that the Agent is running with elevated privileges for some of the collectors to work diff --git a/docs/sources/reference/components/prometheus.exporter.windows.md b/docs/sources/reference/components/prometheus.exporter.windows.md index f9743190a3..04123edaf1 100644 --- a/docs/sources/reference/components/prometheus.exporter.windows.md +++ b/docs/sources/reference/components/prometheus.exporter.windows.md @@ -188,9 +188,9 @@ For a server name to be included, it must match the regular expression specified ### text_file block -Name | Type | Description | Default | Required -----------------------|----------|----------------------------------------------------|-------------------------------------------------------|--------- -`text_file_directory` | `string` | The directory containing the files to be ingested. | `C:\Program Files\Grafana Agent Flow\textfile_inputs` | no +Name | Type | Description | Default | Required +----------------------|----------|----------------------------------------------------|------------------------------------------------------|--------- +`text_file_directory` | `string` | The directory containing the files to be ingested. | `C:\Program Files\GrafanaLabs\Alloy\textfile_inputs` | no When `text_file_directory` is set, only files with the extension `.prom` inside the specified directory are read. Each `.prom` file found must end with an empty line feed to work properly. @@ -218,7 +218,7 @@ debug metrics. 
## Collectors list The following table lists the available collectors that `windows_exporter` brings bundled in. Some collectors only work on specific operating systems; enabling a -collector that is not supported by the host OS where Flow is running +collector that is not supported by the host OS where Alloy is running is a no-op. Users can choose to enable a subset of collectors to limit the amount of diff --git a/docs/sources/reference/components/prometheus.remote_write.md b/docs/sources/reference/components/prometheus.remote_write.md index 7a0807ecf6..95122fd4b5 100644 --- a/docs/sources/reference/components/prometheus.remote_write.md +++ b/docs/sources/reference/components/prometheus.remote_write.md @@ -226,7 +226,7 @@ The WAL serves two primary purposes: The WAL is located inside a component-specific directory relative to the storage path {{< param "PRODUCT_NAME" >}} is configured to use. See the -[`agent run` documentation][run] for how to change the storage path. +[`run` documentation][run] for how to change the storage path. The `truncate_frequency` argument configures how often to clean up the WAL. Every time the `truncate_frequency` period elapses, the lower two-thirds of @@ -476,13 +476,14 @@ before being pushed to the remote_write endpoint. ### WAL corruption -WAL corruption can occur when Grafana Agent unexpectedly stops while the latest WAL segments -are still being written to disk. For example, the host computer has a general disk failure -and crashes before you can stop Grafana Agent and other running services. When you restart Grafana -Agent, it verifies the WAL, removing any corrupt segments it finds. Sometimes, this repair -is unsuccessful, and you must manually delete the corrupted WAL to continue. - -If the WAL becomes corrupted, Grafana Agent writes error messages such as +WAL corruption can occur when {{< param "PRODUCT_NAME" >}} unexpectedly stops +while the latest WAL segments are still being written to disk. 
For example, the +host computer has a general disk failure and crashes before you can stop +{{< param "PRODUCT_NAME" >}} and other running services. When you restart +{{< param "PRODUCT_NAME" >}}, it verifies the WAL, removing any corrupt +segments it finds. Sometimes, this repair is unsuccessful, and you must +manually delete the corrupted WAL to continue. If the WAL becomes corrupted, +{{< param "PRODUCT_NAME" >}} writes error messages such as `err="failed to find segment for index"` to the log file. {{< admonition type="note" >}} @@ -491,7 +492,7 @@ Deleting a WAL segment or a WAL file permanently deletes the stored WAL data. To delete the corrupted WAL: -1. [Stop][] Grafana Agent. +1. [Stop][] {{< param "PRODUCT_NAME" >}}. 1. Find and delete the contents of the `wal` directory. By default the `wal` directory is a subdirectory diff --git a/docs/sources/shared/reference/components/otelcol-debug-metrics-block.md b/docs/sources/shared/reference/components/otelcol-debug-metrics-block.md index 704c6e2776..b5c03cd72e 100644 --- a/docs/sources/shared/reference/components/otelcol-debug-metrics-block.md +++ b/docs/sources/shared/reference/components/otelcol-debug-metrics-block.md @@ -12,6 +12,6 @@ Name | Type | Description -----------------------------------|-----------|------------------------------------------------------|---------|--------- `disable_high_cardinality_metrics` | `boolean` | Whether to disable certain high cardinality metrics. | `true` | no -`disable_high_cardinality_metrics` is the Grafana Agent equivalent to the `telemetry.disableHighCardinalityMetrics` feature gate in the OpenTelemetry Collector. +`disable_high_cardinality_metrics` is the Grafana Alloy equivalent to the `telemetry.disableHighCardinalityMetrics` feature gate in the OpenTelemetry Collector. It removes attributes that could cause high cardinality metrics. For example, attributes with IP addresses and port numbers in metrics about HTTP and gRPC connections are removed. 
diff --git a/docs/sources/tasks/configure/configure-kubernetes.md b/docs/sources/tasks/configure/configure-kubernetes.md index 94b24da8a4..8f9fe94389 100644 --- a/docs/sources/tasks/configure/configure-kubernetes.md +++ b/docs/sources/tasks/configure/configure-kubernetes.md @@ -98,7 +98,7 @@ Use this method if you prefer to embed your {{< param "PRODUCT_NAME" >}} configu 1. Run the following command in a terminal to upgrade your {{< param "PRODUCT_NAME" >}} installation: ```shell - helm upgrade --namespace grafana/grafana-agent -f + helm upgrade --namespace grafana/alloy -f ``` Replace the following: @@ -144,7 +144,7 @@ Use this method if you prefer to write your {{< param "PRODUCT_NAME" >}} configu 1. Run the following command in a terminal to upgrade your {{< param "PRODUCT_NAME" >}} installation: ```shell - helm upgrade --namespace grafana/grafana-agent -f + helm upgrade --namespace grafana/alloy -f ``` Replace the following: diff --git a/docs/sources/tasks/monitor/controller_metrics.md b/docs/sources/tasks/monitor/controller_metrics.md index 0d1184ad58..001f2ae9ee 100644 --- a/docs/sources/tasks/monitor/controller_metrics.md +++ b/docs/sources/tasks/monitor/controller_metrics.md @@ -1,7 +1,7 @@ --- canonical: https://grafana.com/docs/alloy/latest/monitor/controller_metrics/ description: Learn how to monitor controller metrics -title: Monitor the Grafana Agent component controller +title: Monitor the Grafana Alloy component controller menuTitle: Monitor the controller weight: 100 --- diff --git a/internal/component/prometheus/exporter/process/process_test.go b/internal/component/prometheus/exporter/process/process_test.go index 300ab35e87..28b5747142 100644 --- a/internal/component/prometheus/exporter/process/process_test.go +++ b/internal/component/prometheus/exporter/process/process_test.go @@ -44,7 +44,7 @@ func TestAlloyConfigConvert(t *testing.T) { var exampleAlloyConfig = ` matcher { name = "static" - comm = ["grafana-agent"] + comm = ["alloy"] cmdline = 
["*config.file*"] } track_children = true @@ -65,7 +65,7 @@ func TestAlloyConfigConvert(t *testing.T) { expected := []MatcherGroup{ { Name: "static", - CommRules: []string{"grafana-agent"}, + CommRules: []string{"alloy"}, CmdlineRules: []string{"*config.file*"}, }, } @@ -80,7 +80,7 @@ func TestAlloyConfigConvert(t *testing.T) { e := config.MatcherRules{ { Name: "static", - CommRules: []string{"grafana-agent"}, + CommRules: []string{"alloy"}, CmdlineRules: []string{"*config.file*"}, }, } diff --git a/internal/converter/internal/staticconvert/testdata/sanitize.yaml b/internal/converter/internal/staticconvert/testdata/sanitize.yaml index b216313ff2..3d0ac3c64a 100644 --- a/internal/converter/internal/staticconvert/testdata/sanitize.yaml +++ b/internal/converter/internal/staticconvert/testdata/sanitize.yaml @@ -1,4 +1,4 @@ -integrations: +integrations: prometheus_remote_write: - basic_auth: password: token @@ -72,4 +72,4 @@ metrics: url: https://region.grafana.net/api/prom/push global: scrape_interval: 60s - wal_directory: /tmp/grafana-agent-wal \ No newline at end of file + wal_directory: /tmp/grafana-agent-wal diff --git a/internal/web/ui/public/manifest.json b/internal/web/ui/public/manifest.json index deb995a2d1..2823923a72 100644 --- a/internal/web/ui/public/manifest.json +++ b/internal/web/ui/public/manifest.json @@ -1,6 +1,6 @@ { - "short_name": "Grafana Agent", - "name": "Grafana Agent", + "short_name": "Grafana Alloy", + "name": "Grafana Alloy", "icons": [ { "src": "favicon.ico", diff --git a/operations/alloy-mixin/alerts/controller.libsonnet b/operations/alloy-mixin/alerts/controller.libsonnet index 082a963f09..12609b67c3 100644 --- a/operations/alloy-mixin/alerts/controller.libsonnet +++ b/operations/alloy-mixin/alerts/controller.libsonnet @@ -7,7 +7,7 @@ alert.newGroup( alert.newRule( 'SlowComponentEvaluations', 'sum by (cluster, namespace, component_id) (rate(alloy_component_evaluation_slow_seconds[10m])) > 0', - 'Flow component evaluations are taking too 
long.', + 'Component evaluations are taking too long.', '15m', ), @@ -15,7 +15,7 @@ alert.newGroup( alert.newRule( 'UnhealthyComponents', 'sum by (cluster, namespace) (alloy_component_controller_running_components{health_type!="healthy"}) > 0', - 'Unhealthy Flow components detected.', + 'Unhealthy components detected.', '15m', ), ] diff --git a/operations/alloy-mixin/dashboards/cluster-node.libsonnet b/operations/alloy-mixin/dashboards/cluster-node.libsonnet index 9064f77dcc..0e6241afdc 100644 --- a/operations/alloy-mixin/dashboards/cluster-node.libsonnet +++ b/operations/alloy-mixin/dashboards/cluster-node.libsonnet @@ -24,7 +24,7 @@ local filename = 'alloy-cluster-node.json'; ]) + // TODO(@tpaschalis) Make the annotation optional. dashboard.withAnnotations([ - dashboard.newLokiAnnotation('Deployments', '{cluster="$cluster", container="kube-diff-logger"} | json | namespace_extracted="grafana-agent" | name_extracted=~"grafana-agent.*"', 'rgba(0, 211, 255, 1)'), + dashboard.newLokiAnnotation('Deployments', '{cluster="$cluster", container="kube-diff-logger"} | json | namespace_extracted="alloy" | name_extracted=~"alloy.*"', 'rgba(0, 211, 255, 1)'), ]) + dashboard.withPanelsMixin([ // Node Info row diff --git a/operations/alloy-mixin/dashboards/cluster-overview.libsonnet b/operations/alloy-mixin/dashboards/cluster-overview.libsonnet index 5010c25341..314828cbe4 100644 --- a/operations/alloy-mixin/dashboards/cluster-overview.libsonnet +++ b/operations/alloy-mixin/dashboards/cluster-overview.libsonnet @@ -22,7 +22,7 @@ local cluster_node_filename = 'alloy-cluster-node.json'; ]) + // TODO(@tpaschalis) Make the annotation optional. 
dashboard.withAnnotations([ - dashboard.newLokiAnnotation('Deployments', '{cluster="$cluster", container="kube-diff-logger"} | json | namespace_extracted="grafana-agent" | name_extracted=~"grafana-agent.*"', 'rgba(0, 211, 255, 1)'), + dashboard.newLokiAnnotation('Deployments', '{cluster="$cluster", container="kube-diff-logger"} | json | namespace_extracted="alloy" | name_extracted=~"alloy.*"', 'rgba(0, 211, 255, 1)'), ]) + dashboard.withPanelsMixin([ // Nodes diff --git a/operations/alloy-mixin/dashboards/controller.libsonnet b/operations/alloy-mixin/dashboards/controller.libsonnet index 025465269e..12ac9729cb 100644 --- a/operations/alloy-mixin/dashboards/controller.libsonnet +++ b/operations/alloy-mixin/dashboards/controller.libsonnet @@ -21,7 +21,7 @@ local filename = 'alloy-controller.json'; ]) + // TODO(@tpaschalis) Make the annotation optional. dashboard.withAnnotations([ - dashboard.newLokiAnnotation('Deployments', '{cluster="$cluster", container="kube-diff-logger"} | json | namespace_extracted="grafana-agent" | name_extracted=~"grafana-agent.*"', 'rgba(0, 211, 255, 1)'), + dashboard.newLokiAnnotation('Deployments', '{cluster="$cluster", container="kube-diff-logger"} | json | namespace_extracted="alloy" | name_extracted=~"alloy.*"', 'rgba(0, 211, 255, 1)'), ]) + dashboard.withPanelsMixin([ // Running instances diff --git a/operations/alloy-mixin/dashboards/prometheus.libsonnet b/operations/alloy-mixin/dashboards/prometheus.libsonnet index 94dd0856d2..6dd3d152d1 100644 --- a/operations/alloy-mixin/dashboards/prometheus.libsonnet +++ b/operations/alloy-mixin/dashboards/prometheus.libsonnet @@ -415,7 +415,7 @@ local remoteWritePanels(y_offset) = [ ]) + // TODO(@tpaschalis) Make the annotation optional. 
   dashboard.withAnnotations([
-    dashboard.newLokiAnnotation('Deployments', '{cluster="$cluster", container="kube-diff-logger"} | json | namespace_extracted="grafana-agent" | name_extracted=~"grafana-agent.*"', 'rgba(0, 211, 255, 1)'),
+    dashboard.newLokiAnnotation('Deployments', '{cluster="$cluster", container="kube-diff-logger"} | json | namespace_extracted="alloy" | name_extracted=~"alloy.*"', 'rgba(0, 211, 255, 1)'),
   ]) +
   dashboard.withPanelsMixin(
     // First row, offset is 0
diff --git a/operations/alloy-mixin/dashboards/resources.libsonnet b/operations/alloy-mixin/dashboards/resources.libsonnet
index aea409e140..8d38b7c789 100644
--- a/operations/alloy-mixin/dashboards/resources.libsonnet
+++ b/operations/alloy-mixin/dashboards/resources.libsonnet
@@ -44,7 +44,7 @@ local stackedPanelMixin = {
   ]) +
   // TODO(@tpaschalis) Make the annotation optional.
   dashboard.withAnnotations([
-    dashboard.newLokiAnnotation('Deployments', '{cluster="$cluster", container="kube-diff-logger"} | json | namespace_extracted="grafana-agent" | name_extracted=~"grafana-agent.*"', 'rgba(0, 211, 255, 1)'),
+    dashboard.newLokiAnnotation('Deployments', '{cluster="$cluster", container="kube-diff-logger"} | json | namespace_extracted="alloy" | name_extracted=~"alloy.*"', 'rgba(0, 211, 255, 1)'),
   ]) +
   dashboard.withPanelsMixin([
     // CPU usage
diff --git a/operations/alloy-mixin/grizzly/alerts.jsonnet b/operations/alloy-mixin/grizzly/alerts.jsonnet
index c71ea973e5..700d675868 100644
--- a/operations/alloy-mixin/grizzly/alerts.jsonnet
+++ b/operations/alloy-mixin/grizzly/alerts.jsonnet
@@ -7,7 +7,7 @@ local mixin = import '../mixin.libsonnet';
     apiVersion: 'grizzly.grafana.com/v1alpha1',
     kind: 'PrometheusRuleGroup',
     metadata: {
-      namespace: 'agent-flow',
+      namespace: 'alloy',
       name: group.name,
     },
     spec: group,
diff --git a/operations/alloy-mixin/grizzly/dashboards.jsonnet b/operations/alloy-mixin/grizzly/dashboards.jsonnet
index 12cb98e8f0..3e3d8ce14a 100644
--- a/operations/alloy-mixin/grizzly/dashboards.jsonnet
+++ b/operations/alloy-mixin/grizzly/dashboards.jsonnet
@@ -5,7 +5,7 @@ local mixin = import '../mixin.libsonnet';
   apiVersion: 'grizzly.grafana.com/v1alpha1',
   kind: 'DashboardFolder',
   metadata: {
-    name: 'grafana-agent-flow',
+    name: 'grafana-alloy',
  },
   spec: {
     title: mixin.grafanaDashboardFolder,
diff --git a/operations/helm/charts/alloy/README.md b/operations/helm/charts/alloy/README.md
index b5f184292e..5a9c7fb7ad 100644
--- a/operations/helm/charts/alloy/README.md
+++ b/operations/helm/charts/alloy/README.md
@@ -186,7 +186,7 @@ more information on how to enable these capabilities.
 ### rbac.create
 
 `rbac.create` enables the creation of ClusterRole and ClusterRoleBindings for
-the Grafana Alloy containers to use. The default permission set allows Flow
+the Grafana Alloy containers to use. The default permission set allows
 components like [discovery.kubernetes][] to work properly.
 
 [discovery.kubernetes]: https://grafana.com/docs/alloy/latest/reference/components/discovery.kubernetes/
@@ -226,7 +226,7 @@ containers using the Kubernetes API.
 This component does not require mounting the hosts filesystem into the
 Agent, nor requires additional security contexts to work correctly.
 
-[loki.source.kubernetes]: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.kubernetes/
+[loki.source.kubernetes]: https://grafana.com/docs/alloy/latest/reference/components/loki.source.kubernetes/
 
 ### File-based collection
diff --git a/operations/helm/charts/alloy/README.md.gotmpl b/operations/helm/charts/alloy/README.md.gotmpl
index a0ab2ddc36..b483578f5e 100644
--- a/operations/helm/charts/alloy/README.md.gotmpl
+++ b/operations/helm/charts/alloy/README.md.gotmpl
@@ -86,7 +86,7 @@ more information on how to enable these capabilities.
 ### rbac.create
 
 `rbac.create` enables the creation of ClusterRole and ClusterRoleBindings for
-the Grafana Alloy containers to use. The default permission set allows Flow
+the Grafana Alloy containers to use. The default permission set allows
 components like [discovery.kubernetes][] to work properly.
 
 [discovery.kubernetes]: https://grafana.com/docs/alloy/latest/reference/components/discovery.kubernetes/
@@ -126,7 +126,7 @@ containers using the Kubernetes API.
 This component does not require mounting the hosts filesystem into the
 Agent, nor requires additional security contexts to work correctly.
 
-[loki.source.kubernetes]: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.kubernetes/
+[loki.source.kubernetes]: https://grafana.com/docs/alloy/latest/reference/components/loki.source.kubernetes/
 
 ### File-based collection
diff --git a/operations/helm/charts/alloy/ci/topologyspreadconstraints-values.yaml b/operations/helm/charts/alloy/ci/topologyspreadconstraints-values.yaml
index d69b5662c2..9b98df4df1 100644
--- a/operations/helm/charts/alloy/ci/topologyspreadconstraints-values.yaml
+++ b/operations/helm/charts/alloy/ci/topologyspreadconstraints-values.yaml
@@ -6,5 +6,5 @@ controller:
       whenUnsatisfiable: ScheduleAnyway
       labelSelector:
         matchLabels:
-          app.kubernetes.io/name: grafana-agent
-          app.kubernetes.io/instance: grafana-agent
+          app.kubernetes.io/name: alloy
+          app.kubernetes.io/instance: alloy
diff --git a/operations/helm/tests/topologyspreadconstraints/alloy/templates/controllers/deployment.yaml b/operations/helm/tests/topologyspreadconstraints/alloy/templates/controllers/deployment.yaml
index 8d489053f7..4fd5fa2609 100644
--- a/operations/helm/tests/topologyspreadconstraints/alloy/templates/controllers/deployment.yaml
+++ b/operations/helm/tests/topologyspreadconstraints/alloy/templates/controllers/deployment.yaml
@@ -72,8 +72,8 @@ spec:
       topologySpreadConstraints:
         - labelSelector:
             matchLabels:
-              app.kubernetes.io/instance: grafana-agent
-              app.kubernetes.io/name: grafana-agent
+              app.kubernetes.io/instance: alloy
+              app.kubernetes.io/name: alloy
           maxSkew: 1
           topologyKey: topology.kubernetes.io/zone
           whenUnsatisfiable: ScheduleAnyway
diff --git a/tools/build-image/README.md b/tools/build-image/README.md
index ff73b021ed..a14aef2526 100644
--- a/tools/build-image/README.md
+++ b/tools/build-image/README.md
@@ -1,12 +1,12 @@
-# Grafana Agent build images
+# Grafana Alloy build images
 
-The Grafana Agent build images are used for CI workflows to manage builds of
-Grafana Agent.
+The Grafana Alloy build images are used for CI workflows to manage builds of
+Grafana Alloy.
 
 There are two images:
 
-* `grafana/agent-build-image:X.Y.Z` (for building Linux containers)
-* `grafana/agent-build-image:X.Y.Z-windows` (for building Windows containers)
+* `grafana/alloy-build-image:X.Y.Z` (for building Linux containers)
+* `grafana/alloy-build-image:X.Y.Z-windows` (for building Windows containers)
 
 (Where `X.Y.Z` is replaced with some semantic version, like 0.14.0).
 
@@ -14,7 +14,7 @@ There are two images:
 
 Once a commit is merged to main which updates the build-image Dockerfiles, a
 maintainer must push a tag matching the pattern `build-image/vX.Y.Z` to the
-grafana/agent repo. For example, to create version v0.15.0 of the build images,
+grafana/alloy repo. For example, to create version v0.15.0 of the build images,
 a maintainer would push the tag `build-image/v0.15.0`.
 
 > **NOTE**: The tag name is expected to be prefixed with `v`, but the pushed
diff --git a/tools/release b/tools/release
index e3733984d9..b252d81e84 100755
--- a/tools/release
+++ b/tools/release
@@ -6,7 +6,7 @@ set -ex
 # Zip up all the agent binaries to reduce the download size. DEBs and RPMs
 # aren't included to be easier to work with.
 find dist/ -type f \
-    -name 'grafana-agent*' -not -name '*.deb' -not -name '*.rpm' \
+    -name 'alloy*' -not -name '*.deb' -not -name '*.rpm' \
     -exec zip -j -m "{}.zip" "{}" \;
 
 # Sign the RPM packages. DEB packages aren't signed.
@@ -18,7 +18,7 @@ pushd dist && sha256sum -- * > SHA256SUMS && popd || exit
 ghr \
   -t "${GITHUB_TOKEN}" \
   -u "grafana" \
-  -r "agent" \
+  -r "alloy" \
   -b="$(envsubst < ./tools/release-note.md)" \
   -delete -draft \
   "${VERSION}" ./dist/
diff --git a/tools/release-note.md b/tools/release-note.md
index bdf5bf67f9..d2ece64134 100644
--- a/tools/release-note.md
+++ b/tools/release-note.md
@@ -1,21 +1,17 @@
-This is release `${VERSION}` of Grafana Agent.
+This is release `${VERSION}` of Grafana Alloy.
 
 ### Upgrading
 
-Read the relevant upgrade guides for specific instructions on upgrading from older versions:
+Read the [release notes] for specific instructions on upgrading from older versions:
 
-* [Static mode upgrade guide](https://grafana.com/docs/agent/${RELEASE_DOC_TAG}/static/upgrade-guide/)
-* [Static mode Kubernetes operator upgrade guide](https://grafana.com/docs/agent/${RELEASE_DOC_TAG}/operator/upgrade-guide/)
-* [Flow mode upgrade guide](https://grafana.com/docs/agent/${RELEASE_DOC_TAG}/flow/upgrade-guide/)
+[release notes]: https://grafana.com/docs/alloy/${RELEASE_DOC_TAG}/release-notes/
 
 ### Notable changes:
 
-:warning: **ADD RELEASE NOTES HERE** :warning:
+:warning: **ADD ENTRIES FROM CHANGELOG HERE** :warning:
 
 ### Installation
 
-Refer to our installation guides for how to install the variants of Grafana Agent:
+Refer to our [installation guide] for how to install Grafana Alloy.
 
-* [Install static mode](https://grafana.com/docs/agent/${RELEASE_DOC_TAG}/static/set-up/install/)
-* [Install the static mode Kubernetes operator](https://grafana.com/docs/agent/${RELEASE_DOC_TAG}/operator/helm-getting-started/)
-* [Install flow mode](https://grafana.com/docs/agent/${RELEASE_DOC_TAG}/flow/setup/install/)
+[installation guide]: https://grafana.com/docs/alloy/${RELEASE_DOC_TAG}/get-started/install/