Merge pull request #174 from grafana/update-generated-tutorials
Update generated tutorials
Jayclifford345 authored Dec 12, 2024
2 parents aefe771 + a4634cc commit b204764
Showing 9 changed files with 121 additions and 97 deletions.
14 changes: 14 additions & 0 deletions .github/workflows/regenerate-tutorials.yml
@@ -29,6 +29,10 @@ jobs:
with:
repository: grafana/mimir
path: mimir
- uses: actions/checkout@v4
with:
repository: grafana/pyroscope
path: pyroscope
- uses: actions/checkout@v4
with:
path: killercoda
@@ -66,6 +70,11 @@ jobs:
"${GITHUB_WORKSPACE}/loki/docs/sources/send-data/fluentbit/fluent-bit-loki-tutorial.md"
"${GITHUB_WORKSPACE}/killercoda/loki/fluentbit-loki-tutorial"
working-directory: killercoda/tools/transformer
- run: >
./transformer
"${GITHUB_WORKSPACE}/loki/docs/sources/query/logcli/logcli-tutorial.md"
"${GITHUB_WORKSPACE}/killercoda/loki/logcli-tutorial"
working-directory: killercoda/tools/transformer
- run: >
./transformer
"${GITHUB_WORKSPACE}/grafana/docs/sources/tutorials/alerting-get-started/index.md"
@@ -101,6 +110,11 @@ jobs:
"${GITHUB_WORKSPACE}/mimir/docs/sources/mimir/get-started/play-with-grafana-mimir/index.md"
"${GITHUB_WORKSPACE}/killercoda/mimir/play-with-mimir"
working-directory: killercoda/tools/transformer
- run: >
./transformer
"${GITHUB_WORKSPACE}/pyroscope/docs/sources/get-started/ride-share-tutorial.md"
"${GITHUB_WORKSPACE}/killercoda/pyroscope/ride-share-tutorial"
working-directory: killercoda/tools/transformer
- run: ./scripts/manage-pr.bash
env:
GH_TOKEN: ${{ github.token }}
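Taken together, each tutorial wired into this workflow follows the same two-step pattern: check out the repository that holds the tutorial source, then run the transformer against the docs page to generate the Killercoda version. A minimal sketch of that pattern for the new Pyroscope tutorial (the indentation is reconstructed here and may differ from the actual workflow file):

```yaml
# Check out the repository that contains the tutorial source.
- uses: actions/checkout@v4
  with:
    repository: grafana/pyroscope
    path: pyroscope
# Transform the docs page into a Killercoda tutorial.
- run: >
    ./transformer
    "${GITHUB_WORKSPACE}/pyroscope/docs/sources/get-started/ride-share-tutorial.md"
    "${GITHUB_WORKSPACE}/killercoda/pyroscope/ride-share-tutorial"
  working-directory: killercoda/tools/transformer
```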
42 changes: 21 additions & 21 deletions loki/logcli-tutorial/preprocessed.md
@@ -99,7 +99,7 @@ service_name
state
```
-This confirms that LogCLI is connected to the Loki instance and we now know that the logs contain the following labels: `package_size`, `service_name`, and `state`. Let's now run some queries against Loki to better understand our package logistics.
+This confirms that LogCLI is connected to the Loki instance and we now know that the logs contain the following labels: `package_size`, `service_name`, and `state`. Let's run some queries against Loki to better understand our package logistics.

<!-- INTERACTIVE page step1.md END -->

@@ -111,7 +111,7 @@ As part of our role within the logistics company, we need to build a report on t

### Find all critical packages

-To find all critical packages in the last hour (default lookback time), we can run the following query:
+To find all critical packages in the last hour (the default lookback time), we can run the following query:

```bash
logcli query '{service_name="Delivery World"} | package_status="critical"'
@@ -138,7 +138,7 @@ This will query all logs for the `package_status` `critical` in the last 24 hour
logcli query --since 24h --limit 100 '{service_name="Delivery World"} | package_status="critical"'
```

-### Metric Queries
+### Metric queries

We can also use LogCLI to query logs based on metrics. For instance, as part of the site report, we want to count the total number of packages sent from California in the last 24 hours in 1 hour intervals. We can use the following query:

@@ -194,7 +194,7 @@ logcli query --since 24h 'sum(count_over_time({state="California"}| json | pack

This will return a JSON object similar to the one above, but will only show a trend of the number of packages sent from California in 1 hour intervals.

-### Instant Metric Queries
+### Instant metric queries

Instant metric queries are a subset of metric queries that return the value of the metric at a specific point in time. This can be useful for quickly understanding an aggregate state of the stored logs.
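As a rough sketch of the shape of such a query (the exact command in the tutorial may differ; this assumes logcli's `instant-query` command and the labels introduced above):

```bash
# Count all Delivery World log lines over the last 24 hours,
# evaluated at a single point in time rather than as a range.
logcli instant-query 'sum(count_over_time({service_name="Delivery World"}[24h]))'
```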

@@ -246,7 +246,7 @@ This will write all logs for the `service_name` `Delivery World` in the last 24

<!-- INTERACTIVE page step3.md START -->

-## Meta Queries
+## Meta queries

As site managers, it's essential to maintain good data hygiene and ensure Loki operates efficiently. Understanding the labels and log volume in your logs plays a key role in this process. Beyond querying logs, LogCLI also supports meta queries on your Loki instance. Meta queries don't return log data but provide insights into the structure of your logs and the performance of your queries. The following examples demonstrate some of the core meta queries we run internally to better understand how a Loki instance is performing.

@@ -294,7 +294,7 @@ package_size 3 15
service_name 1 15
```
-### Detected Fields
+### Detected fields
Another useful feature of LogCLI is the ability to detect fields in your logs. This can be useful for understanding the structure of your logs and the keys that are present. This will let us detect keys which could be promoted to labels or to structured metadata.
@@ -305,25 +305,25 @@ logcli detected-fields --since 24h '{service_name="Delivery World"}'
This will return a list of all the keys detected in our logs. The output will look similar to the following:
```console
-label: city type: string cardinality: 10
-label: detected_level type: string cardinality: 3
-label: note type: string cardinality: 7
-label: package_id type: string cardinality: 20
-label: package_size_extracted type: string cardinality: 3
-label: package_status type: string cardinality: 4
-label: package_type type: string cardinality: 5
-label: receiver_address type: string cardinality: 20
-label: receiver_name type: string cardinality: 19
-label: sender_address type: string cardinality: 20
-label: sender_name type: string cardinality: 19
-label: state_extracted type: string cardinality: 5
-label: timestamp type: string cardinality: 20
+label: city type: string cardinality: 15
+label: detected_level type: string cardinality: 3
+label: note type: string cardinality: 7
+label: package_id type: string cardinality: 994
+label: package_size_extracted type: string cardinality: 3
+label: package_status type: string cardinality: 4
+label: package_type type: string cardinality: 5
+label: receiver_address type: string cardinality: 991
+label: receiver_name type: string cardinality: 100
+label: sender_address type: string cardinality: 991
+label: sender_name type: string cardinality: 100
+label: state_extracted type: string cardinality: 5
+label: timestamp type: string cardinality: 1000
```
-You can now see why we opted to keep `package_id` in structured metadata and `package_size` as a label. Package ID has a high cardinality and is unique to each log entry, making it a good candidate for structured metadata since we potentially may need to query for it directly. Package size, on the other hand, has a low cardinality and is a good candidate for a label.
+You can now see why we opted to keep `package_id` in structured metadata and `package_size` as a label. Package ID has a high cardinality and is unique to each log entry, making it a good candidate for structured metadata since we potentially may need to query for it directly. Package size, on the other hand, has a low cardinality, making it a good candidate for a label.
-### Checking Query Performance
+### Checking query performance
Another important aspect of keeping Loki healthy is to monitor the query performance. We can use LogCLI to check the query performance of our logs.
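For example, logcli can report execution statistics alongside query results. A sketch of what this can look like (this assumes the `--stats` flag, which asks Loki to return statistics such as bytes and lines processed; the tutorial's own example may differ):

```bash
# Print query statistics (bytes processed, execution time, and so on)
# along with the matching log lines.
logcli query --since 24h --limit 10 --stats '{service_name="Delivery World"}'
```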
2 changes: 1 addition & 1 deletion loki/logcli-tutorial/step1.md
@@ -40,4 +40,4 @@ service_name
state
```{{copy}}
-This confirms that LogCLI is connected to the Loki instance and we now know that the logs contain the following labels: `package_size`{{copy}}, `service_name`{{copy}}, and `state`{{copy}}. Let’s now run some queries against Loki to better understand our package logistics.
+This confirms that LogCLI is connected to the Loki instance and we now know that the logs contain the following labels: `package_size`{{copy}}, `service_name`{{copy}}, and `state`{{copy}}. Let’s run some queries against Loki to better understand our package logistics.
6 changes: 3 additions & 3 deletions loki/logcli-tutorial/step2.md
@@ -4,7 +4,7 @@ As part of our role within the logistics company, we need to build a report on t

## Find all critical packages

-To find all critical packages in the last hour (default lookback time), we can run the following query:
+To find all critical packages in the last hour (the default lookback time), we can run the following query:

```bash
logcli query '{service_name="Delivery World"} | package_status="critical"'
@@ -31,7 +31,7 @@ This will query all logs for the `package_status`{{copy}} `critical`{{copy}} in
logcli query --since 24h --limit 100 '{service_name="Delivery World"} | package_status="critical"'
```{{exec}}
-## Metric Queries
+## Metric queries
We can also use LogCLI to query logs based on metrics. For instance, as part of the site report, we want to count the total number of packages sent from California in the last 24 hours in 1 hour intervals. We can use the following query:
@@ -87,7 +87,7 @@ logcli query --since 24h 'sum(count_over_time({state="California"}| json | pack
This will return a JSON object similar to the one above, but will only show a trend of the number of packages sent from California in 1 hour intervals.
-## Instant Metric Queries
+## Instant metric queries
Instant metric queries are a subset of metric queries that return the value of the metric at a specific point in time. This can be useful for quickly understanding an aggregate state of the stored logs.
34 changes: 17 additions & 17 deletions loki/logcli-tutorial/step3.md
@@ -1,4 +1,4 @@
-# Meta Queries
+# Meta queries

As site managers, it’s essential to maintain good data hygiene and ensure Loki operates efficiently. Understanding the labels and log volume in your logs plays a key role in this process. Beyond querying logs, LogCLI also supports meta queries on your Loki instance. Meta queries don’t return log data but provide insights into the structure of your logs and the performance of your queries. The following examples demonstrate some of the core meta queries we run internally to better understand how a Loki instance is performing.

@@ -47,7 +47,7 @@ package_size 3 15
service_name 1 15
```{{copy}}
-## Detected Fields
+## Detected fields
Another useful feature of LogCLI is the ability to detect fields in your logs. This can be useful for understanding the structure of your logs and the keys that are present. This will let us detect keys which could be promoted to labels or to structured metadata.
@@ -58,24 +58,24 @@ logcli detected-fields --since 24h '{service_name="Delivery World"}'
This will return a list of all the keys detected in our logs. The output will look similar to the following:
```console
-label: city type: string cardinality: 10
-label: detected_level type: string cardinality: 3
-label: note type: string cardinality: 7
-label: package_id type: string cardinality: 20
-label: package_size_extracted type: string cardinality: 3
-label: package_status type: string cardinality: 4
-label: package_type type: string cardinality: 5
-label: receiver_address type: string cardinality: 20
-label: receiver_name type: string cardinality: 19
-label: sender_address type: string cardinality: 20
-label: sender_name type: string cardinality: 19
-label: state_extracted type: string cardinality: 5
-label: timestamp type: string cardinality: 20
+label: city type: string cardinality: 15
+label: detected_level type: string cardinality: 3
+label: note type: string cardinality: 7
+label: package_id type: string cardinality: 994
+label: package_size_extracted type: string cardinality: 3
+label: package_status type: string cardinality: 4
+label: package_type type: string cardinality: 5
+label: receiver_address type: string cardinality: 991
+label: receiver_name type: string cardinality: 100
+label: sender_address type: string cardinality: 991
+label: sender_name type: string cardinality: 100
+label: state_extracted type: string cardinality: 5
+label: timestamp type: string cardinality: 1000
```{{copy}}
-You can now see why we opted to keep `package_id`{{copy}} in structured metadata and `package_size`{{copy}} as a label. Package ID has a high cardinality and is unique to each log entry, making it a good candidate for structured metadata since we potentially may need to query for it directly. Package size, on the other hand, has a low cardinality and is a good candidate for a label.
+You can now see why we opted to keep `package_id`{{copy}} in structured metadata and `package_size`{{copy}} as a label. Package ID has a high cardinality and is unique to each log entry, making it a good candidate for structured metadata since we potentially may need to query for it directly. Package size, on the other hand, has a low cardinality, making it a good candidate for a label.
-## Checking Query Performance
+## Checking query performance
Another important aspect of keeping Loki healthy is to monitor the query performance. We can use LogCLI to check the query performance of our logs.
5 changes: 4 additions & 1 deletion pyroscope/ride-share-tutorial/finish.md
@@ -1,9 +1,12 @@
# Summary

-In this tutorial, you learned how to profile a simple “Ride Share” application using Pyroscope. You have learned some of the core instrumentation concepts such as tagging and how to use the Profile view of Grafana to identify performance bottlenecks.
+In this tutorial, you learned how to profile a simple “Ride Share” application using Pyroscope.
+You have learned some of the core instrumentation concepts, such as tagging, and how to use Explore Profiles to identify performance bottlenecks.

## Next steps

- Learn more about the Pyroscope SDKs and how to [instrument your application with Pyroscope](https://grafana.com/docs/pyroscope/latest/configure-client/).

- Deploy Pyroscope in a production environment using the [Pyroscope Helm chart](https://grafana.com/docs/pyroscope/latest/deploy-kubernetes/).

+- Continue exploring your profile data using [Explore Profiles](https://grafana.com/docs/grafana/latest/explore/simplified-exploration/profiles/investigate/).