Use an existing Grafana Cloud account or set up a new one. Then create an access token:

- In a Grafana instance on Grafana Cloud go to Administration -> Users and Access -> Cloud access policies.
- Click **Create access policy**.
- Fill in the **Display name** field and check the **Write** check box for metrics, logs and traces. Then click **Create**.
- On the newly created access policy click **Add token**.
- Fill in the **Token name** field and click **Create**. Make a copy of the token as it will be used later on.
To send logs, metrics and traces to Grafana Cloud (cloud mode):

- Create the meta namespace:

  ```
  kubectl create namespace meta
  ```

- Create secrets with the credentials and endpoints used for sending logs, metrics and traces to Grafana Cloud:

  ```
  kubectl create secret generic logs -n meta \
    --from-literal=username=<logs username> \
    --from-literal=password=<token> \
    --from-literal=endpoint='https://logs-prod-us-central1.grafana.net/loki/api/v1/push'

  kubectl create secret generic metrics -n meta \
    --from-literal=username=<metrics username> \
    --from-literal=password=<token> \
    --from-literal=endpoint='https://prometheus-us-central1.grafana.net/api/prom/push'

  kubectl create secret generic traces -n meta \
    --from-literal=username=<OTLP instance ID> \
    --from-literal=password=<token> \
    --from-literal=endpoint='https://otlp-gateway-prod-us-east-0.grafana.net/otlp'
  ```

  The logs, metrics and traces usernames are the **User / Username / Instance ID** of the Loki, Prometheus/Mimir and OpenTelemetry instances in Grafana Cloud. From **Home** in Grafana click on **Stacks**, then go to the **Details** pages of Loki and Prometheus/Mimir. For OpenTelemetry go to the **Configure** page. The endpoints will also have to be changed to match your settings.

- Create a values.yaml file based on the default one. Fill in the names of the secrets created above as needed. An example minimal values.yaml looks like this:

  ```yaml
  namespacesToMonitor:
    - loki
  cloud:
    logs:
      enabled: true
      secret: "logs"
    metrics:
      enabled: true
      secret: "metrics"
    traces:
      enabled: true
      secret: "traces"
  ```
To keep logs, metrics and traces inside the local cluster (local mode):

- Create the meta namespace:

  ```
  kubectl create namespace meta
  ```

- Create a secret named `minio` with the user and password for the local Minio:

  ```
  kubectl create secret generic minio -n meta \
    --from-literal=rootPassword=<password> \
    --from-literal=rootUser=<user>
  ```

- Create a values.yaml file based on the default one. An example minimal values.yaml looks like this:

  ```yaml
  namespacesToMonitor:
    - loki
  cloud:
    logs:
      enabled: false
    metrics:
      enabled: false
    traces:
      enabled: false
  local:
    grafana:
      enabled: true
    logs:
      enabled: true
    metrics:
      enabled: true
    traces:
      enabled: true
    minio:
      enabled: true
  ```
- Add the repo:

  ```
  helm repo add grafana https://grafana.github.io/helm-charts
  ```

- Fetch the latest charts from the grafana repo:

  ```
  helm repo update grafana
  ```

- Install this Helm chart:

  ```
  helm install -n meta -f values.yaml meta grafana/meta-monitoring
  ```

- Upgrade:

  ```
  helm upgrade --install -f values.yaml -n meta meta grafana/meta-monitoring
  ```

- Delete this chart:

  ```
  helm delete -n meta meta
  ```
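As an optional sanity check (not something the chart itself requires), you can confirm the release was installed and its pods are running:

```
helm list -n meta
kubectl get pods -n meta
```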
Only the dashboard files for the application being monitored have to be copied. When monitoring Loki, import the dashboard files starting with `loki-`.

For each of the dashboard files in the charts/meta-monitoring/src/dashboards folder do the following:

- Click on 'Dashboards' in Grafana
- Click on the 'New' button and select 'Import'
- Drop the dashboard file onto the 'Upload dashboard JSON file' drop area
- Click 'Import'
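If you prefer to script the import instead of clicking through the UI, a hedged sketch using Grafana's HTTP dashboard API is shown below; the instance URL, API token and dashboard file name are placeholders, and the token must be allowed to write dashboards:

```
# Hypothetical sketch: wrap a dashboard JSON file and POST it to Grafana's
# /api/dashboards/db endpoint. Replace the URL, token and file name.
curl -s -X POST "https://<your-grafana-instance>/api/dashboards/db" \
  -H "Authorization: Bearer <grafana_api_token>" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --slurpfile dash <dashboard-file>.json '{dashboard: $dash[0], overwrite: true}')"
```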
- Select the rules files in charts/meta-monitoring/src/rules for the application to monitor. When monitoring Loki use loki-rules.yaml.
- Install mimirtool as per the instructions.
- Create an access policy with Read and Write permissions for Rules. Also create a token and record it.
- Get your cloud Prometheus endpoint and Instance ID from the **Prometheus** page in **Stacks**.
- Use them to load the rules using mimirtool as follows:

  ```
  mimirtool rules load --address=<your_cloud_prometheus_endpoint> --id=<your_instance_id> --key=<your_cloud_access_policy_token> *.yaml
  ```

- To check the rules you have uploaded, run:

  ```
  mimirtool rules print --address=<your_cloud_prometheus_endpoint> --id=<your_instance_id> --key=<your_cloud_access_policy_token>
  ```
- In the Loki that is being monitored, enable tracing in the config:

  ```yaml
  loki:
    tracing:
      enabled: true
  ```

- Add the following environment variables to your Loki binaries. When using the Loki Helm chart these can be added using the `extraEnv` setting for the Loki components (see the sketch after this list).

  - JAEGER_ENDPOINT: http address of the mmc-alloy service installed by the meta-monitoring chart, for example "http://mmc-alloy:14268/api/traces"
  - JAEGER_AGENT_TAGS: extra tags you would like to add to the spans, for example 'cluster="abc",namespace="def"'
  - JAEGER_SAMPLER_TYPE: the sampling strategy, we suggest setting this to `ratelimiting` so that at most 1 trace is accepted per second. See these docs for more options.
  - JAEGER_SAMPLER_PARAM: 1.0

- If Loki is installed in a different namespace you can create an ExternalName service in Kubernetes to point to the mmc-alloy service in the meta monitoring namespace (see the sketch after this list).
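A minimal sketch of setting these variables through the Loki Helm chart, assuming the `write` component; the component keys (`write`, `read`, `backend`, `singleBinary`) depend on how you deploy Loki, and the tag values are examples only:

```yaml
# Assumption: repeat this extraEnv block for each Loki component you run.
write:
  extraEnv:
    - name: JAEGER_ENDPOINT
      value: "http://mmc-alloy:14268/api/traces"
    - name: JAEGER_AGENT_TAGS
      value: 'cluster="abc",namespace="def"'   # example tags, adjust to your cluster
    - name: JAEGER_SAMPLER_TYPE
      value: "ratelimiting"
    - name: JAEGER_SAMPLER_PARAM
      value: "1.0"
```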
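A hedged sketch of such an ExternalName service, assuming Loki runs in a namespace called `loki` and the meta-monitoring chart is installed in `meta`; it exposes the mmc-alloy service under the same name inside the Loki namespace:

```yaml
# Assumption: namespace "loki" holds the monitored Loki; adjust to your setup.
apiVersion: v1
kind: Service
metadata:
  name: mmc-alloy
  namespace: loki
spec:
  type: ExternalName
  externalName: mmc-alloy.meta.svc.cluster.local
```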
When using local mode, by default a Kubernetes Ingress object is created to access the Grafana instance. This will need to be adapted to your cloud provider by updating the `grafana.ingress` section of the values.yaml file provided to Helm. Check the documentation of your cloud provider for available options.
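A minimal sketch of such an override, assuming an NGINX ingress controller; the ingress class, host and the exact set of supported keys depend on your cluster, so treat these values as placeholders:

```yaml
grafana:
  ingress:
    enabled: true
    ingressClassName: nginx          # assumption: your provider's ingress class
    hosts:
      - meta-grafana.example.com     # assumption: replace with your own host
```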
Metrics about Kubernetes objects are scraped from kube-state-metrics, which needs to be installed in the cluster. The `kubeStateMetrics.endpoint` entry in values.yaml should be set to its address (without the `/metrics` part of the URL).
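A minimal sketch of that values.yaml entry; the service name, namespace and port below are assumptions for a typical kube-state-metrics installation, so use the address of the instance actually running in your cluster:

```yaml
kubeStateMetrics:
  # Address of an existing kube-state-metrics service, without the /metrics path.
  endpoint: kube-state-metrics.kube-system.svc.cluster.local:8080
```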