Commit
Make alerting navigation more consistent with serverless docs (#3640) (#3644)

(cherry picked from commit e64842f)

Co-authored-by: DeDe Morton <dede.morton@elastic.co>
mergify[bot] and dedemorton authored Mar 7, 2024
1 parent 74cc786 commit 8349228
Showing 11 changed files with 55 additions and 30 deletions.
31 changes: 17 additions & 14 deletions docs/en/observability/create-alerts.asciidoc
@@ -15,8 +15,13 @@ Alerts and rules related to service level objectives (SLOs), and {observability}
You can also manage {observability} app rules alongside rules for other apps from the {kibana-ref}/create-and-manage-rules.html[{kib} Management UI].

[discrete]
== Next steps

* <<create-alerts-rules>>
* <<view-observability-alerts>>

[[create-alerts-rules]]
== Create rules
== Create and manage rules

The first step when setting up alerts is to create a rule.
To create and manage rules related to {observability} apps,
@@ -56,14 +61,14 @@ tie into other third-party systems. Connectors allow actions to talk to these se

Learn how to create specific types of rules:

* {kibana-ref}/apm-alerts.html[APM rules]
* <<custom-threshold-alert,Custom threshold rule>>
* <<logs-threshold-alert,Logs threshold rule>>
* <<logs-threshold-alert,Log threshold rule>>
* <<infrastructure-threshold-alert,Infrastructure threshold rule>>
* <<metrics-threshold-alert,Metrics threshold rule>>
* <<metrics-threshold-alert,Metric threshold rule>>
* <<monitor-status-alert,Monitor status rule>>
* <<tls-certificate-alert,TLS certificate rule>>
* <<duration-anomaly-alert,Uptime duration anomaly rule>>
* {kibana-ref}/apm-alerts.html[APM rules]
* <<slo-burn-rate-alert,SLO burn rate rule>>

[discrete]
@@ -157,20 +162,18 @@ xpack.observability.unsafe.alertingExperience.enabled: 'false'
----


include::threshold-alert.asciidoc[leveloffset=+1]

include::logs-threshold-alert.asciidoc[leveloffset=+1]
include::threshold-alert.asciidoc[leveloffset=+2]

include::infrastructure-threshold-alert.asciidoc[leveloffset=+1]
include::logs-threshold-alert.asciidoc[leveloffset=+2]

include::metrics-threshold-alert.asciidoc[leveloffset=+1]
include::infrastructure-threshold-alert.asciidoc[leveloffset=+2]

include::monitor-status-alert.asciidoc[leveloffset=+1]
include::metrics-threshold-alert.asciidoc[leveloffset=+2]

include::uptime-tls-alert.asciidoc[leveloffset=+1]
include::monitor-status-alert.asciidoc[leveloffset=+2]

include::uptime-duration-anomaly-alert.asciidoc[leveloffset=+1]
include::uptime-tls-alert.asciidoc[leveloffset=+2]

include::slo-burn-rate-alert.asciidoc[leveloffset=+1]
include::uptime-duration-anomaly-alert.asciidoc[leveloffset=+2]

include::view-observability-alerts.asciidoc[leveloffset=+1]
include::slo-burn-rate-alert.asciidoc[leveloffset=+2]
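For context on the leveloffset changes above: in Asciidoctor, `include::file[leveloffset=+N]` shifts every heading in the included file down by N levels, and relative offsets accumulate across nested includes. A minimal sketch of the resulting hierarchy, assuming that cumulative behavior (the chapter title shown is illustrative; the file names and the "Create and manage rules" heading come from this diff):

----
// index.asciidoc (the book)
// create-alerts.asciidoc opens with a level-0 title, so +1 renders it as a chapter:
include::create-alerts.asciidoc[leveloffset=+1]
// +2 renders "View alerts" as a section inside that chapter, not as its own chapter:
include::view-observability-alerts.asciidoc[leveloffset=+2]

// create-alerts.asciidoc (chapter title is illustrative)
= Alerting

== Create and manage rules

// +2 stacks on the chapter's +1, so each rule page lands one level below
// "Create and manage rules" instead of alongside it:
include::logs-threshold-alert.asciidoc[leveloffset=+2]
----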
1 change: 1 addition & 0 deletions docs/en/observability/index.asciidoc
@@ -168,6 +168,7 @@ include::profiling-troubleshooting.asciidoc[leveloffset=+2]

// Alerting
include::create-alerts.asciidoc[leveloffset=+1]
include::view-observability-alerts.asciidoc[leveloffset=+2]

//SLOs
include::slo-overview.asciidoc[leveloffset=+1]
3 changes: 3 additions & 0 deletions docs/en/observability/infrastructure-threshold-alert.asciidoc
@@ -1,5 +1,8 @@
[[infrastructure-threshold-alert]]
= Create an infrastructure threshold rule
++++
<titleabbrev>Infrastructure threshold</titleabbrev>
++++

Based on the resources listed on the *Inventory* page within the {infrastructure-app},
you can create a threshold rule to notify you when a metric has reached or exceeded a value for a specific
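The `titleabbrev` passthrough added to each rule page is what keeps the navigation labels short. A rough sketch of the convention (the heading and abbreviation mirror this file; the note about how the build uses the element is an assumption about the Elastic docs toolchain):

----
[[infrastructure-threshold-alert]]
= Create an infrastructure threshold rule
++++
<titleabbrev>Infrastructure threshold</titleabbrev>
++++

// The ++++ passthrough injects a DocBook <titleabbrev> element right after
// the title, so the table of contents and breadcrumbs show the short
// "Infrastructure threshold" label while the page keeps its full heading.
----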
2 changes: 1 addition & 1 deletion docs/en/observability/logs-checklist.asciidoc
@@ -109,7 +109,7 @@ Refer to <<application-logs>>.

[discrete]
[[logs-alerts-checklist]]
== Create a logs threshold alert
== Create a log threshold alert

You can create a rule to send an alert when the log aggregation exceeds a threshold.

8 changes: 6 additions & 2 deletions docs/en/observability/logs-threshold-alert.asciidoc
@@ -1,5 +1,9 @@
[[logs-threshold-alert]]
= Create a logs threshold rule
= Create a log threshold rule
++++
<titleabbrev>Log threshold</titleabbrev>
++++


. To access this page, go to **{observability}** -> **Logs**.
. Click **Alerts and rules** -> **Create rule**.
@@ -128,7 +132,7 @@ You can add more context to the message by clicking the icon above the message t
and selecting from a list of available variables.

[role="screenshot"]
image::images/logs-threshold-alert-default-message.png[Default notification message for logs threshold rules with open "Add variable" popup listing available action variables,width=600]
image::images/logs-threshold-alert-default-message.png[Default notification message for log threshold rules with open "Add variable" popup listing available action variables,width=600]

[discrete]
[[performance-considerations]]
9 changes: 6 additions & 3 deletions docs/en/observability/metrics-threshold-alert.asciidoc
@@ -1,5 +1,8 @@
[[metrics-threshold-alert]]
= Create a metrics threshold rule
= Create a metric threshold rule
++++
<titleabbrev>Metric threshold</titleabbrev>
++++

Based on the metrics that are listed on the **Metrics Explorer** page within the {infrastructure-app},
you can create a threshold rule to notify you when a metric has reached or exceeded a value for a specific
@@ -93,14 +96,14 @@ You can add more context to the message by clicking the icon above the message t
and selecting from a list of available variables.

[role="screenshot"]
image::images/metrics-threshold-alert-default-message.png[Default notification message for metrics threshold rules with open "Add variable" popup listing available action variables,width=600]
image::images/metrics-threshold-alert-default-message.png[Default notification message for metric threshold rules with open "Add variable" popup listing available action variables,width=600]
// NOTE: This is an autogenerated screenshot. Do not edit it directly.

[discrete]
[[metrics-alert-settings]]
== Settings

With metrics threshold rules, it's not possible to set an explicit index pattern as part of the configuration. The index pattern is instead inferred from
With metric threshold rules, it's not possible to set an explicit index pattern as part of the configuration. The index pattern is instead inferred from
*Metrics indices* on the <<configure-settings,Settings>> page of the {infrastructure-app}.

With each execution of the rule check, the *Metrics indices* setting is checked, but it is not stored when the rule is created.
9 changes: 6 additions & 3 deletions docs/en/observability/monitor-status-alert.asciidoc
@@ -1,8 +1,11 @@
[[monitor-status-alert]]
= Create a monitor status rule
++++
<titleabbrev>Monitor status</titleabbrev>
++++

Within the {uptime-app}, create a **Monitor Status** rule to receive notifications
based on errors and outages.
based on errors and outages.

. To access this page, go to **{observability}** -> **Uptime**.
. At the top of the page, click **Alerts and rules** -> **Create rule**.
@@ -19,15 +22,15 @@ If you already have a query in the overview page search bar, it's populated here

You can specify the following thresholds for your rule.

|===
|===

| *Status check* | Receive alerts when a monitor goes down a specified number of
times within a time range (seconds, minutes, hours, or days).

| *Availability* | Receive alerts when a monitor goes below a specified availability
threshold within a time range (days, weeks, months, or years).

|===
|===

Let's create a rule for any monitor that shows `Down` more than three times in 10 minutes.

3 changes: 1 addition & 2 deletions docs/en/observability/slo-burn-rate-alert.asciidoc
@@ -1,8 +1,7 @@
[[slo-burn-rate-alert]]
= Create a service-level objective (SLO) burn rate rule

++++
<titleabbrev>Create an SLO burn rate rule</titleabbrev>
<titleabbrev>SLO burn rate</titleabbrev>
++++

include::slo-overview.asciidoc[tag=slo-license]
5 changes: 4 additions & 1 deletion docs/en/observability/threshold-alert.asciidoc
@@ -1,5 +1,8 @@
[[custom-threshold-alert]]
= Create a custom threshold rule
++++
<titleabbrev>Custom threshold</titleabbrev>
++++

beta::[]

@@ -151,4 +154,4 @@ You can add more context to the message by clicking the icon above the message t
and selecting from a list of available variables.

[role="screenshot"]
image::images/logs-threshold-alert-default-message.png[Default notification message for logs threshold rules with open "Add variable" popup listing available action variables,width=600]
image::images/logs-threshold-alert-default-message.png[Default notification message for log threshold rules with open "Add variable" popup listing available action variables,width=600]
7 changes: 5 additions & 2 deletions docs/en/observability/uptime-duration-anomaly-alert.asciidoc
@@ -1,5 +1,8 @@
[[duration-anomaly-alert]]
= Create an uptime duration anomaly rule
++++
<titleabbrev>Uptime duration anomaly</titleabbrev>
++++

Within the {uptime-app}, create an *Uptime duration anomaly* rule to receive notifications
based on the response durations for all of the geographic locations of each monitor. When a
@@ -20,7 +23,7 @@ The _anomaly score_ is a value from `0` to `100`, which indicates the significan
compared to previously seen anomalies. The highly anomalous values are shown in
red and the low scored values are indicated in blue.

|===
|===

| *warning* | Score `0` and above.

@@ -30,7 +33,7 @@ red and the low scored values are indicated in blue.

| *critical* | Score `75` and above.

|===
|===

[role="screenshot"]
image::images/response-durations-alert.png[Uptime response duration rule]
7 changes: 5 additions & 2 deletions docs/en/observability/uptime-tls-alert.asciidoc
@@ -1,5 +1,8 @@
[[tls-certificate-alert]]
= Create a TLS certificate rule
++++
<titleabbrev>TLS certificate</titleabbrev>
++++

Within the {uptime-app}, you can create a rule that notifies
you when one or more of your monitors has a TLS certificate expiring
@@ -18,15 +21,15 @@ The threshold values for each condition are configurable on the

You can specify the following thresholds for your rule.

|===
|===

| *Expiration threshold* | The `expiration` threshold specifies when you are notified
about certificates that are approaching expiration dates.

| *Age limit* | The `age` threshold specifies when you are notified about certificates
that have been valid for too long.

|===
|===

Let's create a rule to check every 6 hours and notify us when any of the TLS certificates on sites we're monitoring are close to expiring.

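Taken together, the pattern this commit applies to each rule page looks roughly like the sketch below. All names here are hypothetical placeholders (`example-threshold-alert` is not a file in this change); it only illustrates the conventions shown in the diffs above.

----
// example-threshold-alert.asciidoc (hypothetical new rule page)
[[example-threshold-alert]]
= Create an example threshold rule
++++
<titleabbrev>Example threshold</titleabbrev>
++++

// create-alerts.asciidoc: add the page to the rule list under
// "Create and manage rules" and include it at leveloffset=+2 so it
// nests beneath that section in the navigation.
* <<example-threshold-alert,Example threshold rule>>

include::example-threshold-alert.asciidoc[leveloffset=+2]
----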
