diff --git a/docs/en/observability/apm/collect-application-data/open-telemetry/otel-metrics.asciidoc b/docs/en/observability/apm/collect-application-data/open-telemetry/otel-metrics.asciidoc index bff834fd90..b108f731ef 100644 --- a/docs/en/observability/apm/collect-application-data/open-telemetry/otel-metrics.asciidoc +++ b/docs/en/observability/apm/collect-application-data/open-telemetry/otel-metrics.asciidoc @@ -36,7 +36,7 @@ Use *Discover* to validate that metrics are successfully reported to {kib}. include::{observability-docs-root}/docs/en/observability/apm/tab-widgets/open-kibana-widget.asciidoc[] -- -. Open the main menu, then click *Discover*. +. Find **Discover** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Select `apm-*` as your index pattern. . Filter the data to only show documents with metrics: `[data_stream][type]: "metrics"` . Narrow your search with a known OpenTelemetry field. For example, if you have an `order_value` field, add `order_value: *` to your search to return diff --git a/docs/en/observability/apm/configure/shared/input-apm.asciidoc b/docs/en/observability/apm/configure/shared/input-apm.asciidoc index 2f3b13904b..49558a3aaa 100644 --- a/docs/en/observability/apm/configure/shared/input-apm.asciidoc +++ b/docs/en/observability/apm/configure/shared/input-apm.asciidoc @@ -2,7 +2,7 @@ // tag::fleet-managed-settings[] Configure and customize Fleet-managed APM settings directly in {kib}: -. Open {kib} and navigate to **{fleet}**. +. In {kib}, find **Fleet** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Under the **Agent policies** tab, select the policy you would like to configure. . Find the Elastic APM integration and select **Actions** > **Edit integration**. // end::fleet-managed-settings[] diff --git a/docs/en/observability/apm/manage-storage/custom-index-template.asciidoc b/docs/en/observability/apm/manage-storage/custom-index-template.asciidoc index 3ba94d595d..de479eb91b 100644 --- a/docs/en/observability/apm/manage-storage/custom-index-template.asciidoc +++ b/docs/en/observability/apm/manage-storage/custom-index-template.asciidoc @@ -15,7 +15,8 @@ These index templates are composed of multiple component templates--reusable bui that configure index mappings, settings, and aliases. The default APM index templates can be viewed in {kib}. -Navigate to **{stack-manage-app}** → **Index Management** → **Index Templates**, and search for `apm`. +To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +Select **Index Templates** and search for `apm`. Select any of the APM index templates to view their relevant component templates. [discrete] @@ -29,8 +30,8 @@ Do not change or customize any default mappings. The APM index templates by default reference a non-existent `@custom` component template for each data stream. You can create or edit this `@custom` component template to customize your {es} indices. -First, determine which <> you'd like to edit. -Then, open {kib} and navigate to **{stack-manage-app}** → **Index Management** → **Component Templates**. +First, determine which <> you'd like to edit in {kib}. +To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Select **Component Templates**. 
Custom component templates are named following this pattern: `@custom`. Search for the name of the data stream, like `traces-apm`, and select its custom component template. diff --git a/docs/en/observability/apm/manage-storage/ilm-how-to.asciidoc b/docs/en/observability/apm/manage-storage/ilm-how-to.asciidoc index ee855c1800..50e9899c3d 100644 --- a/docs/en/observability/apm/manage-storage/ilm-how-to.asciidoc +++ b/docs/en/observability/apm/manage-storage/ilm-how-to.asciidoc @@ -123,7 +123,8 @@ The default rollover definition for each APM data stream is applied based on {re The APM data stream lifecycle policies can be viewed in {kib}: -. Navigate to *{stack-manage-app}* → *Index Management* → *Component Templates*. +. To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Select **Component Templates**. . Search for `apm`. . Look for templates with `@lifecycle` suffix. @@ -146,7 +147,8 @@ This tutorial explains how to apply a custom index lifecycle policy to the `trac The **Data Streams** view in {kib} shows you data streams, index templates, and lifecycle policies: -. Navigate to **{stack-manage-app}** → **Index Management** → **Data Streams**. +. To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Select **Data Streams**. . Search for `traces-apm` to see all data streams associated with APM trace data. . In this example, I only have one data stream because I'm only using the `default` namespace. You may have more if your setup includes multiple namespaces. @@ -158,7 +160,7 @@ image::images/data-stream-overview.png[Data streams info] [id="apm-data-streams-custom-two{append-legacy}"] == Step 2: Create an index lifecycle policy -. Navigate to **{stack-manage-app}** → **Index Lifecycle Policies**. +. To open **Lifecycle Policies**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Click **Create policy**. Name your new policy; For this tutorial, I've chosen `custom-traces-apm-policy`. diff --git a/docs/en/observability/apm/manage-storage/reduce-apm-storage.asciidoc b/docs/en/observability/apm/manage-storage/reduce-apm-storage.asciidoc index 7b58dd4d14..4f22669256 100644 --- a/docs/en/observability/apm/manage-storage/reduce-apm-storage.asciidoc +++ b/docs/en/observability/apm/manage-storage/reduce-apm-storage.asciidoc @@ -90,7 +90,7 @@ POST /.ds-*-apm*/_delete_by_query {kib}'s {ref}/index-mgmt.html[Index Management] allows you to manage your cluster's indices, data streams, index templates, and much more. -In {kib}, navigate to **Stack Management** > **Index Management** > **Data Streams**. +To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Select **Data Streams**. Select the data streams you want to delete, and click **Delete data streams**. 
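If you prefer working with the API instead of the UI, the same cleanup can be done with the {es} delete data stream API. The following is a minimal sketch; `traces-apm-default` is only an example, so substitute the data stream you selected in **Index Management**:

[source,console]
----
# Deletes the data stream and all of its backing indices; replace the name with your own data stream
DELETE /_data_stream/traces-apm-default
----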
[float] diff --git a/docs/en/observability/apm/security/apm-agents/api-keys.asciidoc index 612efe3bda..dd38ce53cd 100644 --- a/docs/en/observability/apm/security/apm-agents/api-keys.asciidoc +++ b/docs/en/observability/apm/security/apm-agents/api-keys.asciidoc @@ -78,7 +78,7 @@ The Applications UI has a built-in workflow that you can use to easily create an Only API keys created in the Applications UI will show up here. Using a superuser account, or a user with the role created in the previous step, -open {kib} and navigate to **{observability}** → **Applications** → **Settings** → **Agent keys**. +open {kib} and find **Applications** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Go to **Settings** → **Agent keys**. Enter a name for your API key and select at least one privilege. For example, to create an API key that can be used to ingest APM events diff --git a/docs/en/observability/apm/security/data-security/apm-spaces.asciidoc index dfd2d3ff73..d7fab241ba 100644 --- a/docs/en/observability/apm/security/data-security/apm-spaces.asciidoc +++ b/docs/en/observability/apm/security/data-security/apm-spaces.asciidoc @@ -247,7 +247,8 @@ POST /_aliases?pretty === Step 2: Create {kib} spaces Next, you'll need to create a {kib} space for each service environment. -To create these spaces, navigate to **Stack Management** > **Spaces** > **Create a space**. +To open **Spaces**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +To create a new space, click **Create a space**. For this guide, we've created two Kibana spaces, one named `production` and one named `staging`. See {kibana-ref}/xpack-spaces.html[Kibana spaces] for more information on creating a space. @@ -273,7 +274,8 @@ The values in each column match the names of the filtered aliases we created in [float] === Step 4: Create {kib} access roles -In {kib}, navigate to **Stack Management** > **Roles** and click **Create role**. +To open **Roles**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +Click **Create role**. You'll need to create two roles: one for `staging` users (we'll call this role `staging_apm_viewer`) and one for `production` users (we'll call this role `production_apm_viewer`). diff --git a/docs/en/observability/apm/security/data-security/custom-filter.asciidoc index 32566c4828..300f5ef6c5 100644 --- a/docs/en/observability/apm/security/data-security/custom-filter.asciidoc +++ b/docs/en/observability/apm/security/data-security/custom-filter.asciidoc @@ -276,9 +276,9 @@ PUT _ingest/pipeline/traces-apm@custom ---- <1> The name of the pipeline we previously created -TIP: If you prefer using a GUI, you can instead open {kib} and navigate to -**Stack Management** -> **Ingest Pipelines** -> **Create pipeline**. -Use the same naming convention explained previously to ensure your new pipeline matches the correct APM data stream. +TIP: If you prefer using a GUI, you can instead use the **Ingest Pipelines** page in {kib}. +To open **Ingest Pipelines**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].
+Click **Create pipeline** and use the same naming convention explained previously to ensure your new pipeline matches the correct APM data stream. That's it! Passwords will now be redacted from your APM HTTP body data. diff --git a/docs/en/observability/apm/security/elastic-stack/access-api-keys.asciidoc index 3a8862fd6f..d763055fd0 100644 --- a/docs/en/observability/apm/security/elastic-stack/access-api-keys.asciidoc +++ b/docs/en/observability/apm/security/elastic-stack/access-api-keys.asciidoc @@ -17,7 +17,8 @@ You can create as many API keys per user as necessary. [[apm-beats-api-key-publish]] == Create an API key for writing events -In {kib}, navigate to **{stack-manage-app}** > **API keys** and click **Create API key**. +To open **API keys**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +Click **Create API key**. [role="screenshot"] image::images/server-api-key-create.png[API key creation] @@ -78,7 +79,8 @@ output.elasticsearch: [[apm-beats-api-key-monitor]] == Create an API key for monitoring -In {kib}, navigate to **{stack-manage-app}** > **API keys** and click **Create API key**. +To open **API keys**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +Click **Create API key**. [role="screenshot"] image::images/server-api-key-create.png[API key creation] diff --git a/docs/en/observability/apm/tab-widgets/jaeger.asciidoc index 2e1982b6e2..db49eb178d 100644 --- a/docs/en/observability/apm/tab-widgets/jaeger.asciidoc +++ b/docs/en/observability/apm/tab-widgets/jaeger.asciidoc @@ -1,6 +1,7 @@ // tag::ess[] . Log into {ess-console}[{ecloud}] and select your deployment. -In {kib}, select **Add data**, then search for and select "Elastic APM". +. In {kib}, find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Select **Elastic APM**. If the integration is already installed, under the policies tab, select **Actions** > **Edit integration**. If the integration has not been installed, select **Add Elastic APM**. Copy the URL. If you're using Agent authorization, copy the Secret token as well. @@ -25,7 +26,7 @@ See the https://www.jaegertracing.io/docs/1.27/cli/[Jaeger CLI flags documentati // tag::self-managed[] . Configure the APM Integration as a collector for your Jaeger agents. -In {kib}, select **Add data**, then search for and select "Elastic APM". +In {kib}, find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Select **Elastic APM**. If the integration is already installed, under the policies tab, select **Actions** > **Edit integration**. If the integration has not been installed, select **Add Elastic APM**. Copy the Host. If you're using Agent authorization, copy the Secret token as well. diff --git a/docs/en/observability/apm/upgrading-to-integration.asciidoc index b0d20bb674..bfac1aa09d 100644 --- a/docs/en/observability/apm/upgrading-to-integration.asciidoc +++ b/docs/en/observability/apm/upgrading-to-integration.asciidoc @@ -174,7 +174,7 @@ The process of migrating should only take a few minutes. With a Superuser account, complete the following steps: -.
In {kib}, navigate to **{observability}** → **Applications** → **Settings** → **Schema**. +. In {kib}, go to the **Applications** app and click **Settings** → **Schema**. + image::./images/schema-agent.png[switch to {agent}] diff --git a/docs/en/observability/apm/view-and-analyze/filter-and-search/cross-cluster-search.asciidoc b/docs/en/observability/apm/view-and-analyze/filter-and-search/cross-cluster-search.asciidoc index 16e3494026..0f2e129a08 100644 --- a/docs/en/observability/apm/view-and-analyze/filter-and-search/cross-cluster-search.asciidoc +++ b/docs/en/observability/apm/view-and-analyze/filter-and-search/cross-cluster-search.asciidoc @@ -21,7 +21,7 @@ and allowing for better performance while managing multiple observability use ca If you're using the Hosted {ess}, see {cloud}/ec-enable-ccs.html[Enable cross-cluster search]. // lint ignore elasticsearch -You can add remote clusters directly in {kib}, under *Management* → *Elasticsearch* → *Remote clusters*. +To add remote clusters directly in {kib}, find `Remote Clusters` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. All you need is a name for the remote cluster and the seed node(s). Remember the names of your remote clusters, you'll need them in step two. See {ref}/ccr-getting-started.html[managing remote clusters] for detailed information on the setup process. @@ -44,8 +44,8 @@ You can also specify certain clusters to display data from, for example, There are two ways to edit the default {data-source}: -* In the Applications UI -- Navigate to *Applications* → *Settings* → *Indices*, and change all `xpack.apm.indices.*` values to -include remote clusters. +* In the Applications UI -- Find **Applications** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +Go to **Settings** → **Indices** and change all `xpack.apm.indices.*` values to include remote clusters. * In `kibana.yml` -- Update the {kibana-ref}/apm-settings-kb.html[`xpack.apm.indices.*`] configuration values to include remote clusters. @@ -53,11 +53,11 @@ include remote clusters. .Exclude data tiers from search ==== In a cross-cluster search (CCS) environment, it's possible for different clusters to serve different data tiers in responses. -If one of the requested clusters responds slowly, it can cause a timeout at the proxy after 320 seconds. +If one of the requested clusters responds slowly, it can cause a timeout at the proxy after 320 seconds. This results in 502 Bad Gateway server error responses presented as failure toast messages in the UI, and no data loaded. To prevent this, you can exclude {ref}/data-tiers.html[data tiers] that might slow down responses from search: the `data_frozen` and `data_cold` tiers. To exclude data tiers from search in the APM UI: -. In {kib}, go to *Stack Management* → *Advanced Settings*. +. To open **Advanced settings**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . In the *Observability* section, update the *Excluded data tiers from search* option with a list of data tiers. 
==== diff --git a/docs/en/observability/apm/view-and-analyze/inventory.asciidoc b/docs/en/observability/apm/view-and-analyze/inventory.asciidoc index 2e5ae89f9e..6ccffb1e13 100644 --- a/docs/en/observability/apm/view-and-analyze/inventory.asciidoc +++ b/docs/en/observability/apm/view-and-analyze/inventory.asciidoc @@ -39,7 +39,7 @@ Inventory allows you to: [[explore-your-entities]] == Explore your entities -. In your Observability project, go to **Inventory** to view all of your entities. +. To view all your entities, find **Inventory** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + When you open the Inventory for the first time, you'll be asked to enable the EEM. Once enabled, the Inventory will be accessible to anyone with the appropriate privileges. + @@ -75,16 +75,22 @@ You can add entities to the Inventory through one of the following approaches: * [[add-data-entities]] == Add data -To add entities, select **Add data** from the left-hand navigation and choose one of the following onboarding journeys: +To add entities, select **Add data** and choose one of the following onboarding journeys: -- **Auto-detect logs and metrics** +- **Host** Detects hosts (with metrics and logs) - **Kubernetes** Detects hosts, containers, and services -- **Elastic APM / OpenTelemetry / Synthetic Monitor** +- **Application** Detects services -Associate existing service logs + +- **Cloud** +Ingests telemetry data from the Cloud + +[float] +[[associate-existing-service-logs]] +== Associate existing service logs To learn how, refer to <>. \ No newline at end of file diff --git a/docs/en/observability/apm/view-and-analyze/ui-overview/services.asciidoc b/docs/en/observability/apm/view-and-analyze/ui-overview/services.asciidoc index 9581bfc43d..3c578a33a4 100644 --- a/docs/en/observability/apm/view-and-analyze/ui-overview/services.asciidoc +++ b/docs/en/observability/apm/view-and-analyze/ui-overview/services.asciidoc @@ -31,12 +31,10 @@ Service groups are {kib} space-specific and available for any users with appropr [role="screenshot"] image::./images/apm-service-group.png[Example view of service group in the Applications UI in Kibana] -To enable Service groups, open {kib} and navigate to **Stack Management** → **Advanced Settings** → **Observability**, -and enable the **Service groups feature**. To create a service group: -. Navigate to **Observability** → **Applications** → **Service inventory**. +. To open **Service inventory**, find **Applications** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Switch to **Service groups**. . Click **Create group**. . Specify a name, color, and description. diff --git a/docs/en/observability/categorize-logs.asciidoc b/docs/en/observability/categorize-logs.asciidoc index 4162f3d128..0e6af5e5a4 100644 --- a/docs/en/observability/categorize-logs.asciidoc +++ b/docs/en/observability/categorize-logs.asciidoc @@ -5,7 +5,7 @@ Application log events are often unstructured and contain variable data. Many log messages are the same or very similar, so classifying them can reduce millions of log lines into just a few categories. -Within the {logs-app}, the *Categories* page enables you to identify patterns in +The *Categories* page enables you to identify patterns in your log events quickly. 
Instead of manually identifying similar logs, the logs categorization view lists log events that have been grouped based on their messages and formats so that you can take action quicker. @@ -28,11 +28,11 @@ the static parts of the message, clusters similar messages, classifies them into message categories, and detects unusually high message counts in the categories. // lint ignore ml -1. Select *Categories*, and you are prompted to use {ml} to create - log rate categorizations. +1. Open the **Categories** page by finding `Logs / Categories` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + You are prompted to use {ml} to create log rate categorizations. 2. Choose a time range for the {ml} analysis. By default, the {ml} job analyzes log messages no older than four weeks and continues indefinitely. -3. Add the indices that contain the logs you want to examine. By default, Machine Learning analyzes messages in all log indices that match the patterns set in the *logs source* advanced setting. Update this setting by going to *Management* → *Advanced Settings* and searching for _logs source_. +3. Add the indices that contain the logs you want to examine. By default, Machine Learning analyzes messages in all log indices that match the patterns set in the *logs sources* advanced setting. To open **Advanced settings**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. 4. Click *Create ML job*. The job is created, and it starts to run. It takes a few minutes for the {ml} robots to collect the necessary data. After the job processed the data, you can view the results. diff --git a/docs/en/observability/cloud-monitoring/aws/ingest-aws-firehose.asciidoc b/docs/en/observability/cloud-monitoring/aws/ingest-aws-firehose.asciidoc index 9749a64bad..ee5b882ab9 100644 --- a/docs/en/observability/cloud-monitoring/aws/ingest-aws-firehose.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/ingest-aws-firehose.asciidoc @@ -29,7 +29,7 @@ The deployment includes an {es} cluster for storing and searching your data, and [[firehose-step-one]] === Step 1: Install AWS integration in {kib} -. Install AWS integrations to load index templates, ingest pipelines, and dashboards into {kib}. In {kib}, navigate to *Management* > *Integrations* in the sidebar. Find the AWS Integration by browsing the catalog. +. Install AWS integrations to load index templates, ingest pipelines, and dashboards into {kib}. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Find the AWS Integration by browsing the catalog. . Navigate to the *Settings* tab and click *Install AWS assets*. Confirm by clicking *Install AWS* in the popup. @@ -39,9 +39,9 @@ The deployment includes an {es} cluster for storing and searching your data, and [[firehose-step-two]] === Step 2: Create a delivery stream in Amazon Data Firehose -. Go to the https://console.aws.amazon.com/[AWS console] and navigate to Amazon Data Firehose. +. Go to the https://console.aws.amazon.com/[AWS console] and navigate to Amazon Data Firehose. -. Click *Create Firehose stream* and choose the source and destination of your Firehose stream. Unless you are streaming data from Kinesis Data Streams, set source to `Direct PUT` and destination to `Elastic`. +. Click *Create Firehose stream* and choose the source and destination of your Firehose stream. 
Unless you are streaming data from Kinesis Data Streams, set source to `Direct PUT` and destination to `Elastic`. . Provide a meaningful *Firehose stream name* that will allow you to identify this delivery stream later. @@ -55,9 +55,9 @@ NOTE: For advanced use cases, source records can be transformed by invoking a cu + * *Elastic endpoint URL*: Enter the Elastic endpoint URL of your Elasticsearch cluster. To find the Elasticsearch endpoint, go to the Elastic Cloud console and select *Connection details*. Here is an example of how it looks like: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`. + -* *API key*: Enter the encoded Elastic API key. To create an API key, go to the Elastic Cloud console, select *Connection details* and click *Create and manage API keys*. If you are using an API key with *Restrict privileges*, make sure to review the Indices privileges to provide at least "auto_configure" & "write" permissions for the indices you will be using with this delivery stream. +* *API key*: Enter the encoded Elastic API key. To create an API key, go to the Elastic Cloud console, select *Connection details* and click *Create and manage API keys*. If you are using an API key with *Restrict privileges*, make sure to review the Indices privileges to provide at least "auto_configure" & "write" permissions for the indices you will be using with this delivery stream. + -* *Content encoding*: For a better network efficiency, leave content encoding set to GZIP. +* *Content encoding*: For a better network efficiency, leave content encoding set to GZIP. + * *Retry duration*: Determines how long Firehose continues retrying the request in the event of an error. A duration of 60-300s should be suitable for most use cases. + @@ -84,5 +84,5 @@ For example, a typical workflow for sending CloudTrail logs to Firehose would be We also added support for sending CloudWatch monitoring metrics to Elastic using Firehose. For example, you can configure metrics ingestion by creating a metric stream through CloudWatch. You can select an existing Firehose stream by choosing the option **Custom setup with Firehose**. For more information, refer to the AWS documentation https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-metric-streams-setup-datalake.html[about the custom setup with Firehose]. -For more information on Amazon Data Firehose, you can also check the https://docs.elastic.co/integrations/awsfirehose[Amazon Data Firehose Integrations documentation]. +For more information on Amazon Data Firehose, you can also check the https://docs.elastic.co/integrations/awsfirehose[Amazon Data Firehose Integrations documentation]. diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-amazon-ec2.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-amazon-ec2.asciidoc index efc82a5d56..4bed41f3e5 100644 --- a/docs/en/observability/cloud-monitoring/aws/monitor-amazon-ec2.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/monitor-amazon-ec2.asciidoc @@ -59,7 +59,7 @@ ways, refer to {cloud}/ec-cloud-ingest-data.html[Adding data to {es}]. [[dashboard-ec2]] == Dashboards -{kibana-desc} +{kib} provides a full data analytics platform with out-of-the-box dashboards that you can clone and enhance to satisfy your custom visualization use cases. For example, to see an overview of your EC2 instance metrics in {kib}, go to the **Dashboard** app and navigate to the **[Metrics AWS] EC2 Overview** dashboard. 
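Returning to the Firehose destination settings above: the API key you paste into the *API key* field only needs index privileges that let Firehose write the incoming data. The following is a minimal sketch that uses the {es} create API key endpoint; the key name and index patterns are placeholders and should be narrowed to the data streams your Firehose stream will actually write to:

[source,console]
----
# Placeholder key name and index patterns; scope them to the data streams this Firehose stream writes to
POST /_security/api_key
{
  "name": "firehose-ingest",
  "role_descriptors": {
    "firehose_writer": {
      "indices": [
        {
          "names": ["logs-*", "metrics-*"],
          "privileges": ["auto_configure", "write"]
        }
      ]
    }
  }
}
----

Copy the `encoded` value from the response into the Firehose *API key* field.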
diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-amazon-kinesis.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-amazon-kinesis.asciidoc index 7bc9dad8a8..7c99e12e65 100644 --- a/docs/en/observability/cloud-monitoring/aws/monitor-amazon-kinesis.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/monitor-amazon-kinesis.asciidoc @@ -59,7 +59,7 @@ other ways, refer to {cloud}/ec-cloud-ingest-data.html[Adding data to {es}]. [[dashboard-kinesis]] == Dashboards -{kibana-desc} +{kib} provides a full data analytics platform with out-of-the-box dashboards that you can clone and enhance to satisfy your custom visualization use cases. For example, to see an overview of your Kinesis data streams in {kib}, go to the **Dashboard** app and navigate to the **[Metrics AWS] Kinesis Overview** dashboard. diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-amazon-s3.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-amazon-s3.asciidoc index 24bc3bcdf4..63555f024a 100644 --- a/docs/en/observability/cloud-monitoring/aws/monitor-amazon-s3.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/monitor-amazon-s3.asciidoc @@ -57,7 +57,7 @@ ways, refer to {cloud}/ec-cloud-ingest-data.html[Adding data to {es}]. [[dashboard-s3]] == Dashboards -{kibana-desc} +{kib} provides a full data analytics platform with out-of-the-box dashboards that you can clone and enhance to satisfy your custom visualization use cases. For example, to see an overview of your S3 metrics in {kib}, go to the **Dashboard** app and navigate to the **[Metrics AWS] S3 Overview** dashboard. diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-aws-agent.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-aws-agent.asciidoc index 34dc15dec3..255d656f9c 100644 --- a/docs/en/observability/cloud-monitoring/aws/monitor-aws-agent.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/monitor-aws-agent.asciidoc @@ -144,7 +144,7 @@ After you get that working, you'll learn how to add S3 access logs. To add the integration: -. Go to the {kib} home page and click **Add integrations**. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . In the query bar, search for **AWS** and select the AWS integration to see more details about it. @@ -299,8 +299,9 @@ The {agent} you've deployed is already running and collecting VPC flow logs. Now you need to edit the agent policy and configure the integration to collect S3 access logs. -. From the main menu in {kib}, go to **{fleet} -> Agents** and click the policy -your agent is using. +. In {kib}, find **Fleet** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + +. On the **Agents** tab, click the policy your agent is using. . Edit the AWS integration policy and turn on the **Collect S3 access logs from S3** selector. @@ -316,9 +317,9 @@ collecting data. === Step 5: Visualize AWS logs Now that logs are streaming into {es}, you can visualize them in {kib}. To see -the raw logs, open the main menu in {kib}, then click **Logs**. Notice that you +the raw logs, find **Discover** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. -can filter on a specific data stream. For example, set +Notice that you can filter on a specific data stream. For example, set `data_stream.dataset : "aws.s3access"` to show S3 access logs. 
The AWS integration also comes with pre-built dashboards that you can use to @@ -373,8 +374,9 @@ least the following permissions: } ---- -. From the main menu in {kib}, go to **{fleet} -> Agents** and click the policy -your agent is using. +. In {kib}, find **Fleet** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + +. On the **Agents** tab, click the policy your agent is using. . Edit the AWS integration policy and turn on the **Collect billing metrics** selector. You can accept the defaults. @@ -390,14 +392,14 @@ collecting data. === Step 7: Visualize AWS metrics Now that the metrics are streaming into {es}, you can visualize them in {kib}. -In {kib}, open the main menu and click **Discover**. Select the `metrics-*` -data view, then filter on `data_stream.dataset: "aws.ec2_metrics"`: +Find **Discover** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +Select the `metrics-*` data view, then filter on `data_stream.dataset: "aws.ec2_metrics"`: [role="screenshot"] image::images/agent-tut-ec2-metrics-discover.png[Screenshot of the Discover app showing EC2 metrics] The AWS integration also comes with pre-built dashboards that you can use to -visualize the data. In {kib}, open the main menu and click **Dashboard**. +visualize the data. Find **Dashboards** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Search for EC2 and select the dashboard called **[Metrics AWS] EC2 Overview**: diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-aws-beats.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-aws-beats.asciidoc index ca333dcea0..d669b1f8f0 100644 --- a/docs/en/observability/cloud-monitoring/aws/monitor-aws-beats.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/monitor-aws-beats.asciidoc @@ -288,15 +288,11 @@ running the following command: === Step 6: Visualize Logs Now that the logs are being shipped to {es} we can visualize them in -{kib}. To see the raw logs, open the main menu in {kib}, then click -**Logs**: - -image::EC2-logs.png[EC2 logs in the Logs UI] +{kib}. To see the raw logs, find **Discover** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. // lint ignore filebeat The filesets we used in the previous steps also come with pre-built dashboards -that you can use to visualize the data. In {kib}, open the main menu and click -**Dashboard**. Search for S3 and select the dashboard called: +that you can use to visualize the data. In {kib}, find **Dashboards** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Search for S3 and select the dashboard called: **[Filebeat AWS] S3 Server Access Log Overview**: image::S3-Server-Access-Logs.png[S3 Server Access Log Overview] @@ -415,15 +411,13 @@ You can now start {metricbeat}: === Step 10: Visualize metrics Now that the metrics are being streamed to {es} we can visualize them in -{kib}. In {kib}, open the main menu and click -**Infrastructure**. Make sure to show the **AWS** source and the **EC2 Instances**: +{kib}. To open **Infrastructure inventory**, find **Infrastructure** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. 
Make sure to show the **AWS** source and the **EC2 Instances**: image::EC2-instances.png[Your EC2 Infrastructure] // lint ignore metricbeat The metricsets we used in the previous steps also comes with pre-built dashboard -that you can use to visualize the data. In {kib}, open the main menu and click -**Dashboard**. Search for EC2 and select the dashboard called: **[Metricbeat +that you can use to visualize the data. In {kib}, find **Dashboards** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Search for EC2 and select the dashboard called: **[Metricbeat AWS] EC2 Overview**: image::ec2-dashboard.png[EC2 Overview] diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-aws-cloudtrail-firehose.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-aws-cloudtrail-firehose.asciidoc index 7f5921b02b..8c5a12242d 100644 --- a/docs/en/observability/cloud-monitoring/aws/monitor-aws-cloudtrail-firehose.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/monitor-aws-cloudtrail-firehose.asciidoc @@ -29,7 +29,9 @@ IMPORTANT: Make sure the deployment is on AWS, because the Amazon Data Firehose [[firehose-cloudtrail-step-one]] == Step 1: Install AWS integration in {kib} -. In {kib}, navigate to *Management* > *Integrations* and browse the catalog to find the Amazon Data Firehose integration. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + +. Browse the catalog to find the Amazon Data Firehose integration. . Navigate to the *Settings* tab and click *Install Amazon Data Firehose assets*. diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-aws-cloudwatch-firehose.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-aws-cloudwatch-firehose.asciidoc index bc8c095626..2d431b2304 100644 --- a/docs/en/observability/cloud-monitoring/aws/monitor-aws-cloudwatch-firehose.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/monitor-aws-cloudwatch-firehose.asciidoc @@ -30,7 +30,9 @@ IMPORTANT: AWS PrivateLink is not supported. Make sure the deployment is on AWS, [[firehose-cloudwatch-step-one]] == Step 1: Install AWS integration in {kib} -. In {kib}, navigate to *Management* > *Integrations* and browse the catalog to find the AWS integration. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + +. Browse the catalog to find the AWS integration. . Navigate to the *Settings* tab and click *Install AWS assets*. diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-aws-esf.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-aws-esf.asciidoc index 4c825bdb29..0c4b120cfb 100644 --- a/docs/en/observability/cloud-monitoring/aws/monitor-aws-esf.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/monitor-aws-esf.asciidoc @@ -5,7 +5,7 @@ Monitor {aws} with Elastic Serverless Forwarder ++++ -The Elastic Serverless Forwarder (ESF) is an Amazon Web Services (AWS) Lambda function that ships logs from your AWS environment to Elastic. Elastic Serverless Forwarder is published in the AWS Serverless Application Repository (SAR). For more information on ESF, check the {esf-ref}/aws-elastic-serverless-forwarder.html[Elastic Serverless Forwarder Guide]. +The Elastic Serverless Forwarder (ESF) is an Amazon Web Services (AWS) Lambda function that ships logs from your AWS environment to Elastic. 
Elastic Serverless Forwarder is published in the AWS Serverless Application Repository (SAR). For more information on ESF, check the {esf-ref}/aws-elastic-serverless-forwarder.html[Elastic Serverless Forwarder Guide]. [discrete] [[aws-esf-what-you-learn]] @@ -30,7 +30,7 @@ You also need an AWS account with permissions to pull the necessary data from AW [[esf-step-one]] === Step 1: Create an S3 Bucket to store VPC flow logs -. In the https://s3.console.aws.amazon.com/s3[AWS S3 console], choose *Create bucket* from the left navigation pane. +. In the https://s3.console.aws.amazon.com/s3[AWS S3 console], choose *Create bucket* from the left navigation pane. . Specify the AWS region in which you want it deployed. . Enter the bucket name. @@ -44,7 +44,7 @@ For more details, refer to the Amazon documentation on how to https://docs.aws.a 2. Select the network interface you want to use. 3. From the *Actions* drop-down menu, choose *Create flow log*. 4. For *Destination*, select *Send to an S3 bucket*. -5. For *S3 bucket ARN*, enter the name of the S3 bucket you created in the previous step. +5. For *S3 bucket ARN*, enter the name of the S3 bucket you created in the previous step. For more details, refer to the Amazon documentation on how to https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-s3.html[Create a flow log that publishes to Amazon S3]. @@ -94,11 +94,12 @@ For more details, refer to the AWS documentation on how to https://docs.aws.amaz [discrete] [[esf-step-four]] -=== Step 4: Install the Elastic AWS integration +=== Step 4: Install the Elastic AWS integration -{kib} offers prebuilt dashboards, ingest node configurations, and other assets that help you get the most value out of the logs you ingest. +{kib} offers prebuilt dashboards, ingest node configurations, and other assets that help you get the most value out of the logs you ingest. -. Go to *Integrations* in {kib} and search for AWS. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Search for AWS. . Click the AWS integration, select *Settings* and click *Install AWS assets* to install all the AWS integration assets. [discrete] @@ -113,12 +114,12 @@ IMPORTANT: Make sure you create the S3 bucket in the same region as the bucket c [discrete] [[esf-step-six]] -=== Step 6: Create a configuration file to specify the source and destination +=== Step 6: Create a configuration file to specify the source and destination Elastic Serverless Forwarder uses the configuration file to know the input source and the Elastic connection for the destination information. -. In Elastic Cloud, from the AWS Integrations page click *Connection details* on the upper right corner and copy your Cloud ID. -. Create an encoded API key for authentication. +. In Elastic Cloud, from the AWS Integrations page click *Connection details* on the upper right corner and copy your Cloud ID. +. Create an encoded API key for authentication. + You are going to reference both the Cloud ID and the newly created API key from the configuration file. 
Here is an example: + diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-aws-firewall-firehose.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-aws-firewall-firehose.asciidoc index d52de9ce90..47b4e53c62 100644 --- a/docs/en/observability/cloud-monitoring/aws/monitor-aws-firewall-firehose.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/monitor-aws-firewall-firehose.asciidoc @@ -29,7 +29,9 @@ IMPORTANT: AWS PrivateLink is not supported. Make sure the deployment is on AWS, [[firehose-firewall-step-one]] == Step 1: Install AWS integration in {kib} -. In {kib}, navigate to *Management* > *Integrations* and browse the catalog to find the AWS integration. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + +. Browse the catalog to find the AWS integration. . Navigate to the *Settings* tab and click *Install AWS assets*. diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-aws-waf-firehose.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-aws-waf-firehose.asciidoc index fde9abbe36..95511dc580 100644 --- a/docs/en/observability/cloud-monitoring/aws/monitor-aws-waf-firehose.asciidoc +++ b/docs/en/observability/cloud-monitoring/aws/monitor-aws-waf-firehose.asciidoc @@ -30,7 +30,9 @@ IMPORTANT: Make sure the deployment is on AWS, because the Amazon Data Firehose [[firehose-waf-step-one]] == Step 1: Install the AWS integration in {kib} -. In {kib}, navigate to *Management* > *Integrations* and browse the catalog to find the AWS integration. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + +. Browse the catalog to find the AWS integration. . Navigate to the *Settings* tab and click *Install AWS assets*. diff --git a/docs/en/observability/cloud-monitoring/azure/monitor-azure-agent.asciidoc b/docs/en/observability/cloud-monitoring/azure/monitor-azure-agent.asciidoc index c7790c244a..645fda8d94 100644 --- a/docs/en/observability/cloud-monitoring/azure/monitor-azure-agent.asciidoc +++ b/docs/en/observability/cloud-monitoring/azure/monitor-azure-agent.asciidoc @@ -112,10 +112,7 @@ details and forecast information, about your subscription. To add the integration: -. Go to the {kib} home page and click **Add integrations**. -+ -[role="screenshot"] -image::images/kibana-home.png[Screenshot of the {kib} home page] +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . In the query bar, search for **Azure Billing** and select the Azure Billing Metrics integration to see more details about it. @@ -203,9 +200,9 @@ to confirm incoming data, or close the window. [[azure-elastic-agent-visualize-metrics]] === Step 4: Visualize Azure billing metrics -Now that the metrics are streaming to {es}, you can visualize them in {kib}. In -Kibana, open the main menu and click **Dashboard**. Search for Azure Billing and -select the dashboard called **[Azure Billing] Billing Overview**. +Now that the metrics are streaming to {es}, you can visualize them in {kib}. +Find **Dashboards** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +Search for Azure Billing and select the dashboard called **[Azure Billing] Billing Overview**. 
[role="screenshot"] image::images/agent-tut-azure-billing-dashboard.png[Screenshot of Azure billing overview dashboard] @@ -308,7 +305,7 @@ the Azure activity log integration to ingest the logs. To add the integration: -. Go to the {kib} home page and click **Add integrations**. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . In the query bar, search for **Azure activity logs** and select the Azure activity logs integration to see more details about it. @@ -361,26 +358,10 @@ change and start sending Azure activity logs to {es}. [[azure-elastic-agent-visualize-azure-logs]] === Step 5: Visualize Azure activity logs -Now that logs are streaming into {es}, you can visualize them in {kib}. To see -the raw logs, open the main menu in {kib}, then click **Logs**. Notice that you -can filter on a specific data stream. This example uses -`data_stream.dataset : "azure.activitylogs"` to show Azure activity logs: - -[role="screenshot"] -image::images/agent-tut-azure-activity-logs.png[Screenshot of Logs app showing Azure activity logs] - -[TIP] -==== -The default view on the Stream page includes the Message column, which is not -populated for activity logs. To avoid seeing `failed to find message` repeated -on the Stream page, you can change the default columns shown in the view. On the -**Logs -> Stream** page, click **Settings** and delete the Message column. Add a -new column based on a different field, for example, -`azure.activitylogs.event_category`. - -[role="screenshot"] -image::images/agent-tut-azure-activity-log-columns.png[Screenshot showing the log columns changed to include the azure.activitylogs.event_category field] -==== +Now that logs are streaming into {es}, you can visualize them in {kib}. +To see the raw logs, find **Discover** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Notice that you +can filter on a specific data stream. For example, you can use +`data_stream.dataset : "azure.activitylogs"` to show Azure activity logs. The Azure activity logs integration also comes with pre-built dashboards that you can use to visualize the data. In {kib}, open the main menu and click diff --git a/docs/en/observability/cloud-monitoring/azure/monitor-azure-beats.asciidoc b/docs/en/observability/cloud-monitoring/azure/monitor-azure-beats.asciidoc index 2091667af9..be854ed02c 100644 --- a/docs/en/observability/cloud-monitoring/azure/monitor-azure-beats.asciidoc +++ b/docs/en/observability/cloud-monitoring/azure/monitor-azure-beats.asciidoc @@ -116,19 +116,11 @@ Native metrics collection is not fully supported yet and is discussed later. ==== -. Within {kib}, click *{observability}* until you -see some data. +. In {kib}, find the {observability} **Overview** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +Refresh the page until you see some data. This may take a few minutes. -+ -[role="screenshot"] -image:monitor-azure-kibana-observability-page-data.png[{kib} {observability} page -(with data)] -. To access the {logs-app} and analyze all your subscription -and resource logs, click *View in app*. -+ -[role="screenshot"] -image:monitor-azure-kibana-logs-app.png[{kib} {logs-app}] +. To analyze your subscription and resource logs, click **Show Logs Explorer**. [discrete] [[azure-step-three]] @@ -147,16 +139,10 @@ image:monitor-azure-elastic-vms.png[Select VMs to collect logs and metrics from] . 
Wait until it is installed and sending data (if the list does not update, click *Refresh* ). -To see the logs from the VM in the {logs-app}, click *Logs*. -+ -[role="screenshot"] -image:monitor-azure-kibana-vms-logs.png[VMs logs in the {logs-app}] -+ -To see the VM metrics dashboard, click *Infrastructure*. -+ -[role="screenshot"] -image:monitor-azure-kibana-vms-metrics.png[VMs metrics dashboard] +To see the logs from the VM, open **Logs Explorer** (find `Logs Explorer` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]). + +To view VM metrics, go to **Infrastructure inventory** and then select a VM. (To open **Infrastructure inventory**, find **Infrastructure** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].) + [NOTE] ==== Both logs and metrics are filtered by the VM name that you selected. diff --git a/docs/en/observability/cloud-monitoring/azure/monitor-azure-native.asciidoc b/docs/en/observability/cloud-monitoring/azure/monitor-azure-native.asciidoc index 43f10ad1f1..790dfe27ae 100644 --- a/docs/en/observability/cloud-monitoring/azure/monitor-azure-native.asciidoc +++ b/docs/en/observability/cloud-monitoring/azure/monitor-azure-native.asciidoc @@ -102,17 +102,11 @@ NOTE: Native metrics collection for Azure services is not fully supported yet. To learn how to collect metrics from Azure services, refer to <>. -. In {kib}, under **{observability}**, click **Overview** until data appears in -{kib}. This might take several minutes. -+ -[role="screenshot"] -image::monitor-azure-native-kibana-observability-page-data.png[Screenshot of {kib} {observability} overview] +. In {kib}, under **{observability}**, find **Overview** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +Refresh the page until you see some data. +This may take a few minutes. -. To analyze your subscription and resource logs, click **Show log stream** (or - click **Stream** in the navigation pane). -+ -[role="screenshot"] -image::monitor-azure-native-kibana-logs-app.png[{kib} {logs-app}] +. To analyze your subscription and resource logs, click **Show Logs Explorer**. [discrete] [[azure-ingest-VM-logs-metrics]] @@ -129,22 +123,13 @@ image::monitor-azure-native-elastic-vms.png[Screenshot that shows VMs selected f . Wait until the extension is installed and sending data (if the list does not update, click **Refresh** ). -. Back in {kib}, view the log stream again (**Logs -> Stream**). +. Back in {kib}, view the **Logs Explorer** again. Notice that you can filter the view to show logs for a specific instance, for example -`cloud.instance.name : "ingest-tutorial-linux"`: -+ -[role="screenshot"] -image::monitor-azure-native-kibana-vms-logs.png[Screenshot of VM logs in the {logs-app}] +`cloud.instance.name : "ingest-tutorial-linux"`. -. To view VM metrics, go to **Infrastructure -> Inventory** and then select a VM. -+ -[role="screenshot"] -image::monitor-azure-native-kibana-vms-metrics.png[Screenshot of VM metrics] +. To view VM metrics, go to **Infrastructure inventory** and then select a VM. (To open **Infrastructure inventory**, find **Infrastructure** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].) + To explore the data further, click **Open as page**. -+ -[role="screenshot"] -image::monitor-azure-native-kibana-vms-metrics-detail.png[Screenshot of detailed VM metrics] Congratulations! You have completed the tutorial. 
diff --git a/docs/en/observability/cloud-monitoring/azure/monitor-azure-openai-apm.asciidoc b/docs/en/observability/cloud-monitoring/azure/monitor-azure-openai-apm.asciidoc index 09583dfbf8..7fdc478ef4 100644 --- a/docs/en/observability/cloud-monitoring/azure/monitor-azure-openai-apm.asciidoc +++ b/docs/en/observability/cloud-monitoring/azure/monitor-azure-openai-apm.asciidoc @@ -14,7 +14,7 @@ For this tutorial, we'll be using an https://github.com/mdbirnstiehl/AzureOpenAI To start collecting APM data for your Azure OpenAI applications, gather the OpenTelemetry OTLP exporter endpoint and authentication header from your {ecloud} instance: -. From the {kib} homepage, select **Add integrations**. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Select the **APM** integration. . Scroll down to **APM Agents** and select the **OpenTelemetry** tab. . Make note of the configuration values for the following configuration settings: diff --git a/docs/en/observability/cloud-monitoring/azure/monitor-azure-openai.asciidoc b/docs/en/observability/cloud-monitoring/azure/monitor-azure-openai.asciidoc index e736407854..e0091d735a 100644 --- a/docs/en/observability/cloud-monitoring/azure/monitor-azure-openai.asciidoc +++ b/docs/en/observability/cloud-monitoring/azure/monitor-azure-openai.asciidoc @@ -164,7 +164,7 @@ To add a role assignment to your app: [[azure-openai-configure-integration]] === Step 3: Configure the Azure OpenAI integration -. Go to the {kib} homepage and click **Add integrations**. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . In the query bar, search for **Azure OpenAI** and select the Azure OpenAI integration card. . Click **Add Azure OpenAI**. . Under Integration settings, configure the integration name and optionally add a description. @@ -267,7 +267,7 @@ You have the following options for viewing your data: The Elastic Azure OpenAI integration comes with a built-in overview dashboard to visualize your log and metric data. To view the integration dashboards: -. From the {kib} menu under **Analytics**, select **Dashboards**. +. Find **Dashboards** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Search for **Azure OpenAI**. . Select the `[Azure OpenAI] Overview` dashboard. @@ -282,7 +282,7 @@ For more on dashboards and visualization, refer to the {kibana-ref}/dashboard.ht [[azure-openai-discover]] ==== View logs and metrics with Discover -Go to **Discover** from the {kib} menu under **Analytics**. +Find **Discover** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. From the data view drop-down, select either `logs-*` or `metrics-*` to view specific data. You can also create data views if, for example, you wanted to view both `logs-*` and `metrics-*` simultaneously. @@ -301,7 +301,7 @@ For more on using Discover and creating data views, refer to the {kibana-ref}/di [[azure-openai-logs-explorer]] ==== View logs with Logs Explorer -To view Azure OpenAI logs, open {kib} and go to *{observability} → Logs Explorer*. +To view Azure OpenAI logs, open {kib} and go to **Logs Explorer** (find `Logs Explorer` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]). 
With **Logs Explorer**, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. [role="screenshot"] diff --git a/docs/en/observability/configure-logs-sources.asciidoc b/docs/en/observability/configure-logs-sources.asciidoc index 84ca27ac39..e36623f14a 100644 --- a/docs/en/observability/configure-logs-sources.asciidoc +++ b/docs/en/observability/configure-logs-sources.asciidoc @@ -2,7 +2,7 @@ = Configure data sources Specify the source configuration for logs in the -{kibana-ref}/logs-ui-settings-kb.html[{logs-app} settings] in the +{kibana-ref}/logs-ui-settings-kb.html[Logs settings] in the {kibana-ref}/settings.html[{kib} configuration file]. By default, the configuration uses the index patterns stored in the {kib} log sources advanced setting to query the data. The configuration also defines the default columns displayed in the logs stream. @@ -15,16 +15,14 @@ default configuration settings. [[edit-config-settings]] == Edit configuration settings -. To access this page, go to *{observability} > Logs*. -+ -. Click *Settings*. +. Find `Logs / Settings` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + |=== | *Name* | Name of the source configuration. | *{kib} log sources advanced setting* | Use index patterns stored in the {kib} *log sources* advanced setting, which provides a centralized place to store and query log index patterns. -Update this setting by going to *Stack Management* → *Advanced Settings* and searching for _logs sources_. +To open **Advanced settings**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. | *{data-source-cap} (deprecated)* | The Logs UI integrates with {data-sources} to configure the used indices by clicking *Use {data-sources}*. diff --git a/docs/en/observability/create-alerts.asciidoc b/docs/en/observability/create-alerts.asciidoc index 416cb9b5a7..270a4c948c 100644 --- a/docs/en/observability/create-alerts.asciidoc +++ b/docs/en/observability/create-alerts.asciidoc @@ -26,9 +26,8 @@ You can also manage {observability} app rules alongside rules for other apps fro The first step when setting up alerts is to create a rule. To create and manage rules related to {observability} apps, go to the {observability} *Alerts* page and click *Manage Rules* to navigate to the {observability} Rules page. - -You can also create rules directly from the Applications, Logs, Infrastructure, Synthetics, and Uptime UIs without leaving the UI by -clicking *Alerts and rules* and selecting a rule, or you can select *Manage Rules* to go to the {observability} Rules page. +You can also create rules directly from most {observability} UIs by +clicking *Alerts and rules* and selecting a rule. To create SLO rules, you must first define a new SLO via the *Create new SLO* button. Once an SLO has been defined, you can create SLO rules tied to this SLO. diff --git a/docs/en/observability/explore-logs.asciidoc b/docs/en/observability/explore-logs.asciidoc index 8106a95e38..a84ff70f54 100644 --- a/docs/en/observability/explore-logs.asciidoc +++ b/docs/en/observability/explore-logs.asciidoc @@ -7,7 +7,7 @@ With **Logs Explorer**, you can quickly search and filter your log data, get inf You can also customize and save your searches and place them on a dashboard. 
Instead of having to log into different servers, change directories, and view individual files, all your logs are available in a single view. -From the {observability} navigation menu, click **Explorer** under the **Logs** heading to open Logs Explorer. +To open **Logs Explorer**, find `Logs Explorer` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. [role="screenshot"] image::images/log-explorer.png[Screen capture of the Logs Explorer] @@ -22,8 +22,8 @@ Viewing data in Logs Explorer requires `read` privileges for *Discover* and *Int [[find-your-logs]] == Find your logs -By default, Logs Explorer shows all of your logs, according to the index patterns set in the *logs source* advanced setting. -Update this setting by going to *Stack Management* → *Advanced Settings* and searching for __. +By default, Logs Explorer shows all of your logs, according to the index patterns set in the *logs sources* advanced setting. +To open **Advanced settings**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. If you need to focus on logs from a specific integration, select the integration from the logs menu: @@ -77,6 +77,7 @@ The following actions help you filter and focus on specific fields in the log de [[view-log-data-set-details]] == View log data set details -From the main {kib} menu, go to **Stack Management** → **Data Set Quality* to view more details about your data sets and monitor their overall quality. +Go to **Data Set Quality** to view more details about your data sets and monitor their overall quality. +To open **Data Set Quality**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Refer to <> for more information. \ No newline at end of file diff --git a/docs/en/observability/gcp-dataflow.asciidoc b/docs/en/observability/gcp-dataflow.asciidoc index 3b6dd49ebe..9ce922264e 100644 --- a/docs/en/observability/gcp-dataflow.asciidoc +++ b/docs/en/observability/gcp-dataflow.asciidoc @@ -27,7 +27,8 @@ You’ll start with installing the Elastic GCP integration to add pre-built dashboards, ingest node configurations, and other assets that help you get the most of the GCP logs you ingest. -. Go to *Integrations* in {kib} and search for `gcp`. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Search for `gcp`. + image::monitor-gcp-kibana-integrations.png[{kib} integrations] diff --git a/docs/en/observability/handle-no-results-found-message.asciidoc b/docs/en/observability/handle-no-results-found-message.asciidoc index ca7b4a7ae1..09ccd45fdd 100644 --- a/docs/en/observability/handle-no-results-found-message.asciidoc +++ b/docs/en/observability/handle-no-results-found-message.asciidoc @@ -20,13 +20,11 @@ This could be for any of these reasons: For example, to collect metrics from your host system, you can use the {integrations-docs}/system[System integration]. To fix the problem, install the integration and configure it to send the missing metrics. + -TIP: Follow one of our quickstarts under **Observability** → **Add data** → **Collect and analyze logs** to make sure the correct integrations are installed and all required metrics are collected. +TIP: Follow one of our quickstarts under **Observability** → **Add data** to make sure the correct integrations are installed and all required metrics are collected. 
* You are not using the Elastic Distribution of the OpenTelemetry Collector, which automatically maps data to the Elastic Common Schema (ECS) fields expected by the visualization. + -TIP: Follow our OpenTelemetry quickstart under **Observability** → **Add data** → **Monitor infrastructure** to make sure OpenTelemetry data is correctly mapped to ECS-compliant fields. - -//TODO: Make quickstarts an active link after the docs are merged. +TIP: Follow our OpenTelemetry quickstart under **Observability** → **Add data** to make sure OpenTelemetry data is correctly mapped to ECS-compliant fields. * You have explicitly chosen not to send these metrics. You may choose to limit the metrics sent to Elastic to save on space and improve cluster performance. diff --git a/docs/en/observability/images/EC2-logs.png b/docs/en/observability/images/EC2-logs.png deleted file mode 100644 index 0fc9da79bf..0000000000 Binary files a/docs/en/observability/images/EC2-logs.png and /dev/null differ diff --git a/docs/en/observability/images/agent-tut-azure-activity-log-columns.png b/docs/en/observability/images/agent-tut-azure-activity-log-columns.png deleted file mode 100644 index fc9a01f06f..0000000000 Binary files a/docs/en/observability/images/agent-tut-azure-activity-log-columns.png and /dev/null differ diff --git a/docs/en/observability/images/agent-tut-azure-activity-logs.png b/docs/en/observability/images/agent-tut-azure-activity-logs.png deleted file mode 100644 index bbb911e58e..0000000000 Binary files a/docs/en/observability/images/agent-tut-azure-activity-logs.png and /dev/null differ diff --git a/docs/en/observability/images/kibana-home.png b/docs/en/observability/images/kibana-home.png deleted file mode 100644 index f949b5a293..0000000000 Binary files a/docs/en/observability/images/kibana-home.png and /dev/null differ diff --git a/docs/en/observability/images/monitor-azure-kibana-logs-app.png b/docs/en/observability/images/monitor-azure-kibana-logs-app.png deleted file mode 100644 index 4c2e750404..0000000000 Binary files a/docs/en/observability/images/monitor-azure-kibana-logs-app.png and /dev/null differ diff --git a/docs/en/observability/images/monitor-azure-kibana-observability-page-data.png b/docs/en/observability/images/monitor-azure-kibana-observability-page-data.png deleted file mode 100644 index 7595e4a380..0000000000 Binary files a/docs/en/observability/images/monitor-azure-kibana-observability-page-data.png and /dev/null differ diff --git a/docs/en/observability/images/monitor-azure-native-kibana-logs-app.png b/docs/en/observability/images/monitor-azure-native-kibana-logs-app.png deleted file mode 100644 index d035bd06d6..0000000000 Binary files a/docs/en/observability/images/monitor-azure-native-kibana-logs-app.png and /dev/null differ diff --git a/docs/en/observability/images/monitor-azure-native-kibana-vms-metrics-detail.png b/docs/en/observability/images/monitor-azure-native-kibana-vms-metrics-detail.png deleted file mode 100644 index 79cbf04316..0000000000 Binary files a/docs/en/observability/images/monitor-azure-native-kibana-vms-metrics-detail.png and /dev/null differ diff --git a/docs/en/observability/images/monitor-azure-native-kibana-vms-metrics.png b/docs/en/observability/images/monitor-azure-native-kibana-vms-metrics.png deleted file mode 100644 index 1cdcc604b7..0000000000 Binary files a/docs/en/observability/images/monitor-azure-native-kibana-vms-metrics.png and /dev/null differ diff --git a/docs/en/observability/inspect-log-anomalies.asciidoc 
b/docs/en/observability/inspect-log-anomalies.asciidoc index 985b586086..7dc0400144 100644 --- a/docs/en/observability/inspect-log-anomalies.asciidoc +++ b/docs/en/observability/inspect-log-anomalies.asciidoc @@ -2,7 +2,7 @@ = Inspect log anomalies When the {anomaly-detect} features of {ml} are enabled, you can use the -**Anomalies** page in the {logs-app} to detect and inspect log anomalies and the +**Logs Anomalies** page to detect and inspect log anomalies and the log partitions where the log anomalies occur. This means you can easily see anomalous behavior without significant human intervention -- no more manually sampling log data, calculating rates, and determining if rates are expected. @@ -35,7 +35,7 @@ Create a {ml} job to detect anomalous log entry rates automatically. 1. Select *Anomalies*, and you'll be prompted to create a {ml} job which will carry out the log rate analysis. 2. Choose a time range for the {ml} analysis. -3. Add the indices that contain the logs you want to examine. By default, Machine Learning analyzes messages in all log indices that match the patterns set in the *logs source* advanced setting. Update this setting by going to *Management* → *Advanced Settings* and searching for _logs source_. +3. Add the indices that contain the logs you want to examine. By default, Machine Learning analyzes messages in all log indices that match the patterns set in the *logs source* advanced setting. To open **Advanced settings**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. 4. Click *Create {ml-init} job*. 5. You're now ready to explore your log partitions. diff --git a/docs/en/observability/inventory-threshold-alert.asciidoc b/docs/en/observability/inventory-threshold-alert.asciidoc index 4b9061dc39..7aa673077c 100644 --- a/docs/en/observability/inventory-threshold-alert.asciidoc +++ b/docs/en/observability/inventory-threshold-alert.asciidoc @@ -4,20 +4,16 @@ Inventory threshold ++++ -Based on the resources listed on the *Inventory* page within the {infrastructure-app}, +Based on the resources listed on the *Infrastructure inventory* page within the {infrastructure-app}, you can create a threshold rule to notify you when a metric has reached or exceeded a value for a specific resource or a group of resources within your infrastructure. Additionally, each rule can be defined using multiple conditions that combine metrics and thresholds to create precise notifications and reduce false positives. -. To access this page, go to **{observability}** -> **Infrastructure**. -. On the *Inventory* page or the *Metrics Explorer* page, click **Alerts and rules** -> **Infrastructure**. -. Select *Create inventory rule*. - [TIP] ============================================== -When you select *Create inventory alert*, the parameters you configured on the *Inventory* page will automatically +When you select *Create inventory alert*, the parameters you configured on the *Infrastructure inventory* page will automatically populate the rule. You can use the Inventory first to view which nodes in your infrastructure you'd like to be notified about and then quickly create a rule in just a few clicks. 
============================================== diff --git a/docs/en/observability/logs-add-service-name.asciidoc b/docs/en/observability/logs-add-service-name.asciidoc index 4ffb514eef..0afa367ab2 100644 --- a/docs/en/observability/logs-add-service-name.asciidoc +++ b/docs/en/observability/logs-add-service-name.asciidoc @@ -41,7 +41,8 @@ For more on defining processors, refer to {fleet-guide}/elastic-agent-processor- For logs that with an existing field being used to represent the service name, map that field to the `service.name` field using the {ref}/field-alias.html[alias field type]. Follow these steps to update your mapping: -. From the main {kib} menu, go to **Stack Management** → **Index Management** → **Index Templates**. +. To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Select **Index Templates**. . Search for the index template you want to update. . From the **Actions** menu for that template, select **Edit**. . Go to **Mappings**, and select **Add field**. diff --git a/docs/en/observability/logs-app-fields.asciidoc b/docs/en/observability/logs-app-fields.asciidoc index 9e64f0e78a..f9fa1c8c42 100644 --- a/docs/en/observability/logs-app-fields.asciidoc +++ b/docs/en/observability/logs-app-fields.asciidoc @@ -1,7 +1,7 @@ [[logs-app-fields]] -= {logs-app} fields += Logs Explorer fields -This section lists the required fields the {logs-app} uses to display data. +This section lists the required fields the **Logs Explorer** uses to display data. Please note that some of the fields listed are not {ecs-ref}/ecs-reference.html#_what_is_ecs[ECS fields]. `@timestamp`:: diff --git a/docs/en/observability/logs-ecs-application.asciidoc b/docs/en/observability/logs-ecs-application.asciidoc index bb9bd8ab81..6bb28afb66 100644 --- a/docs/en/observability/logs-ecs-application.asciidoc +++ b/docs/en/observability/logs-ecs-application.asciidoc @@ -152,7 +152,7 @@ Add the custom logs integration to ingest and centrally manage your logs using { To add the custom logs integration to your project: -. From your deployment's home page, click **Add Integrations**. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Type `custom` in the search bar and select **Custom Logs**. . Click **Install {agent}** at the bottom of the page, and follow the instructions for your system to install the {agent}. . After installing the {agent}, click **Save and continue** to configure the integration from the **Add Custom Logs integration** page. diff --git a/docs/en/observability/logs-filter.asciidoc b/docs/en/observability/logs-filter.asciidoc index d38746f1b7..116870ec92 100644 --- a/docs/en/observability/logs-filter.asciidoc +++ b/docs/en/observability/logs-filter.asciidoc @@ -12,7 +12,7 @@ This guide shows you how to: [[logs-filter-and-aggregate-prereq]] == Before you get started -The examples on this page use the following ingest pipeline and index template, which you can set in *Dev Tools*. If you haven't used ingest pipelines and index templates to parse your log data and extract structured fields yet, start with the <> documentation. +The examples on this page use the following ingest pipeline and index template, which you can set in *Developer tools*. If you haven't used ingest pipelines and index templates to parse your log data and extract structured fields yet, start with the <> documentation. 
Set the ingest pipeline with the following command: @@ -62,21 +62,21 @@ PUT _index_template/logs-example-default-template Filter your data using the fields you've extracted so you can focus on log data with specific log levels, timestamp ranges, or host IPs. You can filter your log data in different ways: -- <> – Filter and visualize log data in {kib} using Log Explorer. -- <> – Filter log data from Dev Tools using Query DSL. +- <> – Filter and visualize log data in {kib} using Logs Explorer. +- <> – Filter log data from Developer tools using Query DSL. [discrete] [[logs-filter-logs-explorer]] === Filter logs in Log Explorer -Log Explorer is a {kib} tool that automatically provides views of your log data based on integrations and data streams. You can find Log Explorer in the Observability menu under *Logs*. +Logs Explorer is a {kib} tool that automatically provides views of your log data based on integrations and data streams. To open **Logs Explorer**, find `Logs Explorer` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. From Log Explorer, you can use the {kibana-ref}/kuery-query.html[{kib} Query Language (KQL)] in the search bar to narrow down the log data displayed in Log Explorer. For example, you might want to look into an event that occurred within a specific time range. Add some logs with varying timestamps and log levels to your data stream: -. In {kib}, go to *Management* -> *Dev Tools*. +. To open **Console**, find `Dev Tools` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . In the *Console* tab, run the following command: [source,console] @@ -120,11 +120,11 @@ For more on using Log Explorer, refer to the {kibana-ref}/discover.html[Discover [[logs-filter-qdsl]] === Filter logs with Query DSL -{ref}/query-dsl.html[Query DSL] is a JSON-based language that sends requests and retrieves data from indices and data streams. You can filter your log data using Query DSL from *Developer Tools*. +{ref}/query-dsl.html[Query DSL] is a JSON-based language that sends requests and retrieves data from indices and data streams. You can filter your log data using Query DSL from *Developer tools*. For example, you might want to troubleshoot an issue that happened on a specific date or at a specific time. To do this, use a boolean query with a {ref}/query-dsl-range-query.html[range query] to filter for the specific timestamp range and a {ref}/query-dsl-term-query.html[term query] to filter for `WARN` and `ERROR` log levels. -First, from *Dev Tools*, add some logs with varying timestamps and log levels to your data stream with the following command: +First, from *Developer tools*, add some logs with varying timestamps and log levels to your data stream with the following command: [source,console] ---- @@ -212,7 +212,7 @@ Use aggregation to analyze and summarize your log data to find patterns and gain For example, you might want to understand error distribution by analyzing the count of logs per log level. 
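For orientation, the request below is a minimal sketch of the kind of aggregation this section builds toward. It assumes the `logs-example-default` data stream and the `log.level` keyword field produced by the example ingest pipeline and index template at the top of this page; the aggregation name `logs_by_level` is arbitrary.

[source,console]
----
# Sketch: count documents per log level
GET logs-example-default/_search
{
  "size": 0,
  "aggs": {
    "logs_by_level": {
      "terms": {
        "field": "log.level"
      }
    }
  }
}
----

Setting `size` to `0` keeps the matching documents out of the response, so only the per-level bucket counts are returned.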
-First, from *Dev Tools*, add some logs with varying log levels to your data stream using the following command: +First, from *Developer tools*, add some logs with varying log levels to your data stream using the following command: [source,console] ---- diff --git a/docs/en/observability/logs-index-template.asciidoc b/docs/en/observability/logs-index-template.asciidoc index 02bcabc941..feaf17e2b6 100644 --- a/docs/en/observability/logs-index-template.asciidoc +++ b/docs/en/observability/logs-index-template.asciidoc @@ -5,8 +5,7 @@ Index templates are used to configure the backing indices of data streams as the These index templates are composed of multiple {ref}/indices-component-template.html[component templates]—reusable building blocks that configure index mappings, settings, and aliases. -You can view the default `logs` index template in {kib}. -Navigate to **{stack-manage-app}** → **Index Management** → **Index Templates**, and search for `logs`. +You can view the default `logs` index template in {kib}. To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Select **Index Templates** and search for `logs`. Select the `logs` index templates to view relevant component templates. [discrete] @@ -22,7 +21,8 @@ The default `logs` index template for the `logs-*-*` index pattern is composed o You can use the `logs@custom` component template to customize your {es} indices. The `logs@custom` component template is not installed by default, but you can create a component template named `logs@custom` to override and extend default mappings or settings. To do this: -. Open {kib} and navigate to **{stack-manage-app}** → **Index Management** → **Component Templates**. +. To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Select **Component Templates**. . Click *Create component template*. . Name the component template logs@custom. . Add any custom metadata, index settings, or mappings. @@ -43,7 +43,8 @@ You can update the `default_field` to search in the `message` field instead of If you haven't already created the `logs@custom`component template, create it as outlined in the previous section. Then, follow these steps to update the *Index settings* of the component template: -. Open {kib} and navigate to **{stack-manage-app}** → **Index Management** → **Component Templates**. +. To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Select **Component Templates**. . Search for `logs` and find the `logs@custom` component template. . Open the **Actions** menu and select **Edit**. . Select **Index settings** and add the following code: diff --git a/docs/en/observability/logs-metrics-get-started.asciidoc b/docs/en/observability/logs-metrics-get-started.asciidoc index d5d3cd895a..2c4bfd253f 100644 --- a/docs/en/observability/logs-metrics-get-started.asciidoc +++ b/docs/en/observability/logs-metrics-get-started.asciidoc @@ -38,12 +38,7 @@ integrations for new data sources, security protections, and more. In this step, add the System integration to monitor host logs and metrics. -. Go to the {kib} home page and click **Add integrations**. -+ --- -[role="screenshot"] -image::images/kibana-home.png[{kib} home page] --- +. 
Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . In the query bar, search for **System** and select the integration to see more details about it. @@ -146,7 +141,7 @@ You can hover over any visualization to adjust its settings, or click the Next, add additional integrations to the policy used by your agent. -. In {kib}, go to the **Integrations** page. +. Go back to **Integrations**. . In the query bar, search for the source you want to monitor (for example, nginx) and select the integration to see more details about it. @@ -173,8 +168,9 @@ image::images/kibana-fleet-policies-default-with-nginx.png[{fleet} showing defau + Any {agent}s assigned to this policy will begin collecting data for the newly configured integrations. -. To view the data, go to **Management > {fleet}**, then click the -**Data streams** tab. +. To view the data, find **{fleet}** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + +. Click the **Data streams** tab. . In the **Actions** column, navigate to the dashboards corresponding to the data stream. diff --git a/docs/en/observability/logs-monitor-datasets.asciidoc b/docs/en/observability/logs-monitor-datasets.asciidoc index a3494c68bb..dc61c38533 100644 --- a/docs/en/observability/logs-monitor-datasets.asciidoc +++ b/docs/en/observability/logs-monitor-datasets.asciidoc @@ -6,7 +6,7 @@ beta:[] The **Data Set Quality** page provides an overview of your log, metric, trace, and synthetic data sets. Use this information to get an idea of your overall data set quality and find data sets that contain incorrectly parsed documents. -Access the Data Set Quality page from the main {kib} menu at **Stack Management** → **Data Set Quality**. +To open **Data Set Quality**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. By default, the page only shows log data sets. To see other data set types, select them from the **Type** menu. [role="screenshot"] diff --git a/docs/en/observability/logs-parse.asciidoc b/docs/en/observability/logs-parse.asciidoc index 5d55a9568e..21e35b14c7 100644 --- a/docs/en/observability/logs-parse.asciidoc +++ b/docs/en/observability/logs-parse.asciidoc @@ -25,7 +25,7 @@ Follow the steps below to see how the following unstructured log data is indexed Start by storing the document in the `logs-example-default` data stream: -. In {kib}, go to *Management* -> *Dev Tools*. +. To open **Console**, find `Dev Tools` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . In the *Console* tab, add the example log to {es} using the following command: + [source,console] diff --git a/docs/en/observability/logs-plaintext.asciidoc b/docs/en/observability/logs-plaintext.asciidoc index c97334af88..d5081d5c36 100644 --- a/docs/en/observability/logs-plaintext.asciidoc +++ b/docs/en/observability/logs-plaintext.asciidoc @@ -158,7 +158,7 @@ Follow these steps to ingest and centrally manage your logs using {agent} and {f To add the custom logs integration to your project: -. From your deployment's home page, click **Add Integrations**. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Type `custom` in the search bar and select **Custom Logs**. . Click **Add Custom Logs**. . 
Click **Install {agent}** at the bottom of the page, and follow the instructions for your system to install the {agent}. diff --git a/docs/en/observability/logs-threshold-alert.asciidoc b/docs/en/observability/logs-threshold-alert.asciidoc index 04a987d2ba..1548375c86 100644 --- a/docs/en/observability/logs-threshold-alert.asciidoc +++ b/docs/en/observability/logs-threshold-alert.asciidoc @@ -4,10 +4,6 @@ Log threshold ++++ - -. To access this page, go to **{observability}** -> **Logs**. -. Click **Alerts and rules** -> **Create rule**. - [role="screenshot"] image::images/log-threshold-alert.png[Log threshold alert configuration] diff --git a/docs/en/observability/manage-cases.asciidoc b/docs/en/observability/manage-cases.asciidoc index 67bb8f01cf..6f36899020 100644 --- a/docs/en/observability/manage-cases.asciidoc +++ b/docs/en/observability/manage-cases.asciidoc @@ -9,8 +9,9 @@ To perform these tasks, you must have <> to the Open a new case to keep track of issues and share the details with colleagues. -. Go to *Cases* -> *Create new case*. -. If you defined <>, optionally select one to use its default field values. preview:[] +. Find **Cases** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Click *Create case*. +. If you defined <>, optionally select one to use its default field values. preview:[] . Give the case a name, severity, and description. + TIP: In the `Description` area, you can use @@ -18,7 +19,7 @@ https://www.markdownguide.org/cheat-sheet[Markdown] syntax to create formatted t . Optionally, add a category, assignees, and tags. You can add users only if they meet the necessary <>. -. If you defined <>, they appear in the *Additional fields* section. added:[8.15.0] +. If you defined <>, they appear in the *Additional fields* section. added:[8.15.0] . Under External incident management system, select a <>. If you've previously added one, that connector displays as the default selection. Otherwise, the default setting is `No connector selected`. @@ -77,7 +78,8 @@ To view an image, click its name in the activity or file list. [NOTE] ============================================================================ -Uploaded files are also accessible in *{stack-manage-app} > Files*. +Uploaded files are also accessible on the **Files** page. +To open **Files**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. When you export cases as {kibana-ref}/managing-saved-objects.html[saved objects], the case files are not exported. ============================================================================ diff --git a/docs/en/observability/metrics-app-fields.asciidoc b/docs/en/observability/metrics-app-fields.asciidoc index 2c07767c4d..0512aeec82 100644 --- a/docs/en/observability/metrics-app-fields.asciidoc +++ b/docs/en/observability/metrics-app-fields.asciidoc @@ -260,7 +260,7 @@ ECS field: False [[group-inventory-fields]] == Additional grouping fields -Depending on which entity you select in the *Inventory* view, these additional fields can be mapped to group entities by. +Depending on which entity you select in the *Infrastructure inventory* view, these additional fields can be mapped to group entities by. 
`cloud.availability_zone`:: diff --git a/docs/en/observability/metrics-threshold-alert.asciidoc b/docs/en/observability/metrics-threshold-alert.asciidoc index 626318340c..e69648ebec 100644 --- a/docs/en/observability/metrics-threshold-alert.asciidoc +++ b/docs/en/observability/metrics-threshold-alert.asciidoc @@ -11,14 +11,9 @@ time period. Additionally, each rule can be defined using multiple conditions that combine metrics and thresholds to create precise notifications. -. To access this page, go to **{observability}** -> **Infrastructure**. -. On the **Inventory** page or the **Metrics Explorer** page, click **Alerts and rules** -> **Metrics**. -. Select **Create threshold alert**. - [TIP] ===== -When you select *Create threshold alert*, the rule is automatically populated with the same parameters -you've configured on the *Metrics Explorer* page. If you've chosen a *graph per* value, your rule is +When you create this rule on the **Metrics Explorer** page, the rule is automatically populated with the same parameters as the page. If you've chosen a *graph per* value, your rule is preconfigured to monitor and notify about each individual graph displayed on the page. You can also create a rule based on a single graph. On the **Metrics Explorer** page, diff --git a/docs/en/observability/monitor-infra/analyze-hosts.asciidoc b/docs/en/observability/monitor-infra/analyze-hosts.asciidoc index d8d9a10c0f..cbeb26050e 100644 --- a/docs/en/observability/monitor-infra/analyze-hosts.asciidoc +++ b/docs/en/observability/monitor-infra/analyze-hosts.asciidoc @@ -11,8 +11,7 @@ health and performance metrics to help you quickly: * View historical data to rule out false alerts and identify root causes. * Filter and search the data to focus on the hosts you care about the most. -To access this page from the main {kib} menu, go to -**{observability} -> Infrastructure -> Hosts**. +To open **Hosts**, find **Infrastructure** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. [role="screenshot"] image::images/hosts.png[Screenshot of the Hosts page] diff --git a/docs/en/observability/monitor-infra/aws-metrics.asciidoc b/docs/en/observability/monitor-infra/aws-metrics.asciidoc index a9f52e77a4..fdc5fbd2b2 100644 --- a/docs/en/observability/monitor-infra/aws-metrics.asciidoc +++ b/docs/en/observability/monitor-infra/aws-metrics.asciidoc @@ -12,12 +12,12 @@ Additional AWS charges for GetMetricData API requests are generated using this m [[monitor-ec2-instances]] == Monitor EC2 instances -To help you analyze the EC2 instance metrics listed on the *Inventory* page, you can select +To help you analyze the EC2 instance metrics listed on the *Infrastructure inventory* page, you can select view filters based on the following predefined metrics or you can add <>. -|=== +|=== -| *CPU Usage* | Average of `aws.ec2.cpu.total.pct`. +| *CPU Usage* | Average of `aws.ec2.cpu.total.pct`. | *Inbound Traffic* | Average of `aws.ec2.network.in.bytes_per_sec`. @@ -33,12 +33,12 @@ view filters based on the following predefined metrics or you can add <>. -|=== +|=== -| *Bucket Size* | Average of `aws.s3_daily_storage.bucket.size.bytes`. +| *Bucket Size* | Average of `aws.s3_daily_storage.bucket.size.bytes`. | *Total Requests* | Average of `aws.s3_request.requests.total`. @@ -54,12 +54,12 @@ view filters based on the following predefined metrics or you can add <>. -|=== +|=== -| *Messages Available* | Max of `aws.sqs.messages.visible`. 
+| *Messages Available* | Max of `aws.sqs.messages.visible`. | *Messages Delayed* | Max of `aws.sqs.messages.delayed`. @@ -75,12 +75,12 @@ view filters based on the following predefined metrics or you can add <>. -|=== +|=== -| *CPU Usage* | Average of `aws.rds.cpu.total.pct`. +| *CPU Usage* | Average of `aws.rds.cpu.total.pct`. | *Connections* | Average of `aws.rds.database_connections`. diff --git a/docs/en/observability/monitor-infra/configure-metrics-sources.asciidoc b/docs/en/observability/monitor-infra/configure-metrics-sources.asciidoc index 3e066a5957..4509f7682a 100644 --- a/docs/en/observability/monitor-infra/configure-metrics-sources.asciidoc +++ b/docs/en/observability/monitor-infra/configure-metrics-sources.asciidoc @@ -1,16 +1,17 @@ [[configure-settings]] = Configure settings -To configure settings for the {infrastructure-app} in {kib}, go to -**{observability}** -> **Infrastructure** -> **Inventory** or **Hosts**, and click the **Settings** -link at the top of the page. The following settings are available: +To configure settings for the {infrastructure-app}, +go to any page under **Infrastructure** and click the **Settings** link at the top of the page. + +The following settings are available: |=== | Setting | Description -| *Name* | Name of the source configuration. +| *Name* | Name of the source configuration. -| *Indices* | {ipm-cap} or patterns used to match {es} indices that contain metrics. The default patterns are `metrics-*,metricbeat-*`. +| *Indices* | {ipm-cap} or patterns used to match {es} indices that contain metrics. The default patterns are `metrics-*,metricbeat-*`. | *{ml-cap}* | The minimum severity score required to display anomalies in the {infrastructure-app}. The default is 50. |=== diff --git a/docs/en/observability/monitor-infra/explore-metrics.asciidoc b/docs/en/observability/monitor-infra/explore-metrics.asciidoc index ca4b22ae4b..89c3d2744c 100644 --- a/docs/en/observability/monitor-infra/explore-metrics.asciidoc +++ b/docs/en/observability/monitor-infra/explore-metrics.asciidoc @@ -9,9 +9,7 @@ for one or more resources that you are monitoring. Additionally, for detailed analyses of your metrics, you can annotate and save visualizations for your custom dashboards by using the {kibana-ref}/tsvb.html[Time Series Visual Builder (TSVB)] within {kib}. -To access this page from the main {kib} menu, go to -*{observability} -> Infrastructure*, and then click *Metrics Explorer*. - +To open **Metrics Explorer**, find **Infrastructure** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. [role="screenshot"] image::images/metrics-explorer.png[Metrics Explorer] @@ -34,12 +32,12 @@ The graph displays the average values of the metrics you selected. + 2. In the *graph per* dropdown, add `host.name`. + -There is now an individual graph displaying the average values of the metrics for each host. +There is now an individual graph displaying the average values of the metrics for each host. + [role="screenshot"] image::images/metrics-explorer-filter.png[Metrics Explorer query] + -3. Select *Actions* in the top right-hand corner of one of the graphs and then click *Add filter*. +3. Select *Actions* in the top right-hand corner of one of the graphs and then click *Add filter*. + This graph now displays the metrics only for that host. The filter has added a {kibana-ref}/kuery-query.html[{kib} Query Language] filter for `host.name` in the second row of the Metrics Explorer configuration. 
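This generated filter is an ordinary query expression that you can also edit by hand. With a hypothetical host name, it would look something like:

----
host.name : "demo-host-01"
----

You can widen it with a wildcard, for example `host.name : demo-*`, to keep a small group of related hosts on the same set of graphs.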
@@ -64,8 +62,8 @@ image::images/metrics-time-series.png[Time series chart] The `derivative` aggregation is used to calculate the difference between each bucket. By default, the value of units is automatically set to `1s`, along with the `positive only` aggregation. + -8. To calculate the network traffic for all the interfaces, from the *group by* dropdown, select `Terms` and add the -`system.network.name` field. +8. To calculate the network traffic for all the interfaces, from the *group by* dropdown, select `Terms` and add the +`system.network.name` field. + 9. You will also need to add the *Series Agg* aggregation and the *Sum* function. From the *Aggregation* dropdown, select `Series Agg`, and from the *Function* dropdown, select `Sum`. diff --git a/docs/en/observability/monitor-infra/inspect-metric-anomalies.asciidoc b/docs/en/observability/monitor-infra/inspect-metric-anomalies.asciidoc index dc0d815db6..b6593eca39 100644 --- a/docs/en/observability/monitor-infra/inspect-metric-anomalies.asciidoc +++ b/docs/en/observability/monitor-infra/inspect-metric-anomalies.asciidoc @@ -1,11 +1,11 @@ [[inspect-metric-anomalies]] = Detect metric anomalies -When the {anomaly-detect} features of {ml} are enabled, you can create {ml} jobs -to detect and inspect memory usage and network traffic anomalies for hosts and +When the {anomaly-detect} features of {ml} are enabled, you can create {ml} jobs +to detect and inspect memory usage and network traffic anomalies for hosts and Kubernetes pods. -You can model system memory usage, along with inbound and outbound network +You can model system memory usage, along with inbound and outbound network traffic across hosts or pods. You can detect unusual increases in memory usage and unusually high inbound or outbound traffic across hosts or pods. @@ -13,39 +13,40 @@ and unusually high inbound or outbound traffic across hosts or pods. [[ml-jobs-hosts]] == Enable {ml} jobs for hosts or Kubernetes pods -Create a {ml} job to detect anomalous memory usage and network traffic +Create a {ml} job to detect anomalous memory usage and network traffic automatically. -Once you create {ml} jobs, you can not change the settings. You can +Once you create {ml} jobs, you can not change the settings. You can recreate these jobs later. However, you will remove any previously detected anomalies. // lint ignore anomaly-detection observability -1. Go to *Observability -> Infrastructure -> Inventory* and click the *Anomaly detection* link at the top of the page. -2. You’ll be prompted to create a {ml} job for *Hosts* or +. To open **Infrastructure inventory**, find **Infrastructure** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Click the *Anomaly detection* link at the top of the page. +. You’ll be prompted to create a {ml} job for *Hosts* or *Kubernetes Pods*. Click *Enable*. -3. Choose a start date for the {ml} analysis. +. Choose a start date for the {ml} analysis. + -{ml-cap} jobs analyze the last four weeks of data and continue to run +{ml-cap} jobs analyze the last four weeks of data and continue to run indefinitely. + -4. Select a partition field. +. Select a partition field. + [NOTE] ===== By default, the Kubernetes partition field `kubernetes.namespace` is selected. ===== + -Partitions allow you to create independent models for different groups of data -that share similar behavior. 
For example, you may want to build separate models -for machine type or cloud availability zone so that anomalies are not weighted +Partitions allow you to create independent models for different groups of data +that share similar behavior. For example, you may want to build separate models +for machine type or cloud availability zone so that anomalies are not weighted equally across groups. + -5. By default, {ml} jobs analyze all of your metric data, and the results are listed under +. By default, {ml} jobs analyze all of your metric data, and the results are listed under the *Anomalies* tab. You can filter this list to view only the jobs or metrics that you are interested in. For example, you can filter by job name and node name to view specific {anomaly-detect} jobs for that host. -6. Click *Enable jobs*. -7. You're now ready to explore your metric anomalies. Click *Anomalies*. +. Click *Enable jobs*. +. You're now ready to explore your metric anomalies. Click *Anomalies*. + [role="screenshot"] image::images/metrics-ml-jobs.png[Infrastructure {ml-app} anomalies] @@ -77,9 +78,9 @@ If you want to apply the changes to existing results, clone and rerun the job. [[history-chart]] == History chart -On the *Inventory* page, click *Show history* to view the metric values within -the selected time frame. Detected anomalies with an anomaly score equal to 50, -or higher, are highlighted in red. To examine the detected anomalies, use the +On the *Inventory* page, click *Show history* to view the metric values within +the selected time frame. Detected anomalies with an anomaly score equal to 50, +or higher, are highlighted in red. To examine the detected anomalies, use the {ml-docs}/ml-gs-results.html[Anomaly Explorer]. [role="screenshot"] diff --git a/docs/en/observability/monitor-infra/monitor-infrastructure-and-hosts.asciidoc b/docs/en/observability/monitor-infra/monitor-infrastructure-and-hosts.asciidoc index 2c2bb0cf10..b7ba9a48b7 100644 --- a/docs/en/observability/monitor-infra/monitor-infrastructure-and-hosts.asciidoc +++ b/docs/en/observability/monitor-infra/monitor-infrastructure-and-hosts.asciidoc @@ -1,7 +1,7 @@ [[monitor-infrastructure-and-hosts]] = Analyze infrastructure and host metrics -The {infrastructure-app} in {kib} enables you to visualize infrastructure +In the {infrastructure-app}, visualize infrastructure metrics to help diagnose problematic spikes, identify high resource utilization, automatically discover and track pods, and unify your metrics with logs and APM data in {es}. @@ -10,9 +10,10 @@ Using {agent} integrations, you can ingest and analyze metrics from servers, Docker containers, Kubernetes orchestrations, explore and analyze application telemetry, and more. -To access the {infrastructure-app} from the main {kib} menu, go to -**Observability -> Infrastructure**. The {infrastructure-app} provides a few -different views of your data. +To access the {infrastructure-app}, +find **Infrastructure** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + +The {infrastructure-app} provides a few different views of your data. 
[cols="1,1"] |=== diff --git a/docs/en/observability/monitor-infra/view-infrastructure-metrics.asciidoc b/docs/en/observability/monitor-infra/view-infrastructure-metrics.asciidoc index 52a99c0aad..1726c32d89 100644 --- a/docs/en/observability/monitor-infra/view-infrastructure-metrics.asciidoc +++ b/docs/en/observability/monitor-infra/view-infrastructure-metrics.asciidoc @@ -6,8 +6,7 @@ the resources you are monitoring. All monitored resources emitting a core set of infrastructure metrics are displayed to give you a quick view of the overall health of your infrastructure. -To access this page from the main {kib} menu, go to -*Infrastructure -> Infrastructure inventory*. +To open **Infrastructure inventory**, find **Infrastructure** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. [role="screenshot"] image::images/metrics-app.png[Infrastructure inventory] @@ -81,12 +80,12 @@ page. [[analyze-containers-inventory]] == View container metrics -When you select **Docker containers**, the *Inventory* page displays a waffle map that shows the containers you +When you select **Docker containers**, the *Infrastructure inventory* page displays a waffle map that shows the containers you are monitoring and the current CPU usage for each container. Alternatively, you can click the *Table view* icon image:images/table-view-icon.png[] to switch to a table view. -Without leaving the *Inventory* page, you can view enhanced metrics relating to each container +Without leaving the *Infrastructure inventory* page, you can view enhanced metrics relating to each container running in your infrastructure. **** diff --git a/docs/en/observability/monitor-k8s/monitor-k8s-add-integration.asciidoc b/docs/en/observability/monitor-k8s/monitor-k8s-add-integration.asciidoc index 68e6bd2e0a..cdc096bff9 100644 --- a/docs/en/observability/monitor-k8s/monitor-k8s-add-integration.asciidoc +++ b/docs/en/observability/monitor-k8s/monitor-k8s-add-integration.asciidoc @@ -9,9 +9,8 @@ To start collecting logs and metrics from your Kubernetes clusters, first add th === Step 1: Add the Kubernetes integration to your deployment Follow these steps to add the Kubernetes integration to your policy: -//add screenshots if necessary -. Select *Add integrations* from your deployment's homepage. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Enter "Kubernetes" in the search bar, and select the *Kubernetes* integration. . Click *Add Kubernetes* at the top of the Kubernetes integration page. . Click *Add integration only (skip agent installation)* at the bottom of the Add integration page. diff --git a/docs/en/observability/monitor-k8s/monitor-k8s-application-performance.asciidoc b/docs/en/observability/monitor-k8s/monitor-k8s-application-performance.asciidoc index c24c44467c..ec3b0558bd 100644 --- a/docs/en/observability/monitor-k8s/monitor-k8s-application-performance.asciidoc +++ b/docs/en/observability/monitor-k8s/monitor-k8s-application-performance.asciidoc @@ -43,7 +43,7 @@ like when directing traces from multiple applications to separate {es} clusters. A {observability-guide}/apm-secret-token.html[secret token] is used to secure communication between APM agents and APM Server. To create or update your secret token in {kib}: -. Open Kibana and navigate to Fleet. +. Find **Fleet** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . 
Under the *Agent policies* tab, select the policy you would like to configure. . Find the Elastic APM integration and select *Actions → Edit integration*. . Navigate to *Agent authorization → Secret token* and set the value of your token. @@ -208,11 +208,12 @@ kubectl apply -f demo.yml ---- [discrete] -=== View your traces in {kib} +=== View your application's traces in {kib} -To view your application's trace data, open {kib} and go to *{observability} → Service inventory*. +Application trace data is available in the **Service Inventory**. +To open **Service Inventory**, find **Applications** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. -The Applications UI allows you to monitor your software services and applications in real-time: +The **Applications** app allows you to monitor your software services and applications in real-time: visualize detailed performance information on your services, identify and analyze errors, and monitor host-level and agent-specific metrics like JVM and Go runtime metrics. diff --git a/docs/en/observability/monitor-k8s/monitor-k8s-explore-logs-and-metrics.asciidoc b/docs/en/observability/monitor-k8s/monitor-k8s-explore-logs-and-metrics.asciidoc index b753e17cb0..f694320028 100644 --- a/docs/en/observability/monitor-k8s/monitor-k8s-explore-logs-and-metrics.asciidoc +++ b/docs/en/observability/monitor-k8s/monitor-k8s-explore-logs-and-metrics.asciidoc @@ -12,8 +12,8 @@ Refer to the following sections for more on viewing your data. [[monitor-k8s-explore-metrics]] === View performance and health metrics -To view the performance and health metrics collected by {agent}, open -{kib} and go to **{observability} → Infrastructure**. +To view the performance and health metrics collected by {agent}, +find **Infrastructure** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. On the **Inventory** page, you can switch between different views to see an overview of the containers and pods running on Kubernetes: @@ -33,9 +33,10 @@ For more on using the **Metrics Explorer** page, refer to <>. [discrete] [[monitor-k8s-explore-logs]] -=== View logs +=== View Kubernetes logs + +Find `Logs Explorer` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. -To view Kubernetes logs, open {kib} and go to *{observability} → Logs Explorer*. With **Logs Explorer**, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. [role="screenshot"] diff --git a/docs/en/observability/monitor-logs.asciidoc b/docs/en/observability/monitor-logs.asciidoc index 3d95d8a953..b92b4b2ae4 100644 --- a/docs/en/observability/monitor-logs.asciidoc +++ b/docs/en/observability/monitor-logs.asciidoc @@ -1,13 +1,13 @@ [[monitor-logs]] = Explore logs -The {logs-app} in {kib} enables you to search, filter, and tail all your logs +Logs Explorer in {kib} enables you to search, filter, and tail all your logs ingested into {es}. Instead of having to log into different servers, change -directories, and tail individual files, all your logs are available in the {logs-app}. +directories, and tail individual files, all your logs are available in Logs Explorer. Logs Explorer allows you to quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. Refer to the <> documentation for more on using Logs Explorer. 
-The {logs-app} also provides {ml} to detect specific <> automatically and <> to quickly identify patterns in your log events. +Logs Explorer also provides {ml} to detect specific <> automatically and <> to quickly identify patterns in your log events. -To view the {logs-app}, go to *{observability} > Logs*. \ No newline at end of file +To view Logs Explorer, find `Logs Explorer` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. \ No newline at end of file diff --git a/docs/en/observability/monitor-nginx-ml.asciidoc b/docs/en/observability/monitor-nginx-ml.asciidoc index f1d71c755e..ec0d8237df 100644 --- a/docs/en/observability/monitor-nginx-ml.asciidoc +++ b/docs/en/observability/monitor-nginx-ml.asciidoc @@ -40,7 +40,7 @@ Refer to {ml-docs}/setup.html[Set up ML features]. Add the nginx ML jobs from the nginx integration to start using anomaly detection: -. From the main {kib} menu, go to *Machine Learning*. Under *Anomaly Detection*, select *Jobs*. +. To open **Jobs**, find **Machine Learning** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Select *Create job*. . In the search bar, enter *nginx* and select *Nginx access logs [Logs Nginx]*. . Under *Use preconfigured jobs*, select the *Nginx access logs* card. diff --git a/docs/en/observability/monitor-nginx.asciidoc b/docs/en/observability/monitor-nginx.asciidoc index d6dba717b6..c538df12c4 100644 --- a/docs/en/observability/monitor-nginx.asciidoc +++ b/docs/en/observability/monitor-nginx.asciidoc @@ -54,7 +54,7 @@ Before you can monitor nginx, you need the following: Follow these steps to add the nginx integration to your deployment: -. Select *Add integrations* from your deployment's homepage. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Enter "nginx" in the search bar, and select the *Nginx* integration. . Select *Add Nginx* at the top of the integration page. . Select *Add integration only (skip agent installation)* at the bottom of the page. @@ -124,7 +124,7 @@ Follow the instructions from the *Add agent* screen to install the {agent} on yo Before installing and running the standalone {agent}, you need to create an API key. To create an {ecloud} API key: -. From the {kib} menu, go to *Stack Management* → *API keys*. +. To open **API keys**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Select *Create API key*. . Give the key a name. For example, `nginx API key`. . Leave the other default options and select *Create API key*. @@ -184,12 +184,13 @@ Refer to the following sections for more information on viewing your data: [discrete] [[monitor-nginx-explore-metrics]] -=== View metrics +=== View metrics in {kib} The nginx integration has a built-in dashboard that shows the full picture of your nginx metrics in one place. To open the nginx dashboard: -. Open the {kib} menu and go to *Management* → *Integrations* → *Installed integrations*. +. Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Select *Installed integrations*. . Select the *Nginx* card and open the *Assets* tab. . Select either the `[Metrics Nginx] Overview` dashboard. @@ -209,7 +210,8 @@ After your nginx logs are ingested, view and explore your logs using <>.
Within the Synthetics UI, create a **Monitor Status** rule to receive notifications based on errors and outages. -. To access this page, go to **{observability}** → **Synthetics**. -. At the top of the page, click **Alerts and rules** → **Create rule**. -. Select **Monitor status rule**. - [discrete] [[synthetic-monitor-filters]] === Filters diff --git a/docs/en/observability/observability-ai-assistant.asciidoc b/docs/en/observability/observability-ai-assistant.asciidoc index 8e17a7e920..ef530df58f 100644 --- a/docs/en/observability/observability-ai-assistant.asciidoc +++ b/docs/en/observability/observability-ai-assistant.asciidoc @@ -86,7 +86,7 @@ To set up the AI Assistant: * https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html[Amazon Bedrock authentication keys and secrets] * https://cloud.google.com/iam/docs/keys-list-get[Google Gemini service account keys] -. From *{stack-manage-app}* -> *{connectors-ui}* in {kib}, create a connector for your AI provider: +. Create a connector for your AI provider. Refer to the connector documentation to learn how: * {kibana-ref}/openai-action-type.html[OpenAI] * {kibana-ref}/bedrock-action-type.html[Amazon Bedrock] * {kibana-ref}/gemini-action-type.html[Google Gemini] @@ -127,9 +127,8 @@ You can also add information to the knowledge base by asking the AI Assistant to To add external data to the knowledge base in {kib}: -. Go to *Stack Management*. -. In the _Kibana_ section, click *AI Assistants*. -. Then select the *Elastic AI Assistant for Observability*. +. To open AI Assistant settings, find `AI Assistants` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Under *Elastic AI Assistant for Observability*, click **Manage settings**. . Switch to the *Knowledge base* tab. . Click the *New entry* button, and choose either: + @@ -156,18 +155,19 @@ Search connectors are only needed when importing external data into the Knowledg {ref}/es-connectors.html[Connectors] allow you to index content from external sources thereby making it available for the AI Assistant. This can greatly improve the relevance of the AI Assistant’s responses. Data can be integrated from sources such as GitHub, Confluence, Google Drive, Jira, AWS S3, Microsoft Teams, Slack, and more. -These connectors are managed under *Search* -> *Content* -> *Connectors* in {kib}, they are outside of the {observability} Solution, and they require an {enterprise-search-ref}/server.html[Enterprise Search] server connected to the Elastic Stack. +These connectors are managed under the Search Solution in {kib}, and they require an {enterprise-search-ref}/server.html[Enterprise Search] server connected to the Elastic Stack. By default, the AI Assistant queries all search connector indices. To override this behavior and customize which indices are queried, adjust the *Search connector index pattern* setting on the <> page. This allows precise control over which data sources are included in AI Assistant knowledge base. To create a connector and make its content available to the AI Assistant knowledge base, follow these steps: -. In {kib} UI, go to *Search* -> *Content* -> *Connectors* and follow the instructions to create a new connector. +. To open **Connectors**, find `Content / Connectors` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. 
+ [NOTE] ==== -If your {kib} Space doesn't include the `Search` solution you will have to create the connector from a different space or change your space *Solution view* setting to `Classic`. +If your {kib} Space doesn't include the Search solution you will have to create the connector from a different space or change your space *Solution view* setting to `Classic`. ==== +. Follow the instructions to create a new connector. + For example, if you create a {ref}/es-connectors-github.html[GitHub native connector] you have to set a `name`, attach it to a new or existing `index`, add your `personal access token` and include the `list of repositories` to synchronize. + @@ -331,10 +331,10 @@ To learn more about alerting, actions, and connectors, refer to < [[obs-ai-settings]] == AI Assistant Settings -You can access the AI Assistant Settings page: +To access the AI Assistant Settings page, you can: -* From *{stack-manage-app}* -> *Kibana* -> *AI Assistants* -> *Elastic AI Assistant for Observability*. -* From the *More actions* menu inside the AI Assistant window. +* Find `AI Assistants` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +* Use the *More actions* menu inside the AI Assistant window. The AI Assistant Settings page contains the following tabs: diff --git a/docs/en/observability/profiling-index-lifecycle-management.asciidoc b/docs/en/observability/profiling-index-lifecycle-management.asciidoc index e7622a1a49..5592ae4f63 100644 --- a/docs/en/observability/profiling-index-lifecycle-management.asciidoc +++ b/docs/en/observability/profiling-index-lifecycle-management.asciidoc @@ -51,10 +51,10 @@ Complete the following steps to configure a custom index lifecycle policy. [[profiling-ilm-custom-policy-create-policy]] === Step 1: Create an index lifecycle policy -. Navigate to **{stack-manage-app}** → **Index Lifecycle Policies**. +. To open **Index Lifecycle Policies**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Click **Create policy**. -. Name your new policy, for example `custom-profiling-policy`. +. Name your new policy, for example `custom-profiling-policy`. . Customize the policy to your liking. . Click **Save policy**. @@ -84,8 +84,7 @@ NOTE: To apply a custom {ilm-init} policy, you must name the component template } ---- . Continue to the **Review** step, and select the *Request* tab. Your request should look similar to the following image. - - ++ If it does, click **Create component template**. + [role="screenshot"] @@ -95,8 +94,12 @@ image::images/profiling-create-component-template.png[Create component template] [[profiling-ilm-custom-policy-rollover]] === Step 3: Rollover indices -To confirm that Universal Profiling is now using the new index template and {ilm-init} policy, navigate to **{dev-tools-app}** and run the following: +Confirm that Universal Profiling is now using the new index template and {ilm-init} policy: +. Open **Console** by finding `Dev Tools` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + +. 
Run the following: ++ [source,bash] ---- GET _ilm/policy/custom-profiling-policy <1> diff --git a/docs/en/observability/profiling-upgrade.asciidoc b/docs/en/observability/profiling-upgrade.asciidoc index 42db05b06e..597aebb13c 100644 --- a/docs/en/observability/profiling-upgrade.asciidoc +++ b/docs/en/observability/profiling-upgrade.asciidoc @@ -62,7 +62,8 @@ NOTE: When stopping incoming requests, Universal Profiling Agent replicas back o You can delete existing profiling data in Kibana: -. If you're upgrading from 8.9.0 or later, go to *Dev Tools* from the navigation menu, and execute the following snippet. If you're upgrading from an earlier version, skip this step. +. If you're upgrading from 8.9.0 or later, go to **Console** and execute the following snippet. (To open **Console**, find `Dev Tools` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].) +If you're upgrading from an earlier version, skip this step. + [source,console] ---- @@ -73,7 +74,8 @@ PUT /_cluster/settings } } ---- -. From the navigation menu, go to *Stack Management → Index Management*. +. Open **Index Management** by using the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + . Make sure you're in the *Data Streams* tab, and search for `profiling-` in the search bar. . Select all resulting data streams, and click the *Delete data streams* button. . Switch to the *Indices* tab, enable *Include hidden indices*, and search for `profiling-` in the search bar. diff --git a/docs/en/observability/quickstarts/monitor-hosts-with-elastic-agent.asciidoc b/docs/en/observability/quickstarts/monitor-hosts-with-elastic-agent.asciidoc index da71374c86..b93990d16f 100644 --- a/docs/en/observability/quickstarts/monitor-hosts-with-elastic-agent.asciidoc +++ b/docs/en/observability/quickstarts/monitor-hosts-with-elastic-agent.asciidoc @@ -39,7 +39,7 @@ The script also generates an {agent} configuration file that you can use with yo [discrete] == Collect your data -. In {kib}, go to **Observability** and click **Add Data**. +. Go to the **Observability** UI and click **Add Data**. . Select **Collect and analyze logs**, and then select **Auto-detect logs and metrics**. . Copy the command that's shown. For example: + diff --git a/docs/en/observability/quickstarts/monitor-k8s-logs-metrics.asciidoc b/docs/en/observability/quickstarts/monitor-k8s-logs-metrics.asciidoc index b249c25ad2..340a909b6c 100644 --- a/docs/en/observability/quickstarts/monitor-k8s-logs-metrics.asciidoc +++ b/docs/en/observability/quickstarts/monitor-k8s-logs-metrics.asciidoc @@ -28,7 +28,7 @@ The kubectl command installs the standalone Elastic Agent in your Kubernetes clu [discrete] == Collect your data -. In {kib}, go to **Observability** and click **Add Data**. +. Go to the **Observability** UI and click **Add Data**. . Select **Monitor infrastructure**, and then select **Kubernetes**. + diff --git a/docs/en/observability/slo-create.asciidoc b/docs/en/observability/slo-create.asciidoc index e1da49a152..86501d74de 100644 --- a/docs/en/observability/slo-create.asciidoc +++ b/docs/en/observability/slo-create.asciidoc @@ -7,7 +7,7 @@ include::slo-overview.asciidoc[tag=slo-license] -To create an SLO, go to *Observability → SLOs*: +To create an SLO, find **SLOs** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. * If you're creating your first SLO, you'll see an introductory page. Click the *Create SLO* button. 
* If you've created SLOs before, click the *Create new SLO* button in the upper-right corner of the page. diff --git a/docs/en/observability/slo-privileges.asciidoc b/docs/en/observability/slo-privileges.asciidoc index 86b6c6df7b..6cc9949ac6 100644 --- a/docs/en/observability/slo-privileges.asciidoc +++ b/docs/en/observability/slo-privileges.asciidoc @@ -25,9 +25,8 @@ to manually create and assign the mentioned roles. To create a role: -. From the left navigation in {kib}, under *Management* select *Stack Management*. -. Under *Security*, select *Roles*. -. Click *Create role* in the upper-right corner of the page. +. To open **Roles**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. On the **Roles** page, click **Create role**. [discrete] [[slo-all-access]] diff --git a/docs/en/observability/splunk-get-started.asciidoc b/docs/en/observability/splunk-get-started.asciidoc index ebe74742ac..d35bd93955 100644 --- a/docs/en/observability/splunk-get-started.asciidoc +++ b/docs/en/observability/splunk-get-started.asciidoc @@ -34,7 +34,7 @@ include::{observability-docs-root}/docs/en/observability/logs-metrics-get-starte [[splunk-step-one]] == Step 1: Add integration -Go to the {kib} home page and click **Add integrations**. +Find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Search for and add the nginx integration. Refer to <> for detailed steps about adding integrations. diff --git a/docs/en/observability/synthetics-analyze.asciidoc b/docs/en/observability/synthetics-analyze.asciidoc index ab686e1b4a..13fc8e4f9b 100644 --- a/docs/en/observability/synthetics-analyze.asciidoc +++ b/docs/en/observability/synthetics-analyze.asciidoc @@ -15,7 +15,7 @@ availability and allows you to dig into details to diagnose what caused downtime The Synthetics *Overview* tab provides you with a high-level view of all the services you are monitoring to help you quickly diagnose outages and other connectivity issues within your network. -To access this page, go to *{observability}* -> *Synthetics* and make sure you're on the *Overview* tab. +To access this page, find `Synthetics` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field] and make sure you're on the *Overview* tab. This overview includes a snapshot of the current status of all monitors, the number of errors that occurred over the last 6 hours, and the number of alerts over the last 12 hours. @@ -160,7 +160,7 @@ From here, you can either drill down into: * The latest run of the full journey by clicking *image:images/icons/inspect.svg[Inspect icon] View test run* or a past run in the list of *Last 10 test runs*. This will take you to the view described below in <>. -* An individual step in this run by clicking the performance breakdown icon +* An individual step in this run by clicking the performance breakdown icon (image:images/icons/apmTrace.svg[Performance breakdown icon]) next to one of the steps. This will take you to the view described below in <>. @@ -199,7 +199,7 @@ when trying to diagnose the reason it failed. image:images/synthetics-analyze-one-run-compare-steps.png[Step list on a page detailing one run of a browser monitor in the {synthetics-app}] Drill down to see even more details for an individual step by clicking the performance breakdown icon -(image:images/icons/apmTrace.svg[Performance breakdown icon]) next to one of the steps. 
+(image:images/icons/apmTrace.svg[Performance breakdown icon]) next to one of the steps. This will take you to the view described below in <>. [discrete] diff --git a/docs/en/observability/synthetics-configuration.asciidoc b/docs/en/observability/synthetics-configuration.asciidoc index 0f44da73d3..cbdec0c539 100644 --- a/docs/en/observability/synthetics-configuration.asciidoc +++ b/docs/en/observability/synthetics-configuration.asciidoc @@ -239,7 +239,7 @@ Where to deploy the monitor. Monitors can be deployed in multiple locations so t To list available locations you can: + * Run the <>. -* Go to *Synthetics* -> *Management* and click *Create monitor*. +* Find `Synthetics` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field] and click *Create monitor*. Locations will be listed in _Locations_. `privateLocations` (`Array`):: @@ -250,7 +250,7 @@ To list available {private-location}s you can: + * Run the <> with the {kib} URL for the deployment from which to fetch available locations. -* Go to *Synthetics* -> *Management* and click *Create monitor*. +* Find `Synthetics` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field] and click *Create monitor*. {private-location}s will be listed in _Locations_. `throttling` (`boolean` | https://github.com/elastic/synthetics/blob/{synthetics_version}/src/common_types.ts#L194-L198[`ThrottlingOptions`]):: diff --git a/docs/en/observability/synthetics-get-started-project.asciidoc b/docs/en/observability/synthetics-get-started-project.asciidoc index f88f06fa14..254800bc0e 100644 --- a/docs/en/observability/synthetics-get-started-project.asciidoc +++ b/docs/en/observability/synthetics-get-started-project.asciidoc @@ -73,7 +73,7 @@ When complete, set the `SYNTHETICS_API_KEY` environment variable in your termina for authentication with your {stack}: . To generate an API key: -.. Go to **Synthetics** in {kib}. +.. Find `Synthetics` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. .. Click **Settings**. .. Switch to the **Project API Keys** tab. .. Click **Generate Project API key**. diff --git a/docs/en/observability/synthetics-get-started-ui.asciidoc b/docs/en/observability/synthetics-get-started-ui.asciidoc index f9152397c3..c90a719942 100644 --- a/docs/en/observability/synthetics-get-started-ui.asciidoc +++ b/docs/en/observability/synthetics-get-started-ui.asciidoc @@ -49,7 +49,7 @@ For more details, refer to <>. To use the {synthetics-app} to add a lightweight monitor: -. Go to **Synthetics** in {kib}. +. Find `Synthetics` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Click **Create monitor**. . Set the monitor type to *HTTP Ping*, *TCP Ping*, or *ICMP Ping*. . In _Locations_, select one or more locations. diff --git a/docs/en/observability/synthetics-private-location.asciidoc b/docs/en/observability/synthetics-private-location.asciidoc index 5f9b7e80f9..e60e0346c8 100644 --- a/docs/en/observability/synthetics-private-location.asciidoc +++ b/docs/en/observability/synthetics-private-location.asciidoc @@ -144,7 +144,7 @@ Learn how in {fleet-guide}/agent-environment-variables.html[{agent} environment When the {agent} is running you can add a new {private-location} in {kib}: -. Go to **{observability}** -> **Synthetics**. +. Find `Synthetics` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . Click **Settings**. . Click **{private-location}s**. . Click **Add location**. 
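The `SYNTHETICS_API_KEY` steps revised above in `synthetics-get-started-project.asciidoc` stop short of showing the key in use. The following is a minimal sketch of the typical follow-on terminal commands, assuming the target {kib} URL and project id are already defined in `synthetics.config.ts`; the key value is a placeholder.

[source,sh]
----
# Placeholder: paste the Project API key generated in the Synthetics UI
# (Settings > Project API Keys > Generate Project API key).
export SYNTHETICS_API_KEY="<your-project-api-key>"

# Push the project's monitors to the deployment. @elastic/synthetics reads
# SYNTHETICS_API_KEY for authentication; the Kibana URL and project id are
# taken from synthetics.config.ts.
npx @elastic/synthetics push
----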
diff --git a/docs/en/observability/synthetics-reference/lightweight-config/common.asciidoc b/docs/en/observability/synthetics-reference/lightweight-config/common.asciidoc index 7b4c65d76c..323dd5c6ab 100644 --- a/docs/en/observability/synthetics-reference/lightweight-config/common.asciidoc +++ b/docs/en/observability/synthetics-reference/lightweight-config/common.asciidoc @@ -244,7 +244,7 @@ a| Where to deploy the monitor. You can deploy monitors in multiple locations to To list available locations you can: * Run the <>. -* Go to *Synthetics* -> *Management* and click *Create monitor*. Locations will be listed in _Locations_. +* Find `Synthetics` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field] and click *Create monitor*. Locations will be listed in _Locations_. *Examples*: @@ -280,7 +280,7 @@ a| The <> to which the monitors To list available {private-location}s you can: * Run the <> and specify the {kib} URL of the deployment. This will fetch all available private locations associated with the deployment. -* Go to *Synthetics* -> *Management* and click *Create monitor*. {private-location}s will be listed in _Locations_. +* Find `Synthetics` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field] and click *Create monitor*. {private-location}s will be listed in _Locations_. *Examples*: diff --git a/docs/en/observability/tab-widgets/add-apm-integration/content.asciidoc b/docs/en/observability/tab-widgets/add-apm-integration/content.asciidoc index 9bd1f28b54..0dad98668f 100644 --- a/docs/en/observability/tab-widgets/add-apm-integration/content.asciidoc +++ b/docs/en/observability/tab-widgets/add-apm-integration/content.asciidoc @@ -18,7 +18,8 @@ need this in the next step. // end::ess[] // tag::self-managed[] -. In {kib}, select **Add integrations** > **Elastic APM**. +. In {kib}, find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. Select **Elastic APM**. + [role="screenshot"] image::./images/kibana-fleet-integrations-apm.png[{fleet} showing APM integration] diff --git a/docs/en/observability/tail-logs.asciidoc b/docs/en/observability/tail-logs.asciidoc index 45e81b12c7..3ec48286a8 100644 --- a/docs/en/observability/tail-logs.asciidoc +++ b/docs/en/observability/tail-logs.asciidoc @@ -23,8 +23,8 @@ click *Stop streaming* to view historical logs from a specified time range. Because <> is replacing Logs Stream, Logs Stream and the Logs Stream dashboard panel are disabled by default. To activate Logs Stream and the Logs Stream dashboard panel complete the following steps: -. Go to **Management** → **Advanced Settings** -. Enter _Logs Stream_ in the search bar. +. To open **Advanced Settings**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +. In **Advanced Settings**, enter _Logs Stream_ in the search bar. . Turn on **Logs Stream**. After saving your settings, you'll see Logs Stream in the Observability navigation, and the Logs Stream dashboard panel will be available. diff --git a/docs/en/observability/threshold-alert.asciidoc b/docs/en/observability/threshold-alert.asciidoc index 11beeae8b2..71ddc84406 100644 --- a/docs/en/observability/threshold-alert.asciidoc +++ b/docs/en/observability/threshold-alert.asciidoc @@ -6,10 +6,6 @@ Create a custom threshold rule to trigger an alert when an {observability} data type reaches or exceeds a given value. -. 
To access this page, go to **{observability}** -> **Alerts**. -. Click **Manage Rules** -> **Create rule**. -. Under **Select rule type**, select **Custom threshold**. - [role="screenshot"] image::images/custom-threshold-rule.png[Custom threshold alert configuration,75%] diff --git a/docs/en/observability/triage-slo-burn-rate-breaches.asciidoc b/docs/en/observability/triage-slo-burn-rate-breaches.asciidoc index 2fd77eb6e8..8216630b19 100644 --- a/docs/en/observability/triage-slo-burn-rate-breaches.asciidoc +++ b/docs/en/observability/triage-slo-burn-rate-breaches.asciidoc @@ -9,7 +9,7 @@ When this happens, you are at risk of exhausting your error budget and violating To triage issues quickly, go to the alert details page: -. Go to **{observability}** → **Alerts** (or open the SLO and click **Alerts**). +. Open the SLO and click **Alerts**. . From the Alerts table, click the image:images/icons/boxesHorizontal.svg[More actions] icon next to the alert and select **View alert details**. The alert details page shows information about the alert, including when the alert was triggered, diff --git a/docs/en/observability/triage-threshold-breaches.asciidoc b/docs/en/observability/triage-threshold-breaches.asciidoc index 83a56c6dc9..43cd2f2ea9 100644 --- a/docs/en/observability/triage-threshold-breaches.asciidoc +++ b/docs/en/observability/triage-threshold-breaches.asciidoc @@ -9,7 +9,7 @@ For example, you might have a custom threshold rule that triggers an alert when To triage issues quickly, go to the alert details page: -. Go to **{observability}** → **Alerts**. +. Find **Alerts** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . From the Alerts table, click the image:images/icons/boxesHorizontal.svg[More actions] icon next to the alert and select **View alert details**. The alert details page shows information about the alert, including when the alert was triggered, diff --git a/docs/en/observability/universal-profiling.asciidoc b/docs/en/observability/universal-profiling.asciidoc index e8d47d6330..450a3daa73 100644 --- a/docs/en/observability/universal-profiling.asciidoc +++ b/docs/en/observability/universal-profiling.asciidoc @@ -14,7 +14,9 @@ On this page, you'll find information on: [[profiling-inspecting-data-in-kibana]] == Inspecting data in {kib} -You can find Universal Profiling in the *Observability* navigation menu. Clicking *Stacktraces* under *Universal Profiling* opens the <>. +To open **Universal Profiling**, find **Infrastructure** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. + +Under **Universal Profiling**, click **Stacktraces** to open the <>. NOTE: Universal Profiling currently only supports CPU profiling through stack sampling. diff --git a/docs/en/observability/uptime-duration-anomaly-alert.asciidoc b/docs/en/observability/uptime-duration-anomaly-alert.asciidoc index 3ce1ea8cae..0c47032b1f 100644 --- a/docs/en/observability/uptime-duration-anomaly-alert.asciidoc +++ b/docs/en/observability/uptime-duration-anomaly-alert.asciidoc @@ -9,10 +9,6 @@ based on the response durations for all of the geographic locations of each moni monitor runs for an unusual amount of time, at a particular time, an anomaly is recorded and highlighted on the <> chart. -// lint ignore anomaly-detection -. To access this page, go to **{observability}** -> **Uptime**. -. On the *Monitors* page, select on a monitor, and then click **Enable anomaly detection**. 
- [discrete] [[duration-alert-conditions]] == Conditions diff --git a/docs/en/observability/uptime-tls-alert.asciidoc b/docs/en/observability/uptime-tls-alert.asciidoc index 94a97e7967..6f6cba9364 100644 --- a/docs/en/observability/uptime-tls-alert.asciidoc +++ b/docs/en/observability/uptime-tls-alert.asciidoc @@ -8,10 +8,6 @@ Within the {uptime-app}, you can create a rule that notifies you when one or more of your monitors has a TLS certificate expiring within a specified threshold, or when it exceeds an age limit. -. To access this page, go to **{observability}** -> **Uptime**. -. At the top of the page, click **Alerts and rules** -> **Create rule**. -. Select **TLS rule**. - [discrete] [[tls-alert-conditions]] == Conditions diff --git a/docs/en/shared/install-configure-filebeat.asciidoc b/docs/en/shared/install-configure-filebeat.asciidoc index 124f9e7380..107b29703f 100644 --- a/docs/en/shared/install-configure-filebeat.asciidoc +++ b/docs/en/shared/install-configure-filebeat.asciidoc @@ -41,7 +41,7 @@ echo -n "" | ./filebeat keystore add CLOUD_ID --stdin . To store logs in {es} with minimal permissions, create an API key to send data from {filebeat} to {ess}. Log into {kib} (you can do so from the Cloud -Console without typing in any permissions) and select *Management* -> *{dev-tools-app}*. +Console without typing in any permissions) and find `Dev Tools` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. Send the following request: + [source,console] diff --git a/docs/en/shared/install-configure-metricbeat.asciidoc b/docs/en/shared/install-configure-metricbeat.asciidoc index 439399f8ef..e58088fbe5 100644 --- a/docs/en/shared/install-configure-metricbeat.asciidoc +++ b/docs/en/shared/install-configure-metricbeat.asciidoc @@ -41,8 +41,8 @@ echo -n "" | ./metricbeat keystore add CLOUD_ID --stdi . To store metrics in {es} with minimal permissions, create an API key to send data from {metricbeat} to {ess}. Log into {kib} (you can do so from the Cloud -Console without typing in any permissions) and select *Management* -> *{dev-tools-app}*. -Send the following request: +Console without typing in any permissions) and find `Dev Tools` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. +From the **Console**, send the following request: + [source,console] ---- diff --git a/docs/en/shared/integrations-quick-guide.asciidoc b/docs/en/shared/integrations-quick-guide.asciidoc index cc1a3211f3..029975d915 100644 --- a/docs/en/shared/integrations-quick-guide.asciidoc +++ b/docs/en/shared/integrations-quick-guide.asciidoc @@ -6,7 +6,7 @@ ==== **** -. Go to the Observability UI and click **Add integrations**. +. In the Observability UI, find **Integrations** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. . In the query bar, search for and select the **{integration-name}** integration. . Read the overview to make sure you understand integration requirements and @@ -22,7 +22,7 @@ configure all required settings. . Choose where to add the integration policy. * If {agent} is not already deployed locally or on an EC2 instance, click **New hosts** and enter a name for the new agent policy. -* Otherwise, click **Existing hosts** and select an existing agent policy. +* Otherwise, click **Existing hosts** and select an existing agent policy. . Click **Save and continue**. This step takes a minute or two to complete. 
When it's done, you'll have an agent policy that contains an integration policy for the configuration you just specified. If an {agent} is already assigned to