Merge pull request #7002 from jddocs/rc-v1.332.0

[Release Candidate] v1.332.0

jddocs authored Jun 13, 2024
2 parents c94cd94 + 694d000 commit e41cb82
Showing 9 changed files with 101 additions and 24 deletions.
1 change: 1 addition & 0 deletions ci/vale/dictionary.txt
@@ -2537,6 +2537,7 @@ tokenized
tokenizing
tokyo
tokyo2
tolerations
tomacat
tomcat6
toolchain
@@ -1,37 +1,35 @@
---
slug: observability-with-datastream-and-trafficpeak
title: "Large Data Observability With DataStream and Hydrolix TrafficPeak"
description: "This guide reviews Akamai's managed observability solution, Hydrolix TrafficPeak, including product features, how TrafficPeak overcomes observability challenges, and a proven implementation architecture."
title: "Large Data Observability With DataStream and TrafficPeak"
description: "This guide reviews Akamai's managed observability solution, TrafficPeak, including product features, how TrafficPeak overcomes observability challenges, and a proven implementation architecture."
authors: ["John Dutton"]
contributors: ["John Dutton"]
published: 2024-06-11
keywords: ['observability','datastream','trafficpeak','hydrolix','logging','data logging','visualization']
keywords: ['observability','datastream','trafficpeak','logging','data logging','visualization']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
- '[Official Hydrolix Documentation](https://docs.hydrolix.io/docs/welcome)'
- '[Official TrafficPeak Site](https://hydrolix.io/partner-program/trafficpeak/)'
- '[Akamai Solution Brief: Media TrafficPeak Observability Platform](https://www.akamai.com/resources/solution-brief/trafficpeak-observability-platform)'
- '[Akamai TechDocs: Stream logs to TrafficPeak](https://techdocs.akamai.com/datastream2/docs/stream-logs-trafficpeak)'
---

## Overview

Observability workflows are critical to gaining meaningful insight into your application’s health, customer traffic, and overall performance. However, achieving true observability comes with challenges, including large volumes of traffic data, data retention, time to implementation, and the cost of each.

Hydrolix TrafficPeak is a ready-to-use, quickly deployable observability solution built for Akamai Cloud. TrafficPeak works with DataStream to ingest, index, compress, store, and search high-volume, real-time log data at up to 75% less cost than other observability platforms. TrafficPeak customers are provided with a Grafana login and customized dashboards where they can visualize, search, and set up alerting for their data.
TrafficPeak is a ready-to-use, quickly deployable observability solution built for Akamai Cloud. TrafficPeak works with DataStream to ingest, index, compress, store, and search high-volume, real-time log data at up to 75% less cost than other observability platforms. TrafficPeak customers are provided with a Grafana login and customized dashboards where they can visualize, search, and set up alerting for their data.

This guide looks at the TrafficPeak observability solution and reviews a tested, proven observability architecture built for a high-traffic delivery platform. This solution combines Akamai’s edge-based DataStream log streaming service, SIEM integration, and TrafficPeak built on Linode cloud infrastructure to support large-scale traffic, logging, and data retention.

## TrafficPeak On Akamai Cloud

### What Is TrafficPeak?

TrafficPeak is a fully managed observability solution that works with DataStream log streaming and Akamai Cloud Computing. TrafficPeak is managed and hosted by Hydrolix, and uses Linode Compute Instances alongside Linode Object Storage for data processing and storage. With TrafficPeak, customers are provided with access to a Grafana interface with preconfigured, customizable dashboards for data visualization and monitoring.
TrafficPeak is a fully managed observability solution that works with DataStream log streaming and Akamai Cloud Computing. TrafficPeak is managed and hosted by Akamai, and uses Linode Compute Instances alongside Linode Object Storage for data processing and storage. With TrafficPeak, customers are provided with access to a Grafana interface with preconfigured, customizable dashboards for data visualization and monitoring.

### Who Is TrafficPeak For?

TrafficPeak is for Akamai customers that need an all-in-one, cost-effective, turnkey observability solution for large, petabyte-scale volumes of data.

For more detailed information on features and pricing, see: [Observability on Akamai cloud computing: TrafficPeak](https://hydrolix.io/partner-program/trafficpeak/)

## Overcoming Challenges

### Cost Reduction & Visibility
@@ -54,7 +52,7 @@ With TrafficPeak, logs are sent directly from Akamai edge to Linode Compute usin

*Achieve observability for complex types of data with visual monitoring and data reporting.*

Complex data (for example, media delivery and gaming data) can have additional challenges: extreme sensitivity to latency, high data volumes, audience insights, application-specific data types, data compliance and security, and more. TrafficPeak’s visual monitoring and data reporting allows you to track audience size, unique viewership, SIEM data, and other audience-specific data. TrafficPeak is also monitored by Hydrolix (so you don’t need to worry about scaling tasks), includes configurable alerting, and supports CMCD when using Akamai’s [Adaptive Media Delivery](https://www.akamai.com/products/adaptive-media-delivery).
Complex data (for example, media delivery and gaming data) can have additional challenges: extreme sensitivity to latency, high data volumes, audience insights, application-specific data types, data compliance and security, and more. TrafficPeak’s visual monitoring and data reporting allows you to track audience size, unique viewership, SIEM data, and other audience-specific data. TrafficPeak is also monitored by Akamai (so you don’t need to worry about scaling tasks), includes configurable alerting, and supports CMCD when using Akamai’s [Adaptive Media Delivery](https://www.akamai.com/products/adaptive-media-delivery).

### Implementation

@@ -76,18 +74,16 @@ Below is a high-level diagram and walkthrough of a DataStream and TrafficPeak ar

![DataStream With TrafficPeak Diagram](DataStream-With-TrafficPeak-Diagram.png)

For an in-depth overview of Hydrolix’s architecture, see: [The Hydrolix Data Platform](https://docs.hydrolix.io/docs/platform-overview)

### Systems and Components

- **DataStream:** Akamai’s edge-native log streaming service.

- **Hydrolix TrafficPeak:** Akamai’s managed observability solution that runs on Akamai Cloud Computing platform. Comprised of Compute Instances, Object Storage, and a Grafana dashboard.
- **TrafficPeak:** Akamai’s managed observability solution that runs on the Akamai Cloud Computing platform. It consists of Compute Instances, Object Storage, and a Grafana dashboard.

- **Edge Server:** The edge infrastructure that receives, processes, and serves client requests. In this workflow, edge server activity is logged and sent to TrafficPeak for observability purposes.

- **Data Analysis:** Grafana dashboard, a web-based analytics and visualization platform preconfigured for monitoring log activity processed by TrafficPeak. Configured and made accessible to TrafficPeak customers.

- **VMs:** Compute Instances used to run TrafficPeak’s log ingest and processing software. Managed by Hydrolix.
- **VMs:** Compute Instances used to run TrafficPeak’s log ingest and processing software. Managed by Akamai.

- **Object Storage:** S3 compatible object storage used to store log data from TrafficPeak. Managed by Hydrolix.
- **Object Storage:** S3-compatible object storage used to store log data from TrafficPeak. Managed by Akamai.
@@ -9,7 +9,6 @@ keywords: ['docker remove image', 'docker remove container', 'docker remove volu
tags: ['docker', 'container']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
- '[DigitalOcean: How To Remove Docker Images, Containers, and Volumes](https://www.digitalocean.com/community/tutorials/how-to-remove-docker-images-containers-and-volumes)'
- '[freeCodeCamp: How to Remove Images and Containers in Docker](https://www.freecodecamp.org/news/how-to-remove-images-in-docker/)'
- '[Linuxize: How To Remove Docker Containers, Images, Volumes, and Networks](https://linuxize.com/post/how-to-remove-docker-images-containers-volumes-and-networks/)'
- '[Docker Docs: docker image](https://docs.docker.com/engine/reference/commandline/image/)'
@@ -8,7 +8,6 @@ published: 2022-06-21
keywords: ['database sharding','database sharding vs partitioning','what is database sharding']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
- '[DigitalOcean: Understanding Database Sharding](https://www.digitalocean.com/community/tutorials/understanding-database-sharding)'
- '[MongoDB: Database Sharding — Concepts and Examples](https://www.mongodb.com/features/database-sharding-explained)'
- '[Educative: What Is Database Sharding?](https://www.educative.io/edpresso/what-is-database-sharding)'
- '[GeeksforGeeks: Database Sharding – System Design Interview Concept](https://www.geeksforgeeks.org/database-sharding-a-system-design-concept/)'
@@ -1,14 +1,13 @@
---
slug: how-to-use-filter-method-javascript
title: "How to Use the filter() Method for Arrays in JavaScript"
description: "Want to know what JavaScript’s filter() array method is and how to use it? This guide gives you everything you need to understand what filter() does and how to apply it in your JavaScript development."
title: "How to Use the filter Method for Arrays in JavaScript"
description: "Want to know what JavaScript’s filter array method is and how to use it? This guide gives you everything you need to understand what filter does and how to apply it in your JavaScript development."
authors: ["Nathaniel Stickman"]
contributors: ["Nathaniel Stickman"]
published: 2022-03-13
keywords: ['javascript filter array', 'javascript filter function', 'javascript filter method']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
- '[DigitalOcean: How To Use the filter() Array Method in JavaScript](https://www.digitalocean.com/community/tutorials/js-filter-array-method)'
- '[MDN Web Docs: Array.prototype.filter()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter)'
---

@@ -9,7 +9,6 @@ keywords: ['javascript fetch', 'javascript fetch api', 'javascript fetch example
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
- '[MDN Web Docs: Using the Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch)'
- '[DigitalOcean: How To Use the JavaScript Fetch API to Get Data](https://www.digitalocean.com/community/tutorials/how-to-use-the-javascript-fetch-api-to-get-data)'
---

The JavaScript Fetch API gives you a convenient and native way to make requests and handle responses for HTTP and other network APIs. It provides a built-in function for making `GET`, `POST`, and other HTTP requests in JavaScript.
@@ -11,7 +11,6 @@ tags: ['linux']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
- '[ack!: Documentation](https://beyondgrep.com/documentation/)'
- '[DigitalOcean: How To Install and Use Ack, a Grep Replacement for Developers, on Ubuntu 14.04](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-ack-a-grep-replacement-for-developers-on-ubuntu-14-04)'
- '[Linux Shell Tips: How to Install and Use Ack Command in Linux with Examples](https://www.linuxshelltips.com/ack-command-in-linux/)'
---

1 change: 0 additions & 1 deletion docs/guides/web-servers/nginx/using-openresty/index.md
@@ -9,7 +9,6 @@ keywords: ['what is openresty', 'openresty example', 'openresty vs nginx']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
- '[OpenResty: Getting Started](https://openresty.org/en/getting-started.html)'
- '[DigitalOcean: How to Use the OpenResty Web Framework for NGINX on Ubuntu 16.04](https://www.digitalocean.com/community/tutorials/how-to-use-the-openresty-web-framework-for-nginx-on-ubuntu-16-04)'
- "[Ketzal's $HOME: Intro to Lua and Openresty, Part 1 - Hello World Examples](https://ketzacoatl.github.io/posts/2017-03-02-lua-and-openresty-hello-world-examples.html)"
---

@@ -4,7 +4,7 @@ title_meta: "Deploy and Manage a Kubernetes Cluster with the Linode API"
description: "Learn how to deploy a cluster on Linode Kubernetes Engine (LKE) through the Linode API."
og_description: "The Linode Kubernetes Engine (LKE) is a fully-managed container orchestration engine for deploying and managing containerized applications and workloads. This guide shows you how to use the Linode API to deploy and manage an LKE cluster."
published: 2019-11-11
modified: 2023-02-09
modified: 2024-06-13
keywords: ["kubernetes", "linode kubernetes engine", "managed kubernetes", "lke", "kubernetes cluster"]
image: deploy-and-manage-cluster-copy.png
aliases: ['/applications/containers/kubernetes/deploy-and-manage-lke-cluster-with-api-a-tutorial/','/kubernetes/deploy-and-manage-lke-cluster-with-api-a-tutorial/','/guides/deploy-and-manage-lke-cluster-with-api-a-tutorial/']
@@ -396,6 +396,92 @@ The response body resembles the following:
Each Linode account has a limit on the number of resources it can deploy. This includes services like Compute Instances, NodeBalancers, Block Storage, etc. If you run into issues deploying the number of nodes you designate for a given cluster's node pool, you may have run into a limit on the number of resources allowed on your account. Contact [Linode Support](/docs/products/platform/get-started/guides/support/) if you believe this may be the case.
{{< /note >}}

### Add Labels and Taints to your LKE Node Pools

When creating or updating an LKE node pool, you can optionally add custom labels and taints to all nodes using the `labels` and `taints` parameters. Defining labels and taints on a per-pool basis through the Linode API has several benefits compared to managing them manually with `kubectl`, including:
- Custom labels and taints automatically apply to new nodes when a pool is recycled or scaled up (either manually or through autoscaling).
- LKE ensures that nodes have the desired taints in place before they become ready for pod scheduling. This prevents newly created nodes from attracting workloads that don't have the intended tolerations.

The following cURL command provides an example of using the Linode API to create a new node pool with a custom taint and label. If you are copying this command to run on your own LKE cluster, replace {{< placeholder "12345" >}} with the ID of your LKE cluster.

```command {title="Linode API cURL example for creating a new node pool:"}
curl -H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-X POST -d '{
"type": "g6-standard-1",
"count": 3,
"taints": [
{
"key": "myapp.io/app",
"value": "test",
"effect": "NoSchedule"
}
],
"labels": {
"myapp.io/app": "test"
}
}' https://api.linode.com/v4/lke/clusters/{{< placeholder "12345" >}}/pools
```

In the above command, labels are defined in the `labels` field as key-value pairs within a single object. Taints are defined as an array of dictionary objects in the `taints` field.

- **Labels:** The `labels` field expects a dictionary object with one or more key-value pairs. These key-value pairs should adhere to the specifications and restrictions outlined in the Kubernetes [Labels and Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) documentation.

```command
"labels": {
"myapp.io/app": "test"
}
```

A label's key and value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters each. Optionally, the key can begin with a valid DNS subdomain prefix and a single slash (`/`). In this case, the maximum allowed length of the domain prefix is 253 characters. For instance, `example.com/my-app` is a valid key for a label.
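    For instance, a `labels` object that combines a prefixed key with a plain key might look like the following sketch (the keys and values here are purely illustrative):

    ```command
    "labels": {
        "example.com/my-app": "frontend",
        "environment": "staging"
    }
    ```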
- **Taints:** The `taints` field expects an array of one or more dictionary objects, adhering to the guidelines outlined in the Kubernetes [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) documentation. A taint consists of a `key`, `value`, and `effect`:
```command
"taints": [
{
"key": "myapp.io/app",
"value": "test",
"effect": "NoSchedule"
}
]
```
- **Key:** The `key` value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 253 characters. Optionally, the `key` value can begin with a DNS subdomain prefix and a single slash (`/`), like `example.com/my-app`. In this case the maximum allowed length of the domain prefix is 253 characters.
- **Value:** The `value` key is optional. If given, it must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters.
- **Effect:** The `effect` value must be `NoSchedule`, `PreferNoSchedule`, or `NoExecute`.
{{< note >}}
Taint and label values cannot contain `kubernetes.io` or `linode.com` domains as these are reserved for LKE's own usage.
{{< /note >}}
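
For reference, a pod only schedules onto the tainted nodes if it tolerates the taint, and it typically also needs a node selector (or affinity) matching the pool's label so it lands there deliberately. Below is a minimal, hypothetical pod spec using the example taint and label from the create command above; it assumes `kubectl` is configured with your cluster's kubeconfig, and the pod name and container image are placeholders only.

```command {title="Example pod spec tolerating the example taint (hypothetical)"}
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  # Placeholder name; any valid pod name works
  name: myapp-test
spec:
  # Target nodes carrying the custom label from the node pool
  nodeSelector:
    myapp.io/app: "test"
  # Tolerate the NoSchedule taint applied to the pool
  tolerations:
    - key: "myapp.io/app"
      operator: "Equal"
      value: "test"
      effect: "NoSchedule"
  containers:
    # Placeholder container image for illustration
    - name: app
      image: nginx:stable
EOF
```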

You can also add, edit, or remove labels and taints on existing node pools using the Linode API. The example cURL command below demonstrates how to remove taints and update the labels on an existing node pool. If you are copying this command to run on your own LKE cluster, replace {{< placeholder "12345" >}} with the ID of your LKE cluster and {{< placeholder "196" >}} with the ID of your node pool.

```command {title="Linode API cURL example for updating a node pool:"}
curl -H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-X PUT -d '{
"type": "g6-standard-1",
"count": 3,
"taints": [],
"labels": {
"myapp.io/app": "prod",
"example": "foo",
}
}' https://api.linode.com/v4/lke/clusters/{{< placeholder "12345" >}}/pools/{{< placeholder "196" >}}
```

The above command results in the following changes to the node pool, assuming the labels and taints were originally entered as shown in the first create command.

- Removes the "myapp.io/app" taint by specifying an empty array in the `taints` field.
- Changes the label "myapp.io/app" to have a value of "prod" instead of "test".
- Adds the new label "example=foo".

{{< note >}}
When updating or adding labels and taints on an existing node pool, you do not need to recycle the pool: the new values are applied live to the running nodes.
{{< /note >}}
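
To spot-check the result from the cluster side, you can inspect the nodes with `kubectl`. The commands below are a sketch that assumes `kubectl` is configured for this cluster and uses the example label values from above; replace the node name placeholder with one of the pool's actual node names.

```command {title="Spot-check labels and taints with kubectl (sketch)"}
# Nodes in the pool should now carry the updated label value
kubectl get nodes -l myapp.io/app=prod

# The Taints field should report <none> after the update above
kubectl describe node {{< placeholder "NODE_NAME" >}} | grep -i taints
```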

### Resize your LKE Node Pool

You can resize an LKE cluster's node pool to add or decrease its number of nodes. You need your cluster's ID and the node pool's ID in order to resize it. If you don’t know your cluster’s ID, see the [List LKE Clusters](#list-lke-clusters) section. If you don’t know your node pool's ID, see the [List a Cluster’s Node Pools](#list-a-cluster-s-node-pools) section.
