diff --git a/website/docs/autoscaling/compute/karpenter/consolidation.md b/website/docs/autoscaling/compute/karpenter/consolidation.md
index b3d9d9445..990e05356 100644
--- a/website/docs/autoscaling/compute/karpenter/consolidation.md
+++ b/website/docs/autoscaling/compute/karpenter/consolidation.md
@@ -39,8 +39,8 @@ This changes the total memory request for this deployment to around 12Gi, which
 ```bash
 $ kubectl get nodes -l type=karpenter --label-columns node.kubernetes.io/instance-type
 NAME STATUS ROLES AGE VERSION INSTANCE-TYPE
-ip-10-42-44-164.us-west-2.compute.internal Ready 3m30s vVAR::KUBERNETES_NODE_VERSION m5.large
-ip-10-42-9-102.us-west-2.compute.internal Ready 14m vVAR::KUBERNETES_NODE_VERSION m5.large
+ip-10-42-44-164.us-west-2.compute.internal Ready 3m30s vVAR::KUBERNETES_NODE_VERSION m5.large
+ip-10-42-9-102.us-west-2.compute.internal Ready 14m vVAR::KUBERNETES_NODE_VERSION m5.large
 ```
 
 Next, scale the number of replicas back down to 5:
diff --git a/website/docs/autoscaling/workloads/cluster-proportional-autoscaler/autoscaling-coredns-usecase.md b/website/docs/autoscaling/workloads/cluster-proportional-autoscaler/autoscaling-coredns-usecase.md
index f86a2be9b..f7891b396 100644
--- a/website/docs/autoscaling/workloads/cluster-proportional-autoscaler/autoscaling-coredns-usecase.md
+++ b/website/docs/autoscaling/workloads/cluster-proportional-autoscaler/autoscaling-coredns-usecase.md
@@ -9,9 +9,9 @@ Lets test the CPA that we installed in the previous section.
 Currently we're run
 ```bash
 $ kubectl get nodes
 NAME STATUS ROLES AGE VERSION
-ip-10-42-109-155.us-east-2.compute.internal Ready 76m vVAR::KUBERNETES_NODE_VERSION
-ip-10-42-142-113.us-east-2.compute.internal Ready 76m vVAR::KUBERNETES_NODE_VERSION
-ip-10-42-80-39.us-east-2.compute.internal Ready 76m vVAR::KUBERNETES_NODE_VERSION
+ip-10-42-109-155.us-east-2.compute.internal Ready 76m vVAR::KUBERNETES_NODE_VERSION
+ip-10-42-142-113.us-east-2.compute.internal Ready 76m vVAR::KUBERNETES_NODE_VERSION
+ip-10-42-80-39.us-east-2.compute.internal Ready 76m vVAR::KUBERNETES_NODE_VERSION
 ```
 Based on autoscaling parameters defined in the `ConfigMap`, we see cluster proportional autoscaler scale CoreDNS to 2 replicas:
diff --git a/website/docs/fundamentals/managed-node-groups/graviton/configuring-graviton.md b/website/docs/fundamentals/managed-node-groups/graviton/configuring-graviton.md
index b36c34ea8..05b6e2627 100644
--- a/website/docs/fundamentals/managed-node-groups/graviton/configuring-graviton.md
+++ b/website/docs/fundamentals/managed-node-groups/graviton/configuring-graviton.md
@@ -10,9 +10,9 @@ To start with lets confirm the current state of nodes available in our cluster:
 ```bash
 $ kubectl get nodes -L kubernetes.io/arch
 NAME STATUS ROLES AGE VERSION ARCH
-ip-192-168-102-2.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION amd64
-ip-192-168-137-20.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION amd64
-ip-192-168-19-31.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION amd64
+ip-192-168-102-2.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION amd64
+ip-192-168-137-20.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION amd64
+ip-192-168-19-31.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION amd64
 ```
 
 The output shows our existing nodes with columns that show the CPU architecture of each node. All of these are currently using `amd64` nodes.
@@ -53,10 +53,10 @@ $ kubectl get nodes \
   --label-columns eks.amazonaws.com/nodegroup,kubernetes.io/arch
 NAME STATUS ROLES AGE VERSION NODEGROUP ARCH
-ip-192-168-102-2.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION default amd64
-ip-192-168-137-20.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION default amd64
-ip-192-168-19-31.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION default amd64
+ip-192-168-102-2.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION default amd64
+ip-192-168-137-20.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION default amd64
+ip-192-168-19-31.us-west-2.compute.internal Ready 6h56m vVAR::KUBERNETES_NODE_VERSION default amd64
+ip-10-42-172-231.us-west-2.compute.internal Ready 2m5s vVAR::KUBERNETES_NODE_VERSION graviton arm64
 ```
 
 The above command makes use of the `--selector` flag to query for all nodes that have a label of `eks.amazonaws.com/nodegroup` that matches the name of our managed node group `graviton`. The `--label-columns` flag also allows us to display the value of the `eks.amazonaws.com/nodegroup` label as well as the processor architecture in the output. Note that the `ARCH` column shows our tainted node group running Graviton `arm64` processors.
diff --git a/website/docs/fundamentals/managed-node-groups/spot/create-spot-capacity.md b/website/docs/fundamentals/managed-node-groups/spot/create-spot-capacity.md
index e33aff326..78e9b21ee 100644
--- a/website/docs/fundamentals/managed-node-groups/spot/create-spot-capacity.md
+++ b/website/docs/fundamentals/managed-node-groups/spot/create-spot-capacity.md
@@ -12,9 +12,9 @@ The following command shows that our nodes are currently **on-demand** instances
 ```bash
 $ kubectl get nodes -L eks.amazonaws.com/capacityType
 NAME STATUS ROLES AGE VERSION CAPACITYTYPE
-ip-10-42-103-103.us-east-2.compute.internal Ready 133m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND
-ip-10-42-142-197.us-east-2.compute.internal Ready 133m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND
-ip-10-42-161-44.us-east-2.compute.internal Ready 133m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND
+ip-10-42-103-103.us-east-2.compute.internal Ready 133m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND
+ip-10-42-142-197.us-east-2.compute.internal Ready 133m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND
+ip-10-42-161-44.us-east-2.compute.internal Ready 133m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND
 ```
 
 :::tip
@@ -69,11 +69,11 @@ Once our new managed node group is **Active**, run the following command.
 ```bash
 $ kubectl get nodes -L eks.amazonaws.com/capacityType,eks.amazonaws.com/nodegroup
 NAME STATUS ROLES AGE VERSION CAPACITYTYPE NODEGROUP
-ip-10-42-103-103.us-east-2.compute.internal Ready 3h38m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND default
-ip-10-42-142-197.us-east-2.compute.internal Ready 3h38m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND default
-ip-10-42-161-44.us-east-2.compute.internal Ready 3h38m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND default
-ip-10-42-178-46.us-east-2.compute.internal Ready 103s vVAR::KUBERNETES_NODE_VERSION SPOT managed-spot
-ip-10-42-97-19.us-east-2.compute.internal Ready 104s vVAR::KUBERNETES_NODE_VERSION SPOT managed-spot
+ip-10-42-103-103.us-east-2.compute.internal Ready 3h38m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND default
+ip-10-42-142-197.us-east-2.compute.internal Ready 3h38m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND default
+ip-10-42-161-44.us-east-2.compute.internal Ready 3h38m vVAR::KUBERNETES_NODE_VERSION ON_DEMAND default
+ip-10-42-178-46.us-east-2.compute.internal Ready 103s vVAR::KUBERNETES_NODE_VERSION SPOT managed-spot
+ip-10-42-97-19.us-east-2.compute.internal Ready 104s vVAR::KUBERNETES_NODE_VERSION SPOT managed-spot
 ```
 
 The output shows that two additional nodes got provisioned under the node group `managed-spot` with capacity type as `SPOT`.
diff --git a/website/docs/networking/vpc-cni/custom-networking/provision-new-node-group.md b/website/docs/networking/vpc-cni/custom-networking/provision-new-node-group.md
index 0fabf9fbd..7b14d59cf 100644
--- a/website/docs/networking/vpc-cni/custom-networking/provision-new-node-group.md
+++ b/website/docs/networking/vpc-cni/custom-networking/provision-new-node-group.md
@@ -26,10 +26,10 @@ Once this is complete we can see the new nodes registered in the EKS cluster:
 ```bash
 $ kubectl get nodes -L eks.amazonaws.com/nodegroup
 NAME STATUS ROLES AGE VERSION NODEGROUP
-ip-10-42-104-242.us-west-2.compute.internal Ready 84m vVAR::KUBERNETES_NODE_VERSION default
-ip-10-42-110-28.us-west-2.compute.internal Ready 61s vVAR::KUBERNETES_NODE_VERSION custom-networking
-ip-10-42-139-60.us-west-2.compute.internal Ready 65m vVAR::KUBERNETES_NODE_VERSION default
-ip-10-42-180-105.us-west-2.compute.internal Ready 65m vVAR::KUBERNETES_NODE_VERSION default
+ip-10-42-104-242.us-west-2.compute.internal Ready 84m vVAR::KUBERNETES_NODE_VERSION default
+ip-10-42-110-28.us-west-2.compute.internal Ready 61s vVAR::KUBERNETES_NODE_VERSION custom-networking
+ip-10-42-139-60.us-west-2.compute.internal Ready 65m vVAR::KUBERNETES_NODE_VERSION default
+ip-10-42-180-105.us-west-2.compute.internal Ready 65m vVAR::KUBERNETES_NODE_VERSION default
 ```
 
 You can see that 1 new node provisioned labeled with the name of the new node group.