---
title: Quick Start
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This guide provides copy-and-paste instructions to try out the Admiralty open source cluster agent with or without Admiralty Cloud. We use kind (Kubernetes in Docker) to create Kubernetes clusters, but feel free to use something else; just be aware that the instructions won't be strictly copy-and-paste in that case.

## Example Use Case

We're going to model a centralized cluster topology made of a management cluster (named cd) where applications are deployed, and two workload clusters (named us and eu) where containers actually run. We'll deploy a batch job utilizing both workload clusters, and another targeting a specific region. If you're interested in other topologies or other kinds of applications (e.g., micro-services), this guide is still helpful to get familiar with Admiralty in general. When you're done, you may want to continue with the "Multi-Region AWS Fargate on EKS" tutorial.

<Tabs defaultValue="global" values={[ {label: 'Global batch', value: 'global'}, {label: 'Regional batch', value: 'regional'}, ]}>

## Prerequisites

  1. Install Helm v3 and kind if not already installed.

  2. We recommend using a separate kubeconfig for this exercise, so you can simply delete it when you're done:

    export KUBECONFIG=kubeconfig-admiralty-getting-started
  3. Create three clusters (a management cluster named cd and two workload clusters named us and eu):

    for CLUSTER_NAME in cd us eu
    do
      kind create cluster --name $CLUSTER_NAME
    done
  4. Label the workload cluster nodes as if they were in different regions (we'll use these labels as node selectors):

    for CLUSTER_NAME in us eu
    do
      kubectl --context kind-$CLUSTER_NAME label nodes --all topology.kubernetes.io/region=$CLUSTER_NAME
    done

    :::tip Most cloud distributions of Kubernetes pre-label nodes with the names of their cloud regions. :::

  5. (optional speed-up) Pull images on your machine and load them into the kind clusters. Otherwise, each kind cluster would pull images, which could take three times as long.

    images=(
      # cert-manager dependency
      quay.io/jetstack/cert-manager-controller:v0.16.1
      quay.io/jetstack/cert-manager-webhook:v0.16.1
      quay.io/jetstack/cert-manager-cainjector:v0.16.1
      # admiralty open source
      quay.io/admiralty/multicluster-scheduler-agent:0.13.2
      quay.io/admiralty/multicluster-scheduler-scheduler:0.13.2
      quay.io/admiralty/multicluster-scheduler-remove-finalizers:0.13.2
      quay.io/admiralty/multicluster-scheduler-restarter:0.13.2
      # admiralty cloud/enterprise
      quay.io/admiralty/admiralty-cloud-controller-manager:0.13.2
      quay.io/admiralty/kube-mtls-proxy:0.10.0
      quay.io/admiralty/kube-oidc-proxy:v0.3.0 # jetstack's image rebuilt for multiple architectures
    )
    for image in "${images[@]}"
    do
      docker pull $image
      for CLUSTER_NAME in cd us eu
      do
        kind load docker-image $image --name $CLUSTER_NAME
      done
    done
  6. Install cert-manager in each cluster:

    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    
    for CLUSTER_NAME in cd us eu
    do
      kubectl --context kind-$CLUSTER_NAME create namespace cert-manager
      kubectl --context kind-$CLUSTER_NAME apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.16.1/cert-manager.crds.yaml
      helm install cert-manager jetstack/cert-manager \
        --kube-context kind-$CLUSTER_NAME \
        --namespace cert-manager \
        --version v0.16.1 \
        --wait --debug
      # --wait to ensure release is ready before next steps
      # --debug to show progress, for lack of a better way,
      # as this may take a few minutes
    done

    :::note Admiralty Open Source uses cert-manager to generate a server certificate for its mutating pod admission webhook. In addition, Admiralty Cloud and Admiralty Enterprise use cert-manager to generate server certificates for Kubernetes API authenticating proxies (mTLS for clusters, OIDC for users), and client certificates for cluster identities (talking to the mTLS proxies of other clusters). :::
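Before moving on, here is an optional sanity check, not required by the rest of the guide: list the kind contexts and confirm that the region labels set above are present on the workload cluster nodes.

```shell
# List the three kind contexts created above (kind-cd, kind-us, kind-eu).
kubectl config get-contexts

# Show the region label as an extra column for the workload cluster nodes.
for CLUSTER_NAME in us eu
do
  kubectl --context kind-$CLUSTER_NAME get nodes -L topology.kubernetes.io/region
done
```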

## Installation

Admiralty Cloud, its command line interface (CLI), and additional cluster-agent components complement the open-source cluster agent in useful ways. The CLI makes it easy to register clusters; Kubernetes custom resource definitions (CRDs) make it easy to connect them (with automatic certificate rotations), so you don't have to craft (and re-craft) cross-cluster kubeconfigs and think about routing and certificates.

Admiralty Cloud works with private clusters too. In this context, a private cluster is a cluster whose Kubernetes API isn't routable from another cluster. Cluster-to-cluster communications to private clusters transit through HTTPS/WebSocket/HTTPS tunnels exposed on the Admiralty Cloud API.

:::note Privacy Notice We don't want to see your data. Admiralty Cloud cannot decrypt cluster-to-cluster communications, because private keys never leave the clusters. The only data clusters ever share with Admiralty Cloud is their CA certificates (public keys), which are given to other clusters. Admiralty Cloud acts as a public key directory, a sort of "Keybase for Kubernetes clusters" if you'd like. :::

If you decide to use the open-source cluster agent only, no problem. There's no CLI and no cluster registration, but configuring cross-cluster authentication takes more care and doesn't support private clusters. In production, you would have to rotate tokens manually.

<Tabs groupId="oss-or-cloud" defaultValue="cloud" values={[ {label: 'Cloud/Enterprise', value: 'cloud'}, {label: 'Open Source', value: 'oss'}, ] }>

  1. Download the Admiralty CLI:

    <Tabs groupId="os" defaultValue="linux-amd64" values={[ {label: 'Linux/amd64', value: 'linux-amd64'}, {label: 'Mac', value: 'mac'}, {label: 'Windows', value: 'windows'}, {label: 'Linux/arm64', value: 'linux-arm64'}, {label: 'Linux/ppc64le', value: 'linux-ppc64le'}, {label: 'Linux/s390x', value: 'linux-s390x'}, ] }>

    # Linux/amd64
    curl -Lo admiralty "https://artifacts.admiralty.io/admiralty-v0.13.2-linux-amd64"
    chmod +x admiralty
    sudo mv admiralty /usr/local/bin

    # Mac
    curl -Lo admiralty "https://artifacts.admiralty.io/admiralty-v0.13.2-darwin-amd64"
    chmod +x admiralty
    sudo mv admiralty /usr/local/bin

    # Windows
    curl -Lo admiralty "https://artifacts.admiralty.io/admiralty-v0.13.2-windows-amd64"

    # Linux/arm64
    curl -Lo admiralty "https://artifacts.admiralty.io/admiralty-v0.13.2-linux-arm64"
    chmod +x admiralty
    sudo mv admiralty /usr/local/bin

    # Linux/ppc64le
    curl -Lo admiralty "https://artifacts.admiralty.io/admiralty-v0.13.2-linux-ppc64le"
    chmod +x admiralty
    sudo mv admiralty /usr/local/bin

    # Linux/s390x
    curl -Lo admiralty "https://artifacts.admiralty.io/admiralty-v0.13.2-linux-s390x"
    chmod +x admiralty
    sudo mv admiralty /usr/local/bin
  2. Log in (sign up) to Admiralty Cloud:

    admiralty configure

    :::note The admiralty configure command takes you through an OIDC log-in/sign-up flow, and eventually saves an Admiralty Cloud API kubeconfig—used to register clusters—and user tokens under ~/.admiralty. Don't forget to run admiralty logout to delete the tokens if needed when you're done. :::

  3. Install Admiralty in each cluster:

    helm repo add admiralty https://charts.admiralty.io
    helm repo update
    
    for CLUSTER_NAME in cd us eu
    do
      kubectl --context kind-$CLUSTER_NAME create namespace admiralty
      helm install admiralty admiralty/admiralty \
        --kube-context kind-$CLUSTER_NAME \
        --namespace admiralty \
        --version 0.13.2 \
        --set accountName=$(admiralty get-account-name) \
        --set clusterName=$CLUSTER_NAME \
        --wait --debug
      # --wait to ensure release is ready before next steps
      # --debug to show progress, for lack of a better way,
      # as this may take a few minutes
    done
  4. Register each cluster:

    for CLUSTER_NAME in cd us eu
    do
      admiralty register-cluster --context kind-$CLUSTER_NAME
    done

If you're using the open-source cluster agent only, skip the Admiralty Cloud steps above and simply install Admiralty in each cluster:

helm repo add admiralty https://charts.admiralty.io
helm repo update

for CLUSTER_NAME in cd us eu
do
  kubectl --context kind-$CLUSTER_NAME create namespace admiralty
  helm install admiralty admiralty/multicluster-scheduler \
    --kube-context kind-$CLUSTER_NAME \
    --namespace admiralty \
    --version 0.13.2 \
    --wait --debug
  # --wait to ensure release is ready before next steps
  # --debug to show progress, for lack of a better way,
  # as this may take a few minutes
done
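Whichever variant you installed, here is an optional check that the Admiralty components are running in each cluster before configuring them; nothing later in the guide depends on it.

```shell
# All pods in the admiralty namespace should eventually be Running.
for CLUSTER_NAME in cd us eu
do
  kubectl --context kind-$CLUSTER_NAME get pods --namespace admiralty
done
```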

## Configuration

### Cross-Cluster Authentication

<Tabs groupId="oss-or-cloud" defaultValue="cloud" values={[ {label: 'Cloud/Enterprise', value: 'cloud'}, {label: 'Open Source', value: 'oss'}, ] }>

  1. In the management cluster, create a Kubeconfig for each workload cluster:

    for CLUSTER_NAME in us eu
    do
      cat <<EOF | kubectl --context kind-cd apply -f -
    apiVersion: multicluster.admiralty.io/v1alpha1
    kind: Kubeconfig
    metadata:
      name: $CLUSTER_NAME
    spec:
      secretName: $CLUSTER_NAME
      cluster:
        admiraltyReference:
          clusterName: $CLUSTER_NAME
    EOF
    done
  2. In each workload cluster, create a TrustedIdentityProvider for the management cluster:

    for CLUSTER_NAME in us eu
    do
      cat <<EOF | kubectl --context kind-$CLUSTER_NAME apply -f -
    apiVersion: multicluster.admiralty.io/v1alpha1
    kind: TrustedIdentityProvider
    metadata:
      name: cd
    spec:
      prefix: "spiffe://cd/"
      admiraltyReference:
        clusterName: cd
    EOF
    done
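If you're following the Admiralty Cloud path, you can optionally watch for the kubeconfig secrets that the Kubeconfig objects above are meant to produce in the management cluster. The secret names below come from the secretName fields above; how quickly they appear depends on the Admiralty Cloud controller.

```shell
# The secrets may take a moment to appear; retry if they aren't there yet.
kubectl --context kind-cd get secret us eu
```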
If you're using the open-source cluster agent only, configure cross-cluster authentication manually instead:

  1. Install jq, the command-line JSON processor, if not already installed.

  2. For each workload cluster,

    1. create a Kubernetes service account in the workload cluster for the management cluster,
    2. extract its default token,
    3. get a Kubernetes API address that is routable from the management cluster—here, the IP address of the kind workload cluster's only (master) node container in your machine's shared Docker network,
    4. prepare a kubeconfig using the token and address found above, and the server certificate from your kubeconfig (luckily also valid for this address, not just the address in your kubeconfig),
    5. save the prepared kubeconfig in a secret in the management cluster:
    for CLUSTER_NAME in us eu
    do
      # i.
      kubectl --context kind-$CLUSTER_NAME create serviceaccount cd
    
      # ii.
      SECRET_NAME=$(kubectl --context kind-$CLUSTER_NAME get serviceaccount cd \
        --output json | \
        jq -r '.secrets[0].name')
      TOKEN=$(kubectl --context kind-$CLUSTER_NAME get secret $SECRET_NAME \
        --output json | \
        jq -r '.data.token' | \
        base64 --decode)
    
      # iii.
      IP=$(docker inspect $CLUSTER_NAME-control-plane \
        --format "{{ .NetworkSettings.Networks.kind.IPAddress }}")
    
      # iv.
      CONFIG=$(kubectl --context kind-$CLUSTER_NAME config view \
        --minify --raw --output json | \
        jq '.users[0].user={token:"'$TOKEN'"} | .clusters[0].cluster.server="https://'$IP':6443"')
    
      # v.
      kubectl --context kind-cd create secret generic $CLUSTER_NAME \
        --from-literal=config="$CONFIG"
    done

    :::note Security Notice Kubernetes service account tokens exposed as secrets are valid forever, or until those secrets are deleted. A leak may go undetected indefinitely. If you use Kubernetes service account tokens as a cross-cluster authentication method in production, we recommend rotating the tokens as often as practical (a rotation sketch follows the next note). However, there are other methods, including using Admiralty Cloud. :::

    :::note Other Platforms If you're not using kind, your mileage may vary. The Kubernetes API address in your kubeconfig may or may not be routable from other clusters. If not, the server certificate in your kubeconfig may or may not be valid for the routable address that you'll find instead. :::
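To make the rotation recommendation in the security notice above concrete, here is a minimal sketch for one workload cluster. It assumes a Kubernetes version that still auto-creates service account token secrets (as the kind clusters used here do) and reuses the names from the steps above; after deleting the old token, repeat steps ii. through v. to rebuild and store a fresh kubeconfig.

```shell
CLUSTER_NAME=us  # example; repeat for eu

# Delete the current token secret; Kubernetes issues a new one for the
# service account automatically (pre-1.24 behavior).
OLD_SECRET=$(kubectl --context kind-$CLUSTER_NAME get serviceaccount cd \
  --output json | jq -r '.secrets[0].name')
kubectl --context kind-$CLUSTER_NAME delete secret $OLD_SECRET

# Remove the stale kubeconfig secret in the management cluster, then
# recreate it with the new token as in steps ii. to v. above.
kubectl --context kind-cd delete secret $CLUSTER_NAME
```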

### Multi-Cluster Scheduling

  1. In the management cluster, create a Target for each workload cluster:

    for CLUSTER_NAME in us eu
    do
      cat <<EOF | kubectl --context kind-cd apply -f -
    apiVersion: multicluster.admiralty.io/v1alpha1
    kind: Target
    metadata:
      name: $CLUSTER_NAME
    spec:
      kubeconfigSecret:
        name: $CLUSTER_NAME
    EOF
    done
  2. In the workload clusters, create a Source for the management cluster (the first manifest below is for Admiralty Cloud/Enterprise, the second for the open-source cluster agent only):

    <Tabs groupId="oss-or-cloud" defaultValue="cloud" values={[ {label: 'Cloud/Enterprise', value: 'cloud'}, {label: 'Open Source', value: 'oss'}, ] }>

    for CLUSTER_NAME in us eu
    do
      cat <<EOF | kubectl --context kind-$CLUSTER_NAME apply -f -
    apiVersion: multicluster.admiralty.io/v1alpha1
    kind: Source
    metadata:
      name: cd
    spec:
      userName: spiffe://cd/ns/default/id/default
    EOF
    done
    Or, with the open-source cluster agent only, referencing the service account created earlier:

    for CLUSTER_NAME in us eu
    do
      cat <<EOF | kubectl --context kind-$CLUSTER_NAME apply -f -
    apiVersion: multicluster.admiralty.io/v1alpha1
    kind: Source
    metadata:
      name: cd
    spec:
      serviceAccountName: cd
    EOF
    done
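Optionally, confirm that the objects exist. The plural resource names below should correspond to the Target and Source CRDs installed by the chart; if they differ in your version, `kubectl api-resources | grep multicluster` will show the exact names.

```shell
# Targets live in the management cluster, Sources in the workload clusters.
kubectl --context kind-cd get targets
for CLUSTER_NAME in us eu
do
  kubectl --context kind-$CLUSTER_NAME get sources
done
```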

## Demo

  1. Check that virtual nodes have been created in the management cluster to represent workload clusters:

    kubectl --context kind-cd get nodes --watch
    # --watch until virtual nodes are created,
    # this may take a few minutes, then control-C
  2. Label the default namespace in the management cluster to enable multi-cluster scheduling at the namespace level:

    kubectl --context kind-cd label ns default multicluster-scheduler=enabled
  3. Create Kubernetes Jobs in the management cluster, utilizing all workload clusters (multi-cluster scheduling is enabled at the pod level with the multicluster.admiralty.io/elect annotation):

    for i in $(seq 1 10)
    do
      cat <<EOF | kubectl --context kind-cd apply -f -
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: global-$i
    spec:
      template:
        metadata:
          annotations:
            multicluster.admiralty.io/elect: ""
        spec:
          containers:
          - name: c
            image: busybox
            command: ["sh", "-c", "echo Processing item $i && sleep 5"]
            resources:
              requests:
                cpu: 100m
          restartPolicy: Never
    EOF
    done
  4. Check that proxy pods for these jobs have been created in the management cluster, "running" on virtual nodes, and that delegate pods have been created in the workload clusters, actually running their containers on real nodes:

    while true
    do
      clear
      for CLUSTER_NAME in cd us eu
      do
        kubectl --context kind-$CLUSTER_NAME get pods -o wide
      done
      sleep 2
    done
    # control-C when all pods have Completed
  5. Create Kubernetes Jobs in the management cluster, targeting a specific region with a node selector:

    for i in $(seq 1 10)
    do
      cat <<EOF | kubectl --context kind-cd apply -f -
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: eu-$i
    spec:
      template:
        metadata:
          annotations:
            multicluster.admiralty.io/elect: ""
        spec:
          nodeSelector:
            topology.kubernetes.io/region: eu
          containers:
          - name: c
            image: busybox
            command: ["sh", "-c", "echo Processing item $i && sleep 5"]
            resources:
              requests:
                cpu: 100m
          restartPolicy: Never
    EOF
    done
  6. Check that proxy pods for these jobs have been created in the management cluster, and that delegate pods have been created in the eu cluster only:

    while true
    do
      clear
      for CLUSTER_NAME in cd us eu
      do
        kubectl --context kind-$CLUSTER_NAME get pods -o wide
      done
      sleep 2
    done
    # control-C when all pods have Completed

    You may observe transient pending candidate pods in the us cluster.
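Optionally, before tearing everything down, delete the demo jobs in the management cluster; the delegate pods in the workload clusters should be cleaned up along with their proxy pods.

```shell
# Delete all demo jobs in the default namespace of the management cluster.
kubectl --context kind-cd delete jobs --all
# Re-run the watch loop above to see pods disappear from all three clusters.
```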

## Cleanup

for CLUSTER_NAME in cd us eu
do
  kind delete cluster --name $CLUSTER_NAME
done
rm kubeconfig-admiralty-getting-started
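If you used Admiralty Cloud, you may also want to remove the locally saved tokens, as mentioned in the note after `admiralty configure`:

```shell
# Delete the user tokens saved under ~/.admiralty.
admiralty logout
```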