Demo: hello world with variants

Steps:

  1. Clone an existing configuration as a base.
  2. Customize it.
  3. Create two different overlays (staging and production) from the customized base.
  4. Run kustomize and kubectl to deploy staging and production.

First define a place to work:

DEMO_HOME=$(mktemp -d)

Alternatively, use

DEMO_HOME=~/hello

Establish the base

Let's run the hello service.

To use overlays to create variants, we must first establish a common base.

To keep this document shorter, the base resources are off in a supplemental data directory rather than declared here as HERE documents. Download them:

BASE=$DEMO_HOME/base
mkdir -p $BASE

curl -s -o "$BASE/#1.yaml" "https://raw.githubusercontent.com\
/kubernetes-sigs/kustomize\
/master/examples/helloWorld\
/{configMap,deployment,kustomization,service}.yaml"
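As an aside, curl expands the {configMap,deployment,kustomization,service} list into one request per name, and the #1 in the -o path is replaced by whichever name matched. The expansion can be sketched offline (no network; the ./base-demo path is just for illustration):

```shell
# Offline sketch of curl's {…} URL glob and the #1 output token.
# curl itself performs this expansion; the loop only illustrates it.
BASE=./base-demo   # hypothetical path, stands in for $DEMO_HOME/base
for name in configMap deployment kustomization service; do
  echo "fetch .../helloWorld/$name.yaml  ->  $BASE/$name.yaml"
done
```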

Look at the directory:

tree $DEMO_HOME

Expect something like:

/tmp/tmp.IyYQQlHaJP
└── base
    ├── configMap.yaml
    ├── deployment.yaml
    ├── kustomization.yaml
    └── service.yaml

One could immediately apply these resources to a cluster:

kubectl apply -f $DEMO_HOME/base

to instantiate the hello service. kubectl would only recognize the resource files.

The Base Kustomization

The base directory has a kustomization file:

more $BASE/kustomization.yaml

Optionally, run kustomize on the base to emit customized resources to stdout:

kustomize build $BASE

Customize the base

A first customization step could be to change the app label applied to all resources:

sed -i 's/app: hello/app: my-hello/' \
    $BASE/kustomization.yaml

On a Mac (which ships BSD sed), pass an empty string to -i to get in-place editing:

sed -i '' 's/app: hello/app: my-hello/' \
    $BASE/kustomization.yaml
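The difference comes from GNU sed versus BSD sed handling of the -i flag. A portable alternative, assuming nothing beyond POSIX sed, is to write to a temporary file and move it into place:

```shell
# Portable in-place edit: avoid sed -i entirely by using a temp file.
f=$(mktemp)
echo 'app: hello' > "$f"
sed 's/app: hello/app: my-hello/' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
cat "$f"    # app: my-hello
rm "$f"
```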

See the effect:

kustomize build $BASE | grep -C 3 app:

Create Overlays

Create a staging and production overlay:

  • Staging enables a risky feature not enabled in production.
  • Production has a higher replica count.
  • Web server greetings from these cluster variants will differ from each other.

OVERLAYS=$DEMO_HOME/overlays
mkdir -p $OVERLAYS/staging
mkdir -p $OVERLAYS/production

Staging Kustomization

In the staging directory, make a kustomization file defining a new name prefix and some different labels.

cat <<'EOF' >$OVERLAYS/staging/kustomization.yaml
namePrefix: staging-
commonLabels:
  variant: staging
  org: acmeCorporation
commonAnnotations:
  note: Hello, I am staging!
bases:
- ../../base
patches:
- map.yaml
EOF

Staging Patch

Add a configMap customization to change the server greeting from Good Morning! to Have a pineapple!

Also, enable the risky flag.

cat <<EOF >$OVERLAYS/staging/map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: the-map
data:
  altGreeting: "Have a pineapple!"
  enableRisky: "true"
EOF

Production Kustomization

In the production directory, make a kustomization with a different name prefix and labels.

cat <<EOF >$OVERLAYS/production/kustomization.yaml
namePrefix: production-
commonLabels:
  variant: production
  org: acmeCorporation
commonAnnotations:
  note: Hello, I am production!
bases:
- ../../base
patches:
- deployment.yaml
EOF

Production Patch

Make a production patch that increases the replica count (because production takes more traffic).

cat <<EOF >$OVERLAYS/production/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  replicas: 10
EOF

Compare overlays

DEMO_HOME now contains:

  • a base directory - a slightly customized clone of the original configuration, and

  • an overlays directory, containing the kustomizations and patches required to create distinct staging and production variants in a cluster.

Review the directory structure and differences:

tree $DEMO_HOME

Expect something like:

/tmp/tmp.IyYQQlHaJP1
├── base
│   ├── configMap.yaml
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── production
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── staging
        ├── kustomization.yaml
        └── map.yaml

Compare the output directly to see how staging and production differ:

diff \
  <(kustomize build $OVERLAYS/staging) \
  <(kustomize build $OVERLAYS/production) |\
  more
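The <(...) construct is bash process substitution: each kustomize build runs as a subprocess whose output diff reads as if it were a file. A minimal self-contained illustration:

```shell
# Process substitution feeds command output to diff as if it were two files.
# diff exits 1 when the inputs differ, so || true keeps the script going.
diff <(printf 'greeting: pineapple\n') \
     <(printf 'greeting: kiwi\n') || true
```

In diff's output, lines prefixed with < come from the first input (staging above) and lines prefixed with > from the second (production).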

The first part of the difference output should look something like:

<   altGreeting: Have a pineapple!
<   enableRisky: "true"
---
>   altGreeting: Good Morning!
>   enableRisky: "false"
8c8
<     note: Hello, I am staging!
---
>     note: Hello, I am production!
11c11
<     variant: staging
---
>     variant: production
13c13
(...truncated)

Deploy

The individual resource sets are:

kustomize build $OVERLAYS/staging
kustomize build $OVERLAYS/production

To deploy, pipe the above commands to kubectl apply:

kustomize build $OVERLAYS/staging |\
    kubectl apply -f -
kustomize build $OVERLAYS/production |\
    kubectl apply -f -

Rolling updates

Review

The hello-world deployment running in this cluster is configured with data from a configMap.

The deployment refers to this map by name:

grep -C 2 configMapKeyRef $DEMO_HOME/base/deployment.yaml
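That reference has roughly the following shape (the field names are from the Kubernetes API; the env var name here is hypothetical, while the-map and altGreeting come from the base configMap):

```shell
# Hypothetical excerpt of the shape grep finds: an env var sourced from a
# configMap by name and key.
cat <<'EOF'
        env:
        - name: ALT_GREETING       # hypothetical env var name
          valueFrom:
            configMapKeyRef:
              name: the-map
              key: altGreeting
EOF
```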

Changing the data held by a live configMap in a cluster is considered bad practice. Deployments have no means to know that the configMaps they refer to have changed, so such updates have no effect.

The recommended way to change a deployment's configuration is to

  1. create a new configMap with a new name,
  2. patch the deployment, modifying the name value of the appropriate configMapKeyRef field.

This latter change initiates a rolling update of the pods in the deployment. The old configMap, once no longer referenced by any other resource, is eventually garbage collected.
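A sketch of that two-step recipe (the -v2 name, container name, and env fields are hypothetical; kustomize automates exactly this bookkeeping via its hash suffix):

```shell
# Hypothetical manual version of the recipe: a renamed map, plus a deployment
# patch that retargets the configMapKeyRef to the new name.
cat <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: the-map-v2         # step 1: new map under a new name
data:
  altGreeting: "Have a kiwi!"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  template:
    spec:
      containers:
      - name: the-container          # hypothetical container name
        env:
        - name: ALT_GREETING         # hypothetical env var
          valueFrom:
            configMapKeyRef:
              name: the-map-v2       # step 2: retarget the reference
              key: altGreeting
EOF
```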

How this works with kustomize

The staging variant here has a configMap patch:

cat $OVERLAYS/staging/map.yaml

This patch is, by definition, a named but not necessarily complete resource spec, intended to modify a complete resource spec.

The resource it modifies is here:

cat $DEMO_HOME/base/configMap.yaml

For a patch to work, the names in the metadata/name fields must match.

However, the name values specified in the file are not what gets used in the cluster. By design, kustomize modifies these names. To see the names ultimately used in the cluster, just run kustomize:

kustomize build $OVERLAYS/staging |\
    grep -B 8 -A 1 staging-the-map

The configMap name is prefixed by staging-, per the namePrefix field in $OVERLAYS/staging/kustomization.yaml.

The suffix to the configMap name is generated from a hash of the map's content - in this case the name suffix is hhhhkfmgmk:

kustomize build $OVERLAYS/staging | grep hhhhkfmgmk
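The name is content-addressed: identical data always produces the same suffix, and any edit produces a new one. The idea can be sketched with sha256sum (kustomize uses its own hash function, so these suffixes will not match kustomize's):

```shell
# Content-addressed naming sketch: the suffix is derived from the data, so any
# edit to the map yields a new suffix (this uses sha256, not kustomize's hash).
suffix() { printf '%s' "$1" | sha256sum | cut -c1-10; }
echo "the-map-$(suffix 'altGreeting: pineapple')"
echo "the-map-$(suffix 'altGreeting: kiwi')"
```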

Now modify the map patch, to change the greeting the server will use:

sed -i 's/pineapple/kiwi/' $OVERLAYS/staging/map.yaml

See the new greeting:

kustomize build $OVERLAYS/staging |\
  grep -B 2 -A 3 kiwi

Run kustomize again to see the new configMap names:

kustomize build $OVERLAYS/staging |\
    grep -B 8 -A 1 staging-the-map

Confirm that the change in configMap content resulted in three new names ending in khk45ktkd9 - one in the configMap name itself, and two in the deployment that uses the map:

test 3 == \
  $(kustomize build $OVERLAYS/staging | grep khk45ktkd9 | wc -l); \
  echo $?

Applying these resources to the cluster will result in a rolling update of the deployment's pods, retargeting them from the hhhhkfmgmk maps to the khk45ktkd9 maps. The system will later garbage collect the unused maps.

Rollback

To roll back, one would undo whatever edits were made to the configuration in source control, then rerun kustomize on the reverted configuration and apply it to the cluster.