---
weight: 10
title: Getting started with Tako
---

Getting started with Tako

This tutorial will walk you through connecting your Docker Compose workflow to Kubernetes using Tako.

This is NOT a migration. On the contrary, we're going to create a continuous development workflow, meaning your hard-earned Docker Compose skills will make it faster to develop and iterate on Kubernetes.

We'll set up Tako, iterate and deploy a WordPress application onto Kubernetes.

The tutorial assumes that you have,

As we walk through the tutorial we'll cover some Kubernetes concepts and how they relate to Docker Compose and Tako.

These will be explained under a Kube Notes heading.

Finally, we'll use the term "Compose" to mean "Docker Compose".

Create your docker-compose config

Let's start by creating an empty project directory and cd into it.

$ mkdir tako-wordpress
$ cd tako-wordpress

Then, add a bare-bones docker-compose file with a basic description of our wordpress service,

$ cat <<EOT >> docker-compose.yaml
version: '3.7'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
EOT

This Compose config,

  • Defines a wordpress service.
  • Exposes the service on port 8000.
  • Sets a restart: always restart policy.

To confirm this is a valid configuration, let's start wordpress locally,

$ docker-compose up -d    # Run in the background.

Navigating to http://0.0.0.0:8000 in a browser should display wordpress's setup page.

Ace, we're all good, let's stop the service,

$ docker-compose down -v    # Stop all containers. Remove named volumes.

Preparing for Kubernetes

Compose and Kubernetes address different problems.

Compose helps you wire, develop and run your application locally as a set of services. It's super for development.

Kubernetes, however, is designed to help you run your application in a highly available mode using clusters and service replication. It is production grade.

Describing Compose services as a Kubernetes application requires an extra layer that translates concepts from one to the other.

Furthermore, on Kubernetes, you might also want to deploy or promote your app to different "stages", commonly known as environments. Application configuration may vary depending on the environment it is deployed to due to various infrastructure or operational constraints.

So, a good approach to managing your app configuration in different environments is a must.

Tako will help you with all the above! So let's get cracking.

Compose + Tako

Let's instruct Tako to track our source Compose file, docker-compose.yaml, that we've just created.

Tako will introspect the Compose config and infer the key attributes to enable Compose services to run on Kubernetes.

Also, as we're moving beyond development, we'll instruct Tako to create two environment overrides to target two different sets of parameters (annotated in a service's x-k8s extension).

Please note, a dev sandbox environment is always created alongside any specified environments.

No time to lose, let's get started...

$ tako init -e local -e stage

This generates the following output:

» Verifying project...
 ✓  Ensuring this project has not already been initialised

» Detecting compose sources...
 ✓  Scanning for compose configuration
 ✓  Using: docker-compose.yaml

» Validating compose sources...
Detecting secrets in: docker-compose.yaml
 ✓  None detected in service: wordpress

» Creating deployment environments...
 ✓  Creating the dev sandbox env file: docker-compose.env.dev.yaml
 ✓  Creating the local env file: docker-compose.env.local.yaml
 ✓  Creating the stage env file: docker-compose.env.stage.yaml

» Detecting Skaffold settings...
Skipping - no Skaffold options detected

Project initialised!
A 'tako.yaml' file was created. Do not edit this file.
It syncs your deployment environments to updates made
to your compose sources.

And, the following deployment env files have been created:
    dev: docker-compose.env.dev.yaml
  local: docker-compose.env.local.yaml
  stage: docker-compose.env.stage.yaml

Update these to configure your deployments per related environment.

You may now call `tako render` to prepare your project for deployment.

You can see that Tako tells you exactly what it is doing at each stage of execution, and also tells you what files it creates.

Tako has now been initialised and configured. It has,

  • Started tracking the docker-compose.yaml file as the source application definition.
  • Inferred configuration details from the docker-compose.yaml file.
  • Assigned sensible defaults for any config it couldn't infer.
  • Created dev (a sandbox used by Tako for continuous development), local (useful for testing on our own machine) and stage (useful for testing on a remote machine) Compose environment overrides.

It has also generated four files:

  • tako.yaml, a project metadata file that describes our source application definition and Compose environment overrides.
  • Three docker-compose.env.*.yaml files to represent our Compose environment overrides.

Project metadata file

The tako.yaml metadata file contains references to all required files in the conversion process. Its creation confirms a successful init,

id: b903b060-9762-4a59-8131-47e129f70256
compose:
  - docker-compose.yaml
environments:
  dev: docker-compose.env.dev.yaml
  local: docker-compose.env.local.yaml
  stage: docker-compose.env.stage.yaml

Compose environment overrides files

The docker-compose.env.local.yaml and docker-compose.env.stage.yaml files were generated for the -e switches we passed to tako init, while docker-compose.env.dev.yaml is the always-created dev sandbox. These Compose environment overrides are currently identical.

The x-k8s extension section for each service enables you to control how the app runs on Kubernetes. See the configuration reference to find all the available options and understand how they affect deployments.

We'll be adjusting these values soon per target environment. For now, they look as below,

version: "3.7"
services:
  wordpress:
    x-k8s:
      workload:
        livenessProbe:
          type: exec
          exec:
            command:
              - echo
              - Define healthcheck command for service wordpress
        replicas: 1
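Note that the generated livenessProbe is only a placeholder echo command. Once the app is running for real, you'll want to replace it with a genuine health check in each environment override. As a sketch (assuming curl is available in the wordpress image — verify before relying on it), an exec probe hitting the local web server might look like:

```yaml
version: "3.7"
services:
  wordpress:
    x-k8s:
      workload:
        livenessProbe:
          type: exec
          exec:
            command:
              # Fail the probe if the web server stops answering locally.
              - curl
              - --fail
              - http://localhost:80/
        replicas: 1
```

See the configuration reference for the probe types Tako supports.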

Moving to Kubernetes

Admittedly, our wordpress app is very basic; it only starts a wordpress container.

However, all the translation wiring is now in place, so let's run it on Kubernetes!

Generate Kubernetes manifests

First, we instruct Tako to generate manifests for the required Kubernetes resources.

Kube Notes

Our single wordpress Compose service requires Deployment, Service and (an optional) NetworkPolicy Kubernetes resources.

Simply run,

$ tako render

which outputs the following:

» Loading...

» Validating compose sources...
Detecting secrets in: docker-compose.yaml
 ✓  None detected in service: wordpress

» Validating compose environment overrides...
Detecting secrets in: docker-compose.env.dev.yaml
 ✓  None detected in service: wordpress
Detecting secrets in: docker-compose.env.local.yaml
 ✓  None detected in service: wordpress
Detecting secrets in: docker-compose.env.stage.yaml
 ✓  None detected in service: wordpress

» Detecting project updates...
dev: docker-compose.env.dev.yaml
 ✓  No version update detected
 ✓  No service additions detected
 ✓  No service removals detected
 ✓  No env var removals detected
 ✓  No volume additions detected
 ✓  No volume removals detected
local: docker-compose.env.local.yaml
 ✓  No version update detected
 ✓  No service additions detected
 ✓  No service removals detected
 ✓  No env var removals detected
 ✓  No volume additions detected
 ✓  No volume removals detected
stage: docker-compose.env.stage.yaml
 ✓  No version update detected
 ✓  No service additions detected
 ✓  No service removals detected
 ✓  No env var removals detected
 ✓  No volume additions detected
 ✓  No volume removals detected

» Rendering manifests, format: kubernetes...
dev: docker-compose.env.dev.yaml
 ✓  Converted service: wordpress
   | rendered Deployment
   | rendered Service
 ✓  Networking
   | rendered NetworkPolicy
local: docker-compose.env.local.yaml
 ✓  Converted service: wordpress
   | rendered Deployment
   | rendered Service
 ✓  Networking
   | rendered NetworkPolicy
stage: docker-compose.env.stage.yaml
 ✓  Converted service: wordpress
   | rendered Deployment
   | rendered Service
 ✓  Networking
   | rendered NetworkPolicy

Project manifests rendered!
A set of 'kubernetes' manifests have been generated:
    dev: k8s/dev
  local: k8s/local
  stage: k8s/stage

The project can now be deployed to a Kubernetes cluster.

To test locally:
 - Ensure you have a local cluster up and running with a configured context.
 - Create a namespace: `kubectl create ns ns-example`.
 - Apply the manifests to the cluster: `kubectl apply -f <manifests-dir>/<env> -n ns-example`.
 - Discover the main service: `kubectl get svc -n ns-example`.
 - Port forward to the main service: `kubectl port-forward service/<service_name> <service_port>:<destination_port> -n ns-example`.

In this case, Tako,

  • Has re-introspected our source application definition.
  • Has NOT detected any config changes that need to be applied to our Compose environment overrides.
  • Has generated Kubernetes manifests to enable our app to run in dev, local and stage mode.

We're now ready to run our app on Kubernetes!

Running on Kubernetes

This means we need to deploy our newly minted manifests to a Kubernetes cluster.

Run the following commands on your local Kubernetes (we use Docker Desktop).

Kube Notes

  • We're using kubectl, the Kubernetes CLI, to apply our manifests onto Kubernetes.
  • We utilise the Namespace tako-local to isolate our project resources from other resources in the cluster.
  • Our wordpress container runs as a single Pod as we're only running 1 replica.
  • The service/wordpress is a Service that proxies the Pod running the container.
  • To access the wordpress container from our localhost we port forward traffic from service/wordpress port 8000 to our localhost on port 8080.

We'll be deploying our app in local environment mode.

# create a namespace to host our app
$ kubectl create namespace tako-local
namespace/tako-local created

# apply the generated k8s/local to our namespace
$ kubectl apply -f k8s/local -n tako-local
networkpolicy.networking.k8s.io/default created
deployment.apps/wordpress created
service/wordpress created

# make the wordpress service accessible on port 8080
$ kubectl port-forward service/wordpress 8080:8000 -n tako-local
Forwarding from 127.0.0.1:8080 -> 8000
Forwarding from [::1]:8080 -> 8000
Handling connection for 8080       # When we connect to 8080 via browser

Navigate to http://localhost:8080 in your browser. This should display wordpress's setup page - the same wordpress web page you saw when we ran docker-compose up -d earlier.

Hurray!! We're up and running on K8s using JUST our Compose config (with sensible Tako defaults).

For now, ctrl+c to stop the wordpress service. We need to move beyond a basic container.

Add a DB service

Let's wire in a database to make our basic wordpress app more useful.

In practice, this means adding a db service backed by a mysql container to our Compose config.

Update Compose config

Update the source docker-compose.yaml to,

version: '3.7'
services:
  db:
    image: mysql:5.7
    volumes:
        - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    depends_on:
      - db
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data:

This adds,

  • A new db service running mysql.
  • A volume db_data to store the mysql data.
  • Environment variables to configure the mysql service.
  • Environment variables to configure the wordpress service to use the mysql db.

Running,

$ docker-compose up -d
...
Creating network "wordpress-mysql_default" with the default driver
Creating wordpress-mysql_wordpress_1 ... done
Creating wordpress-mysql_db_1        ... done

Navigate to http://0.0.0.0:8000 in a browser.

You should now see a Welcome screen for the famous five-minute WordPress installation process.

This confirms that all is well.

Stop the service by running,

# Stop all containers. Remove named volumes.
$ docker-compose down -v

Re-sync Kubernetes

Now that we have a new db service and a db_data volume, we need to let Tako infer the key attributes that enable them to run on Kubernetes.

Also, we've made some minor adjustments to the wordpress service. Tako will reconcile those changes.

This will be applied to all Compose environment overrides.

We have already initialised Tako, so to catch the new changes to the Compose file we simply need to re-run,

$ tako render

which now gives us some slightly different output to our first run earlier in the tutorial (note the following is abridged for brevity, highlighting relevant new output):

» Loading...

» Validating compose sources...
Detecting secrets in: docker-compose.yaml
 !  Detected in service: wordpress
   | env var [WORDPRESS_DB_PASSWORD] - Contains word: password
   | env var [WORDPRESS_DB_HOST] - Contains word: host
   | env var [WORDPRESS_DB_USER] - Contains word: user
 !  Detected in service: db
   | env var [MYSQL_USER] - Contains word: user
   | env var [MYSQL_PASSWORD] - Contains word: password
   | env var [MYSQL_ROOT_PASSWORD] - Contains word: password

To prevent secrets leaking, see help page:
https://github.com/appvia/tako/blob/master/docs/reference/config-params.md#reference-k8s-secret-key-value

» Validating compose environment overrides...
...

» Detecting project updates...
...

» Rendering manifests, format: kubernetes...
dev: docker-compose.env.dev.yaml
 ✓  Converted service: db
   | rendered StatefulSet
   | rendered Service
   | rendered PersistentVolumeClaim
 ✓  Converted service: wordpress
    ...
 ✓  Networking
    ...
local: docker-compose.env.local.yaml
 ✓  Converted service: db
   | rendered StatefulSet
   | rendered Service
   | rendered PersistentVolumeClaim
 ✓  Converted service: wordpress
    ...
 ✓  Networking
    ...
stage: docker-compose.env.stage.yaml
 ✓  Converted service: db
   | rendered StatefulSet
   | rendered Service
   | rendered PersistentVolumeClaim
 ✓  Converted service: wordpress
    ...
 ✓  Networking
    ...

Project manifests rendered!
...

This time round, Tako has:

  • Found secrets in our updated Compose file, and warned us about the potential for leakage,
  • Detected and inferred config for the new mysql service and db_data volume,
  • Assigned sensible defaults for any config it couldn't infer, and
  • Re-generated the kubernetes manifests for the dev, local and stage deployment environments.
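Tako only warns about the plain-text credentials; fixing them is up to us. One common Compose-level mitigation (a sketch, not the only approach — the help page Tako links to covers the recommended options) is to move the values out of the file with variable substitution, so they are read from the shell environment or a local, uncommitted .env file:

```yaml
# docker-compose.yaml fragment - values come from the environment
# or a local .env file rather than being committed in plain text.
services:
  db:
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
```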

Kube Notes

To accommodate the db service, Tako uses the StatefulSet Kubernetes resource as the db service requires persistent storage. Tako uses the PersistentVolumeClaim resource to provide the db service with the required db_data volume it needs to store data.
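For intuition, the claim Tako renders has roughly the following shape. This is an illustrative sketch only — the name, access mode and size here are assumptions; inspect the files under k8s/<env>/ for the actual rendered manifests:

```yaml
# Illustrative sketch - see k8s/local/ for the real generated manifest.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce        # a single node mounts the volume read-write
  resources:
    requests:
      storage: 1Gi         # size shown is an assumption; Tako picks its own default
```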

We'll be re-deploying our app in local environment mode.

Run the following command on your local Kubernetes instance (we use Docker Desktop).

# re-apply the re-generated k8s/local manifests to our namespace
$ kubectl apply -f k8s/local -n tako-local
persistentvolumeclaim/db-data created
service/db created
statefulset.apps/db created
networkpolicy.networking.k8s.io/default configured
deployment.apps/wordpress configured
service/wordpress configured

This BREAKS our running app - to fix it we need to understand how service discovery differs between Compose and Kubernetes.

Fix DB service discovery

In our Compose config, the db service does not have a ports attribute, meaning it is not exposed externally as there are no published ports.

This is not an issue for dependent Compose services as containers connected to the same user-defined bridge network effectively expose all ports to each other and communicate using service names or aliases.

Kubernetes is different. To help our wordpress containers connect to the db, Kubernetes requires an explicit Service resource.

The fix is simple: we need to instruct Tako to recognise db as a service that will be accessed by other services.

Simply add the ports attribute to the docker-compose.yaml file as below,

version: '3.7'
services:
  db:
    ...
    ...
    ports:
      - "3306"

  wordpress:
    ...
volumes:
  ...
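With the ports entry in place, Tako can render a Service resource for db. The rendered resource will look roughly like this (the label selector and exact fields here are assumptions — check k8s/<env>/ for the real output):

```yaml
# Illustrative sketch of the Service rendered for db.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db            # label selector is an assumption
  ports:
    - port: 3306       # port other services connect to
      targetPort: 3306 # container port it forwards to
```

Other Pods in the namespace can now reach the database at db:3306 via cluster DNS — exactly the address WORDPRESS_DB_HOST already uses.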

Then, re-render and re-deploy,

$ tako render
...

# re-apply the re-generated k8s/local manifests to our namespace
$ kubectl apply -f k8s/local -n tako-local
service/db created
...

# make the wordpress service accessible on port 8080
$ kubectl port-forward service/wordpress 8080:8000 -n tako-local
Forwarding from 127.0.0.1:8080 -> 8000
Forwarding from [::1]:8080 -> 8000
Handling connection for 8080

Navigate to http://localhost:8080 in a browser.

... and Yay!! Live on Kubernetes, you should now see the Welcome screen for the famous five-minute WordPress installation process.

ctrl+c to stop the wordpress service.

Run more replicas

As it happens, we have a requirement that our stage environment should mirror production as much as possible.

In this case, we need to run 5 instances of the wordpress service to simulate how the app works in a heavy user traffic setting.

Let's make this happen. We need to edit our docker-compose.env.stage.yaml Compose environment override file.

We'll change the x-k8s.workload.replicas value from 1 to 5.

version: "3.7"
services:
  wordpress:
    x-k8s:
      workload:
        ...
        replicas: 5

Re-sync Kubernetes

When we re-sync Tako, the stage environment's generated manifests will reflect the new number of replicas.

$ tako render
...

Re-deploying the manifests to the stage environment will run 5 wordpress Pods on Kubernetes - meaning 5 wordpress instances.

We now have 3 different target environments,

  • dev will only run a single wordpress instance.
  • local will only run a single wordpress instance.
  • stage will run 5 wordpress instances.

Each is tracked in an easy-to-understand Compose environment override file.

Check the configuration reference if you want to configure other params.

Conclusion

We have successfully moved a wordpress app from a local Docker Compose development flow to a connected multi-environment Kubernetes setup.

Tako did all the heavy lifting, letting us easily iterate on and manage our target environments.

We also have an understanding of the gotchas we can face when moving from Compose to Kubernetes.

All the generated manifests can be tracked in source control and shared in a team context.

Finally, you can find the artefacts for this tutorial here: wordpress-mysql example.