
# Working with multi-cluster deployments

We will need two kind clusters for this part of the tutorial. Note that, when everything runs on a local host, connectivity from k3s cannot reach the target cluster without modifications. It is recommended to use two separate terminal windows (Terminal 1 and Terminal 2), each with the proper kubeconfig configured. Be aware that creating a kind cluster automatically switches the default kubeconfig context, which may point your commands at the wrong cluster unless you set the KUBECONFIG environment variable explicitly, as shown below.
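To guard against kind switching contexts under you, you can pin each terminal to its own kubeconfig up front; a minimal sketch (the file paths are the ones used later in this section, and kind honors KUBECONFIG when creating clusters):

```shell
# Terminal 1: hub cluster only
export KUBECONFIG=/tmp/kind-kubevela-kubeconfig

# Terminal 2: managed cluster only
export KUBECONFIG=/tmp/kind-kubevela-managed-kubeconfig
```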

1. Create another cluster named kubevela-managed, complementing the previous one, kubevela. This new cluster will then be linked to the first so applications can target it. Before performing this operation, open a new terminal window (Terminal 2) for the managed cluster and execute the following.

On Terminal 2

```shell
kind create cluster --config=installation/managed_cluster/kind_managed_cluster_config.yaml --name=kubevela-managed
kubectl --context kind-kubevela-managed wait --for=condition=Ready nodes --all --timeout=600s
helm install --create-namespace -n vela-system kubevela kubevela/vela-core --wait
```
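If you do not have the repository's config file at hand, a single-node config along these lines is enough (a sketch; the repo's file may set additional options):

```yaml
# kind_managed_cluster_config.yaml (illustrative; the repo's version may differ)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
```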
2. Export the kubeconfig.

On Terminal 2

```shell
kind get kubeconfig --name=kubevela-managed > /tmp/kind-kubevela-managed-kubeconfig
```
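A quick sanity check that the exported file works before editing it (at this point it still points at kind's local port mapping, so it is reachable from the host):

```shell
kubectl --kubeconfig /tmp/kind-kubevela-managed-kubeconfig get nodes
```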
3. Get the IP address of the managed cluster.

```shell
$ docker inspect kubevela-managed-control-plane | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "172.19.0.3",
```

Edit the kubeconfig file (/tmp/kind-kubevela-managed-kubeconfig), point the server entry to that IP address, and make sure to change the port number to 6443, since we are accessing the API server directly.

```yaml
    server: https://172.19.0.3:6443
```
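You can make the change by hand or script it; a sed sketch, assuming the file contains a single server: entry (GNU sed shown; on macOS use `sed -i ''`):

```shell
# Rewrite the server entry in place to the container IP and API port 6443.
sed -i 's|server: https://.*|server: https://172.19.0.3:6443|' /tmp/kind-kubevela-managed-kubeconfig
```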
4. Join the cluster.

Make sure you are using the first cluster, kubevela, in its own terminal.

On Terminal 1

```shell
kind get kubeconfig --name=kubevela > /tmp/kind-kubevela-kubeconfig
export KUBECONFIG=/tmp/kind-kubevela-kubeconfig
```

On Terminal 1

```shell
$ vela cluster join /tmp/kind-kubevela-managed-kubeconfig --name managed
I0411 11:52:50.957629   26227 virtual_cluster.go:337] joining cluster managed with version: v1.21.1
Successfully add cluster managed, endpoint: https://172.19.0.3:6443.
```

On Terminal 1

```shell
$ vela cluster list
CLUSTER	ALIAS	TYPE           	ENDPOINT               	ACCEPTED	LABELS
local  	     	Internal       	-                      	true
managed	     	X509Certificate	https://172.19.0.3:6443	true
```
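Behind the scenes, vela cluster join stores the managed cluster's credential as a Secret on the hub; you can list it like this (an assumption based on a default vela-core install, where cluster-gateway labels credentials by type):

```shell
kubectl -n vela-system get secrets -l cluster.core.oam.dev/cluster-credential-type
```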
5. Deploy an app.

On Terminal 1

```shell
vela up -f scenarios/multicluster_app/nginx.multicluster.yaml
```
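The manifest ships with the repo; for reference, an Application roughly like the following produces the status shown below (a sketch: the image, port, and exact properties are assumptions, but the topology policy targeting the managed cluster is the essential part):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: remote-deployment
  namespace: kubecon
spec:
  components:
    - name: nginx-basic
      type: webservice
      properties:
        image: nginx          # assumed image; the repo's manifest may pin a tag
        ports:
          - port: 80
            expose: true
      traits:
        - type: expose
          properties:
            port: [80]
  policies:
    - name: topology-clusters # yields the "deploy-topology-clusters" step below
      type: topology
      properties:
        clusters: ["managed"] # the cluster name used in "vela cluster join"
```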

On Terminal 1

```shell
$ vela status remote-deployment -n kubecon
About:

  Name:      	remote-deployment
  Namespace: 	kubecon
  Created at:	2023-04-11 12:05:19 +0200 CEST
  Status:    	running

Workflow:

  mode: DAG-DAG
  finished: true
  Suspend: false
  Terminated: false
  Steps
  - id: csd8735yvd
    name: deploy-topology-clusters
    type: deploy
    phase: succeeded

Services:

  - Name: nginx-basic
    Cluster: managed  Namespace: kubecon
    Type: webservice
    Healthy Ready:1/1
    Traits:
      ✅ expose: ClusterIP: 10.96.185.237
```

If you check the contents of the remote cluster, you will find the pods there, while the local cluster only holds the Application object.
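From the hub you can also see where each resource landed without switching terminals; recent vela CLI versions offer a tree view (flag availability may vary by version):

```shell
vela status remote-deployment -n kubecon --tree
```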

On Terminal 1

```shell
$ kubectl -n kubecon get app
NAME                COMPONENT     TYPE         PHASE     HEALTHY   STATUS      AGE
remote-deployment   nginx-basic   webservice   running   true      Ready:1/1   3m41s

$ kubectl -n kubecon get pods
No resources found in kubecon namespace.
```

On Terminal 2

```shell
$ kubectl -n kubecon get app
No resources found in kubecon namespace.

$ kubectl -n kubecon get pods
NAME                           READY   STATUS    RESTARTS   AGE
nginx-basic-6689558469-jk6pk   1/1     Running   0          5m9s
```
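When you are done with this part, detach the cluster before deleting it so the hub does not keep a stale credential (a cleanup sketch):

```shell
# On Terminal 1: remove the managed cluster from the hub
vela cluster detach managed

# On Terminal 2 (or anywhere Docker runs): delete the kind cluster
kind delete cluster --name kubevela-managed
```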

Next