
Working with Minikube


General steps:

  1. Install Minikube
  2. Initialize Minikube cluster
  3. Initialize Helm
  4. Install cloud-harness Python package
  5. Set up kubectl command line tool
  6. Set up the Docker registry
  7. Run the build script
  8. Deploy with helm
  9. Install Argo (temporary)
  10. Do manual configurations

Steps 5, 6, and 7 differ depending on whether Minikube runs on the same machine as the client on which we develop and build, or on a different one.

Install Minikube

A Minikube installation must be accessible and active for the kubectl command line tool. See also https://kubernetes.io/docs/tasks/tools/install-minikube/
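For example, on Linux the official binary can be installed as follows (a minimal sketch taken from the official instructions; see the link above for other platforms):

# download the latest Minikube binary for Linux and install it
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube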

Initialize Minikube cluster

At least 6GB of RAM and 4 processors are needed to run MNP.

To create a new cluster, run

minikube start --memory="6000mb" --cpus=4

To verify the installation, run

kubectl cluster-info

Enable ingress addon:

minikube addons enable ingress
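To verify that the ingress controller is up (on recent Minikube versions it runs in the ingress-nginx namespace; on older versions check kube-system):

kubectl get pods -n ingress-nginx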

Initialize Helm

First, install Helm if it is not already installed on your client machine. Currently Helm 3.x is supported.
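A minimal install sketch using the official Helm installer script (assuming a Linux or macOS client):

# fetch and run the official Helm 3 install script, then confirm the version
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version --short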

Helm 2

If you use Helm 2, some commands may differ slightly, and you need to run

helm init

for every new cluster. This installs the server-side component of Helm (Tiller).

Install Cloud Harness

MNP is built on top of CloudHarness. The deployment process is based on Python 3.7+ scripts. It is recommended to set up a virtual environment first.

With conda:

conda create --name ch python=3.7
conda activate ch
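Alternatively, with the standard library venv module (a sketch; any Python 3.7+ virtual environment works):

# create and activate a virtual environment named ch
python3 -m venv ch
source ch/bin/activate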

To install CloudHarness:

git clone https://github.com/MetaCell/cloud-harness.git
cd cloud-harness
pip install -r requirements.txt

Procedure if Minikube and the client are in the same machine

Set up Kubectl

The easiest way is to install Minikube on the same machine where we make the build.

Kubectl will be available right after the installation.

Set up Docker

Running

eval $(minikube docker-env)

configures the current shell so that images are built directly in the Minikube Docker environment.
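To check that the shell is now pointing at Minikube's Docker daemon (a quick sanity check; the reported name should be minikube):

docker info --format '{{.Name}}'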

Run the build script

Run the following

cd deployment
harness-deployment cloud-harness . -b -l

Procedure with Minikube and client/build on different machines

Set up kubectl

If Minikube is installed on a different machine, the following procedure allows kubectl on the client to connect to it.

  1. Install kubectl in the client machine
  2. Copy ~/.minikube from the Minikube server to the client (the cache and machines directories can be skipped)
  3. Copy ~/.kube/config from the Minikube server to the client machine (make a backup of the previous version) and adjust paths to match the home folder on the client machine, as in the sketch below
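A sketch of the copy, assuming the Minikube host is reachable over SSH as minikube-host and that the user names (serveruser, clientuser) differ between the machines; all three names are hypothetical:

cp ~/.kube/config ~/.kube/config.bak              # back up the previous configuration, if any
scp -r minikube-host:~/.minikube ~/.minikube      # certificates and keys (cache and machines can be skipped)
scp minikube-host:~/.kube/config ~/.kube/config
sed -i 's|/home/serveruser|/home/clientuser|g' ~/.kube/config   # adjust paths to the client home folder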

Kube configuration copy

If you don't want to replace the whole content of the configuration, you can copy only the relevant entries of ~/.kube/config from the server to the client, under the clusters, contexts and users sections.

Examples:

On clusters

- cluster:
    certificate-authority: /home/user/.minikube/ca.crt
    server: https://192.168.99.106:8443
  name: minikube

On contexts

- context:
    cluster: minikube
    user: minikube
  name: minikube

On users

- name: minikube
  user:
    client-certificate: /home/user/.minikube/client.crt
    client-key: /home/user/.minikube/client.key

Set default context:

current-context: minikube
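Equivalently, the default context can be set from the command line instead of editing the file:

kubectl config use-context minikube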

Set up the Docker registry

If we are not building on the same machine as the cluster (which is always the case without Minikube), we need a way to share the registry.

Procedure to share localhost:5000 from a kube cluster

In the minikube installation:

minikube addons enable registry

On the machine running the infrastructure-generate script, run

kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep registry | grep -v proxy | awk '{print $1;}') 5000:5000
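To check that the registry is reachable through the forwarded port, query the standard Docker registry HTTP API:

curl http://localhost:5000/v2/_catalog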

Run the build script

After the registry is forwarded to localhost:5000, we can deploy by specifying the registry: the image names will be adjusted and all images will be pushed to the Minikube registry.

cd deployment
harness-deployment cloud-harness . -b -l -r localhost:5000

Deploy with helm

Once everything is set up correctly, run Helm as on any other Kubernetes cluster.

kubectl create ns mnp
kubectl create ns argo-workflows
helm install mnp helm/mnp --namespace mnp
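To follow the deployment from the command line, watch the pods in the mnp namespace until they reach the Running state:

kubectl get pods -n mnp --watch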

To visually monitor the installation, run

minikube dashboard

Install Argo (temporary)

Argo is not yet part of the helm chart (issue https://github.com/MetaCell/mnp/issues/31)

In order to install it in the cluster, run

kubectl create ns argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo/v2.4.3/manifests/install.yaml
kubectl create rolebinding argo-workflows --clusterrole=admin --serviceaccount=argo-workflows:argo-workflows -n argo-workflows
kubectl create rolebinding argo-workflows-default --clusterrole=admin --serviceaccount=mnp:default -n argo-workflows
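To verify the Argo installation, check the pods in the argo namespace:

kubectl get pods -n argo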

Manual configurations

User accounts

A user account must be created to access the MNP secured applications.

  1. Log in to the administration console at https://accounts.mnp.metacell.us with the user mnp:metacell
  2. Add a user (Users menu on the left)
  3. Set a password for the user (Credentials tab), setting Temporary to off
  4. On Role Mappings, assign all roles to the user

Kafka Manager (optional)

If you want to use the Kafka manager, configure it as described in https://github.com/MetaCell/mnp/wiki/Kafka-deployment#configuration

Access and test

Applications are deployed under the default domain *.mnp.metacell.us. In order to access the applications from your browser, set up your hosts file as indicated by the infrastructure-generate script output. Example:

192.168.99.108  monitoring.mnp.metacell.us events.mnp.metacell.us argo.mnp.metacell.us neuroimaging.mnp.metacell.us workflows.mnp.metacell.us accounts.mnp.metacell.us graph.mnp.metacell.us test.mnp.metacell.us docs.mnp.metacell.us
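The cluster IP can be obtained with minikube ip. A sketch to append such an entry to /etc/hosts (using a subset of the host names above; extend as needed):

# append the Minikube IP and the application host names to the hosts file
echo "$(minikube ip)  accounts.mnp.metacell.us argo.mnp.metacell.us workflows.mnp.metacell.us" | sudo tee -a /etc/hosts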

Certificates

By default, your browser won't trust the SSL-secured applications. You can either ignore the errors (on Chrome, use the flag --ignore-certificate-errors) or manually trust the certificates located in infrastructure/helm/certs.

Forward pods and services

Kafka manager to http://localhost:8100

kubectl port-forward --namespace mnp kafka-manager-779c6d6f7b-rvr2x 8100:80

Argo UI to http://localhost:8001

kubectl -n argo port-forward deployment/argo-ui 8001:8001

Locally test Kafka queue calls

The following allows calling and testing Kafka installed on MNP in Minikube from the local machine. It is useful to test and debug an application that listens or writes to the queue.

Kafka broker to local 9092

kubectl port-forward --namespace mnp $(kubectl get po -n mnp | grep kafka-0 | awk '{print $1;}') 9092:9092

Also add to your hosts file

127.0.0.1      kafka-0.broker.mnp.svc.cluster.local bootstrap.mnp.svc.cluster.local
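With the port forwarded and the hosts entry in place, the broker can be queried locally, for example with the kcat (formerly kafkacat) client, assuming it is installed:

# list broker metadata (topics, partitions) through the forwarded port
kcat -b bootstrap.mnp.svc.cluster.local:9092 -L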