Working with Minikube
General steps:
- Install Minikube
- Initialize Minikube cluster
- Initialize Helm
- Install the cloud-harness Python package
- Set up the kubectl command line tool
- Set up the Docker registry
- Run the build script
- Deploy with helm
- Install Argo (temporary)
- Do manual configurations
Steps 5, 6 and 7 differ depending on whether Minikube runs on the same machine as the client used for development and builds, or on a separate one.
A Minikube installation must be accessible and active via the kubectl command line tool.
See also https://kubernetes.io/docs/tasks/tools/install-minikube/
At least 6GB of RAM and 4 processors are needed to run MNP.
To create a new cluster, run
minikube start --memory="6000mb" --cpus=4
To verify the installation, run
kubectl cluster-info
Enable ingress addon:
minikube addons enable ingress
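To confirm the addon is active, you can list the Minikube addons and check that ingress is marked as enabled:
minikube addons list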
First, install Helm on your client machine if it is not already installed. Currently Helm 3.x is supported.
In case you use Helm 2, some commands may change slightly and you will need to run
helm init
for every new cluster. This installs the server side of helm (tiller).
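To check which Helm version is on your client machine (and therefore which set of commands applies), you can run:
helm version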
MNP is built on top of CloudHarness. The deployment process is based on Python 3.7+ scripts. It is recommended to set up a virtual environment first.
With conda:
conda create --name ch python=3.7
conda activate ch
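Alternatively, a minimal sketch using Python's built-in venv module (assuming a python3.7 interpreter is available on your PATH):
python3.7 -m venv ch
source ch/bin/activate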
To install CloudHarness:
git clone https://github.com/MetaCell/cloud-harness.git
cd cloud-harness
pip install -r requirements.txt
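After the installation you can check that the harness-deployment script used below is available on your PATH, for example with:
which harness-deployment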
The easiest way is to install Minikube on the same machine on which we make the build.
kubectl will be available right after the installation.
Running
eval $(minikube docker-env)
points the Docker client to Minikube's Docker daemon, so that images are built directly inside the Minikube environment.
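To verify that the Docker client is now talking to Minikube's daemon, you can list the running containers; the Kubernetes system containers should appear:
docker ps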
Run the following
cd deployment
harness-deployment cloud-harness . -b -l
If Minikube is installed on a different machine, the following procedure allows you to connect kubectl.
- Install kubectl on the client machine
- Copy ~/.minikube from the Minikube server to the client machine (skip cache and machines)
- Copy ~/.kube/config from the Minikube server to the client machine (make a backup of the previous version) and adjust the paths to match the home folder on the client machine
If you don't want to replace the whole content of the configuration, you can copy only the relevant entries of ~/.kube/config from the server to the client, in the clusters, contexts and users sections.
Examples:
On clusters:
- cluster:
    certificate-authority: /home/user/.minikube/ca.crt
    server: https://192.168.99.106:8443
  name: minikube
On context:
- context:
    cluster: minikube
    user: minikube
  name: minikube
On users:
- name: minikube
  user:
    client-certificate: /home/user/.minikube/client.crt
    client-key: /home/user/.minikube/client.key
Set the default context:
current-context: minikube
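For reference, a minimal sketch of how these entries fit together in ~/.kube/config, assuming the default minikube names and the paths shown above:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /home/user/.minikube/ca.crt
    server: https://192.168.99.106:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
users:
- name: minikube
  user:
    client-certificate: /home/user/.minikube/client.crt
    client-key: /home/user/.minikube/client.key
current-context: minikube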
If we are not building from the same machine as the cluster (which is always the case when not using Minikube), we need a way to share the registry.
Procedure to share localhost:5000 from a kube cluster:
In the minikube installation:
minikube addons enable registry
On the machine running the infrastructure-generate script, run
kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep registry | grep -v proxy | \awk '{print $1;}') 5000:5000
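From another terminal, you can check that the forwarded registry is reachable by querying the Docker registry v2 catalog endpoint; before any push it should return an empty repository list:
curl http://localhost:5000/v2/_catalog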
After the registry is forwarded to localhost:5000, we can deploy by specifying the registry: the image names will be adjusted and all images will be pushed to the Minikube registry.
cd deployment
harness-deployment cloud-harness . -b -l -r localhost:5000
Once everything is set up correctly, run helm as in any k8s cluster.
kubectl create ns mnp
kubectl create ns argo-workflows
helm install mnp helm/mnp --namespace mnp
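To follow the deployment from the command line, you can watch the pods come up in the mnp namespace:
kubectl get pods -n mnp --watch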
To visually monitor the installation, run
minikube dashboard
Argo is not yet part of the helm chart (issue https://github.com/MetaCell/mnp/issues/31)
In order to install it in the cluster, run
kubectl create ns argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo/v2.4.3/manifests/install.yaml
kubectl create rolebinding argo-workflows --clusterrole=admin --serviceaccount=argo-workflows:argo-workflows -n argo-workflows
kubectl create rolebinding argo-workflows-default --clusterrole=admin --serviceaccount=mnp:default -n argo-workflows
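To verify that Argo started correctly, check the pods in the argo namespace:
kubectl get pods -n argo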
A user account must be created in order to access the secured MNP applications.
- Log in to the administration console at https://accounts.mnp.metacell.us with user mnp and password metacell
- Add a user (menu Users on the left)
- Set a password for the user (Credentials tab). Set Temporary to off
- On Role Mappings, assign all roles to the user
If you want to use the Kafka manager, configure it as in https://github.com/MetaCell/mnp/wiki/Kafka-deployment#configuration
Applications are deployed in the default domain *.mnp.metacell.us. In order to access the applications from your browser, set up your hosts file as indicated by the infrastructure-generate script output. Example
192.168.99.108 monitoring.mnp.metacell.us events.mnp.metacell.us argo.mnp.metacell.us neuroimaging.mnp.metacell.us workflows.mnp.metacell.us accounts.mnp.metacell.us graph.mnp.metacell.us test.mnp.metacell.us docs.mnp.metacell.us
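The IP to use is typically the address of the Minikube VM, which you can obtain with:
minikube ip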
By default, your browser won't access SSL secured applications. You can either ignore the errors (on Chrome, use the flag --ignore-certificate-errors) or manually trust the certificates located in infrastructure/helm/certs.
Kafka manager to http://localhost:8100
kubectl port-forward --namespace mnp kafka-manager-779c6d6f7b-rvr2x 8100:80
(replace the pod name with the one listed by kubectl get pods -n mnp in your cluster)
Argo UI to http://localhost:8001
kubectl -n argo port-forward deployment/argo-ui 8001:8001
The following allows you to call/test the Kafka instance installed with MNP on Minikube from your local machine. It is useful for testing and debugging an application that listens to or writes to the queue.
Kafka broker to local 9092
kubectl port-forward --namespace mnp $(kubectl get po -n mnp | grep kafka-0 | \awk '{print $1;}') 9092:9092
Also add to your hosts file
127.0.0.1 kafka-0.broker.mnp.svc.cluster.local bootstrap.mnp.svc.cluster.local
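With the port forward and the hosts entries in place, you can sanity-check the connection with any Kafka client. For example, a minimal sketch using kcat (formerly kafkacat), assuming it is installed locally, lists the brokers and topics of the forwarded cluster:
kcat -b localhost:9092 -L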