From 706be0aa56c09a8c04954de1213f28a7416429c5 Mon Sep 17 00:00:00 2001
From: Dominik Gedon
Date: Fri, 22 Dec 2023 15:01:15 +0100
Subject: [PATCH] docs: Fix layout of container Markdown files

This will fix
- code block syntax
- unnecessary empty lines
- typos

Signed-off-by: Dominik Gedon
---
 containers/BUILDING.md   | 11 ++++----
 containers/README.md     | 43 ++++++++++++++++++++-----------
 containers/doc/README.md | 55 ++++++++++++++++++++--------------------
 3 files changed, 61 insertions(+), 48 deletions(-)

diff --git a/containers/BUILDING.md b/containers/BUILDING.md
index 7da6292b9fe2..ff7227295f29 100644
--- a/containers/BUILDING.md
+++ b/containers/BUILDING.md
@@ -28,12 +28,11 @@ module "registry" {
 }
 ```

-More information at https://github.com/uyuni-project/sumaform/blob/master/README_ADVANCED.md.
-
+More information can be found in the [advanced README](https://github.com/uyuni-project/sumaform/blob/master/README_ADVANCED.md) of sumaform.

 ## Running a local registry (as a container)

-```
+```bash
 mkdir registry_storage
 podman run --publish 5000:5000 -v `pwd`/registry_storage:/var/lib/registry docker.io/library/registry:2
 ```

 The registry will be available on port 5000.
 If `wget <host>:5000/` does not work, check that the port is open in your firewall.

 In case you run into this error while pulling:
-```
+
+```text
 Error processing tar file(exit status 1): there might not be enough IDs available in the namespace (requested 0:42 for /etc/shadow): lchown /etc/shadow: invalid argument
 ```

 Use the following commands to fix the problem:
-```
+
+```bash
 sudo touch /etc/sub{u,g}id
 sudo usermod --add-subuids 10000-75535 $(whoami)
 sudo usermod --add-subgids 10000-75535 $(whoami)
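Before relying on the registry for builds, it is worth a quick smoke test. The following sketch assumes the registry from the section above is listening on `localhost:5000`; the `busybox` test image and the `--tls-verify=false` flag (needed because this throw-away registry serves plain HTTP) are illustrative choices, not part of the original instructions.

```bash
# Pull a small public image, retag it for the local registry, and push it
podman pull docker.io/library/busybox
podman tag docker.io/library/busybox localhost:5000/busybox
podman push --tls-verify=false localhost:5000/busybox

# The registry API should now list the pushed repository
curl http://localhost:5000/v2/_catalog
```

If the `_catalog` query returns `{"repositories":["busybox"]}`, pushes and pulls against the registry should work.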
diff --git a/containers/README.md b/containers/README.md
index 3fbdeaad0f0f..81cc8868b295 100644
--- a/containers/README.md
+++ b/containers/README.md
@@ -2,9 +2,11 @@

 ### Installing k3s

-On the proxy host machine, install `k3s` without the load balancer and traefik router:
+On the proxy host machine, install `k3s` without the load balancer and Traefik router:

-    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik --disable=servicelb --tls-san=<FQDN>" sh -
+```bash
+curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik --disable=servicelb --tls-san=<FQDN>" sh -
+```

 ### Configuring cluster access

@@ -14,7 +16,9 @@ This file is usually called a `kubeconfig`
 On the cluster server machine, run the following command.
 You can optionally transfer the resulting `kubeconfig-k3s.yaml` to your work machine:

-    kubectl config view --flatten=true | sed 's/127.0.0.1/<FQDN>/' >kubeconfig-k3s.yaml
+```bash
+kubectl config view --flatten=true | sed 's/127.0.0.1/<FQDN>/' >kubeconfig-k3s.yaml
+```

 Before calling `helm`, run `export KUBECONFIG=/path/to/kubeconfig-k3s.yaml`.

@@ -23,15 +27,19 @@
 On a SUSE Linux Enterprise Server machine, the **Containers Module** is required to install `helm`.
 Simply run:

-    zypper in helm
+```bash
+zypper in helm
+```

 ### Installing MetalLB

 MetalLB is the LoadBalancer that will expose the proxy pod services to the outside world.
 To install it, run:

-    helm repo add metallb https://metallb.github.io/metallb
-    helm install --create-namespace -n metallb metallb metallb/metallb
+```bash
+helm repo add metallb https://metallb.github.io/metallb
+helm install --create-namespace -n metallb metallb metallb/metallb
+```

 MetalLB still requires a configuration to know the virtual IP address range to be used.
 In this example, the virtual IP addresses will be from `192.168.122.240` to `192.168.122.250`, but we could lower that range since only one address will be used in the end.

@@ -61,8 +69,9 @@ spec:

 Apply this configuration by running:

-    kubectl apply -f metallb-config.yaml
-
+```bash
+kubectl apply -f metallb-config.yaml
+```

 ### Deploying the proxy helm chart

@@ -72,13 +81,15 @@ This example will use `192.168.122.241`.

 Create a `custom-values.yaml` file with the following content:

-    services:
-      annotations:
-        metallb.universe.tf/allow-shared-ip: key-to-share-ip
-        metallb.universe.tf/loadBalancerIPs: 192.168.122.241
+```yaml
+services:
+  annotations:
+    metallb.universe.tf/allow-shared-ip: key-to-share-ip
+    metallb.universe.tf/loadBalancerIPs: 192.168.122.241
+```

 If you want to configure the storage of the volumes to be used by the proxy pod, define persistent volumes for the following claims.
-Please refer to the [kubernetes documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) for more details.
+Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) for more details.

 * default/squid-cache-pv-claim
 * default/package-cache-pv-claim

@@ -86,7 +97,9 @@
 Copy and extract the proxy configuration file and then deploy the proxy helm chart:

-    tar xf /path/to/config.tar.gz
-    helm install uyuni-proxy oci://registry.opensuse.org/uyuni/proxy -f config.yaml -f httpd.yaml -f ssh.yaml -f custom-values.yaml
+```bash
+tar xf /path/to/config.tar.gz
+helm install uyuni-proxy oci://registry.opensuse.org/uyuni/proxy -f config.yaml -f httpd.yaml -f ssh.yaml -f custom-values.yaml
+```

 To install the helm chart from SUSE Manager, use the `oci://registry.suse.com/suse/manager/4.3/proxy` URL instead.

diff --git a/containers/doc/README.md b/containers/doc/README.md
index 92d7c327b7bd..e2113f5a8440 100644
--- a/containers/doc/README.md
+++ b/containers/doc/README.md
@@ -10,14 +10,14 @@ Note that in the case of a k3s or rke2 cluster the kubeconfig will be discovered

 ## Podman specific setup

 Podman stores its volumes in `/var/lib/containers/storage/volumes/`.
-In order to provide custom storage for the volumes, mount disks on that path oreven the expected volume path inside it like `/var/lib/containers/storage/volumes/var-spacewalk`.
+In order to provide custom storage for the volumes, mount disks on that path or even the expected volume path inside it like `/var/lib/containers/storage/volumes/var-spacewalk`.

 **This needs to be performed before installing Uyuni as the volumes will be populated at that time.**

 ## RKE2 specific setup

 RKE2 doesn't provision Persistent Volumes automatically by default.
-Either the expected Persisten Volumes need to be created before hand or a storage class with automatic provisioning has to be defined before installing Uyuni.
+Either the expected Persistent Volumes need to be created beforehand or a storage class with automatic provisioning has to be defined before installing Uyuni.
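To illustrate the first option, a Persistent Volume can be bound ahead of time to a claim the Uyuni server deployment is expected to make. This is only a sketch: the claim name `var-spacewalk`, the `default` namespace, the size, and the `hostPath` location are all assumptions that have to be adapted to the actual chart and node.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: var-spacewalk-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  # Pre-bind the volume to the expected claim (namespace and name assumed)
  claimRef:
    namespace: default
    name: var-spacewalk
  # Plain directory on the RKE2 node; replace with real storage for production
  hostPath:
    path: /data/var-spacewalk
```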
 ## K3s specific setup

@@ -33,7 +33,7 @@ With K3s it is possible to preload the container images and avoid it to be fetch
 For this, on a machine with internet access, pull the image using `podman`, `docker` or `skopeo` and save it as a `tar` archive.
 For example:

-```
+```bash
 cert_manager_version=$(helm show chart --repo https://charts.jetstack.io/ cert-manager | grep '^version:' | cut -f 2 -d ' ')
 for image in cert-manager-cainjector cert-manager-controller cert-manager-ctl cert-manager-webhook; do
 podman pull quay.io/jetstack/$image:$cert_manager_version

@@ -47,7 +47,7 @@ podman save --output server.tar registry.opensuse.org/uyuni/server:latest
 ```

 or

-```
+```bash
 cert_manager_version=$(helm show chart --repo https://charts.jetstack.io/ cert-manager | grep '^version:' | cut -f 2 -d ' ')
 for image in cert-manager-cainjector cert-manager-controller cert-manager-ctl cert-manager-webhook; do
 skopeo copy docker://quay.io/jetstack/$image:$cert_manager_version docker-archive:$image.tar:quay.io/jetstack/$image:$cert_manager_version

@@ -59,33 +59,33 @@ skopeo copy docker://registry.opensuse.org/uyuni/server:latest docker-archive:se
 If using K3S's default local-path-provisioner, also pull the helper pod image for offline use:
 Run the following command on the K3S node to find out the name of the image to pull:

-```
+```bash
 grep helper-pod -A1 /var/lib/rancher/k3s/server/manifests/local-storage.yaml | grep image | sed 's/^ \+image: //'
 ```

 Then set the `helper_pod_image` variable with the returned output on the machine with internet access and run the next commands to pull the image:

-```
+```bash
 podman pull $helper_pod_image
 podman save --output helper_pod.tar $helper_pod_image
 ```

 or

-```
+```bash
 skopeo copy docker://${helper_pod_image} docker-archive:helper-pod.tar:${helper_pod_image}
 ```

 Copy the `cert-manager` and `uyuni/server` helm charts locally:

-```
+```bash
 helm pull --repo https://charts.jetstack.io --destination . cert-manager
 helm pull --destination . oci://registry.opensuse.org/uyuni/server-helm
 ```

 Transfer the resulting `*.tar` images to the K3s node and load them using the following command:

-```
+```bash
 for archive in `ls *.tar`; do
 k3s ctr images import $archive
 done

@@ -98,7 +98,7 @@ To prevent Helm from pulling the images pass the `--image-pullPolicy=never` para
 To use the downloaded helm charts instead of the default ones, pass `--helm-uyuni-chart=server-helm-2023.10.0.tgz` and `--helm-certmanager-chart=cert-manager-v1.13.1.tgz` or add the following to the `mgradm` configuration file.
 Of course the versions in the file name need to be adjusted to what you downloaded:

-```
+```yaml
 helm:
   uyuni:
     chart: server-helm-2023.10.0.tgz

@@ -108,7 +108,7 @@ helm:
 If using K3S's default local-path-provisioner, set the helper-pod `imagePullPolicy` to `Never` in `/var/lib/rancher/k3s/server/manifests/local-storage.yaml` using the following command:

-```
+```bash
 sed 's/imagePullPolicy: IfNotPresent/imagePullPolicy: Never/' -i /var/lib/rancher/k3s/server/manifests/local-storage.yaml
 ```
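After importing the archives it is worth checking that the images actually landed in containerd before starting the installation. A minimal check could look like this; the `grep` pattern is only an assumption matching the images pulled above.

```bash
# List the image references known to k3s' containerd and keep the preloaded ones
k3s ctr images ls -q | grep -E 'uyuni|cert-manager'
```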
@@ -121,7 +121,7 @@ Instead, use `skopeo` to import the images in a local registry and use this one
 Copy the `cert-manager` and `uyuni/server-helm` helm charts locally:

-```
+```bash
 helm pull --repo https://charts.jetstack.io --destination . cert-manager
 helm pull --destination . oci://registry.opensuse.org/uyuni/server-helm
 ```

@@ -138,20 +138,20 @@ With Podman it is possible to preload the container images and avoid it to be fe
 For this, on a machine with internet access, pull the image using `podman`, `docker` or `skopeo` and save it as a `tar` archive.
 For example:

-```
+```bash
 podman pull registry.opensuse.org/uyuni/server:latest
 podman save --output server.tar registry.opensuse.org/uyuni/server:latest
 ```

 or

-```
+```bash
 skopeo copy docker://registry.opensuse.org/uyuni/server:latest docker-archive:server.tar:registry.opensuse.org/uyuni/server:latest
 ```

 Transfer the resulting `server.tar` to the server and load it using the following command:

-```
+```bash
 podman load -i server.tar
 ```

@@ -172,7 +172,7 @@ This means the DNS records need to be adjusted after the migration to use the ne
 Stop the source services:

-```
+```bash
 spacewalk-service stop
 systemctl stop postgresql
 ```

@@ -206,13 +206,13 @@ Refer to the installation section for more details on the volumes preparation.
 Run the following command to install a new Uyuni server from the source one, after replacing `uyuni.source.fqdn` with the proper source server FQDN:
 This command will synchronize all the data from the source server to the new one: this can take time!

-```
+```bash
 mgradm migrate podman uyuni.source.fqdn
 ```

 or

-```
+```bash
 mgradm migrate kubernetes uyuni.source.fqdn
 ```

@@ -225,7 +225,7 @@ For security reason, using command line parameters to specify passwords should b
 Prepare an `mgradm.yaml` file like the following:

-```
+```yaml
 db:
   password: MySuperSecretDBPass
 cert:

@@ -236,13 +236,13 @@ To dismiss the email prompts add the `email` and `emailFrom` configurations to t
 Run one of the following commands to install, after replacing `uyuni.example.com` with the FQDN of the server to install:

-```
+```bash
 mgradm -c mgradm.yaml install podman uyuni.example.com
 ```

 or

-```
+```bash
 mgradm -c mgradm.yaml install kubernetes uyuni.example.com
 ```

@@ -255,19 +255,18 @@ Additional parameters can be passed to Podman using `--podman-arg` parameters.
 The `mgradm install` command comes with parameters, and thus configuration values, for advanced helm chart configuration.
 To pass additional values to the Uyuni helm chart at installation time, use the `--helm-uyuni-values chart-values.yaml` parameter or a configuration like the following:

-```
+```yaml
 helm:
   uyuni:
     values: chart-values.yaml
 ```

 The path set as value for this configuration is a YAML file passed to the Uyuni Helm chart.
-Be aware that some of the values in this file will be overriden by the `mgradm install` parameters.
+Be aware that some of the values in this file will be overridden by the `mgradm install` parameters.

 Note that the Helm chart installs a deployment with one replica.
 The pod name is automatically generated by Kubernetes and changes at every start.
-
 # Using Uyuni in containers

 To get a shell in the pod run `mgrctl exec -ti bash`.

@@ -278,11 +277,11 @@

 Conversely to copy files from the server use `mgrctl cp server:
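For reference, the installation snippets quoted above can be combined into one `mgradm.yaml`. This is a sketch only: all values are placeholders, the `certmanager` key name is inferred from the `--helm-certmanager-chart` parameter rather than quoted from the documentation, and the chart file names must match the downloaded versions.

```yaml
db:
  password: MySuperSecretDBPass
cert:
  password: MySuperSecretCAPass  # placeholder, not from the original text
# email/emailFrom dismiss the prompts, as named in the installation section
email: admin@example.com
emailFrom: uyuni@example.com
helm:
  uyuni:
    chart: server-helm-2023.10.0.tgz
    values: chart-values.yaml
  certmanager:
    chart: cert-manager-v1.13.1.tgz
```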