docs: Fix layout of container Markdown files
This will fix

- code block syntax
- unnecessary empty lines
- typos

Signed-off-by: Dominik Gedon <dominik.gedon@suse.com>
nodeg committed Dec 22, 2023
1 parent ede5762 commit 706be0a
Showing 3 changed files with 61 additions and 48 deletions.
11 changes: 6 additions & 5 deletions containers/BUILDING.md
@@ -28,25 +28,26 @@ module "registry" {
}
```

More information can be found in the [advanced README](https://github.com/uyuni-project/sumaform/blob/master/README_ADVANCED.md) of sumaform.

## Running a local registry (as a container)

```bash
mkdir registry_storage
podman run --publish 5000:5000 -v `pwd`/registry_storage:/var/lib/registry docker.io/library/registry:2
```

The registry will be available on port 5000. If `wget <hostname_or_IP>:5000/` does not work, check that the port is open in your firewall.
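
For example, on hosts where firewalld is the active firewall (an assumption — adapt to your firewall tooling), something like the following should open the port:

```bash
# Open TCP port 5000 for the registry (assumes firewalld is in use).
sudo firewall-cmd --permanent --add-port=5000/tcp
sudo firewall-cmd --reload
```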

If you run into this error while pulling:
```bash
Error processing tar file(exit status 1): there might not be enough IDs available in the namespace (requested 0:42 for /etc/shadow): lchown /etc/shadow: invalid argument
```

Use the following commands to fix the problem:
```bash
sudo touch /etc/sub{u,g}id
sudo usermod --add-subuids 10000-75535 $(whoami)
sudo usermod --add-subgids 10000-75535 $(whoami)
```
43 changes: 28 additions & 15 deletions containers/README.md
@@ -2,9 +2,11 @@

### Installing k3s

On the proxy host machine, install `k3s` without the load balancer and Traefik router:

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik --disable=servicelb --tls-san=<K3S_HOST_FQDN>" sh -
```

### Configuring cluster access

@@ -14,7 +16,9 @@ This file is usually called a `kubeconfig`.
On the cluster server machine, run the following command.
You can optionally transfer the resulting `kubeconfig-k3s.yaml` to your work machine:

```bash
kubectl config view --flatten=true | sed 's/127.0.0.1/<K3S_HOST_FQDN>/' >kubeconfig-k3s.yaml
```

Before calling `helm`, run `export KUBECONFIG=/path/to/kubeconfig-k3s.yaml`.
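
To quickly verify that the kubeconfig works — a sketch assuming `kubectl` is installed on that machine:

```bash
export KUBECONFIG=/path/to/kubeconfig-k3s.yaml
kubectl get nodes   # should list the k3s node
```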

@@ -23,15 +27,19 @@
On a SUSE Linux Enterprise Server machine, the **Containers Module** is required to install `helm`.
Simply run:

```bash
zypper in helm
```

### Installing MetalLB

MetalLB is the LoadBalancer that will expose the proxy pod services to the outside world.
To install it, run:

```bash
helm repo add metallb https://metallb.github.io/metallb
helm install --create-namespace -n metallb metallb metallb/metallb
```

MetalLB still requires a configuration to know the virtual IP address range to be used.
In this example, the virtual IP addresses will range from `192.168.122.240` to `192.168.122.250`, but the range could be smaller since only one address will be used in the end.
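
As a rough sketch of what `metallb-config.yaml` could look like — assuming MetalLB ≥ 0.13 with its CRD-based configuration; the resource names are placeholders:

```bash
# Hypothetical metallb-config.yaml covering the range above (MetalLB >= 0.13 CRDs assumed).
cat > metallb-config.yaml <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: uyuni-pool       # placeholder name
  namespace: metallb
spec:
  addresses:
    - 192.168.122.240-192.168.122.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: uyuni-l2         # placeholder name
  namespace: metallb
EOF
```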
@@ -61,8 +69,9 @@ spec:
Apply this configuration by running:

```bash
kubectl apply -f metallb-config.yaml
```

### Deploying the proxy helm chart

@@ -72,21 +81,25 @@ This example will use `192.168.122.241`.

Create a `custom-values.yaml` file with the following content:

```yaml
services:
  annotations:
    metallb.universe.tf/allow-shared-ip: key-to-share-ip
    metallb.universe.tf/loadBalancerIPs: 192.168.122.241
```

If you want to configure the storage of the volumes used by the proxy pod, define persistent volumes for the following claims (a sketch follows the list below).
Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) for more details.

* default/squid-cache-pv-claim
* default/package-cache-pv-claim
* default/tftp-boot-pv-claim
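
As an illustration, a minimal `hostPath` persistent volume for the first claim might look like the following — the capacity, path, and names are assumptions; repeat for the other claims:

```bash
# Hypothetical PV bound to default/squid-cache-pv-claim; adjust capacity and path.
cat > squid-cache-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: squid-cache-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: squid-cache-pv-claim
  hostPath:
    path: /srv/squid-cache
EOF
kubectl apply -f squid-cache-pv.yaml
```
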
Copy and extract the proxy configuration file and then deploy the proxy helm chart:

```bash
tar xf /path/to/config.tar.gz
helm install uyuni-proxy oci://registry.opensuse.org/uyuni/proxy -f config.yaml -f httpd.yaml -f ssh.yaml -f custom-values.yaml
```

To install the helm chart from SUSE Manager, use the `oci://registry.suse.com/suse/manager/4.3/proxy` URL instead.
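
That is, assuming the same value files as above, the invocation would become:

```bash
helm install uyuni-proxy oci://registry.suse.com/suse/manager/4.3/proxy -f config.yaml -f httpd.yaml -f ssh.yaml -f custom-values.yaml
```
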
55 changes: 27 additions & 28 deletions containers/doc/README.md
@@ -10,14 +10,14 @@ Note that in the case of a k3s or rke2 cluster the kubeconfig will be discovered automatically.
## Podman specific setup

Podman stores its volumes in `/var/lib/containers/storage/volumes/`.
In order to provide custom storage for the volumes, mount disks on that path or even the expected volume path inside it, like `/var/lib/containers/storage/volumes/var-spacewalk`.

**This needs to be performed before installing Uyuni as the volumes will be populated at that time.**
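
For example, to back the `var-spacewalk` volume with a dedicated disk — a sketch, with the device name and filesystem as assumptions:

```bash
# /dev/vdb is a placeholder; use the actual disk and filesystem on your host.
mkfs.xfs /dev/vdb
mkdir -p /var/lib/containers/storage/volumes/var-spacewalk
mount /dev/vdb /var/lib/containers/storage/volumes/var-spacewalk
# Add a matching /etc/fstab entry so the mount survives reboots.
```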

## RKE2 specific setup

RKE2 doesn't provision Persistent Volumes automatically by default.
Either the expected Persistent Volumes need to be created beforehand, or a storage class with automatic provisioning has to be defined before installing Uyuni.
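
One possible way to get automatic provisioning — assuming Rancher's local-path provisioner is acceptable for your cluster — is to install it and mark its storage class as the default:

```bash
# Sketch: install the upstream local-path provisioner and make it the default class.
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```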

## K3s specific setup

@@ -33,7 +33,7 @@ With K3s it is possible to preload the container images and avoid them being fetched.
For this, on a machine with internet access, pull the image using `podman`, `docker` or `skopeo` and save it as a `tar` archive.
For example:

```bash
cert_manager_version=$(helm show chart --repo https://charts.jetstack.io/ cert-manager | grep '^version:' | cut -f 2 -d ' ')
for image in cert-manager-cainjector cert-manager-controller cert-manager-ctl cert-manager-webhook; do
podman pull quay.io/jetstack/$image:$cert_manager_version
  podman save --output $image.tar quay.io/jetstack/$image:$cert_manager_version
done
podman pull registry.opensuse.org/uyuni/server:latest
podman save --output server.tar registry.opensuse.org/uyuni/server:latest
```

or

```bash
cert_manager_version=$(helm show chart --repo https://charts.jetstack.io/ cert-manager | grep '^version:' | cut -f 2 -d ' ')
for image in cert-manager-cainjector cert-manager-controller cert-manager-ctl cert-manager-webhook; do
skopeo copy docker://quay.io/jetstack/$image:$cert_manager_version docker-archive:$image.tar:quay.io/jetstack/$image:$cert_manager_version
done
skopeo copy docker://registry.opensuse.org/uyuni/server:latest docker-archive:server.tar:registry.opensuse.org/uyuni/server:latest
```
If using K3s's default local-path provisioner, also pull the helper pod image for offline use.
Run the following command on the K3s node to find out the name of the image to pull:

```bash
grep helper-pod -A1 /var/lib/rancher/k3s/server/manifests/local-storage.yaml | grep image | sed 's/^ \+image: //'
```

Then, on the machine with internet access, set the `helper_pod_image` variable to the returned output and run the following commands to pull the image:

```bash
podman pull $helper_pod_image
podman save --output helper_pod.tar $helper_pod_image
```

or

```bash
skopeo copy docker://${helper_pod_image} docker-archive:helper-pod.tar:${helper_pod_image}
```

Copy the `cert-manager` and `uyuni/server` helm charts locally:

```bash
helm pull --repo https://charts.jetstack.io --destination . cert-manager
helm pull --destination . oci://registry.opensuse.org/uyuni/server-helm
```

Transfer the resulting `*.tar` images to the K3s node and load them using the following command:

```bash
for archive in *.tar; do
  k3s ctr images import $archive
done
```
@@ -98,7 +98,7 @@ To prevent Helm from pulling the images, pass the `--image-pullPolicy=never` parameter.

To use the downloaded helm charts instead of the default ones, pass `--helm-uyuni-chart=server-helm-2023.10.0.tgz` and `--helm-certmanager-chart=cert-manager-v1.13.1.tgz` or add the following to the `mgradm` configuration file. Of course the versions in the file name need to be adjusted to what you downloaded:

```yaml
helm:
  uyuni:
    chart: server-helm-2023.10.0.tgz
  certmanager:
    chart: cert-manager-v1.13.1.tgz
```
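
Combined into a single hypothetical invocation (the FQDN is a placeholder; flag spellings as given above):

```bash
mgradm install kubernetes uyuni.example.com \
  --image-pullPolicy=never \
  --helm-uyuni-chart=server-helm-2023.10.0.tgz \
  --helm-certmanager-chart=cert-manager-v1.13.1.tgz
```
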
@@ -108,7 +108,7 @@
If using K3s's default local-path provisioner, set the helper-pod `imagePullPolicy` to `Never` in `/var/lib/rancher/k3s/server/manifests/local-storage.yaml` using the following command:

```bash
sed 's/imagePullPolicy: IfNotPresent/imagePullPolicy: Never/' -i /var/lib/rancher/k3s/server/manifests/local-storage.yaml
```

@@ -121,7 +121,7 @@ Instead, use `skopeo` to import the images into a local registry and use that one.
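
A sketch of such an import, assuming a plain-HTTP local registry at `localhost:5000` (registry address and TLS settings are assumptions):

```bash
skopeo copy --dest-tls-verify=false \
  docker://registry.opensuse.org/uyuni/server:latest \
  docker://localhost:5000/uyuni/server:latest
```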

Copy the `cert-manager` and `uyuni/server-helm` helm charts locally:

```bash
helm pull --repo https://charts.jetstack.io --destination . cert-manager
helm pull --destination . oci://registry.opensuse.org/uyuni/server-helm
```
@@ -138,20 +138,20 @@ With Podman it is possible to preload the container images and avoid them being fetched.
For this, on a machine with internet access, pull the image using `podman`, `docker` or `skopeo` and save it as a `tar` archive.
For example:
```bash
podman pull registry.opensuse.org/uyuni/server:latest
podman save --output server.tar registry.opensuse.org/uyuni/server:latest
```

or

```bash
skopeo copy docker://registry.opensuse.org/uyuni/server:latest docker-archive:server.tar:registry.opensuse.org/uyuni/server:latest
```

Transfer the resulting `server.tar` to the server and load it using the following command:

```bash
podman load -i server.tar
```

@@ -172,7 +172,7 @@ This means the DNS records need to be adjusted after the migration to use the new server.

Stop the source services:

```bash
spacewalk-service stop
systemctl stop postgresql
```
@@ -206,13 +206,13 @@ Refer to the installation section for more details on the volumes preparation.
Run the following command to install a new Uyuni server from the source one, after replacing `uyuni.source.fqdn` with the proper source server FQDN.
This command will synchronize all the data from the source server to the new one: this can take time!

```bash
mgradm migrate podman uyuni.source.fqdn
```

or

```bash
mgradm migrate kubernetes uyuni.source.fqdn
```

@@ -225,7 +225,7 @@ For security reasons, using command line parameters to specify passwords should be avoided.

Prepare an `mgradm.yaml` file like the following:

```yaml
db:
  password: MySuperSecretDBPass
cert:
  password: MySuperSecretCAPass
```

To dismiss the email prompts add the `email` and `emailFrom` configurations to the file.

Run one of the following commands to install, after replacing `uyuni.example.com` with the FQDN of the server to install:

```bash
mgradm -c mgradm.yaml install podman uyuni.example.com
```

or

```bash
mgradm -c mgradm.yaml install kubernetes uyuni.example.com
```

@@ -255,19 +255,18 @@ Additional parameters can be passed to Podman using `--podman-arg` parameters.
The `mgradm install` command provides parameters, and matching configuration values, for advanced helm chart configuration.
To pass additional values to the Uyuni helm chart at installation time, use the `--helm-uyuni-values chart-values.yaml` parameter or a configuration like the following:

```yaml
helm:
  uyuni:
    values: chart-values.yaml
```

The path set as value for this configuration is a YAML file passed to the Uyuni Helm chart.
Be aware that some of the values in this file will be overridden by the `mgradm install` parameters.

Note that the Helm chart installs a deployment with one replica.
The pod name is automatically generated by Kubernetes and changes at every start.

# Using Uyuni in containers

To get a shell in the pod run `mgrctl exec -ti bash`.
@@ -278,11 +277,11 @@ Conversely, to copy files from the server use `mgrctl cp server:<remote_path> <local_path>`.

# Developing with the containers

## Deploying code

To deploy Java code on the pod, change to the `java` directory and run:

```bash
ant -f manager-build.xml refresh-branding-jar deploy-restart-container
```

@@ -292,13 +291,13 @@

In order to attach a Java debugger, Uyuni needs to have been installed using the `--debug-java` option to set up the container to listen on JDWP ports and expose them.

The debugger can now be attached to the usual ports (`8003` for Tomcat, `8001` for Taskomatic, and `8002` for the search server) on the host FQDN.
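
For example, with the JDK's command-line debugger (the FQDN is a placeholder):

```bash
jdb -connect com.sun.jdi.SocketAttach:hostname=uyuni.example.com,port=8003
```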

# Uninstalling

To remove everything including the volumes, run the following command:

```bash
mgradm uninstall --purge-volumes
```
