diff --git a/adoc/admin-cap-integration.adoc b/adoc/admin-cap-integration.adoc index e010f0edd..52a437b0f 100644 --- a/adoc/admin-cap-integration.adoc +++ b/adoc/admin-cap-integration.adoc @@ -1,15 +1,15 @@ = {suse} {cap} Integration -{productname} offers {cap} for modern application delivery, -this chapter describes the steps required for successful integration. +{productname} offers {cap} for modern application delivery. +This chapter describes the steps required for successful integration. == Prerequisites -Before you start with integrating {cap}, you need to ensure the following: +Before you start integrating {cap}, you need to ensure the following: * The {productname} cluster did not use the `--strict-cap-defaults` option -during the initial setup when you run `skuba cluster init`. -This ensures presence of extra CRI-O capabilities compatible for docker containers. +during the initial setup when you ran `skuba cluster init`. +This ensures the presence of extra CRI-O capabilities compatible with docker containers. For more details refer to the _{productname} Deployment Guide, Transitioning from Docker to CRI-O_. * The {productname} cluster has `swapaccount=1` set on all worker nodes. @@ -19,9 +19,9 @@ sudo sed -i -r 's|^(GRUB_CMDLINE_LINUX_DEFAULT=)\"(.*.)\"|\1\"\2swapaccount=1 \" sudo grub2-mkconfig -o /boot/grub2/grub.cfg sudo systemctl reboot ---- -* The {productname} cluster has no restrictions to {cap} ports. +* The {productname} cluster has no restrictions for {cap} ports. For more details refer to the {cap} documentation: https://www.suse.com/documentation/cloud-application-platform-1/singlehtml/book_cap_guides/book_cap_guides.html . -* `Helm` and `Tiller` installed on node where you run `skuba` and `kubectl` command. +* `Helm` and `Tiller` are installed on the node where you run the `skuba` and `kubectl` commands. + ---- sudo zypper install helm @@ -33,9 +33,9 @@ helm init --tiller-image registry.suse.com/caasp/v4/helm-tiller:{helm_tiller_ver ---- == Procedures -. Create a storage class. For precise steps, refer to <<_RBD-dynamic-persistent-volumes>> +. Create a storage class. For precise steps, refer to <<_RBD-dynamic-persistent-volumes>>. -. Add `Helm` chart repository. +. Add the `Helm` chart repository. + ---- helm repo add suse https://kubernetes-charts.suse.com/ @@ -43,15 +43,15 @@ helm repo add suse https://kubernetes-charts.suse.com/ . Map the {productname} master node external IP address to the `` and `uaa.` on your DNS server. -For testing purposes you can also use `/etc/hosts` +For testing purposes you can also use `/etc/hosts`. + ---- .omg.howdoi.website uaa..omg.howdoi.website ---- -. Create shared value file. This will be used for CAP `uaa`, `cf`, and -`console` charts. Substitute the values enclosed in `< >` for specific values. +. Create a shared value file. This will be used for CAP `uaa`, `cf`, and +`console` charts. Substitute the values enclosed in `< >` with specific values. + ---- cat << *EOF* > custom_values.yaml @@ -87,7 +87,7 @@ uaa-0 1/1 Running 1 21h ... ---- -. Verify uaa OAuth - this should return a JSON Object: +. Verify uaa OAuth -- this should return a JSON object: + ---- curl --insecure https://uaa.:2793/.well-known/openid-configuration @@ -145,6 +145,6 @@ volume-migration-1-s96cc 0/1 Completed 0 54m .... ---- -A successful deployment allows you to access {cap} console via a web browser at +A successful deployment allows you to access {cap} console via a Web browser at https://:8443/login. 
The default username is admin and the password is the `secure_password` you have set in one of the steps above. diff --git a/adoc/admin-centralized-logging.adoc b/adoc/admin-centralized-logging.adoc index 6fde0dfe3..dca66275e 100644 --- a/adoc/admin-centralized-logging.adoc +++ b/adoc/admin-centralized-logging.adoc @@ -1,14 +1,14 @@ = Centralized Logging -Centralized Logging is a means of collecting logs from CaaS Platform for centralized management. +Centralized Logging is a means of collecting logs from the {productname} for centralized management. It forwards system and Kubernetes cluster logs to a specified external logging service, for example, Rsyslog server. -Collecting logs in a central location can serve for audit or debug purposes or to analyze and visually present data. +Collecting logs in a central location can be useful for audit or debug purposes or to analyze and visually present data. -== Types of logs +== Types of Logs -You can log the following groups of services. See the <> +You can log the following groups of services. See <> for more information on how to select and customize the logs. Kubernetes System Components:: @@ -32,10 +32,10 @@ OS Components:: * Zypper * Network (wicked) -Centralized logging is also restricted to the following protocols: UDP, TCP, TCP + TLS, TCP + mTLS. +Centralized Logging is also restricted to the following protocols: UDP, TCP, TCP + TLS, TCP + mTLS. -== Log formats +== Log Formats The two supported syslog message formats are *RFC 5424* and *RFC 3164*. @@ -72,7 +72,7 @@ sudo zypper install helm - As of {productname} {productversion}, Tiller is not part of the {productname} package repository, -so to install the Tiller container image run: +so to install the Tiller container image, run: [source,bash] ---- @@ -137,13 +137,13 @@ for instance after log agents shutdown, restart or in case of an unresponsive re The queue files are located under `/var/lib/{RELEASE_NAME}-log-agent-rsyslog` on every node in the cluster. Queue files remain even after the log agents are deleted. -The buffered queue can be enable/disable with following parameter: +The buffered queue can be enabled/disabled with the following parameter: `*queue.enabled*`, default value = false Setting `queue.enabled` to `false` means that data will be stored in-memory only. Setting the parameter to `true` will set the data store to a mixture of in-memory and in-disk. -Data will then store in memory until the queue is filled up, after which storing is switched to disk. +Data will then be stored in memory until the queue is filled up, after which storing is switched to disk. Enabling the queue also automatically saves the queue to disk at service shutdown. Additional parameters to define queue size and its disk usage are: @@ -151,7 +151,7 @@ Additional parameters to define queue size and its disk usage are: `*queue.size*`, default value = 50000 This option sets the number of messages allowed for the in-memory queue. -This setting effects the Kubernetes cluster logs (`kubernetes-control-plane` and `kubernetes-user-name-space`). +This setting affects the Kubernetes cluster logs (`kubernetes-control-plane` and `kubernetes-user-name-space`). `*queue.maxDiskSpace*`, default value = 2147483648 @@ -184,9 +184,9 @@ Options with empty default values are set as not specified. 
|queue.maxDiskSpace|sets maximum Rsyslog queue disk space in bytes|2147483648 |queue.size|sets Rsyslog queue size in bytes|50000 |resources.limits.cpu|sets CPU limits| -|resources.limits.memory|sets memory limits|512Mi -|resources.requests.cpu|sets CPU for request|100m -|resources.requests.memory|sets memory for request|512Mi +|resources.limits.memory|sets memory limits|512 Mi +|resources.requests.cpu|sets CPU for requests|100m +|resources.requests.memory|sets memory for requests|512 Mi |resumeInterval|specifies time (seconds) after failure before retry is attempted|30 |resumeRetryCount|sets number of retries after first failure before the log is discarded. -1 is unlimited|-1 |server.tls.clientCert|sets TLS client certificate| diff --git a/adoc/admin-cluster-management.adoc b/adoc/admin-cluster-management.adoc index 4b96f34f2..424d40feb 100644 --- a/adoc/admin-cluster-management.adoc +++ b/adoc/admin-cluster-management.adoc @@ -5,11 +5,11 @@ its individual nodes: bootstrapping, joining and removing nodes. For maximum automation and ease {productname} uses the `skuba` tool, which simplifies Kubernetes cluster creation and reconfiguration. -== Bootstrap and initial configuration +== Bootstrap and Initial Configuration Bootstrapping the cluster is the initial process of starting up a minimal viable cluster and joining the first master node. Only the first master node needs to be bootstrapped, -later nodes can simply be joined as described in <>. +later nodes can simply be joined as described in <>. Before bootstrapping any nodes to the cluster, you need to create an initial cluster definition folder (initialize the cluster). @@ -19,7 +19,7 @@ For a step by step guide on how to initialize the cluster, configure updates usi and subsequently bootstrap nodes to it, refer to the _{productname} Deployment Guide_. [[adding_nodes]] -== Adding nodes +== Adding Nodes Once you have added the first master node to the cluster using `skuba node bootstrap`, use the `skuba node join` command to add more nodes. Joining master or worker nodes to @@ -31,9 +31,9 @@ skuba node join --role --user --sudo --target ` is the name of the user that exists on your SLES machine (default: `sles`). - `--target ` is the IP address or FQDN of the relevant machine. - `` is how you decide to name the node you are adding. @@ -51,7 +51,7 @@ To add a new *worker* node, you would run something like: skuba node join --role worker --user sles --sudo --target 10.86.2.164 worker1 [[removing_nodes]] -== Removing nodes +== Removing Nodes === Temporary Removal @@ -67,7 +67,7 @@ https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#use-kubec [IMPORTANT] ==== -Nodes removed with this method can not be added back to the cluster or any other +Nodes removed with this method cannot be added back to the cluster or any other skuba-initiated cluster. You must reinstall the entire node and then join it again to the cluster. ==== @@ -81,14 +81,14 @@ skuba node remove [IMPORTANT] ==== -After the removal of a master node you have to manually delete its entries +After the removal of a master node, you have to manually delete its entries from your load balancer's configuration. ==== -== Reconfiguring nodes +== Reconfiguring Nodes -To reconfigure a node, for example to change the node role from worker to master, you will need to use a combination of commands. +To reconfigure a node, for example to change the node's role from worker to master, you will need to use a combination of commands. . Run `skuba node remove `. . 
Reinstall the node from scratch. -. Run `skuba node join --role --user --sudo --target ` +. Run `skuba node join --role --user --sudo --target `. diff --git a/adoc/admin-crio-proxy.adoc b/adoc/admin-crio-proxy.adoc index bbaadb0fc..ebaa9870d 100644 --- a/adoc/admin-crio-proxy.adoc +++ b/adoc/admin-crio-proxy.adoc @@ -1,20 +1,20 @@ -== Configuring HTTP/HTTPS proxy for {crio} +== Configuring HTTP/HTTPS Proxy for {crio} In some cases you must configure the container runtime to use a proxy to pull container images. To configure this for {crio} you must modify the file `/etc/sysconfig/crio`. -. First define the hostnames that should be used without a proxy (`NO_PROXY`). +. First define the host names that should be used without a proxy (`NO_PROXY`). . Then define which proxies should be used by the HTTP and HTTPS connections (`HTTP_PROXY` and `HTTPS_PROXY`). -. After you have saved the changes restart the container runtime with +. After you have saved the changes, restart the container runtime with + [source,bash] ---- systemctl restart crio ---- -=== Configuration example +=== Configuration Example * Proxy server without authentication + diff --git a/adoc/admin-crio-registries.adoc b/adoc/admin-crio-registries.adoc index e30c1a90a..cebaad0be 100644 --- a/adoc/admin-crio-registries.adoc +++ b/adoc/admin-crio-registries.adoc @@ -1,6 +1,6 @@ -== Configuring container registries for {crio} +== Configuring Container Registries for {crio} -Every registry related configuration needs to be done in the TOML file +Every registry-related configuration needs to be done in the TOML file `/etc/containers/registries.conf`. After any change of this file, CRI-O needs to be restarted. @@ -40,7 +40,7 @@ table to be considered. Only the TOML entry with the longest match is used. As a special case, the `prefix` field can be missing. If so, it defaults to the value of the `location` field. -=== Per-namespace settings +=== Per-namespace Settings - `insecure` (`true` or `false`): By default, container runtimes require TLS when retrieving images from a registry. If `insecure` is set to `true`, @@ -50,21 +50,21 @@ value of the `location` field. - `blocked` (`true` or `false`): If `true`, pulling images with matching names is forbidden. -=== Remapping and mirroring registries +=== Remapping and Mirroring Registries The user-specified image reference is, primarily, a "logical" image name, always used for naming the image. By default, the image reference also directly specifies the registry and repository to use, but the following options can be used to redirect the underlying accesses to different registry servers or locations. This can be used to support configurations with no access to the -internet without having to change `Dockerfile`s, or to add redundancy. +Internet without having to change Dockerfiles, or to add redundancy. ==== `location` Accepts the same format as the `prefix` field, and specifies the physical -location of the `prefix`-rooted namespace. By default, this equal to `prefix` +location of the `prefix`-rooted namespace. By default, this is equal to `prefix` (in which case `prefix` can be omitted and the `\[[registry]]` TOML table can -only specify `location`). +just specify `location`). ===== Example @@ -79,7 +79,7 @@ the `internal-registry-for-example.net/bar/myimage:latest` image. ==== `mirror` -An array of TOML tables specifying (possibly-partial) mirrors for the +An array of TOML tables specifying (possibly partial) mirrors for the `prefix`-rooted namespace. 
The mirrors are attempted in the specified order. The first one that can be @@ -97,7 +97,7 @@ the same semantics as if specified in the `\[[registry]]` TOML table directly: Can be `true` or `false`. If `true`, mirrors will only be used during pulling if the image reference includes a digest. Referencing an image by digest -ensures that the same is always used (whereas referencing an image by a tag may +ensures that the same one is always used (whereas referencing an image by a tag may cause different registries to return different images if the tag mapping is out of sync). diff --git a/adoc/admin-flexvolume.adoc b/adoc/admin-flexvolume.adoc index 22415b121..02f9d3aab 100644 --- a/adoc/admin-flexvolume.adoc +++ b/adoc/admin-flexvolume.adoc @@ -1,17 +1,17 @@ include::entities.adoc[] -= FlexVolume configuration += FlexVolume Configuration FlexVolume drivers are external (out-of-tree) drivers usually provided by a specific vendor. -They are executable files, which are placed in a predefined directory in the cluster on both worker and master nodes. +They are executable files that are placed in a predefined directory in the cluster on both worker and master nodes. Pods interact with FlexVolume drivers through the `flexvolume` in-tree plugin. The vendor driver first has to be installed on each worker and master node in a Kubernetes cluster. -On {productname} {productmajor} the path to install the drivers is `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`. +On {productname} {productmajor}, the path to install the drivers is `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`. If the drivers are deployed with `DaemonSet`, this will require changing -the flexvolume directory path, which is usually stored as an environment -variable, e.g. for rook: +the FlexVolume directory path, which is usually stored as an environment +variable, for example for rook: [source,bash] FLEXVOLUME_DIR_PATH=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ diff --git a/adoc/admin-monitoring-health-checks.adoc b/adoc/admin-monitoring-health-checks.adoc index 5fc9626c4..b9df97f59 100644 --- a/adoc/admin-monitoring-health-checks.adoc +++ b/adoc/admin-monitoring-health-checks.adoc @@ -179,7 +179,7 @@ curl -i http://localhost:10248/healthz ==== Remote Check -There are two ways to fetch endpoints remotely (metrics, healthz etc.). +There are two ways to fetch endpoints remotely (metrics, healthz, etc.). Both methods use HTTPS and a token. *The first method* is executed against the APIServer and mostly used with Prometheus diff --git a/adoc/admin-monitoring-stack.adoc b/adoc/admin-monitoring-stack.adoc index 641cf185f..fc089bbbf 100644 --- a/adoc/admin-monitoring-stack.adoc +++ b/adoc/admin-monitoring-stack.adoc @@ -73,7 +73,7 @@ Or add this entry to /etc/hosts . Create certificates + You will need SSL certificates for the shared resources. -If you are deploying in a pre-defined network environment, please get proper certificates from your network administrator. +If you are deploying in a predefined network environment, please get proper certificates from your network administrator. In this example, the domains are named after the components they represent. `prometheus.example.com`, `prometheus-alertmanager.example.com` and `grafana.example.com` == Installation @@ -411,7 +411,7 @@ The configuration sets one "receiver" to get notified by email when a node meets * Node has memory pressure * Node has disk pressure -The first two are critical because the node can not accept new pods, the last two are just warnings. 
+The first two are critical because the node cannot accept new pods; the last two are just warnings. The Alertmanager configuration can be added to [path]`prometheus-config-values.yaml` by adding the `alertmanagerFiles` section. @@ -681,7 +681,7 @@ You can find a couple of dashboard examples for {productname} in the https://git === Prometheus Jobs -The Prometheus upstream helm chart includes the following pre-defined jobs that will scrapes metrics from these jobs using service discovery. +The Prometheus upstream helm chart includes the following predefined jobs that scrape metrics using service discovery. * prometheus: Get metrics from prometheus server * kubernetes-apiservers: Get metrics from {kube} apiserver diff --git a/adoc/admin-security-ldap.adoc b/adoc/admin-security-ldap.adoc index 3da9bc8e8..e111575e1 100644 --- a/adoc/admin-security-ldap.adoc +++ b/adoc/admin-security-ldap.adoc @@ -21,7 +21,7 @@ SUFFIX="dc=example,dc=org" # Domain Suffix DATA_DIR=/389_ds_data # Directory Server Data on Host Machine to Mount ---- -. Execute the following `docker` command to deploy the 389 Directory Server in same terminal. +. Execute the following `docker` command to deploy the 389 Directory Server in the same terminal. This will start a non-TLS port (389) and a TLS port (636) together with an automatically self-signed certificate and key. + @@ -49,7 +49,7 @@ docker stop 389-ds . Copy the external certificate `` to a mounted data directory `/config/`. -. Run bash in entrypoint to access the container with the mounted data directory from previous step: +. Run bash in entrypoint to access the container with the mounted data directory from the previous step: + ---- docker run --rm -it \ @@ -98,18 +98,18 @@ certutil -F -k 6d4f98efe45328ff852b7e2ca5a24ad46844163a -d /etc/dirsrv/slapd-`: + -Replace by the input key. +Replace with the input key. + -Replace by the input certificate. +Replace with the input certificate. + -Replace by the output certificate in pkcs12. +Replace with the output certificate in pkcs12. + ---- openssl pkcs12 -export -inkey -in -out -nodes -name "Server-Cert" pk12util -i -d /etc/dirsrv/slapd- ---- -. Import the rootCA into `/etc/dirsrv/slapd-` +. Import the rootCA into `/etc/dirsrv/slapd-` . + ---- certutil -A -d /etc/dirsrv/slapd- -n "CA certificate" -t "CT,," -i @@ -117,8 +117,8 @@ certutil -A -d /etc/dirsrv/slapd- -n "CA certificate" -t "CT,," - . Exit the container. -. Execute `docker` command to run the 389 Directory Server with a mounted data -directory from previous step: +. Execute the `docker` command to run the 389 Directory Server with a mounted data +directory from the previous step: + ---- docker run -d \ @@ -135,10 +135,10 @@ docker run -d \ == Examples of Usage In both directories, `user-regular1` and `user-regular2` are members of the `k8s-users` group, -`user-admin` is a member of the `k8s-admins` group. +and `user-admin` is a member of the `k8s-admins` group. -In Active Directory, `user-bind` is a simple user who is a member of the default Domain Users group. -Hence, we can use him to authenticate, because has read-only access to Active Directory. +In Active Directory, `user-bind` is a simple user that is a member of the default Domain Users group. +Hence, we can use it to authenticate, because it has read-only access to Active Directory. The mail attribute is used to create the RBAC rules. === 389 Directory Server: @@ -147,7 +147,6 @@ 
Example LDIF configuration to create user `user-regular1` using an LDAP command: ==== -# user-regular1, Users, example.org dn: cn=user-regular1,ou=Users,dc=example,dc=org cn: User Regular1 @@ -164,7 +163,6 @@ Example LDIF configuration to create user `user-regular1` using an LDAP command: Example LDIF configuration to create user `user-regular2` using an LDAP command: ==== -# user-regular2, Users, example.org dn: cn=user-regular2,ou=Users,dc=example,dc=org cn: User Regular2 @@ -181,7 +179,6 @@ Example LDIF configuration to create user `user-regular2` using an LDAP command: Example LDIF configuration to create user `user-admin` using an LDAP command: ==== -# user-admin, Users, example.org dn: cn=user-admin,ou=Users,dc=example,dc=org cn: User Admin @@ -198,7 +195,6 @@ Example LDIF configuration to create user `user-admin` using an LDAP command: Example LDIF configuration to create group `k8s-users` using an LDAP command: ==== -# k8s-users, Groups, example.org dn: cn=k8s-users,ou=Groups,dc=example,dc=org gidNumber: 500 @@ -210,7 +206,6 @@ Example LDIF configuration to create group `k8s-users` using an LDAP command: Example LDIF configuration to create group `k8s-admins` using an LDAP command: ==== -# k8s-admins, Groups, example.org dn: cn=k8s-admins,ou=Groups,dc=example,dc=org gidNumber: 100 @@ -219,7 +214,7 @@ Example LDIF configuration to create group `k8s-admins` using an LDAP command: memberUid: user-admin ==== -==== Example 2: Dex LDAP TLS Connector configuration (`addons/dex/dex.yaml`) +==== Example 2: Dex LDAP TLS Connector Configuration (`addons/dex/dex.yaml`) Dex connector template configured to use 389-DS: ---- connectors: @@ -313,7 +308,6 @@ connectors: Example LDIF configuration to create user `user-regular1` using an LDAP command: ==== -# user-regular1, Users, example.org dn: cn=user-regular1,ou=Users,dc=example,dc=org objectClass: top @@ -335,7 +329,6 @@ Example LDIF configuration to create user `user-regular1` using an LDAP command: Example LDIF configuration to create user `user-regular2` using an LDAP command: ==== -# user-regular2, Users, example.org dn: cn=user-regular2,ou=Users,dc=example,dc=org objectClass: top @@ -357,7 +350,6 @@ Example LDIF configuration to create user `user-regular2` using an LDAP command: Example LDIF configuration to create user `user-bind` using an LDAP command: ==== -# user-bind, Users, example.org dn: cn=user-bind,ou=Users,dc=example,dc=org objectClass: top @@ -378,7 +370,6 @@ Example LDIF configuration to create user `user-bind` using an LDAP command: Example LDIF configuration to create user `user-admin` using an LDAP command: ==== -# user-admin, Users, example.org dn: cn=user-admin,ou=Users,dc=example,dc=org objectClass: top @@ -398,9 +389,8 @@ Example LDIF configuration to create user `user-admin` using an LDAP command: mail: user-admin@example.org ==== -Example LDIF configuration to create group `k8s-users` suing an LDAP command: +Example LDIF configuration to create group `k8s-users` using an LDAP command: ==== -# k8s-users, Groups, example.org dn: cn=k8s-users,ou=Groups,dc=example,dc=org objectClass: top @@ -416,7 +406,6 @@ Example LDIF configuration to create group `k8s-users` suing an LDAP command: Example LDIF configuration to create group `k8s-admins` using an LDAP command: ==== -# k8s-admins, Groups, example.org dn: cn=k8s-admins,ou=Groups,dc=example,dc=org objectClass: top @@ -429,7 +418,7 @@ Example LDIF configuration to create group `k8s-admins` using an LDAP command: objectCategory: 
cn=Group,cn=Schema,cn=Configuration,dc=example,dc=org ==== -==== Example 2: Dex Active Directory TLS Connector configuration (addons/dex/dex.yaml) +==== Example 2: Dex Active Directory TLS Connector Configuration (addons/dex/dex.yaml) Dex connector template configured to use Active Directory: ---- connectors: diff --git a/adoc/admin-security-psp.adoc b/adoc/admin-security-psp.adoc index 9266360c8..6d425c405 100644 --- a/adoc/admin-security-psp.adoc +++ b/adoc/admin-security-psp.adoc @@ -10,12 +10,12 @@ measure implemented by {kube} to control which specifications a pod must meet to be allowed to run in the cluster. They control various aspects of execution of pods and interactions with other parts of the software infrastructure. -You can find more general information about {psp} in the link:https://kubernetes.io/docs/concepts/policy/pod-security-policy/[Kubernetes Docs] +You can find more general information about {psp} in the link:https://kubernetes.io/docs/concepts/policy/pod-security-policy/[Kubernetes Docs]. User access to the cluster is controlled via "Role Based Access Control (RBAC)". -Each {psp} is associated to one or more users or +Each {psp} is associated with one or more users or service accounts so they are allowed to launch pods with the associated -specifications. The policies are associated to users or service accounts via +specifications. The policies are associated with users or service accounts via role bindings. [WARNING] ==== @@ -53,9 +53,9 @@ The policy definitions are embedded in the link:https://github.com/SUSE/skuba/bl During the bootstrap with `skuba`, the policy files will be stored on your workstation in the cluster definition folder under `addons/psp`. These policy files -will be installed automatically to all cluster nodes. +will be installed automatically for all cluster nodes. -The filenames of the files created are: +The file names of the files created are: * `podsecuritypolicy-unprivileged.yaml` + @@ -168,7 +168,7 @@ subjects: == Creating a PodSecurityPolicy In order to properly secure and run your {kube} workloads you must configure -RBAC rules for your desired users and create {psp} that enable your respective +RBAC rules for your desired users, create a {psp} adequate for your respective workloads and then link the user accounts to the {psp} using (Cluster)RoleBinding. https://kubernetes.io/docs/concepts/policy/pod-security-policy/ diff --git a/adoc/admin-security-rbac.adoc b/adoc/admin-security-rbac.adoc index 1866aae52..d3de9f476 100644 --- a/adoc/admin-security-rbac.adoc +++ b/adoc/admin-security-rbac.adoc @@ -4,9 +4,12 @@ RBAC uses the `rbac.authorization.k8s.io` API group to drive authorization decisions, allowing administrators to dynamically configure policies through the {kube} API. -The authentication components are deployed with {productname} installation. Administrators can update LDAP identity providers before or after platform deployment. -After your {productname} deployment, administrators can then use {kube} RBAC to design user or group authorization. -Users can access with a web browser or command line to do the authentication and self-configure `kubectl` to access authorized resources. +The authentication components are deployed as part of the {productname} installation. +Administrators can update LDAP identity providers before or after platform deployment. +After deploying {productname}, administrators can use {kube} RBAC to design +user or group authorizations. 
+Users can access with a Web browser or command line to do the authentication and +self-configure `kubectl` to access authorized resources. == Authentication Flow @@ -14,15 +17,15 @@ Authentication is composed of: * *Dex* (https://github.com/dexidp/dex) is an identity provider service (idP) that uses OIDC (Open ID Connect: https://openid.net/connect/) -to drive authentication for client application. +to drive authentication for client applications. It acts as a portal to defer authentication to provider through connected -identity providers(connectors). +identity providers (connectors). * *Client*: . Web browser: *Gangway* (https://github.com/heptiolabs/gangway): - a web application that enables authentication flow for your {productname}. - User can login, authorize access, download `kubeconfig` or self-configure `kubectl`. + a Web application that enables authentication flow for your {productname}. + The user can login, authorize access, download `kubeconfig` or self-configure `kubectl`. . Command line: `skuba auth login`, a CLI application that enables authentication - flow for your {productname}. User can login, authorize access, and got `kubeconfig`. + flow for your {productname}. The user can log in, authorize access, and get `kubeconfig`. For RBAC, administrators can use `kubectl` to create corresponding `RoleBinding` or `ClusterRoleBinding` for a user or group to limit resource access. @@ -56,14 +59,14 @@ image::oidc_flow_cli.png[] . User requests access through `skuba auth login` with the Dex server URL, username and password. -. Dex uses received username and password to login and approve the access +. Dex uses received username and password to log in and approve the access request to the connected identity providers (connectors). . Dex continues with the OIDC authentication flow on behalf of the user and creates/updates data to the {kube} CRDs. -. Dex responds the ID token and refresh token to `skuba auth login`. +. Dex returns the ID token and refresh token to `skuba auth login`. . `skuba auth login` generates the kubeconfig file `kubeconf.txt`. . User uses `kubectl` to connect the {kube} API server. -. {kube} CRDs validate the {kube} API server request and returns a response. +. {kube} CRDs validate the {kube} API server request and return a response. . The `kubectl` connects to the authorized {kube} resources through {kube} API server. == RBAC Operations @@ -82,12 +85,12 @@ using the `ClusterRole` `admin` you would run the following: $ kubectl create rolebinding admin --clusterrole=admin --user= --user= --group= ---- -==== Update The Authentication Connector +==== Update the Authentication Connector Administrators can update the authentication connector settings after {productname} deployment as follows: -. Open the `dex` configmap in `/addons/dex/dex.yaml` +. Open the `dex` configmap in `/addons/dex/dex.yaml` . . Adapt ConfigMap by adding LDAP configuration to the connector section of the `config.yaml` data. For detailed configuration of the LDAP connector, refer to Dex documentation: https://github.com/dexidp/dex/blob/v2.16.0/Documentation/connectors/ldap.md. @@ -138,7 +141,7 @@ kubectl replace --force -f /addons/dex/dex.yaml ==== Setting up `kubectl` -===== Web +===== In the Web Browser . Go to the login page at `+https://:32001+` in your browser. . Click "Sign In". @@ -146,11 +149,11 @@ kubectl replace --force -f /addons/dex/dex.yaml . Enter the login credentials. . Download `kubeconfig` or self-configure `kubectl` with the provided setup instructions. 
-===== CLI +===== Using the CLI . Use `skuba auth login` with Dex server URL `+https://:32000+`, login username and password. -. The kubeconfig `kubeconf.txt` generated locally. +. The kubeconfig `kubeconf.txt` is generated locally. ==== Access {kube} Resources diff --git a/adoc/admin-security-role-management.adoc b/adoc/admin-security-role-management.adoc index 65e58330e..d8403e1cc 100644 --- a/adoc/admin-security-role-management.adoc +++ b/adoc/admin-security-role-management.adoc @@ -2,7 +2,7 @@ = Role Management {productname} -uses _role-based access control_ authorization for {kube} +uses _role-based access control_ authorization for {kube}. . Roles define, which _subjects_ (users or groups) can use which _verbs_ (operations) on which __resources__. The following sections provide an overview of the resources, verbs and how to create roles. @@ -22,7 +22,7 @@ delete:: Delete resources. deletecollection:: -Delete a collection of a resource (can only be invoked using the {kube} API). +Delete a collection of a resource (can only be invoked using the {kube} API). get:: Display individual resource. @@ -34,8 +34,7 @@ patch:: Update an API object in place. proxy:: -Allows running {kubectl} -in a mode where it acts as a reverse proxy. +Allows running `kubectl` in a mode where it acts as a reverse proxy. update:: Update fields of a resource, for example annotations or labels. @@ -175,20 +174,20 @@ This example shows how to bind a group to a defined role. kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: - name:`ROLE_BINDING_NAME` <1> - namespace:`NAMESPACE` <2> + name: ROLE_BINDING_NAME <1> + namespace: NAMESPACE <2> subjects: - kind: Group - name:`LDAP_GROUP_NAME` <3> + name: LDAP_GROUP_NAME <3> apiGroup: rbac.authorization.k8s.io roleRef: - kind: Role - name:`ROLE_NAME` <4> + name: ROLE_NAME <4> apiGroup: rbac.authorization.k8s.io ---- <1> Defines a name for this new role binding. -<2> Name of the namespace for which the binding applies. +<2> Name of the namespace to which the binding applies. <3> Name of the LDAP group to which this binding applies. @@ -198,7 +197,7 @@ roleRef: ==== [[_ex.admin.security.groups.cluster.role]] -.Binding a Group to a CluseterRole +.Binding a Group to a Cluster Role ==== This example shows how to bind a group to a defined cluster role. @@ -206,21 +205,21 @@ kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: - name: `CLUSTER_ROLE_BINDING_NAME` <1> - namespace:`NAMESPACE` <2> + name: CLUSTER_ROLE_BINDING_NAME <1> + namespace: NAMESPACE <2> subjects: kind: Group - name: `CLUSTER_GROUP_NAME` <3> + name: CLUSTER_GROUP_NAME <3> apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole - name: `CLUSER_ROLE_NAME` <4> + name: CLUSTER_ROLE_NAME <4> apiGroup: rbac.authorization.k8s.io ---- <1> Defines a name for this new cluster role binding. -<2> Name of the namespace for which the cluster binding applies. +<2> Name of the namespace to which the cluster binding applies. <3> Name of the cluster group to which this binding applies. diff --git a/adoc/admin-security-user-group-management.adoc b/adoc/admin-security-user-group-management.adoc index 8e2a3420f..c2b4baafc 100644 --- a/adoc/admin-security-user-group-management.adoc +++ b/adoc/admin-security-user-group-management.adoc @@ -3,11 +3,11 @@ You can use standard LDAP administration tools for managing organizations, groups and users remotely. 
To do so, install the `openldap2-client` package on a computer in your network and make sure that the computer can connect to the LDAP server -(Ex: 389 Directory Server) on port `389` or secure port `636`. +(389 Directory Server) on port `389` or secure port `636`. == Adding a New Organizational Unit -. To add a new Organizational unit, create an LDIF file (`create_ou_groups.ldif`) like this: +. To add a new organizational unit, create an LDIF file (`create_ou_groups.ldif`) like this: + ---- dn: ou=OU_NAME,dc=example,dc=org @@ -18,7 +18,7 @@ ou: OU_NAME ---- + * Substitute OU_NAME with an organizational unit name of your choice. -. Run `ladapmodify` to add the new Organizational unit: +. Run `ldapmodify` to add the new organizational unit: + ---- LDAP_PROTOCOL=ldap # ldap, ldaps @@ -33,14 +33,14 @@ ldapmodify -v -H :// -D ":// -D ":// -D "" -f -w ---- -== Removing a Group from Organizational unit +== Removing a Group from an Organizational Unit . To remove a group from an organizational unit, create an LDIF file (`delete_ou_groups.ldif`) like this: + @@ -93,8 +93,8 @@ changetype: delete ---- + * GROUP: Group name -* OU_NAME: Organizational unit name -. Execute `ladapmodify` to remove the group from the Organizational unit: +* OU_NAME: organizational unit name +. Execute `ldapmodify` to remove the group from the organizational unit: + ---- LDAP_PROTOCOL=ldap # ldap, ldaps @@ -107,7 +107,7 @@ ROOT_PASSWORD= # Admin Password ldapmodify -v -H :// -D "" -f -w ---- -=== Adding A New User +=== Adding a New User . To add a new user, create an LDIF file (`new_user.ldif`) like this: + ---- dn: uid=USERID,ou=OU_NAME,dc=example,dc=org @@ -125,7 +125,7 @@ mail: E-MAIL_ADDRESS ---- + * USERID: User ID (UID) of the new user. This value must be a unique number. -* OU_NAME: Organizational unit name +* OU_NAME: organizational unit name * PASSWORD_HASH: The user's hashed password. Use `/usr/sbin/slappasswd` to generate the hash. * FIRST_NAME: The user's first name * SURNAME: The user's last name @@ -146,9 +146,9 @@ ldapadd -v -H :// -D ---- -=== Showing user attributes +=== Showing User Attributes -. To show the attributes of a user, use the `ldapsearch` command. +. To show the attributes of a user, use the `ldapsearch` command: + ---- LDAP_PROTOCOL=ldap # ldap, ldaps @@ -163,7 +163,7 @@ ldapsearch -v -x -H :// -b "" -D "" -w ---- -=== Modifying a user +=== Modifying a User The following procedure shows how to modify a user in the LDAP server. See the LDIF files for examples of how to change a user password and add a user to the @@ -181,9 +181,9 @@ userPassword: NEW_PASSWORD ---- + * USERID: The desired user's ID -* OU_NAME: Organizational unit name +* OU_NAME: organizational unit name * NEW_PASSWORD: The user's new hashed password. -. Add the user to `Administrators` group. +. Add the user to the `Administrators` group: + ---- dn: cn=Administrators,ou=Groups,dc=example,dc=org @@ -192,7 +192,7 @@ add: uniqueMember uniqueMember: uid=USERID,ou=OU_NAME,dc=example,dc=org ---- * USERID: Substitute with the user's ID. -* OU_NAME: Organizational unit name +* OU_NAME: organizational unit name . Execute `ldapmodify` to change user attributes: + ---- @@ -207,7 +207,7 @@ ldapmodify -v -H :// -D "" -f -w ---- -=== Deleting a user +=== Deleting a User To delete a user from the LDAP server, follow these steps: @@ -219,7 +219,7 @@ changetype: delete ---- + * USERID: Substitute this with the user's ID. -* OU_NAME: Organizational unit name +* OU_NAME: organizational unit name . 
Run `ldapmodify` to delete the user: + ---- diff --git a/adoc/admin-ses-integration.adoc b/adoc/admin-ses-integration.adoc index 6a6db9467..35b9e1989 100644 --- a/adoc/admin-ses-integration.adoc +++ b/adoc/admin-ses-integration.adoc @@ -1,8 +1,8 @@ = {ses} Integration -{productname} offers {ses} as a storage solution for its containers, -this chapter describes the steps required for successful integration. +{productname} offers {ses} as a storage solution for its containers. +This chapter describes the steps required for successful integration. == Prerequisites @@ -17,7 +17,7 @@ For more details refer to the {ses} documentation: https://www.suse.com/documentation/suse-enterprise-storage/. * The {ses} cluster has a pool with RADOS Block Device (RBD) enabled. -== Procedures according to type of integration +== Procedures According to Type of Integration The steps will differ in small details depending on whether you are using RBD or CephFS and dynamic or static persistent volumes. @@ -54,7 +54,7 @@ data: ---- . Create an image in the SES cluster. To do that, run the following command on the {master_node}, -replacing `SIZE` with the size of the image, or example `2G`, +replacing `SIZE` with the size of the image, for example `2G`, and `YOUR_VOLUME` with the name of the image. + @@ -69,7 +69,7 @@ rbd create -s SIZE YOUR_VOLUME `POD_NAME` and `CONTAINER_NAME` for a {kube} container and pod name of your choice. `IMAGE_NAME` is the name you decide to give your container image, for example "opensuse/leap". `RBD_POOL` is the RBD pool name, - please refer to the RBD documentation on instructions how to create the RBD pool. + please refer to the RBD documentation for instructions on how to create the RBD pool: https://docs.ceph.com/docs/mimic/rbd/rados-rbd-cmds/#create-a-block-device-pool + @@ -150,14 +150,15 @@ data: key: "$(echo CEPH_SECRET | base64)" *EOF* ---- -. Create an image in the SES cluster. On the {master_node} , run the following command: +. Create an image in the SES cluster. On the {master_node}, run the following command: + ---- rbd create -s SIZE YOUR_VOLUME ---- + -Replace `SIZE` with the size of the image, for example `2G` (2 Gigabyte), and `YOUR_VOLUME` is the name of the image. +Replace `SIZE` with the size of the image, for example `2G` (2 gigabytes), +and `YOUR_VOLUME` with the name of the image. . Create the persistent volume: + @@ -214,7 +215,7 @@ Use the _gibibit_ notation, for example ``2Gi``. NOTE: This persistent volume claim does not explicitly list the volume. Persistent volume claims work by picking any volume that meets the criteria from a pool. In this case we specified any volume with a size of 2G or larger. -When the claim is removed the recycling policy will be followed. +When the claim is removed, the recycling policy will be followed. + . Create a pod that uses the persistent volume claim: @@ -245,7 +246,7 @@ spec: ---- kubectl get pod ---- -. Once pod is running, check the volume: +. Once the pod is running, check the volume: + ---- @@ -308,14 +309,14 @@ allow rwx pool=RBD_POOL" -o ceph.client.user.keyring + Replace `RBD_POOL` with the RBD pool name. -. For a dynamic persisten volume, you will also need a user key. +. For a dynamic persistent volume, you will also need a user key. Retrieve the Ceph *user* secret by running: + ---- ceph auth get-key client.user ---- or directly from `/etc/ceph/ceph.client.user.keyring` -. Apply the configuration that includes the Ceph secret by with the `kubectl apply` command, +. 
Apply the configuration that includes the Ceph secret by running the `kubectl apply` command, replacing `CEPH_SECRET` with your own Ceph secret. + @@ -402,7 +403,7 @@ spec: ---- kubectl get pod ---- -. Once pod is running, check the volume: +. Once the pod is running, check the volume: + ---- @@ -428,7 +429,7 @@ The RBD is not deleted. === Using CephFS in a Pod -The procedure below describes steps to take when you need to use a CephFS in a Pod. +The procedure below describes steps to take when you need to use a CephFS in a pod. .Procedure: Using CephFS In A Pod @@ -614,7 +615,7 @@ spec: ---- kubectl get pod ---- -. Once pod is running, check the volume by running: +. Once the pod is running, check the volume by running: + ---- diff --git a/adoc/admin-troubleshooting.adoc b/adoc/admin-troubleshooting.adoc index d5e9d2496..488c76aaa 100644 --- a/adoc/admin-troubleshooting.adoc +++ b/adoc/admin-troubleshooting.adoc @@ -8,13 +8,13 @@ Additionally, {suse} support collects problems and their solutions online at lin == The `supportconfig` Tool As a first step for any troubleshooting/debugging effort, you need to find out -where the problem is caused. For this purpose we ship the `supportconfig` tool +the location of the cause of the problem. For this purpose we ship the `supportconfig` tool and plugin with {productname}. With a simple command you can collect and compile a variety of details about your cluster to enable {suse} support to pinpoint the potential cause of an issue. In case of problems, a detailed system report can be created with the -`supportconfig` command line tool. It will collect information about the system such as: +`supportconfig` command line tool. It will collect information about the system, such as: * Current Kernel version * Hardware information @@ -28,7 +28,7 @@ A full list of of the data collected by `supportconfig` can be found under https://github.com/SUSE/supportutils-plugin-suse-caasp/blob/master/README.md. ==== -To collect all relevant logs run the `supportconfig` command on all the master +To collect all relevant logs, run the `supportconfig` command on all the master and worker nodes individually. [source,bash] diff --git a/adoc/admin-updates.adoc b/adoc/admin-updates.adoc index 30ac67f05..a18230f79 100644 --- a/adoc/admin-updates.adoc +++ b/adoc/admin-updates.adoc @@ -2,7 +2,7 @@ === Updating Kubernetes Components -The update of {kube} components is handled via `skuba`. +Updating of {kube} components is handled via `skuba`. ==== Generating and Overview of Available Updates @@ -151,21 +151,21 @@ skuba cluster upgrade plan [TIP] ==== -The upgrade via `skuba node upgrade apply` will +The upgrade via `skuba node upgrade apply` will: -* Upgrade the containerized control plane -* Upgrade the rest of the {kube} system stack (`kubelet`, `cri-o`) -* Restart services +* upgrade the containerized control plane. +* upgrade the rest of the {kube} system stack (`kubelet`, `cri-o`). +* restart services. ==== -Base Operating System updates are handled by `skuba-update`, which works together +Base operating system updates are handled by `skuba-update`, which works together with the `kured` reboot daemon. ==== Disabling Automatic Updates Nodes added to a cluster have the service `skuba-update.timer`, which is responsible for running automatic updates, activated by default. -This service is calling `skuba-update` utility and it can be configured with the `/etc/sysconfig/skuba-update` file. 
-To disable the automatic updates on a node simply `ssh` to it and then configure the skuba-update service by editing `/etc/sysconfig/skuba-update` file with the following runtime options: +This service calls the `skuba-update` utility and it can be configured with the `/etc/sysconfig/skuba-update` file. +To disable the automatic updates on a node, simply `ssh` to it and then configure the skuba-update service by editing the `/etc/sysconfig/skuba-update` file with the following runtime options: ---- ## Path : System/Management @@ -180,7 +180,7 @@ SKUBA_UPDATE_OPTIONS="--annotate-only" [TIP] It is not required to reload or restart `skuba-update.timer`. -The `--annotate-only` flag makes `skuba-update` utility to only check if updates are available and annotate the node accordingly. +The `--annotate-only` flag makes the `skuba-update` utility only check if updates are available and annotate the node accordingly. When this flag is activated no updates are installed at all. ==== Completely Disabling Reboots diff --git a/adoc/architecture-description.adoc b/adoc/architecture-description.adoc index faba766d3..68fa3f8a4 100644 --- a/adoc/architecture-description.adoc +++ b/adoc/architecture-description.adoc @@ -314,7 +314,7 @@ If only one load balancer is deployed this creates a single point of failure. For a complete HA solution, more than one load balancer is required. [IMPORTANT] -If your environment only contains one load balancer it can not be considered +If your environment only contains one load balancer it cannot be considered highly available or fault tolerant. === Testing / POC @@ -338,7 +338,7 @@ The default scenario requires 8 nodes: ** Persistent IP addresses on all nodes. ** NTP server provided on the host network. ** DNS entry that resolves to the load balancer VIP. -** LDAP server or OIDC provider (Active Directory, GitLab, GitHub etc.) +** LDAP server or OIDC provider (Active Directory, GitLab, GitHub, etc.) * (Optional) "Infrastructure node" ** LDAP server if LDAP integration is desired and your organization diff --git a/adoc/book_admin.adoc b/adoc/book_admin.adoc index b836be6a4..b0df6430f 100644 --- a/adoc/book_admin.adoc +++ b/adoc/book_admin.adoc @@ -87,8 +87,8 @@ include::admin-troubleshooting.adoc[Troubleshooting,leveloffset=+1] // Glossary //include::common_glossary.adoc[Glossary] -// Changelog -include::common_changelog.adoc[Documentation Changelog] +// Change Log +// include::common_changelog.adoc[Documentation Change Log] //GNU Licenses include::common_legal.adoc[Legal] diff --git a/adoc/book_architecture.adoc b/adoc/book_architecture.adoc index a15906477..927708eaa 100644 --- a/adoc/book_architecture.adoc +++ b/adoc/book_architecture.adoc @@ -17,5 +17,5 @@ include::common_disclaimer.adoc[Disclaimer] include::architecture-description.adoc[Architecture Description] -// Changelog -include::common_changelog.adoc[Documentation Changelog] +// Change Log +// include::common_changelog.adoc[Documentation Change Log] diff --git a/adoc/book_deployment.adoc b/adoc/book_deployment.adoc index d608ac73a..10fc8026b 100644 --- a/adoc/book_deployment.adoc +++ b/adoc/book_deployment.adoc @@ -1,7 +1,7 @@ include::attributes.adoc[] include::entities.adoc[] -= {productname} {productversion} Deployment Guide: This guide describes the deployment for {productname} {productversion}. += {productname} {productversion} Deployment Guide: This guide describes deployment for {productname} {productversion}. 
Markus Napp; Nora Kořánová :sectnums: :doctype: book @@ -39,12 +39,12 @@ include::deployment-sysreqs.adoc[System Requirements] [IMPORTANT] ==== -If you are installing over one of the previous milestones you must remove the +If you are installing over one of the previous milestones, you must remove the RPM repository. {productname} is now distributed as an extension for {sle} and no longer requires the separate repository. If you do not remove the repository before installation, there might be conflicts -with the package dependencies that can render you installation nonfunctional. +with the package dependencies that could render your installation nonfunctional. ==== include::deployment-preparation.adoc[Deployment Preparations, leveloffset=+1] @@ -63,8 +63,8 @@ include::deployment-bootstrap.adoc[Bootstrapping,leveloffset=0] include::deployment-cilium.adoc[Cilium] -// Changelog -include::common_changelog.adoc[Documentation Changelog] +// Change Log +// include::common_changelog.adoc[Documentation Change Log] //GNU Licenses include::common_legal.adoc[Legal] diff --git a/adoc/common_changelog.adoc b/adoc/common_changelog.adoc index b552570e8..8c01bb1ae 100644 --- a/adoc/common_changelog.adoc +++ b/adoc/common_changelog.adoc @@ -1,4 +1,4 @@ -== Documentation Changelog +== Documentation Change Log // === Month 2019 // @@ -15,7 +15,7 @@ |Date |Commit |Description |2019-05-24 |link:https://github.com/SUSE/doc-caasp/commit/8c60cb3393da11a811e75b16eb4e07f934d31bdc[8c60cb3] |Rename disclaimer file |2019-05-24 |link:https://github.com/SUSE/doc-caasp/commit/4b0618820ec148dd07f08bb719a08b586c5ca062[4b06188] |Update readme file with info about repo structure -|2019-05-24 |link:https://github.com/SUSE/doc-caasp/commit/ce040008077dee25ed9e100d061b7a1cc17d7934[ce04000] |Restructure terraform examples in deployment guides +|2019-05-24 |link:https://github.com/SUSE/doc-caasp/commit/ce040008077dee25ed9e100d061b7a1cc17d7934[ce04000] |Restructure Terraform examples in deployment guides |2019-05-24 |link:https://github.com/SUSE/doc-caasp/commit/421e9b7acbdb87029c11304ae8be2c1e7005b550[421e9b7] |Reword customer beta registration info |2019-05-24 |link:https://github.com/SUSE/doc-caasp/commit/0c924365f11f95d8801fa93b4331102d641823cd[0c92436] |Add clarification comment to attributes file |2019-05-24 |link:https://github.com/SUSE/doc-caasp/commit/e80147926a3213130ff9efd14cf2fd7b525c44c3[e801479] |Add release type logic for registration code instructions @@ -30,17 +30,17 @@ |2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/874f48f6cb1dbfb936eab547594f9560d532d1c4[874f48f] |Moved registration code section to preparations |2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/81c88482d072815e76257d3a3f163f70995e9a02[81c8848] |Added deployment preparations section |2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/b13e4f4f68a2e9e066e5f457114f9939a7d02c34[b13e4f4] |Minor fixes for openstack deployment -|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/e6f032bc4e481c06dbec42eb4b5f7b5a9bf4f215[e6f032b] |Update openstack/default terraform example -|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/146831fd4bbf24146b84ebce09ce8c3d217cec1d[146831f] |Add terraform entity +|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/e6f032bc4e481c06dbec42eb4b5f7b5a9bf4f215[e6f032b] |Update openstack/default Terraform example +|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/146831fd4bbf24146b84ebce09ce8c3d217cec1d[146831f] |Add Terraform entity |2019-05-23 
|link:https://github.com/SUSE/doc-caasp/commit/a229b54fcda4139de36f6d4d2be014d597057bb1[a229b54] |Update openstack deployment documentation |2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/2d4331657cfe18339e254004d0182a2da0509a97[2d43316] |Merge branch 'master' into update_openstack -|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/e925dad547cd588c4a1ae98d3c65b9b7601291fb[e925dad] |Rework terraform example for ecp -|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/d0bc6054869481d671f2ee9730117badac44be1c[d0bc605] |Update vmware terraform example -|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/c0bd15830d0dc16d16555100094ed2ec312374bc[c0bd158] |Move terraform example into separate file +|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/e925dad547cd588c4a1ae98d3c65b9b7601291fb[e925dad] |Rework Terraform example for ecp +|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/d0bc6054869481d671f2ee9730117badac44be1c[d0bc605] |Update vmware Terraform example +|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/c0bd15830d0dc16d16555100094ed2ec312374bc[c0bd158] |Move Terraform example into separate file |2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/28c7aaa41886ca59ae1fdf01d5f657cc4a04d6f1[28c7aaa] |Optimized screenshots |2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/b1c10f960adeff7068e6d798dc0f619c3dc44f0b[b1c10f9] |Moved product versions and media locations to new attributes file |2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/4f826bee18742526897c0b6e0ae05cba1c295812[4f826be] |Fix prompt replacements in typography include -|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/582de628f6db2f2bbaa9b72bba1d9f0aa9f70939[582de62] |Fixed includes for new filenames +|2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/582de628f6db2f2bbaa9b72bba1d9f0aa9f70939[582de62] |Fixed includes for new file names |2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/9a7357bd46aa33309f407d0598a6835b1a86cec3[9a7357b] |Make disclaimer react to release type |2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/75b70bfda3484458bc2565dc7276c7cc735b1be6[75b70bf] |Add attributes file |2019-05-23 |link:https://github.com/SUSE/doc-caasp/commit/04256b3304e0042f45460524f34b38aaa747296d[04256b3] |Rework architecture content to separate book file @@ -123,12 +123,13 @@ [cols="20%,15%,65%",options="header"] |=== +|Date |Commit |Description |2019-04-30 |link:https://github.com/SUSE/doc-caasp/commit/66ef4a03b83895300e01e95032b309872cffb537[66ef4a0] |Add minor clarificiations for Openstack deployment |2019-04-29 |link:https://github.com/SUSE/doc-caasp/commit/321a9fb0cd8d45f428a8f6525cc795418c6d3c5b[321a9fb] |Add software version entities |2019-04-29 |link:https://github.com/SUSE/doc-caasp/commit/a90d61fde96cf913c6a9372f7032b8dddb23467d[a90d61f] |Fix command for cluster status check |2019-04-29 |link:https://github.com/SUSE/doc-caasp/commit/72129cb47543f84ef458ce7102099d8a52228a56[72129cb] |Moved generalized command up in list |2019-04-29 |link:https://github.com/SUSE/doc-caasp/commit/2e617071a948e3ddc269906831a03c1a6bfd29eb[2e61707] |add a hint about the example tf file -|2019-04-29 |link:https://github.com/SUSE/doc-caasp/commit/6b18f861de05f639e0ee419113f692569ee7af9a[6b18f86] |add hint about terraform output +|2019-04-29 |link:https://github.com/SUSE/doc-caasp/commit/6b18f861de05f639e0ee419113f692569ee7af9a[6b18f86] |add hint about Terraform output |2019-04-29 
|link:https://github.com/SUSE/doc-caasp/commit/61bc4b6306eda5f3b0151ec345c07e98e9f46434[61bc4b6] |changed `kubectl caasp cluster status` to `caaspctl cluster status` as the default way to check cluster status |2019-04-17 |link:https://github.com/SUSE/doc-caasp/commit/13666672e251799b663c6abbcd4fdcefcb235c30[1366667] |Add first architecture description skeleton brainstorm |2019-04-15 |link:https://github.com/SUSE/doc-caasp/commit/3f681ff89acd738fc1281a875d6670324127ae52[3f681ff] |Provide more information about ssh-agent @@ -167,6 +168,7 @@ [cols="20%,15%,65%",options="header"] |=== +|Date |Commit |Description |2019-03-27 |link:https://github.com/SUSE/doc-caasp/commit/cce9c5d16b7284496eb04c7c51fc4f7992f9d97e[cce9c5d] |Merge branch 'develop' into adoc |2019-03-26 |link:https://github.com/SUSE/doc-caasp/commit/6741b221d4f5affca53711eaf2462a1dff88f5da[6741b22] |Add clarification for transactional update reboot parameter |2019-03-26 |link:https://github.com/SUSE/doc-caasp/commit/607043d00e77b492fe2396adacfeb5cb95e18b19[607043d] |Move old files to different directory diff --git a/adoc/common_copyright.adoc b/adoc/common_copyright.adoc index d60c0b0a0..d12318a1c 100644 --- a/adoc/common_copyright.adoc +++ b/adoc/common_copyright.adoc @@ -17,7 +17,7 @@ trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols ({reg} , {trade} - etc.) denote trademarks of {suse} +, etc.) denote trademarks of {suse} and its affiliates. Asterisks (*) denote third-party trademarks. diff --git a/adoc/common_copyright_gfdl.adoc b/adoc/common_copyright_gfdl.adoc index aa5277687..20a2ca1c3 100644 --- a/adoc/common_copyright_gfdl.adoc +++ b/adoc/common_copyright_gfdl.adoc @@ -17,7 +17,7 @@ trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols ({reg} , {trade} - etc.) denote trademarks of {suse} +, etc.) denote trademarks of {suse} and its affiliates. Asterisks (*) denote third-party trademarks. diff --git a/adoc/common_copyright_quick.adoc b/adoc/common_copyright_quick.adoc index 1d592733e..6fb79c178 100644 --- a/adoc/common_copyright_quick.adoc +++ b/adoc/common_copyright_quick.adoc @@ -14,7 +14,7 @@ A copy of the license version 1.2 is included in the section entitled "`GNU Free For {suse} trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. -Trademark symbols ((R), (TM) etc.) denote trademarks of {suse} and its affiliates. +Trademark symbols ((R), (TM), etc.) denote trademarks of {suse} and its affiliates. Asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. diff --git a/adoc/common_disclaimer.adoc b/adoc/common_disclaimer.adoc index 35d441656..d2282bf68 100644 --- a/adoc/common_disclaimer.adoc +++ b/adoc/common_disclaimer.adoc @@ -7,7 +7,7 @@ endif::[] [WARNING] ==== -This is a work in progress document. +This document is a work in progress. The content in this document is subject to change without notice. 
==== diff --git a/adoc/common_intro_feedback.adoc b/adoc/common_intro_feedback.adoc index c99aeda96..718c36c5c 100644 --- a/adoc/common_intro_feedback.adoc +++ b/adoc/common_intro_feedback.adoc @@ -2,19 +2,18 @@ :imagesdir: ./images -Several feedback channels are available: +Several feedback channels are available: Bugs and Enhancement Requests:: -For services and support options available for your product, refer to http://www.suse.com/support/. +For services and support options available for your product, refer to http://www.suse.com/support/. + -To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click menu:Create New[] -. +To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click menu:Create New[]. User Comments:: We want to hear your comments about and suggestions for this manual and the other documentation included with this product. -Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there. +Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there. Mail:: For feedback on the documentation of this product, you can also send a mail to ``doc-team@suse.com``. Make sure to include the document title, the product version and the publication date of the documentation. -To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL). +To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL). diff --git a/adoc/deployment-aws.adoc b/adoc/deployment-aws.adoc index ddff7c8c6..bddf6ce3b 100644 --- a/adoc/deployment-aws.adoc +++ b/adoc/deployment-aws.adoc @@ -10,12 +10,12 @@ top of those. . Download the AWS credentials .. Log in to the AWS console. -.. Click on your username in the upper right hand corner to reveal the dropdown menu. +.. Click on your username in the upper right hand corner to reveal the drop-down menu. .. Click on My Security Credentials. .. Click Create Access Key on the Security Credentials tab. .. Note down the newly created _Access_ and _Secret_ keys. -=== Deploying the cluster nodes +=== Deploying the Cluster Nodes . On the management machine, find the {tf} template files for AWS in `/usr/share/caasp/terraform/aws` (which was installed as part of the management @@ -33,7 +33,7 @@ cd ~/caasp/deployment/aws/ ---- mv terraform.tfvars.example terraform.tfvars ---- -. Edit the `terraform.tfvars` file and add modify the following variables: +. Edit the `terraform.tfvars` file and add/modify the following variables: + include::deployment-terraform-example.adoc[tags=tf_aws] + @@ -53,7 +53,7 @@ Substitute `CAASP_REGISTRATION_CODE` for the code from <>. caasp_registry_code = "CAASP_REGISTRATION_CODE" ---- + -This is required so all the deployed nodes can automatically register to {scc} and retrieve packages. +This is required so all the deployed nodes can automatically register with {scc} and retrieve packages. . Now you can deploy the nodes by running: + ---- @@ -83,12 +83,12 @@ The IP addresses of the generated machines will be displayed in the terraform output during the cluster node deployment. You need these IP addresses to deploy {productname} to the cluster. 
-If you need to find an IP addresses later on you can run `terraform output` within -the directory you performed the deployment from `~/my-cluster` directory or +If you need to find an IP address later on, you can run `terraform output` within +the directory you performed the deployment from (the `~/my-cluster` directory) or perform the following steps: . Log in to the AWS Console and click on menu:Load Balancers[]. Find the one with the -string you entered in the {tf} configuration above e.g. `testing-lb`. +string you entered in the {tf} configuration above, for example `testing-lb`. . Note down the "DNS name". + . Now click on menu:Instances[]. diff --git a/adoc/deployment-bare-metal.adoc b/adoc/deployment-bare-metal.adoc index d35fac323..845319d1b 100644 --- a/adoc/deployment-bare-metal.adoc +++ b/adoc/deployment-bare-metal.adoc @@ -3,7 +3,7 @@ include::entities.adoc[] [[deployment_bare_metal]] == Deployment on Bare Metal -=== Environment description +=== Environment Description [IMPORTANT] ==== @@ -15,12 +15,12 @@ solution that directs access to the master nodes. [NOTE] ==== The {ay} file found in `skuba` is a template. It has the base requirements. -This {ay} file should act as a guide and should be updated with your companies standards. +This {ay} file should act as a guide and should be updated with your company's standards. ==== [NOTE] ==== -To account for hardware/platform specific setup criteria (legacy BIOS vs. (U)EFI, drive partitioning, networking etc.), +To account for hardware/platform-specific setup criteria (legacy BIOS vs. (U)EFI, drive partitioning, networking, etc.), you must adjust the {ay} file to your needs according to the requirements. Refer to the official {ay} documentation for more information: link:https://www.suse.com/documentation/sles-15/singlehtml/book_autoyast/book_autoyast.html[{ay} Guide]. @@ -28,19 +28,19 @@ ==== Hardware Prerequisites -Deployment with {ay} will require a minimum *disk size of 40GB*. -10GB out of that total space will be reserved for container images without any workloads, -for the root partition (30GB) and the EFI system partition (200MB). +Deployment with {ay} will require a minimum *disk size of 40 GB*. +Of that total space, 10 GB will be reserved for container images without any workloads, +30 GB for the root partition, and 200 MB for the EFI system partition. -=== {ay} preparation +=== {ay} Preparation . On the management machine, get an example {ay} file from `/usr/share/caasp/autoyast/bare-metal/autoyast.xml`, -(which was installed as part of the management pattern (`sudo zypper in -t pattern SUSE-CaaSP-Management`) earlier on). +(which was installed earlier on as part of the management pattern: `sudo zypper in -t pattern SUSE-CaaSP-Management`). . Copy the file to a suitable location to modify it. Name the file `autoyast.xml`. -. Modify the following places in the {ay} (and any additional places that are required by your specific configuration/environment). +. Modify the following places in the {ay} file (and any additional places as required by your specific configuration/environment): .. `` + -Change the pre-filled value to your organization's NTP server. Provide multiple servers if possible by add new `` subentries. +Change the pre-filled value to your organization's NTP server. Provide multiple servers if possible by adding new `` subentries. .. `sles` + Insert your authorized key in the placeholder field. 
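The authorized key placeholder above expects the public part of the SSH key pair prepared on the management machine. As a small, hedged illustration (the path is an assumption; a default RSA key at `~/.ssh/id_rsa.pub` is used here, so adjust it to the key you actually loaded into your `ssh-agent`), the value to paste can be printed with:

----
# Print the public key of the management machine's key pair;
# paste the full single-line output into the placeholder for the `sles` user.
cat ~/.ssh/id_rsa.pub
----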
@@ -63,7 +63,7 @@ Insert the email address and {productname} registration code in the placeholder .. `` + Insert the {productname} registration code in the placeholder field. This enables the {productname} extension module. -Update the {ay} file with your registration keys and company best practices and hardware configurations. +Update the {ay} file with your registration keys and your company's best practices and hardware configurations. + [NOTE] ==== @@ -73,21 +73,21 @@ Your {productname} registration key can be used to both activate {sle} 15 SP1 an + Refer to the official {ay} documentation for more information: link:https://www.suse.com/documentation/sles-15/singlehtml/book_autoyast/book_autoyast.html[{ay} Guide]. + -. Host the {ay} files on a web server reachable inside the network your are installing the cluster in. +. Host the {ay} files on a Web server reachable inside the network you are installing the cluster in. -=== Provisioning The Cluster Nodes +=== Provisioning the Cluster Nodes Once the {ay} file is available in the network that the machines will be configured in, you can start deploying machines. The default production scenario consists of 8 nodes: -* 2 Load Balancers -* 3 Masters -* 3 Workers +* 2 load balancers +* 3 masters +* 3 workers Depending on the type of load balancer you wish to use, you need to deploy at least 6 machines to serve as cluster nodes and provide 2 load balancers from the environment. -The load balancer must point at the machines that are dedicated to be used as `master` nodes in the future cluster. +The load balancer must point at the machines that are assigned to be used as `master` nodes in the future cluster. [TIP] If you do not wish to use infrastructure load balancers, please deploy additional machines and refer to <>. @@ -96,13 +96,13 @@ Install {sle} 15 SP1 from your preferred medium and follow the steps for link:ht Provide `autoyast=https://[webserver/path/to/autoyast.xml]` during the {sle} 15 SP1 installation. -==== {sls} installation +==== {sls} Installation [NOTE] ==== -Use AutoYaST and ensure to use a staged frozen patchlevel via RMT/SUSE Manager to ensure 100% reproducible setup. +Use AutoYaST and make sure to use a staged frozen patchlevel via RMT/SUSE Manager to ensure a 100% reproducible setup. link:https://www.suse.com/documentation/sles-15/singlehtml/book_rmt/book_rmt.html#cha.rmt_client[RMT Guide] ==== -Once the machines have been installed using the {ay} file, you are now ready to bootstrap your cluster +Once the machines have been installed using the {ay} file, you are now ready to bootstrap your cluster: link:deployment-bootstrap.adoc[bootstrap guide]. diff --git a/adoc/deployment-bootstrap.adoc b/adoc/deployment-bootstrap.adoc index 2787af4c4..50f818de1 100644 --- a/adoc/deployment-bootstrap.adoc +++ b/adoc/deployment-bootstrap.adoc @@ -2,7 +2,7 @@ == Bootstrapping the Cluster Bootstrapping the cluster is the initial process of starting up the cluster -and defining which of the nodes are masters and which workers. For maximum automation of this process +and defining which of the nodes are masters and which are workers. For maximum automation of this process, {productname} uses the `skuba` package. 
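As a rough orientation before diving into the details: the overall flow that the following sections describe step by step can be sketched as below. This is a hedged outline only, not a replacement for the documented procedure; the values in angle brackets are placeholders (assumptions), and `skuba node bootstrap` is assumed here to take the same connection flags (`--user`, `--sudo`, `--target`) as the `skuba node join` command shown later in this chapter.

----
# Hedged sketch of the overall skuba flow; see the following sections
# for the authoritative steps and flags. <...> values are placeholders.
skuba cluster init --control-plane <LOAD_BALANCER_IP_OR_FQDN> my-cluster
cd my-cluster
skuba node bootstrap --user sles --sudo --target <MASTER_IP> master-one
skuba node join --role worker --user sles --sudo --target <WORKER_IP> worker-one
skuba cluster status
----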
=== Preparation @@ -26,7 +26,7 @@ zypper in -t pattern SUSE-CaaSP-Management [TIP] ==== Example deployment configuration files for each deployment scenario are installed -under `/usr/share/caasp/terraform/`, or in case of the Bare metal deployment: +under `/usr/share/caasp/terraform/`, or, in the case of the bare metal deployment: `/usr/share/caasp/autoyast/`. ==== @@ -47,7 +47,7 @@ be specified by the following flags: [IMPORTANT] ==== -You must configure `sudo` for the user to be able authenticate without password. +You must configure `sudo` for the user to be able to authenticate without a password. Replace `USERNAME` with the user you created during installation. As root, run: ---- @@ -68,7 +68,7 @@ skuba cluster init --control-plane my-cluster [IMPORTANT] ==== -The IP/FQDN must be reachable by every node of the cluster and therefore 127.0.0.1/localhost can't be used. +The IP/FQDN must be reachable by every node of the cluster and therefore 127.0.0.1/localhost cannot be used. ==== ==== Transitioning from Docker to CRI-O @@ -98,18 +98,18 @@ way to revert this modification. Please choose wisely. ==== -==== Cluster configuration +==== Cluster Configuration Before bootstrapping the cluster, it is advisable to perform some additional configuration. -===== Enabling cloud provider integration +===== Enabling Cloud Provider Integration Enable cloud provider integration to take advantage of the underlying cloud platforms and automatically manage resources like the Load Balancer, Nodes (Instances), Network Routes and Storage services. If you want to enable cloud provider integration with different cloud platforms, -initialize the cluster with flag `--cloud-provider `. +initialize the cluster with the flag `--cloud-provider `. The only currently available option is `openstack`, but more options are planned: @@ -132,8 +132,8 @@ The file `my-cluster/cloud/openstack/openstack.conf` must not be freely accessib Please remember to set proper file permissions for it, for example `600`. ==== -===== Example OpenStack cloud provider configuration -You can find those required parameters in OpenStack RC File v3. +===== Example OpenStack Cloud Provider Configuration +You can find the required parameters in OpenStack RC File v3. ==== [Global] auth-url= // <1> @@ -166,7 +166,7 @@ under Project > Access and Security > API Access > Credentials. a multi-region OpenStack cloud. A region is a general division of an OpenStack deployment. <7> (optional) Used to specify the path to your custom CA file. <8> (optional) Used to override automatic version detection. -Valid values are `v1` or `v2`. Where no value is provided automatic detection +Valid values are `v1` or `v2`. Where no value is provided, automatic detection will select the highest supported version exposed by the underlying OpenStack cloud. <9> (optional) Used to specify the ID of the subnet you want to create your load balancer on. Can be found at Network > Networks. Click on the respective network to get its subnets. @@ -181,19 +181,19 @@ The value must be less than the delay value. Ensure that you specify a valid tim <14> (optional) Number of permissible ping failures before changing the load balancer member’s status to INACTIVE. Must be a number between 1 and 10. <15> (optional) Used to override automatic version detection. -Valid values are v1, v2, v3 and auto. When auto is specified automatic detection +Valid values are v1, v2, v3 and auto. 
When auto is specified, automatic detection will select the highest supported version exposed by the underlying OpenStack cloud. -<16> (optional) Influence availability zone use when attaching Cinder volumes. +<16> (optional) Influences availability zone use when attaching Cinder volumes. When Nova and Cinder have different availability zones, this should be set to `true`. -After setting options in `openstack.conf` file, please proceed with bootstrapping procedure <>. +After setting options in the `openstack.conf` file, please proceed with the bootstrapping procedure <>. [IMPORTANT] ==== -When the cloud provider integration is enabled, it's very important to bootstrap and join nodes with the same node names that they have inside `Openstack`, as -this name will be used by the `Openstack` cloud controller manager to reconcile node metadata. +When cloud provider integration is enabled, it's very important to bootstrap and join nodes with the same node names that they have inside `OpenStack`, as +these names will be used by the `OpenStack` cloud controller manager to reconcile node metadata. ==== ===== Integrate External LDAP TLS @@ -222,23 +222,23 @@ this name will be used by the `Openstack` cloud controller manager to reconcile nameAttr: cn // <12> ==== <1> Host name of LDAP server reachable from the cluster. -<2> The port on which to connect to the host (e.g. StartTLS: `389`, TLS: `636`). -<3> LDAP server base64 encoded root CA certificate file (e.g. `cat | base64 | awk '{print}' ORS='' && echo`) -<4> Bind DN of user that can do user searches. -<5> Password of the user. -<6> Label of LDAP attribute users will enter to identify themselves (e.g. `username`). -<7> BaseDN where users are located (e.g. `ou=Users,dc=example,dc=org`). -<8> Filter to specify type of user objects (e.g. "(objectClass=person)"). -<9> Attribute users will enter to identify themselves (e.g. mail). -<10> Attribute used to identify user within the system (e.g. DN). +<2> The port on which to connect to the host (for example StartTLS: `389`, TLS: `636`). +<3> LDAP server base64 encoded root CA certificate file (for example `cat | base64 | awk '{print}' ORS='' && echo`) +<4> Bind DN of user that can do user searches. +<5> Password of the user. +<6> Label of LDAP attribute users will enter to identify themselves (for example `username`). +<7> BaseDN where users are located (for example `ou=Users,dc=example,dc=org`). +<8> Filter to specify type of user objects (for example "(objectClass=person)"). +<9> Attribute users will enter to identify themselves (for example mail). +<10> Attribute used to identify user within the system (for example DN). <11> Attribute containing the user's email. -<12> Attribute used as username used within OIDC tokens. +<12> Attribute used as username within OIDC tokens. -Besides the LDAP connector you can also setup other connectors. +Besides the LDAP connector, you can also set up other connectors. For additional connectors, refer to the available connector configurations in the Dex repository: https://github.com/dexidp/dex/tree/v2.16.0/Documentation/connectors. -===== Prevent Nodes Running Special Workloads From Being Rebooted +===== Prevent Nodes Running Special Workloads from Being Rebooted Some nodes might run specially treated workloads (pods). @@ -279,7 +279,7 @@ kubectl apply -f my-cluster/addons/kured/kured.yaml This will restart all `kured` pods with the additional configuration flags. 
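If you want to double-check that the restarted `kured` pods actually picked up the additional configuration flags, a quick inspection along the following lines can help. This is a hedged sketch: it assumes the add-on is deployed as a DaemonSet named `kured` in the `kube-system` namespace and labeled `name=kured`, as in the default manifest under `my-cluster/addons/kured/`; adjust the namespace and label if your manifest differs.

----
# List the kured pods and check that they have been restarted recently
kubectl -n kube-system get pods -l name=kured
# Show the DaemonSet definition and look for the added command-line flags
kubectl -n kube-system get daemonset kured -o yaml | grep -A 10 command
----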
-==== Prevent Nodes With Any Prometheus Alerts From Being Rebooted +==== Prevent Nodes with Any Prometheus Alerts from Being Rebooted [NOTE] ==== @@ -320,11 +320,11 @@ kubectl apply -f my-cluster/addons/kured/kured.yaml This will restart all `kured` pods with the additional configuration flags. [[cluster.bootstrap]] -==== Cluster bootstrap +==== Cluster Bootstrap . Switch to the new directory. . Now bootstrap a master node. For `--target` enter the IP address of your first master node. -Replace `` with a unique identifier for example "master-one". +Replace `` with a unique identifier, for example, "master-one". + .Secure configuration files access [WARNING] @@ -345,7 +345,7 @@ The files will be stored in the `my-cluster` directory specified in step one. . Add additional master nodes to the cluster. + Replace the `` with the IP for the machine. -Replace `` with a unique identifier for example "master-two". +Replace `` with a unique identifier, for example, "master-two". + ---- skuba node join --role master --user sles --sudo --target @@ -353,7 +353,7 @@ skuba node join --role master --user sles --sudo --target . Add a worker to the cluster. + Replace the `` with the IP for the machine. -Replace `` with a unique identifier for example "worker-one". +Replace `` with a unique identifier, for example, "worker-one". + ---- skuba node join --role worker --user sles --sudo --target @@ -375,7 +375,7 @@ worker-one SUSE Linux Enterprise Server 15 SP1 4.12.14-110-default cri-o:/ [IMPORTANT] ==== -The IP/FQDN must be reachable by every node of the cluster and therefore 127.0.0.1/localhost can't be used. +The IP/FQDN must be reachable by every node of the cluster and therefore 127.0.0.1/localhost cannot be used. ==== === Using kubectl @@ -398,7 +398,7 @@ To talk to your cluster, simply symlink the generated configuration file to `~/. ln -s ~/clusters/my-cluster/admin.conf ~/.kube/config ---- -Then you can perform all cluster operations as usual. For example checking cluster status with either: +Then you can perform all cluster operations as usual. For example, checking cluster status with either: * `skuba cluster status` + diff --git a/adoc/deployment-cilium.adoc b/adoc/deployment-cilium.adoc index a7edf521c..753010ec0 100644 --- a/adoc/deployment-cilium.adoc +++ b/adoc/deployment-cilium.adoc @@ -1,4 +1,4 @@ -== Network security +== Network Security === Cilium @@ -12,7 +12,7 @@ calls. Cilium translates network/security policies into BPF programs, which are into the kernel. This means that security policies can be applied and updated without any changes to the application code or container configuration. -In {productname} {productversion} Cilium is deployed automatically with the Platform installation. +In {productname} {productversion}, Cilium is deployed automatically with the Platform installation. Please refer to the official Cilium documentation for instructions on how to secure various parts of your infrastructure. The following chapters are specifically recommended for {productname} users. diff --git a/adoc/deployment-ecp.adoc b/adoc/deployment-ecp.adoc index 51beba032..77311cb2c 100644 --- a/adoc/deployment-ecp.adoc +++ b/adoc/deployment-ecp.adoc @@ -9,7 +9,7 @@ You will use {tf} to deploy the required master and worker cluster nodes (plus a . Download the SUSE OpenStack Cloud RC file. .. Log in to SUSE OpenStack Cloud. -.. Click on your username in the upper right hand corner to reveal the dropdown menu. +.. Click on your username in the upper right hand corner to reveal the drop-down menu. .. 
Click on menu:Download OpenStack RC File v3[]. .. Save the file to your workstation. .. Load the file into your shell environment using the following command, replacing DOWNLOADED_RC_FILE with the name your file: + ---- source DOWNLOADED_RC_FILE.sh ---- -.. Enter the password for the RC file. This should be same credentials that you use to log in to {soc}. +.. Enter the password for the RC file. This should be the same credentials that you use to log in to {soc}. . Get the SLES15-SP1 image. .. Download the pre-built image of SUSE SLES15-SP1 for {soc} from {jeos_product_page_url}. .. Upload the image to your {soc}. .The default user is 'sles' [NOTE] -The SUSE SLES15-SP1 images for {soc} come with pre-defined user `sles`, which you use to log into the cluster nodes. This user has been configured for password-less 'sudo' and is the one recommended to be used by {tf} and `skuba`. +The SUSE SLES15-SP1 images for {soc} come with the predefined user `sles`, which you use to log in to the cluster nodes. This user has been configured for password-less 'sudo' and is the one recommended to be used by {tf} and `skuba`. -=== Deploying the cluster nodes +=== Deploying the Cluster Nodes -. Find the {tf} template files for {soc} in `/usr/share/caasp/terraform/openstack` (which was installed as part of the management pattern (`sudo zypper in -t pattern SUSE-CaaSP-Management`)). +. Find the {tf} template files for {soc} in `/usr/share/caasp/terraform/openstack` (which was installed as part of the management pattern - `sudo zypper in -t pattern SUSE-CaaSP-Management`). Copy this folder to a location of your choice as the files need adjustment. + ---- @@ -43,7 +43,7 @@ cd ~/caasp/deployment/openstack/ ---- mv terraform.tfvars.example terraform.tfvars ---- -. Edit the `terraform.tfvars` file and add modify the following variables: +. Edit the `terraform.tfvars` file and add/modify the following variables: + include::deployment-terraform-example.adoc[tags=tf_openstack] + @@ -53,7 +53,7 @@ You can set the timezone before deploying the nodes by modifying the following f * `~/my-cluster/cloud-init/common.tpl` ==== -. (Optional) If you absolutely need to be able to SSH into your cluster nodes using password instead of key-based authentication, this is the best time to set it globally for all of your nodes. If you do this later, you will have to do it manually. To set this, modify the cloud-init configuration and comment-out the related SSH configuration: +. (Optional) If you absolutely need to be able to SSH into your cluster nodes using a password instead of key-based authentication, this is the best time to set it globally for all of your nodes. If you do this later, you will have to do it manually. To set this, modify the cloud-init configuration and comment out the related SSH configuration: `~/my-cluster/cloud-init/common.tpl` + ---- @@ -80,7 +80,7 @@ caasp_registry_code = "CAASP_REGISTRATION_CODE" #rmt_server_name = "rmt.example.com" ---- + -This is required so all the deployed nodes can automatically register to {scc} and retrieve packages. +This is required so all the deployed nodes can automatically register with {scc} and retrieve packages. + . 
You can also enable Cloud Provider Integration with OpenStack in `~/my-cluster/cpi.auto.tfvars`: + @@ -106,32 +106,32 @@ Terraform will now provision all the machines and network infrastructure for the .Note down IP/FQDN for nodes [IMPORTANT] ==== -The IP addresses of the generated machines will be displayed in the terraform +The IP addresses of the generated machines will be displayed in the Terraform output during the cluster node deployment. You need these IP addresses to deploy {productname} to the cluster. -If you need to find an IP addresses later on you can run `terraform output` within the directory you performed the deployment from `~/my-cluster` directory or perform the following steps: +If you need to find an IP address later on, you can run `terraform output` within the directory you performed the deployment from (the `~/my-cluster` directory) or perform the following steps: -. Log in to {soc} and click on menu:Network[Load Balancers]. Find the one with the string you entered in the terraform configuration above e.g. "testing-lb". -. Note down the "Floating IP". If you have configured a FQDN for this IP, use the hostname instead. +. Log in to {soc} and click on menu:Network[Load Balancers]. Find the one with the string you entered in the Terraform configuration above, for example "testing-lb". +. Note down the "Floating IP". If you have configured an FQDN for this IP, use the host name instead. + image::deploy-loadbalancer-ip.png[] . Now click on menu:Compute[Instances]. -. Switch the filter dropdown to `Instance Name` and enter the string you specified for `stack_name` in the `terraform.tfvars` file. -. Find the Floating IPs on each of the nodes of your cluster. +. Switch the filter drop-down box to `Instance Name` and enter the string you specified for `stack_name` in the `terraform.tfvars` file. +. Find the floating IPs on each of the nodes of your cluster. ==== -=== Logging into the cluster nodes +=== Logging in to the Cluster Nodes -. Connecting into the cluster nodes can be accomplished only via SSH key-based authentication thanks to the ssh-public key injection done earlier via {tf}. You can use the predefined `sles` user to log in. +. Connecting to the cluster nodes can be accomplished only via SSH key-based authentication thanks to the SSH public key injection done earlier via {tf}. You can use the predefined `sles` user to log in. + -If the ssh-agent is running in the background run: +If the ssh-agent is running in the background, run: + ---- ssh sles@ ---- + -Without the ssh-agent running run: +Without the ssh-agent running, run: + ---- ssh sles@ -i @@ -139,13 +139,13 @@ ssh sles@ -i + . Once connected, you can execute commands using password-less `sudo`. In addition to that, you can also set a password if you prefer to. + -To set the *root password* run: +To set the *root password*, run: + ---- sudo passwd ---- + -To set the *sles user's password* run: +To set the *sles user's password*, run: + ---- sudo passwd sles @@ -155,18 +155,23 @@ sudo passwd sles .Password authentication has been disabled [IMPORTANT] ==== -Under the default settings you always need your SSH key to access the machines. Even after setting a password for either `root` or `sles` user, you will be unable to log in via SSH using their passwords respectively. You will most likely receive a `Permission denied (publickey)` error. This mechanism has been deliberately disabled because of security best practices. 
However, if this environment does not fit your workflows, you can change it at your own risk by modifying the SSH configuration: +Under the default settings you always need your SSH key to access the machines. +Even after setting a password for either the `root` or the `sles` user, you will be unable +to log in via SSH using their respective passwords. You will most likely receive a +`Permission denied (publickey)` error. This mechanism has been deliberately disabled +because of security best practices. However, if this setup does not fit your workflows, +you can change it at your own risk by modifying the SSH configuration: under `/etc/ssh/sshd_config` -To allow password SSH authentication set: +To allow password SSH authentication, set: ---- + PasswordAuthentication yes ---- -To allow login as root via SSH set: +To allow login as root via SSH, set: ---- + PermitRootLogin yes ---- -For the changes to take effect you need to restart the SSH service by running +For the changes to take effect, you need to restart the SSH service by running: ---- sudo systemctl restart sshd.service ---- diff --git a/adoc/deployment-loadbalancer.adoc b/adoc/deployment-loadbalancer.adoc index 88d068b7d..903ebaeca 100644 --- a/adoc/deployment-loadbalancer.adoc +++ b/adoc/deployment-loadbalancer.adoc @@ -1,14 +1,13 @@ [[loadbalancer]] -== Nginx TCP Load Balancer with passive checks +== Nginx TCP Load Balancer with Passive Checks -We can use the `ngx_stream_module` module (available since version 1.9.0) in order to use -TCP load balancing. In this mode, `nginx` will just forward the TCP packets to the master nodes. +For TCP load balancing, we can use the `ngx_stream_module` module (available since version 1.9.0). In this mode, `nginx` will just forward the TCP packets to the master nodes. The default mechanism is *round-robin* so each request will be distributed to a different server. [WARNING] ==== -The Open Source version of Nginx referred to in this guide only allows one to +The open source version of Nginx referred to in this guide only allows you to use passive health checks. `nginx` will mark a node as unresponsive only after a failed request. The original request is lost and not forwarded to an available alternative server. @@ -18,21 +17,21 @@ This load balancer configuration is therefore only suitable for testing and proo For production environments, we recommend the use of link:https://www.suse.com/documentation/sle-ha-15/index.html[{sle} {hasi} 15] ==== -=== Configuring The Load Balancer +=== Configuring the Load Balancer -. Register SLES +. Register SLES: + [source,bash] ---- SUSEConnect -r $MY_REG_CODE ---- -. Install Nginx +. Install Nginx: + [source,bash] ---- zypper in nginx ---- -. Write configuration in `/etc/nginx/nginx.conf` +. Write the configuration in `/etc/nginx/nginx.conf`: + ---- user nginx; @@ -101,7 +100,7 @@ stream { so the same client will always be redirected to the same server except if this server is unavailable. <2> Replace the individual `masterXX` with the IP/FQDN of your actual master nodes (one entry each) in the `upstream k8s-masters` section. -<3> Dex port 32000 and Gangway port 32001 must be accessible through the load balancer for RBAC authentication +<3> Dex port 32000 and Gangway port 32001 must be accessible through the load balancer for RBAC authentication. . Configure `firewalld` to open up port `6443`. As root, run: + [source,bash] @@ -125,13 +124,11 @@ The {productname} cluster must be up and running for this to produce any useful results. 
This step can only be performed after <> is completed successfully. -To verify that the load balancer works you can run a simple command to repeatedly +To verify that the load balancer works, you can run a simple command to repeatedly retrieve cluster information from the master nodes. Each request should be forwarded to a different master node. -. Check logs -+ -From your workstation run: +. Check the logs. From your workstation, run: + [source,bash] ---- diff --git a/adoc/deployment-preparation.adoc b/adoc/deployment-preparation.adoc index ba836b7dd..03ab78d15 100644 --- a/adoc/deployment-preparation.adoc +++ b/adoc/deployment-preparation.adoc @@ -6,17 +6,17 @@ This workstation is called the "Management machine". Important files are generat and must be maintained on this machine, but it is not a member of the skuba cluster. [[ssh.configuration]] -=== Basic SSH key configuration +=== Basic SSH Key Configuration -To log into the created cluster nodes, you need to configure an SSH key pair and -load it into your users `ssh-agent` program. This is also mandatory in order to be able to use +To log in to the created cluster nodes, you need to configure an SSH key pair and +load it into your user's `ssh-agent` program. This is also mandatory in order to be able to use the installation tools `terraform` and `skuba`. In a later deployment step, skuba will ensure that the key is distributed across all the nodes and trusted by them. For now, you only need to make sure that an ssh-agent is running and that it has the SSH key added: . The `ssh-agent` is usually started automatically by graphical -desktop environments. If that is not your case run: +desktop environments. If that is not your case, run: + ---- eval "$(ssh-agent)" @@ -51,9 +51,9 @@ them grants access to the node, or until the ssh server maximum authentication attempts are exhausted. ==== -==== Forwarding the authentication agent connection +==== Forwarding the Authentication Agent Connection It is also possible to *forward the authentication agent connection* from a -host to another one, which can be useful if you intend to run skuba on +host to another, which can be useful if you intend to run skuba on a "jump host" and don't want to copy your private key to this node. This can be achieved using the `ssh -A` command. Please refer to the man page of `ssh` to learn about the security implications of using this feature. @@ -85,7 +85,7 @@ If you wish to beta test {productname} {productmajor}, please send an e-mail to beta-programs@lists.suse.com to request a {scc} subscription and a {productname} registration code. endif::[] -=== Installation tools +=== Installation Tools For any deployment type you will need `skuba` and `{tf}`. These packages are available from the {productname} package sources. They are provided as an installation @@ -119,16 +119,16 @@ as various default configurations and examples. Setting up a load balancer is mandatory in any production environment. ==== -{productname} requires a load balancer to distribute workload between the deployed -master nodes of the cluster. A failure tolerant {productname} cluster will +{productname} requires a load balancer to distribute the workload between the deployed +master nodes of the cluster. A failure-tolerant {productname} cluster will always use more than one load balancer since that becomes a "single point of failure". -There are many ways to configure a load balancer. This documentation can not +There are many ways to configure a load balancer. 
This documentation cannot describe all possible combinations of load balancer configurations and thus -does not aim to do so. Please apply your organizations' load balancing best +does not aim to do so. Please apply your organization's load balancing best practices. -For {soc} the {tf} configurations shipped with this version will automatically deploy +For {soc}, the {tf} configurations shipped with this version will automatically deploy a suitable load balancer for the cluster. For VMware you must configure a load balancer manually and allow it access to @@ -136,13 +136,13 @@ all master nodes created during <>. The load balancer should be configured before the actual deployment. It is needed during the cluster bootstrap. To simplify configuration you can reserve the IPs -needed for the cluster nodes and pre-configure these in the load balancer. +needed for the cluster nodes and preconfigure these in the load balancer. The load balancer needs access to port `6443` on the `apiserver` (all master nodes) in the cluster. It also needs access to Gangway port `32001` and Dex port `32000` on all master and worker nodes in the cluster for RBAC authentication. -We recommend performing regular HTTPS health checks each master node `/healthz` +We recommend performing regular HTTPS health checks on each master node's `/healthz` endpoint to verify that the node is responsive. The following is an example of a possible load balancer configuration based on {sle} 15 SP1 and `nginx`. diff --git a/adoc/deployment-sles.adoc b/adoc/deployment-sles.adoc index 53774e34f..b744a1c82 100644 --- a/adoc/deployment-sles.adoc +++ b/adoc/deployment-sles.adoc @@ -1,10 +1,10 @@ -== Deployment on existing SLES installation +== Deployment on Existing SLES Installation If you already have a running {sle} 15 SP1 installation, you can add {productname} to this installation using SUSE Connect. You also need to enable the "Containers" module because it contains some dependencies required by {productname}. -Retrieve your {productname} registration code and run. +Retrieve your {productname} registration code and run the following. Substitute `CAASP_REGISTRATION_CODE` for the code from <>. [source,bash] ---- diff --git a/adoc/deployment-sysreqs.adoc b/adoc/deployment-sysreqs.adoc index d3462421f..7264a15ff 100644 --- a/adoc/deployment-sysreqs.adoc +++ b/adoc/deployment-sysreqs.adoc @@ -12,8 +12,8 @@ Currently we support: You will need at least two machines: -* 1 Master Node -* 1 Worker Node +* 1 master node +* 1 worker node {productname} {productversion} supports deployments with a single or multiple master nodes. Production environments must be deployed with multiple master nodes for resilience. 
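Relating to the health check recommendation in the load balancer section above: a minimal manual probe of a single master node can be sketched as follows. This is a hedged example, assuming the defaults used throughout this guide (API server reachable on port `6443`) and skipping certificate verification for brevity; `<MASTER_IP>` is a placeholder.

----
# A healthy API server typically answers with "ok" on its /healthz endpoint
curl --insecure https://<MASTER_IP>:6443/healthz
----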
@@ -26,39 +26,39 @@ The minimal viable failure tolerant production environment configuration consist === Hardware -==== Master nodes +==== Master Nodes Up to 5 worker nodes *(minimum)*: * Storage: 50 GB+ * (v)CPU: 2 * RAM: 4 GB -* Network: Minimum 1GB/s, (faster is preferred) +* Network: Minimum 1 Gb/s (faster is preferred) Up to 10 worker nodes: * Storage: 50 GB+ * (v)CPU: 2 * RAM: 8 GB -* Network: Minimum 1GB/s, (faster is preferred) +* Network: Minimum 1 Gb/s (faster is preferred) Up to 100 worker nodes: * Storage: 50 GB+ * (v)CPU: 4 * RAM: 16 GB -* Network: Minimum 1GB/s (faster is preferred) +* Network: Minimum 1 Gb/s (faster is preferred) Up to 250 worker nodes: * Storage: 50 GB+ * (v)CPU: 8 * RAM: 16 GB -* Network: Minimum 1GB/s (faster is preferred) +* Network: Minimum 1 Gb/s (faster is preferred) [IMPORTANT] ==== -Using a minimum of 2 (v)CPU is a hard requirement, deploying +Using a minimum of 2 (v)CPUs is a hard requirement; deploying a cluster with less processing units is not possible. ==== @@ -71,10 +71,10 @@ A worker node requires the following resources: Based on these values, the *minimal* configuration of a worker node is: -* Storage: Depending on workloads, minimum 20-30GB to hold the base OS and required packages. Mount additional storage volumes as needed. +* Storage: Depending on workloads, minimum 20-30 GB to hold the base OS and required packages. Mount additional storage volumes as needed. * (v)CPU: 1 * RAM: 2 GB -* Network: Minimum 1GB/s (faster is preferred) +* Network: Minimum 1 Gb/s (faster is preferred) Calculate the size of the required (v)CPU by adding up the base requirements, the estimated additional essential cluster components (logging agent, monitoring agent, configuration management, etc.) and the estimated CPU workloads: @@ -91,10 +91,10 @@ These values are provided as a guide to work in most cases. They may vary based ==== Storage Performance -For Master and Worker nodes you must ensure storage performance of at least 500 sequential IOPS with disk bandwidth depending on your cluster size. +For master and worker nodes, you must ensure storage performance of at least 500 sequential IOPS with disk bandwidth depending on your cluster size. - "Typically 50 sequential IOPS (e.g., a 7200 RPM disk) is required. - For heavily loaded clusters, 500 sequential IOPS (e.g., a typical local SSD + "Typically 50 sequential IOPS (for example, a 7200 RPM disk) is required. + For heavily loaded clusters, 500 sequential IOPS (for example, a typical local SSD or a high performance virtualized block device) is recommended." "Typically 10MB/s will recover 100MB data within 15 seconds. @@ -172,11 +172,11 @@ link:https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware ==== IP Addresses -All nodes must be assigned static IP addresses that must not be changed manually afterwards. +All nodes must be assigned static IP addresses, which must not be changed manually afterwards. [IMPORTANT] ==== -Plan carefully for required IP ranges and future scenarios as +Plan carefully for required IP ranges and future scenarios as it is not possible to reconfigure the IP ranges once the deployment is complete. ==== @@ -191,7 +191,7 @@ Configure firewall and other network security to allow communication on the defa ==== Performance -All master nodes of the cluster must have a minimum 1GB/s network connection to fulfill the requirements for etcd. +All master nodes of the cluster must have a minimum 1 Gb/s network connection to fulfill the requirements for etcd. 
"1GbE is sufficient for common etcd deployments. For large etcd clusters, a 10GbE network will reduce mean time to recovery." diff --git a/adoc/deployment-terraform-example.adoc b/adoc/deployment-terraform-example.adoc index c3f899fed..4b293b8db 100644 --- a/adoc/deployment-terraform-example.adoc +++ b/adoc/deployment-terraform-example.adoc @@ -8,7 +8,7 @@ # Name of the image to use image_name = "SLE-15-SP1-JeOS-GMC" -# Identifier to make all your resources unique and avoid clashes with other users of this terraform project +# Identifier to make all your resources unique and avoid clashes with other users of this Terraform project stack_name = "testing" // <1> # Name of the internal network to be created @@ -78,13 +78,13 @@ ntp_servers = ["0.novell.pool.ntp.org", "1.novell.pool.ntp.org", "2.novell.pool. ---- <1> `stack_name`: Prefix for all machines of the cluster spawned by terraform. -<2> `internal_net` the internal network name that will be created/used for the cluster in {soc}. +<2> `internal_net`: the internal network name that will be created/used for the cluster in {soc}. *Note*: This string will be used to generate the human readable IDs in {soc}. -If you use a generic term it is very likely to fail deployment because the term is already in use by someone else. It's a good idea to use your username or other unique identifier. +If you use a generic term, deployment is very likely to fail because the term is already in use by someone else. It's a good idea to use your username or some other unique identifier. <3> `masters`: Number of master nodes to be deployed. <4> `workers`: Number of worker nodes to be deployed. <5> `repositories`: A list of additional repositories to be added on each -machines - leave empty if no additional packages need to be installed. +machine. Leave empty if no additional packages need to be installed. <6> `packages`: Additional packages to be installed on the node. *Note*: Do not remove any of the pre-filled values in the `packages` section. This can render your cluster unusable. You can add more packages but do not remove any of the @@ -167,22 +167,22 @@ ntp_servers = ["0.novell.pool.ntp.org", "1.novell.pool.ntp.org", "2.novell.pool. <5> `template_name`: The name of the template created according to instructions. <6> `stack_name`: Prefix for all machines of the cluster spawned by terraform. *Note*: This string will be used to generate the human readable IDs in {soc}. -If you use a generic term it is very likely to fail deployment because the term is already in use by someone else. It's a good idea to use your username or other unique identifier. +If you use a generic term, deployment is very likely to fail because the term is already in use by someone else. It's a good idea to use your username or some other unique identifier. <7> `masters`: Number of master nodes to be deployed. <8> `master_disk_size`: Size of the root disk in GB. -*Note*: The value must be at least the same size of the source template. It is only possible to grow the size of a disk. +*Note*: The value must be at least the same size as the source template. It is only possible to increase the size of a disk. <9> `workers`: Number of worker nodes to be deployed. <10> `worker_disk_size`: Size of the root disk in GB. -*Note*: The value must be at least the same size of the source template. It is only possible to grow the size of a disk. +*Note*: The value must be at least the same size as the source template. It is only possible to increase the size of a disk. 
<11> `username`: Login username for the nodes. -*Note*: Leave this the default `sles`. The username must exist on the used base operating system. It will not be created. +*Note*: Leave this as the default `sles`. The username must exist on the used base operating system. It will not be created. <12> `repositories`: A list of additional repositories to be added on each -machines - leave empty if no additional packages need to be installed. +machine. Leave empty if no additional packages need to be installed. <13> `packages`: Additional packages to be installed on the node. *Note*: Do not remove any of the pre-filled values in the `packages` section. This can render your cluster unusable. You can add more packages but do not remove any of the default packages listed. -<14> `authorized_keys`: List of ssh-public-keys that will be able to login to the +<14> `authorized_keys`: List of ssh-public-keys that will be able to log in to the deployed machines. <15> `ntp_servers`: A list of `ntp` servers you would like to use with `chrony`. # end::tf_vmware[] @@ -249,7 +249,7 @@ deployed machines. // <7> `workers`: Number of worker nodes to be deployed. // <8> `username`: Login username for the nodes. // *Note*: the username must exist on the used base operating system. It will not be created. -// <9> `authorized_keys`: List of ssh-public-keys that will be able to login to the +// <9> `authorized_keys`: List of ssh-public-keys that will be able to log in to the // deployed machines. // <10> `caasp_registry_code`: SUSE CaaSP Product Registration Code for registering // the product against SUSE Customer Service. diff --git a/adoc/deployment-vmware.adoc b/adoc/deployment-vmware.adoc index 643e4390e..1dcc3b2e9 100644 --- a/adoc/deployment-vmware.adoc +++ b/adoc/deployment-vmware.adoc @@ -4,7 +4,7 @@ [NOTE] You must have completed <> to proceed. -=== Environment description +=== Environment Description [NOTE] ==== @@ -25,9 +25,9 @@ for the {kube} api-servers on the master nodes on a local load balancer using round-robin 1:1 port forwarding. ==== -=== VM preparation for creating a template +=== VM Preparation for Creating a Template -. Upload the ISO image {isofile} to desired VMware datastore. +. Upload the ISO image {isofile} to the desired VMware datastore. Now you can create a new base VM for {productname} within the designated resource pool through the vSphere WebUI: @@ -40,15 +40,15 @@ image::vmware_step1.png[width=80%,pdfwidth=80%] . Select a `Compute Resource` that will run the VM. + image::vmware_step2.png[width=80%,pdfwidth=80%] -. Select the storage used by the VM. +. Select the storage to be used by the VM. + image::vmware_step3.png[width=80%,pdfwidth=80%] . Select `ESXi 6.7 and later` from compatibility. + image::vmware_step4.png[width=80%,pdfwidth=80%] -. Select menu:Guest OS Family[Linux] and menu:Guest OS Version[SUSE Linux Enterprise 15 (64 Bit)]. +. Select menu:Guest OS Family[Linux] and menu:Guest OS Version[SUSE Linux Enterprise 15 (64-bit)]. + -*Note*: You will manually select the correct installation media in the next step. +*Note*: You will manually select the correct installation medium in the next step. + image::vmware_step5.png[width=80%,pdfwidth=80%] . Now customize the hardware settings. + image::vmware_step6.png[width=80%,pdfwidth=80%] .. Select menu:CPU[2]. .. Select menu:Memory[4096 MB]. -.. Select menu:New Hard disk[40GB], menu:New Hard disk[Disk Provisioning > Thin Provision]. .. 
Select menu:New Hard disk[40 GB], menu:New Hard disk[Disk Provisioning > Thin Provision]. .. Select menu:New SCSI Controller[LSI Logic Parallel SCSI controller (default)] and change it to "VMware Paravirtualized". .. Select menu:New Network[VM Network], menu:New Network[Adapter Type > VMXNET3]. + -("VM Network" sets up a bridged network which provides a public IP address reachable within a company) +("VM Network" sets up a bridged network which provides a public IP address reachable within a company.) .. Select menu:New CD/DVD[Datastore ISO File]. -.. Tick the box menu:New CD/DVD[Connect At Power On] to be able boot from ISO/DVD. -.. The click on "Browse" next to the `CD/DVD Media` field to select the downloaded ISO image on desired datastore. -.. Go to tab VM Options. +.. Check the box menu:New CD/DVD[Connect At Power On] to be able to boot from ISO/DVD. +.. Then click on "Browse" next to the `CD/DVD Media` field to select the downloaded ISO image on the desired datastore. +.. Go to the VM Options tab. + image::vmware_step6b.png[width=80%,pdfwidth=80%] .. Select menu:Boot Options[]. -.. Select menu:Firmware[Bios]. +.. Select menu:Firmware[BIOS]. .. Confirm the process with menu:Next[]. -==== {sls} installation +==== {sls} Installation Power on the newly created VM and install the system over graphical remote console: . Enter registration code for SLES in YaST. . Confirm the update repositories prompt with "Yes". . Remove the check mark in the "Hide Development Versions" box. -. . Make sure the following modules are selected on the "Extension and Module Selection" screen: + image::vmware_extension.png[width=80%,pdfwidth=80%] ** SUSE CaaS Platform 4.0 x86_64 (BETA) ** Basesystem Module -** Containers Module (this will automatically be ticked when you select {productname}) +** Containers Module (this will automatically be checked when you select {productname}) ** Public Cloud Module . Enter the registration code to unlock the {productname} extension. . Select menu:System Role[Minimal] on the "System Role" screen. @@ -92,10 +91,10 @@ image::vmware_extension.png[width=80%,pdfwidth=80%] . Select "Start with current proposal". + image::vmware_step8.png[width=80%,pdfwidth=80%] -.. Keep `sda1` as BIOS partition +.. Keep `sda1` as BIOS partition. .. Remove the root `/` partition. + -Select the device in "System View" on the left (Default: `/dev/sda2`) and click "Delete". Confirm with "Yes". +Select the device in "System View" on the left (default: `/dev/sda2`) and click "Delete". Confirm with "Yes". + image::vmware_step9.png[width=80%,pdfwidth=80%] .. Remove the `/home` partition. @@ -118,7 +117,7 @@ image::vmware_step13.png[width=80%,pdfwidth=80%] *** Enable Snapshots *** Mount Device *** Mount Point `/` -. You should be left with 2 partitions. Now click "Accept". +. You should be left with two partitions. Now click "Accept". + image::vmware_step7.png[width=80%,pdfwidth=80%] . Confirm the partitioning changes. + image::vmware_step14.png[width=80%,pdfwidth=80%] . Click "Next". . Configure your timezone and click "Next". -. Create a user with the Username `sles` and specify a password. -.. Tick the box menu:Local User[Use this password for system administrator]. +. Create a user with the username `sles` and specify a password. +.. Check the box menu:Local User[Use this password for system administrator]. + image::vmware_step15.png[width=80%,pdfwidth=80%] . Click "Next". 
@@ -136,16 +135,16 @@ image::vmware_step15.png[width=80%,pdfwidth=80%] ... Disable the Firewall (click on `(disable)`). ... Enable the SSH service (click on `(enable)`). .. Scroll to the `kdump` section of the software description and click on the title. -. In the "Kdump Start-Up" screen select menu:Enable/Disable Kdump[Disable Kdump]. +. In the "Kdump Start-Up" screen, select menu:Enable/Disable Kdump[Disable Kdump]. .. Confirm with "OK". + image::vmware_step16.png[width=80%,pdfwidth=80%] -. Click "Install". Confirm the installation by clicking "Install" in the popup dialog. +. Click "Install". Confirm the installation by clicking "Install" in the pop-up dialog. . Finish the installation and confirm system reboot with "OK". + image::vmware_step17.png[width=80%,pdfwidth=80%] -==== Preparation of the VM as a template +==== Preparation of the VM as a Template In order to run {productname} on the created VMs, you must configure and install some additional packages like `sudo`, `cloud-init` and `open-vm-tools`. @@ -159,12 +158,12 @@ Steps 1-4 may be skipped, if they were already performed in YaST during the {sle ---- SUSEConnect -r CAASP_REGISTRATION_CODE ---- -. Register the `Containers` module (free of charge) +. Register the `Containers` module (free of charge): + ---- SUSEConnect -p sle-module-containers/15.1/x86_64 ---- -. Register the `Public Cloud` module for basic `cloud-init` package (free of charge) +. Register the `Public Cloud` module for basic `cloud-init` package (free of charge): + ---- SUSEConnect -p sle-module-public-cloud/15.1/x86_64 @@ -198,7 +197,7 @@ rm /etc/machine-id /var/lib/zypp/AnonymousUniqueId \ /var/lib/systemd/random-seed /var/lib/dbus/machine-id \ /var/lib/wicked/* ---- -. Cleanup btrfs snapshots and create one with initial state: +. Clean up btrfs snapshots and create one with initial state: + ---- snapper list @@ -237,7 +236,7 @@ cd ~/caasp/deployment/vmware/ ---- mv terraform.tfvars.example terraform.tfvars ---- -. Edit the `terraform.tfvars` file and add modify the following variables: +. Edit the `terraform.tfvars` file and add/modify the following variables: + include::deployment-terraform-example.adoc[tag=tf_vmware] . Enter the registration code for your nodes in `~/my-cluster/registration.auto.tfvars`: @@ -250,7 +249,7 @@ Substitute `CAASP_REGISTRATION_CODE` for the code from <>. caasp_registry_code = "CAASP_REGISTRATION_CODE" ---- + -This is required so all the deployed nodes can automatically register to {scc} and retrieve packages. +This is required so all the deployed nodes can automatically register with {scc} and retrieve packages. Once the files are adjusted, `terraform` needs to know about the `vSphere` server and the login details for it; these can be exported as environment variables or @@ -268,7 +267,7 @@ export VSPHERE_ALLOW_UNVERIFIED_SSL=true # In case you are using custom certific ssh-add ---- -Run terraform to create the required machines for use with `skuba`: +Run Terraform to create the required machines for use with `skuba`: ---- terraform init @@ -276,13 +275,13 @@ terraform plan terraform apply ---- -==== Setup by hand -For each VM deployment follow the {ay} installation method used for deployment on -Bare metal machines as described in <>. +==== Setup by Hand +For each VM deployment, follow the {ay} installation method used for deployment on +bare metal machines as described in <>. [IMPORTANT] ==== -Make sure to give each VM in VMware a clear name that shows it's purpose in the cluster e.g. 
+Make sure to give each VM in VMware a clear name that shows its purpose in the cluster, for example: * `caasp-master-0` * `caasp-worker-0` diff --git a/adoc/entities.adoc b/adoc/entities.adoc index 418287abb..752a2764f 100644 --- a/adoc/entities.adoc +++ b/adoc/entities.adoc @@ -231,8 +231,6 @@ Miscellaneous :Admin_node: Administration node :Admin_Node: Administration Node :master_node: master node -:Master_node: Master node -:Master_Node: Master Node :worker_node: worker node :Worker_node: Worker node :Worker_Node: Worker Node @@ -248,7 +246,7 @@ Miscellaneous :kubectl: `kubectl` :tupdate: `transactional-update` :caasp-cli: `caasp-cli` -:skuba;: `skuba` +:skuba: `skuba` :psp: PodSecurityPolicy //// diff --git a/adoc/network-decl.adoc b/adoc/network-decl.adoc index b245485a1..d1f5d0f31 100644 --- a/adoc/network-decl.adoc +++ b/adoc/network-decl.adoc @@ -74,7 +74,7 @@ :wsIV: mercury // names (Xen) -:xenhostname: earth.{exampledomain} +:xenhostname: earth.{exampledomain} :xenhost: earth :xenhostip: {subnetI}.20 :xenguestname: alice.{exampledomain}