Integrate proofreading results (#465)
* Admin Guide: Integrate proofreading.
* Deployment Guide: Integrate proofreading.
* Quickstart Guide: Integrate proofreading.
* Fix and comment out the Change Log
nkoranova authored Sep 5, 2019
1 parent 3e64dcd commit 05f7c92
Showing 39 changed files with 368 additions and 376 deletions.
28 changes: 14 additions & 14 deletions adoc/admin-cap-integration.adoc
@@ -1,15 +1,15 @@
= {suse} {cap} Integration

-{productname} offers {cap} for modern application delivery,
-this chapter describes the steps required for successful integration.
+{productname} offers {cap} for modern application delivery.
+This chapter describes the steps required for successful integration.

== Prerequisites

-Before you start with integrating {cap}, you need to ensure the following:
+Before you start integrating {cap}, you need to ensure the following:

* The {productname} cluster did not use the `--strict-cap-defaults` option
-during the initial setup when you run `skuba cluster init`.
-This ensures presence of extra CRI-O capabilities compatible for docker containers.
+during the initial setup when you ran `skuba cluster init`.
+This ensures the presence of extra CRI-O capabilities compatible with docker containers.
For more details refer to the
_{productname} Deployment Guide, Transitioning from Docker to CRI-O_.
* The {productname} cluster has `swapaccount=1` set on all worker nodes.
@@ -19,9 +19,9 @@ sudo sed -i -r 's|^(GRUB_CMDLINE_LINUX_DEFAULT=)\"(.*.)\"|\1\"\2swapaccount=1 \"
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo systemctl reboot
----
-* The {productname} cluster has no restrictions to {cap} ports.
+* The {productname} cluster has no restrictions for {cap} ports.
For more details refer to the {cap} documentation: https://www.suse.com/documentation/cloud-application-platform-1/singlehtml/book_cap_guides/book_cap_guides.html .
-* `Helm` and `Tiller` installed on node where you run `skuba` and `kubectl` command.
+* `Helm` and `Tiller` are installed on the node where you run the `skuba` and `kubectl` commands.
+
----
sudo zypper install helm
@@ -33,25 +33,25 @@ helm init --tiller-image registry.suse.com/caasp/v4/helm-tiller:{helm_tiller_ver
----

== Procedures
-. Create a storage class. For precise steps, refer to <<_RBD-dynamic-persistent-volumes>>
+. Create a storage class. For precise steps, refer to <<_RBD-dynamic-persistent-volumes>>.

-. Add `Helm` chart repository.
+. Add the `Helm` chart repository.
+
----
helm repo add suse https://kubernetes-charts.suse.com/
----

. Map the {productname} master node external IP address to the `<cap-domain>` and
`uaa.<cap-domain>` on your DNS server.
-For testing purposes you can also use `/etc/hosts`
+For testing purposes you can also use `/etc/hosts`.
+
----
<caasp_master_node_external_ip> <caasp_master_node_external_ip>.omg.howdoi.website
<caasp_master_node_external_ip> uaa.<caasp_master_node_external_ip>.omg.howdoi.website
----

-. Create shared value file. This will be used for CAP `uaa`, `cf`, and
-`console` charts. Substitute the values enclosed in `< >` for specific values.
+. Create a shared value file. This will be used for CAP `uaa`, `cf`, and
+`console` charts. Substitute the values enclosed in `< >` with specific values.
+
----
cat << *EOF* > custom_values.yaml
@@ -87,7 +87,7 @@ uaa-0 1/1 Running 1 21h
...
----

-. Verify uaa OAuth - this should return a JSON Object:
+. Verify uaa OAuth -- this should return a JSON object:
+
----
curl --insecure https://uaa.<cap-domain>:2793/.well-known/openid-configuration
@@ -145,6 +145,6 @@ volume-migration-1-s96cc 0/1 Completed 0 54m
....
----

-A successful deployment allows you to access {cap} console via a web browser at
+A successful deployment allows you to access {cap} console via a Web browser at
https://<domain-name>:8443/login. The default username is admin and the password
is the `secure_password` you have set in one of the steps above.
26 changes: 13 additions & 13 deletions adoc/admin-centralized-logging.adoc
@@ -1,14 +1,14 @@
= Centralized Logging

-Centralized Logging is a means of collecting logs from CaaS Platform for centralized management.
+Centralized Logging is a means of collecting logs from the {productname} for centralized management.
It forwards system and Kubernetes cluster logs to a specified external logging service,
for example, Rsyslog server.

-Collecting logs in a central location can serve for audit or debug purposes or to analyze and visually present data.
+Collecting logs in a central location can be useful for audit or debug purposes or to analyze and visually present data.

-== Types of logs
+== Types of Logs

-You can log the following groups of services. See the <<Deployment>>
+You can log the following groups of services. See <<Deployment>>
for more information on how to select and customize the logs.

Kubernetes System Components::
@@ -32,10 +32,10 @@ OS Components::
* Zypper
* Network (wicked)

-Centralized logging is also restricted to the following protocols: UDP, TCP, TCP + TLS, TCP + mTLS.
+Centralized Logging is also restricted to the following protocols: UDP, TCP, TCP + TLS, TCP + mTLS.


-== Log formats
+== Log Formats

The two supported syslog message formats are *RFC 5424* and *RFC 3164*.
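For illustration, the same hypothetical log line looks like this in each format (the host name, tag, and message text below are placeholders, not output from a real cluster):

----
# RFC 5424
<86>1 2019-09-05T14:07:12.345Z worker-0 crio 2817 - - image pull completed
# RFC 3164
<86>Sep  5 14:07:12 worker-0 crio[2817]: image pull completed
----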

@@ -72,7 +72,7 @@ sudo zypper install helm

- As of {productname} {productversion},
Tiller is not part of the {productname} package repository,
-so to install the Tiller container image run:
+so to install the Tiller container image, run:

[source,bash]
----
@@ -137,21 +137,21 @@ for instance after log agents shutdown, restart or in case of an unresponsive re
The queue files are located under `/var/lib/{RELEASE_NAME}-log-agent-rsyslog` on every node in the cluster.
Queue files remain even after the log agents are deleted.

-The buffered queue can be enable/disable with following parameter:
+The buffered queue can be enabled/disabled with the following parameter:

`*queue.enabled*`, default value = false

Setting `queue.enabled` to `false` means that data will be stored in-memory only.
Setting the parameter to `true` will set the data store to a mixture of in-memory and in-disk.
-Data will then store in memory until the queue is filled up, after which storing is switched to disk.
+Data will then be stored in memory until the queue is filled up, after which storing is switched to disk.
Enabling the queue also automatically saves the queue to disk at service shutdown.

Additional parameters to define queue size and its disk usage are:

`*queue.size*`, default value = 50000

This option sets the number of messages allowed for the in-memory queue.
-This setting effects the Kubernetes cluster logs (`kubernetes-control-plane` and `kubernetes-user-name-space`).
+This setting affects the Kubernetes cluster logs (`kubernetes-control-plane` and `kubernetes-user-name-space`).


`*queue.maxDiskSpace*`, default value = 2147483648
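Assuming the log agent is deployed with Helm (the chart and release names below are placeholders, not values from this guide), the queue options could be set at install time with `--set` overrides:

----
helm install <chart-repository>/<log-agent-chart> \
  --name <release-name> \
  --set queue.enabled=true \
  --set queue.size=50000 \
  --set queue.maxDiskSpace=2147483648
----

Enabling the queue this way trades some memory and disk usage for resilience against an unresponsive remote server.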
@@ -184,9 +184,9 @@ Options with empty default values are set as not specified.
|queue.maxDiskSpace|sets maximum Rsyslog queue disk space in bytes|2147483648
|queue.size|sets Rsyslog queue size in bytes|50000
|resources.limits.cpu|sets CPU limits|
-|resources.limits.memory|sets memory limits|512Mi
-|resources.requests.cpu|sets CPU for request|100m
-|resources.requests.memory|sets memory for request|512Mi
+|resources.limits.memory|sets memory limits|512 Mi
+|resources.requests.cpu|sets CPU for requests|100m
+|resources.requests.memory|sets memory for requests|512 Mi
|resumeInterval|specifies time (seconds) after failure before retry is attempted|30
|resumeRetryCount|sets number of retries after first failure before the log is discarded. -1 is unlimited|-1
|server.tls.clientCert|sets TLS client certificate|
22 changes: 11 additions & 11 deletions adoc/admin-cluster-management.adoc
@@ -5,11 +5,11 @@ its individual nodes: bootstrapping, joining and removing nodes.
For maximum automation and ease {productname} uses the `skuba` tool,
which simplifies Kubernetes cluster creation and reconfiguration.

-== Bootstrap and initial configuration
+== Bootstrap and Initial Configuration

Bootstrapping the cluster is the initial process of starting up a minimal
viable cluster and joining the first master node. Only the first master node needs to be bootstrapped,
-later nodes can simply be joined as described in <<Adding nodes>>.
+later nodes can simply be joined as described in <<adding_nodes>>.

Before bootstrapping any nodes to the cluster,
you need to create an initial cluster definition folder (initialize the cluster).
@@ -19,7 +19,7 @@ For a step by step guide on how to initialize the cluster, configure updates usi
and subsequently bootstrap nodes to it, refer to the _{productname} Deployment Guide_.

[[adding_nodes]]
-== Adding nodes
+== Adding Nodes

Once you have added the first master node to the cluster using `skuba node bootstrap`,
use the `skuba node join` command to add more nodes. Joining master or worker nodes to
@@ -31,9 +31,9 @@ skuba node join --role <master/worker> --user <user-name> --sudo --target <IP/FQ

The mandatory flags for the join command are `--role`, `--user`, `--sudo` and `--target`.

-- `--role` serves to specify if the node is a *master* or *worker*
+- `--role` serves to specify if the node is a *master* or *worker*.
- `--sudo` is for running the command with superuser privileges,
-this is necessary for all node operations.
+which is necessary for all node operations.
- `<user-name>` is the name of the user that exists on your SLES machine (default: `sles`).
- `--target <IP/FQDN>` is the IP address or FQDN of the relevant machine.
- `<node-name>` is how you decide to name the node you are adding.
@@ -51,7 +51,7 @@ To add a new *worker* node, you would run something like:
skuba node join --role worker --user sles --sudo --target 10.86.2.164 worker1

[[removing_nodes]]
-== Removing nodes
+== Removing Nodes

=== Temporary Removal

@@ -67,7 +67,7 @@ https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#use-kubec

[IMPORTANT]
====
-Nodes removed with this method can not be added back to the cluster or any other
+Nodes removed with this method cannot be added back to the cluster or any other
skuba-initiated cluster. You must reinstall the entire node and then join it
again to the cluster.
====
@@ -81,14 +81,14 @@ skuba node remove <node-name>

[IMPORTANT]
====
-After the removal of a master node you have to manually delete its entries
+After the removal of a master node, you have to manually delete its entries
from your load balancer's configuration.
====

-== Reconfiguring nodes
+== Reconfiguring Nodes

-To reconfigure a node, for example to change the node role from worker to master, you will need to use a combination of commands.
+To reconfigure a node, for example to change the node's role from worker to master, you will need to use a combination of commands.

. Run `skuba node remove <node-name>`.
. Reinstall the node from scratch.
-. Run `skuba node join --role <desired-role> --user <user-name> --sudo --target <IP/FQDN> <node-name>`
+. Run `skuba node join --role <desired-role> --user <user-name> --sudo --target <IP/FQDN> <node-name>`.
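As a worked sketch, reusing the user and target address from the worker join example above (the node name and role here are purely illustrative):

----
skuba node remove worker1
# reinstall the machine from scratch, then join it again in the new role
skuba node join --role master --user sles --sudo --target 10.86.2.164 worker1
----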
8 changes: 4 additions & 4 deletions adoc/admin-crio-proxy.adoc
@@ -1,20 +1,20 @@
-== Configuring HTTP/HTTPS proxy for {crio}
+== Configuring HTTP/HTTPS Proxy for {crio}

In some cases you must configure the container runtime to use a proxy to pull
container images. To configure this for {crio} you must modify the file
`/etc/sysconfig/crio`.

-. First define the hostnames that should be used without a proxy (`NO_PROXY`).
+. First define the host names that should be used without a proxy (`NO_PROXY`).
. Then define which proxies should be used by the HTTP and HTTPS connections
(`HTTP_PROXY` and `HTTPS_PROXY`).
-. After you have saved the changes restart the container runtime with
+. After you have saved the changes, restart the container runtime with
+
[source,bash]
----
systemctl restart crio
----

-=== Configuration example
+=== Configuration Example

* Proxy server without authentication
+
18 changes: 9 additions & 9 deletions adoc/admin-crio-registries.adoc
@@ -1,6 +1,6 @@
-== Configuring container registries for {crio}
+== Configuring Container Registries for {crio}

-Every registry related configuration needs to be done in the TOML file
+Every registry-related configuration needs to be done in the TOML file
`/etc/containers/registries.conf`. After any change of this file, CRI-O
needs to be restarted.

@@ -40,7 +40,7 @@ table to be considered. Only the TOML entry with the longest match is used.
As a special case, the `prefix` field can be missing. If so, it defaults to the
value of the `location` field.

-=== Per-namespace settings
+=== Per-namespace Settings

- `insecure` (`true` or `false`): By default, container runtimes require TLS
when retrieving images from a registry. If `insecure` is set to `true`,
@@ -50,21 +50,21 @@ value of the `location` field.
- `blocked` (`true` or `false`): If `true`, pulling images with matching names
is forbidden.

-=== Remapping and mirroring registries
+=== Remapping and Mirroring Registries

The user-specified image reference is, primarily, a "logical" image name,
always used for naming the image. By default, the image reference also directly
specifies the registry and repository to use, but the following options can be
used to redirect the underlying accesses to different registry servers or
locations. This can be used to support configurations with no access to the
-internet without having to change `Dockerfile`s, or to add redundancy.
+Internet without having to change Dockerfiles, or to add redundancy.

==== `location`

Accepts the same format as the `prefix` field, and specifies the physical
-location of the `prefix`-rooted namespace. By default, this equal to `prefix`
+location of the `prefix`-rooted namespace. By default, this is equal to `prefix`
(in which case `prefix` can be omitted and the `\[[registry]]` TOML table can
-only specify `location`).
+just specify `location`).

===== Example

@@ -79,7 +79,7 @@ the `internal-registry-for-example.net/bar/myimage:latest` image.

==== `mirror`

-An array of TOML tables specifying (possibly-partial) mirrors for the
+An array of TOML tables specifying (possibly partial) mirrors for the
`prefix`-rooted namespace.

The mirrors are attempted in the specified order. The first one that can be
@@ -97,7 +97,7 @@ the same semantics as if specified in the `\[[registry]]` TOML table directly:

Can be `true` or `false`. If `true`, mirrors will only be used during pulling
if the image reference includes a digest. Referencing an image by digest
-ensures that the same is always used (whereas referencing an image by a tag may
+ensures that the same one is always used (whereas referencing an image by a tag may
cause different registries to return different images if the tag mapping is out
of sync).
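A minimal sketch of a mirrored entry in `/etc/containers/registries.conf` (all registry names below are placeholders) could look like the following; remember that CRI-O needs to be restarted after the change:

----
cat <<EOF | sudo tee -a /etc/containers/registries.conf
[[registry]]
prefix = "registry.example.com/project"
location = "internal-registry.example.net/project"

[[registry.mirror]]
location = "mirror.example.net/project"
insecure = false
EOF
sudo systemctl restart crio
----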

10 changes: 5 additions & 5 deletions adoc/admin-flexvolume.adoc
@@ -1,17 +1,17 @@
include::entities.adoc[]

-= FlexVolume configuration
+= FlexVolume Configuration

FlexVolume drivers are external (out-of-tree) drivers usually provided by a specific vendor.
-They are executable files, which are placed in a predefined directory in the cluster on both worker and master nodes.
+They are executable files that are placed in a predefined directory in the cluster on both worker and master nodes.
Pods interact with FlexVolume drivers through the `flexvolume` in-tree plugin.

The vendor driver first has to be installed on each worker and master node in a Kubernetes cluster.
-On {productname} {productmajor} the path to install the drivers is `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`.
+On {productname} {productmajor}, the path to install the drivers is `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`.

If the drivers are deployed with `DaemonSet`, this will require changing
-the flexvolume directory path, which is usually stored as an environment
-variable, e.g. for rook:
+the FlexVolume directory path, which is usually stored as an environment
+variable, for example for rook:

[source,bash]
FLEXVOLUME_DIR_PATH=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/
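As an illustrative sketch (the vendor and driver names below are placeholders), a driver binary installed manually on a node follows the `<vendor>~<driver>/<driver>` layout under that directory:

[source,bash]
----
sudo mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>
sudo install -m 0755 <driver-binary> \
  /usr/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>
----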
2 changes: 1 addition & 1 deletion adoc/admin-monitoring-health-checks.adoc
@@ -179,7 +179,7 @@ curl -i http://localhost:10248/healthz

==== Remote Check

-There are two ways to fetch endpoints remotely (metrics, healthz etc.).
+There are two ways to fetch endpoints remotely (metrics, healthz, etc.).
Both methods use HTTPS and a token.

*The first method* is executed against the APIServer and mostly used with Prometheus
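For example, a hedged sketch of such a remote check through the API server proxy (the API server address, node name, and token below are placeholders):

----
curl -k -H "Authorization: Bearer <token>" \
  https://<apiserver-host>:6443/api/v1/nodes/<node-name>/proxy/healthz
----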
6 changes: 3 additions & 3 deletions adoc/admin-monitoring-stack.adoc
@@ -73,7 +73,7 @@ Or add this entry to /etc/hosts
. Create certificates
+
You will need SSL certificates for the shared resources.
-If you are deploying in a pre-defined network environment, please get proper certificates from your network administrator.
+If you are deploying in a predefined network environment, please get proper certificates from your network administrator.
In this example, the domains are named after the components they represent. `prometheus.example.com`, `prometheus-alertmanager.example.com` and `grafana.example.com`

== Installation
@@ -411,7 +411,7 @@ The configuration sets one "receiver" to get notified by email when a node meets
* Node has memory pressure
* Node has disk pressure

-The first two are critical because the node can not accept new pods, the last two are just warnings.
+The first two are critical because the node cannot accept new pods, the last two are just warnings.

The Alertmanager configuration can be added to [path]`prometheus-config-values.yaml` by adding the `alertmanagerFiles` section.
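A minimal sketch of such a section, assuming the upstream chart accepts an inline `alertmanager.yml` under `alertmanagerFiles` (the addresses, SMTP host, and credentials below are placeholders):

----
cat <<EOF >> prometheus-config-values.yaml
alertmanagerFiles:
  alertmanager.yml:
    route:
      receiver: admin-email
    receivers:
      - name: admin-email
        email_configs:
          - to: admin@example.com
            from: alertmanager@example.com
            smarthost: smtp.example.com:587
            auth_username: alertmanager@example.com
            auth_password: <password>
EOF
----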

@@ -681,7 +681,7 @@ You can find a couple of dashboard examples for {productname} in the https://git

=== Prometheus Jobs

-The Prometheus upstream helm chart includes the following pre-defined jobs that will scrapes metrics from these jobs using service discovery.
+The Prometheus upstream helm chart includes the following predefined jobs that will scrapes metrics from these jobs using service discovery.

* prometheus: Get metrics from prometheus server
* kubernetes-apiservers: Get metrics from {kube} apiserver