From f0ae6bbfdc6c251ac021c9e39e1e1b440e411943 Mon Sep 17 00:00:00 2001 From: <> Date: Wed, 14 Aug 2024 07:40:18 +0000 Subject: [PATCH] Deployed 0a72090 with MkDocs version: 1.6.0 --- index.html | 2 +- search/search_index.json | 2 +- set-variables-group-vars/index.html | 5 +++++ sitemap.xml | 24 ++++++++++++------------ sitemap.xml.gz | Bin 349 -> 349 bytes 5 files changed, 19 insertions(+), 14 deletions(-) diff --git a/index.html b/index.html index 21423b6d..670ae7b5 100644 --- a/index.html +++ b/index.html @@ -167,5 +167,5 @@

Need Help? to change directories ( cd .. to go up to the parent directory) mkdir to create a new directory Copy/paste the following and hit enter: git clone https://github.com/IBM/Ansible-OpenShift-Provisioning.git Change into the newly created directory The commands and output should resemble the following example: $ pwd /Users/example-user $ mkdir ansible-project $ cd ansible-project/ $ git clone https://github.com/IBM/Ansible-OpenShift-Provisioning.git Cloning into 'Ansible-OpenShift-Provisioning'... remote: Enumerating objects: 3472, done. remote: Counting objects: 100% (200/200), done. remote: Compressing objects: 100% (57/57), done. remote: Total 3472 (delta 152), reused 143 (delta 143), pack-reused 3272 Receiving objects: 100% (3472/3472), 506.29 KiB | 1.27 MiB/s, done. Resolving deltas: 100% (1699/1699), done. $ ls Ansible-OpenShift-Provisioning $ cd Ansible-OpenShift-Provisioning/ $ ls CHANGELOG.md README.md docs mkdocs.yaml roles LICENSE ansible.cfg inventories playbooks Get Pull Secret # In a web browser, navigate to Red Hat's Hybrid Cloud Console , click the text that says 'Copy pull secret' and save it for the next step. Gather Environment Information # You will need a lot of information about the environment this cluster will be set-up in. You will need the help of at least your IBM zSystems infrastructure team so they can provision you a storage group. You'll also need them to provide you with IP address range, hostnames, subnet, gateway, how much disk space you have to work with, etc. A full list of variables needed are found on the next page. Many of them are filled in with defaults or are optional. Please take your time. I would recommend having someone on stand-by in case you need more information or need to ask a question about the environment.","title":"1 Get Info"},{"location":"get-info/#step-1-get-info","text":"","title":"Step 1: Get Info"},{"location":"get-info/#get-repository","text":"Open the terminal Navigate to a folder (AKA directory) where you would like to store this project. Either do so graphically, or use the command-line. Here are some helpful commands for doing so: pwd to see what directory you're currently in ls to list child directories cd to change directories ( cd .. to go up to the parent directory) mkdir to create a new directory Copy/paste the following and hit enter: git clone https://github.com/IBM/Ansible-OpenShift-Provisioning.git Change into the newly created directory The commands and output should resemble the following example: $ pwd /Users/example-user $ mkdir ansible-project $ cd ansible-project/ $ git clone https://github.com/IBM/Ansible-OpenShift-Provisioning.git Cloning into 'Ansible-OpenShift-Provisioning'... remote: Enumerating objects: 3472, done. remote: Counting objects: 100% (200/200), done. remote: Compressing objects: 100% (57/57), done. remote: Total 3472 (delta 152), reused 143 (delta 143), pack-reused 3272 Receiving objects: 100% (3472/3472), 506.29 KiB | 1.27 MiB/s, done. Resolving deltas: 100% (1699/1699), done. 
$ ls Ansible-OpenShift-Provisioning $ cd Ansible-OpenShift-Provisioning/ $ ls CHANGELOG.md README.md docs mkdocs.yaml roles LICENSE ansible.cfg inventories playbooks","title":"Get Repository"},{"location":"get-info/#get-pull-secret","text":"In a web browser, navigate to Red Hat's Hybrid Cloud Console , click the text that says 'Copy pull secret' and save it for the next step.","title":"Get Pull Secret"},{"location":"get-info/#gather-environment-information","text":"You will need a lot of information about the environment this cluster will be set-up in. You will need the help of at least your IBM zSystems infrastructure team so they can provision you a storage group. You'll also need them to provide you with IP address range, hostnames, subnet, gateway, how much disk space you have to work with, etc. A full list of variables needed are found on the next page. Many of them are filled in with defaults or are optional. Please take your time. I would recommend having someone on stand-by in case you need more information or need to ask a question about the environment.","title":"Gather Environment Information"},{"location":"prerequisites/","text":"Prerequisites # Red Hat # Account ( Sign Up ) License or free trial of Red Hat OpenShift Container Platform for IBM Z systems - s390x architecture (comes with the required licenses for Red Hat Enterprise Linux (RHEL) and CoreOS) IBM zSystems # Hardware Management Console (HMC) access on IBM zSystems or LinuxONE In order to use the playbook that automates the creation of the KVM host Dynamic Partition Manager (DPM) mode is required. If DPM mode is not an option for your environment, that playbook can be skipped, but a bare-metal RHEL server must be set-up on an LPAR manually (Filipe Miranda's how-to article ) before moving on. Once that is done, continue with the playbook 3 that sets up the KVM host. For a minimum installation, at least: 6 Integrated Facilities for Linux (IFLs) with SMT2 enabled 85 GB of RAM An FCP storage group created with 1 TB of disk space 8 IPv4 addresses File Server # A file server accessible from your IBM zSystems / LinuxONE server. Either FTP or HTTP service configured and active. Once a RHEL server is installed natively on the LPAR, pre-existing or configured by this automation, (i.e. the KVM host), you can use that as the file server. If you are not using a pre-existing KVM host(s) and need to create them using this automation, you must use an FTP server because the HMC does not support HTTP. A user with sudo and SSH access on that server. A DVD ISO file of Red Hat Enterprise Linux (RHEL) 8 for s390x architecture mounted in an accessible folder (e.g. /home/ /rhel/ for FTP or /var/www/html/rhel for HTTP) If you do not have RHEL for s390x yet, go to the Red Hat Customer Portal and download it. Under 'Product Variant' use the drop-down menu to select 'Red Hat Enterprise Linux for IBM z Systems' Double-check it's for version 8 and for s390x architecture Then scroll down to Red Hat Enterprise Linux 8.x Binary DVD and click on the 'Download Now' button. To pull the image directly from the command-line of your file server, copy the link for the 'Download Now' button and use wget to pull it down. wget \"https://access.cdn.redhat.com/content/origin/files/sha256/13/13[...]40/rhel-8.7-s390x-dvd.iso?user=6[...]e\" Don't forget to mount it too: FTP: mount /home//rhel or HTTP: mount /var/www/html/rhel A folder created to store config files (e.g. 
/home/user/ocp-config for FTP or /var/www/html/ocp-config for HTTP) For FTP: sudo mkdir /home//ocp-config or HTTP: sudo mkdir /var/www/html/ocp-config Ansible Controller # The computer/virtual machine running Ansible, sometimes referred to as localhost. Must be running a MacOS or Linux operating system. Network access to your IBM zSystems / LinuxONE hardware All you need to run Ansible is a terminal and a text editor. However, an IDE like VS Code is highly recommended for an integrated, user-friendly experience with helpful extensions like YAML . Python3 installed: MacOS, first install Homebrew package manager: /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\" then install Python3 brew install python3 #MacOS Fedora: sudo dnf install python3 #Fedora Debian: sudo apt install python3 #Debian Once Python3 is installed, you also need Ansible version 2.9 or above: pip3 install ansible Once Ansible is installed, you will need a few collections from Ansible Galaxy: ansible-galaxy collection install community.general community.crypto ansible.posix community.libvirt If you will be using these playbooks to automate the creation of the LPAR(s) that will act as KVM host(s) for the cluster, you will also need: ansible-galaxy collection install ibm.ibm_zhmc If you are using MacOS, you also need to have Xcode : xcode-select --install Jumphost for NAT network # If NAT is used for the KVM network instead of macvtap, an SSH tunnel through a jumphost is required to access the OCP cluster. To configure the SSH tunnel, expect is required on the jumphost. Expect will be installed during the setup of the bastion (4_setup_bastion.yaml playbook). If you do not have access to install additional packages there, install it manually on the jumphost by executing the following command: yum install expect In addition, make sure that python3 is installed on the jumphost, otherwise Ansible might fail to run its tasks. You can install python3 manually by executing the following command: yum install python3","title":"Prerequisites"},{"location":"prerequisites/#prerequisites","text":"","title":"Prerequisites"},{"location":"prerequisites/#red-hat","text":"Account ( Sign Up ) License or free trial of Red Hat OpenShift Container Platform for IBM Z systems - s390x architecture (comes with the required licenses for Red Hat Enterprise Linux (RHEL) and CoreOS)","title":"Red Hat"},{"location":"prerequisites/#ibm-zsystems","text":"Hardware Management Console (HMC) access on IBM zSystems or LinuxONE In order to use the playbook that automates the creation of the KVM host, Dynamic Partition Manager (DPM) mode is required. If DPM mode is not an option for your environment, that playbook can be skipped, but a bare-metal RHEL server must be set up on an LPAR manually (Filipe Miranda's how-to article ) before moving on. Once that is done, continue with playbook 3, which sets up the KVM host. For a minimum installation, at least: 6 Integrated Facilities for Linux (IFLs) with SMT2 enabled 85 GB of RAM An FCP storage group created with 1 TB of disk space 8 IPv4 addresses","title":"IBM zSystems"},{"location":"prerequisites/#file-server","text":"A file server accessible from your IBM zSystems / LinuxONE server. Either FTP or HTTP service configured and active. Once a RHEL server is installed natively on the LPAR, pre-existing or configured by this automation (i.e. the KVM host), you can use that as the file server.
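For reference, here is a minimal sketch of preparing such a file server over HTTP on a RHEL-based host, using the directory layout from the examples above; the package name, ISO filename, and firewall handling are assumptions to adapt to your environment.

```bash
# Install and start a web server on the file server host (assumes a RHEL-family OS).
sudo dnf install -y httpd
sudo systemctl enable --now httpd

# Directories matching the examples above: RHEL ISO content and OCP config files.
sudo mkdir -p /var/www/html/rhel /var/www/html/ocp-config

# Mount the downloaded DVD ISO read-only (the filename here is illustrative).
sudo mount -o loop,ro rhel-8-s390x-dvd.iso /var/www/html/rhel

# Quick check from another machine that the content is reachable
# (open the http service in the firewall first if needed).
curl -s http://file-server-ip/rhel/ | head
```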
If you are not using pre-existing KVM host(s) and need to create them using this automation, you must use an FTP server because the HMC does not support HTTP. A user with sudo and SSH access on that server. A DVD ISO file of Red Hat Enterprise Linux (RHEL) 8 for s390x architecture mounted in an accessible folder (e.g. /home/ /rhel/ for FTP or /var/www/html/rhel for HTTP) If you do not have RHEL for s390x yet, go to the Red Hat Customer Portal and download it. Under 'Product Variant' use the drop-down menu to select 'Red Hat Enterprise Linux for IBM z Systems' Double-check it's for version 8 and for s390x architecture. Then scroll down to Red Hat Enterprise Linux 8.x Binary DVD and click on the 'Download Now' button. To pull the image directly from the command-line of your file server, copy the link for the 'Download Now' button and use wget to pull it down. wget \"https://access.cdn.redhat.com/content/origin/files/sha256/13/13[...]40/rhel-8.7-s390x-dvd.iso?user=6[...]e\" Don't forget to mount it too: FTP: mount /home//rhel or HTTP: mount /var/www/html/rhel A folder created to store config files (e.g. /home/user/ocp-config for FTP or /var/www/html/ocp-config for HTTP) For FTP: sudo mkdir /home//ocp-config or HTTP: sudo mkdir /var/www/html/ocp-config","title":"File Server"},{"location":"prerequisites/#ansible-controller","text":"The computer/virtual machine running Ansible, sometimes referred to as localhost. Must be running a MacOS or Linux operating system. Network access to your IBM zSystems / LinuxONE hardware All you need to run Ansible is a terminal and a text editor. However, an IDE like VS Code is highly recommended for an integrated, user-friendly experience with helpful extensions like YAML . Python3 installed: MacOS, first install Homebrew package manager: /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\" then install Python3 brew install python3 #MacOS Fedora: sudo dnf install python3 #Fedora Debian: sudo apt install python3 #Debian Once Python3 is installed, you also need Ansible version 2.9 or above: pip3 install ansible Once Ansible is installed, you will need a few collections from Ansible Galaxy: ansible-galaxy collection install community.general community.crypto ansible.posix community.libvirt If you will be using these playbooks to automate the creation of the LPAR(s) that will act as KVM host(s) for the cluster, you will also need: ansible-galaxy collection install ibm.ibm_zhmc If you are using MacOS, you also need to have Xcode : xcode-select --install","title":"Ansible Controller"},{"location":"prerequisites/#jumphost-for-nat-network","text":"If NAT is used for the KVM network instead of macvtap, an SSH tunnel through a jumphost is required to access the OCP cluster. To configure the SSH tunnel, expect is required on the jumphost. Expect will be installed during the setup of the bastion (4_setup_bastion.yaml playbook). If you do not have access to install additional packages there, install it manually on the jumphost by executing the following command: yum install expect In addition, make sure that python3 is installed on the jumphost, otherwise Ansible might fail to run its tasks. You can install python3 manually by executing the following command: yum install python3","title":"Jumphost for NAT network"},{"location":"run-the-playbooks-for-abi/","text":"Run the Playbooks # Prerequisites # KVM host with root user access or user with sudo privileges. Note: # This playbook only supports a single node cluster (SNO) on KVM using ABI.
As of now, only macvtap is supported for Agent Based Installation (ABI) on KVM. Steps: # Step-1: Initial Setup for ABI # Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Update variables in Sections 1 - 9 and Section 12 - OpenShift Settings, and update variables in Section 19 ( Agent Based Installer ) in all.yaml before running the playbooks. In case of SNO, Section 9 ( Compute Nodes ) needs to be commented out or removed. The first playbook to be run is 0_setup.yaml , which will create the inventory file for ABI and add the SSH key to the KVM host. Run this shell command: ansible-playbook playbooks/0_setup.yaml Run each part step-by-step by running one playbook at a time, or all at once using playbooks/master_playbook_for_abi.yaml . Here's the full list of playbooks to be run in order; full descriptions of each can be found further down the page: 0_setup.yaml ( code ) 3_setup_kvm_host.yaml ( code ) 4_create_bastion.yaml ( code ) 5_setup_bastion.yaml ( code ) create_abi_cluster.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/master_playbook_for_abi.yaml If the process fails with an error, go through the steps in the troubleshooting page. Step-2: Setup Playbook (0_setup.yaml) # Overview # First-time setup of the Ansible Controller, the machine running Ansible. Outcomes # Packages and Ansible Galaxy collections are confirmed to be installed properly. host_vars files are confirmed to match KVM host(s) hostnames. Ansible inventory is templated out and working properly. SSH key generated for Ansible passwordless authentication. SSH agent is set up on the Ansible Controller. Ansible SSH key is copied to the file server. Notes # You can use an existing SSH key as your Ansible key, or have Ansible create one for you. It is highly recommended to use one without a passphrase. Step-3: Setup KVM Host Playbook (3_setup_kvm_host.yaml) # Overview # Configures the RHEL server(s) installed natively on the LPAR(s) to act as virtualization hypervisor(s) to host the virtual machines that make up the eventual cluster. Outcomes # Ansible SSH key is copied to all KVM hosts for passwordless authentication. RHEL subscription is auto-attached to all KVM hosts. Software packages specified in group_vars/all.yaml have been installed. Cockpit console enabled for Graphical User Interface via web browser. Go to http://kvm-ip-here:9090 to view it. Libvirt is started and enabled. Logical volume group that was created during kickstart is extended to fill all available space. A macvtap bridge has been created on the host's networking interface. Notes # If you're using a pre-existing LPAR, take a look at roles/configure_storage/tasks/main.yaml to make sure that the commands that will be run to extend the logical volume will work. Storage configurations can vary widely. The values there are the defaults from using autopart during kickstart. Also be aware that if lpar.storage_group_2.auto_config is True, the role roles/configure_storage/tasks/main.yaml will be non-idempotent, meaning it will fail if you run it twice. Step-4: Create Bastion Playbook (4_create_bastion.yaml) # Overview # Creates the bastion KVM guest on the first KVM host. The bastion hosts essential services for the cluster.
If you already have a bastion server, that can be used instead of running this playbook. Outcomes # Bastion configs are templated out to the file server. Bastion is booted using virt-install. Bastion is kickstarted for fully automated setup of the operating system. Notes # This can be a particularly sticky part of the process. If any of the variables used in the virt-install or kickstart are off, the bastion won't be able to boot. Recommend watching it come up from the first KVM host's cockpit. Go to http://kvm-ip-here:9090 via web-browser to view it. You'll have to sign in, enable administrative access (top right), and then click on the virtual machines tab on the left-hand toolbar. Step-5: Setup Bastion Playbook (5_setup_bastion.yaml) # Overview # Configuration of the bastion to host essential infrastructure services for the cluster. Can be first-time setup or use an existing server. Outcomes # Ansible SSH key copied to bastion for passwordless authentication. Software packages specified in group_vars/all.yaml have been installed. An OCP-specific SSH key is generated for passing into the install-config (then passed to the nodes). Firewall is configured to permit traffic through the necessary ports. Domain Name Server (DNS) configured to resolve the cluster's IP addresses and APIs. Only done if env.bastion.options.dns is true. DNS is checked to make sure all the necessary Fully Qualified Domain Names, including APIs, resolve properly. Also ensures outside access is working. High Availability Proxy (HAProxy) load balancer is configured. Only done if env.bastion.options.loadbalancer.on_bastion is true. If the cluster is to be highly available (meaning spread across more than one LPAR), an OpenVPN server is set up on the bastion to allow the KVM hosts to communicate with each other. OpenVPN clients are configured on the KVM hosts. CoreOS rootfs is pulled to the bastion if not already there. OCP client and installer are pulled down if not there already. oc, kubectl and openshift-install binaries are installed. OCP install-config is templated and backed up. In disconnected mode, if the platform is mirrored (currently only legacy), the image content source policy and additionalTrustBundle are also patched. Manifests are created. OCP install directory found at /root/ocpinst/ is created and populated with necessary files. Ignition files for the bootstrap, control, and compute nodes are transferred to an HTTP-accessible directory for booting nodes. Notes # The stickiest parts are the DNS setup and the get_ocp role at the end. Step-6: Master Playbook (master_playbook_for_abi) # Overview # Use this playbook to run all 5 required playbooks (0_setup, 3_setup_kvm_host, 4_create_bastion, 5_setup_bastion, create_abi_cluster) at once. Outcomes # Same as all the above outcomes for all required playbooks. At the end, you will have an OpenShift cluster deployed and first-time login credentials. Destroy ABI Cluster # Overview # Destroy the ABI Cluster and other resources created as part of the installation Procedure # Run the playbook destroy_abi_cluster.yaml to destroy all the resources created during installation ansible-playbook playbooks/destroy_abi_cluster.yaml destroy_abi_cluster Playbook # Overview # Delete all the resources on the ABI Cluster. Destroy the Bastion, Compute and Control Nodes. Outcomes # Monitors deletion of Compute Machines and Control Machines. Destroys VMs of Bastion, Compute and Control.
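After the destroy playbook finishes, one way to double-check the teardown is to look at the KVM host directly; this is only a sketch and assumes you can SSH to the KVM host and that libvirt's default storage pool was used (guest and pool names depend on your configuration).

```bash
# On the KVM host: confirm the bastion and cluster node guests are gone.
virsh list --all

# Optionally confirm there are no leftover disk volumes for those guests
# (replace 'default' with the storage pool used in your setup).
virsh vol-list --pool default
```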
Test Playbook (test.yaml) # Overview # Use this playbook for your testing purposes, if needed.","title":"Run the Playbooks"},{"location":"run-the-playbooks-for-abi/#run-the-playbooks","text":"","title":"Run the Playbooks"},{"location":"run-the-playbooks-for-abi/#prerequisites","text":"KVM host with root user access or user with sudo privileges.","title":"Prerequisites"},{"location":"run-the-playbooks-for-abi/#note","text":"This playbook only supports a single node cluster (SNO) on KVM using ABI. As of now, only macvtap is supported for Agent Based Installation (ABI) on KVM.","title":"Note:"},{"location":"run-the-playbooks-for-abi/#steps","text":"","title":"Steps:"},{"location":"run-the-playbooks-for-abi/#step-1-initial-setup-for-abi","text":"Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Update variables in Sections 1 - 9 and Section 12 - OpenShift Settings, and update variables in Section 19 ( Agent Based Installer ) in all.yaml before running the playbooks. In case of SNO, Section 9 ( Compute Nodes ) needs to be commented out or removed. The first playbook to be run is 0_setup.yaml , which will create the inventory file for ABI and add the SSH key to the KVM host. Run this shell command: ansible-playbook playbooks/0_setup.yaml Run each part step-by-step by running one playbook at a time, or all at once using playbooks/master_playbook_for_abi.yaml . Here's the full list of playbooks to be run in order; full descriptions of each can be found further down the page: 0_setup.yaml ( code ) 3_setup_kvm_host.yaml ( code ) 4_create_bastion.yaml ( code ) 5_setup_bastion.yaml ( code ) create_abi_cluster.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/master_playbook_for_abi.yaml If the process fails with an error, go through the steps in the troubleshooting page.","title":"Step-1: Initial Setup for ABI"},{"location":"run-the-playbooks-for-abi/#step-2-setup-playbook-0_setupyaml","text":"","title":"Step-2: Setup Playbook (0_setup.yaml)"},{"location":"run-the-playbooks-for-abi/#overview","text":"First-time setup of the Ansible Controller, the machine running Ansible.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes","text":"Packages and Ansible Galaxy collections are confirmed to be installed properly. host_vars files are confirmed to match KVM host(s) hostnames. Ansible inventory is templated out and working properly. SSH key generated for Ansible passwordless authentication. SSH agent is set up on the Ansible Controller. Ansible SSH key is copied to the file server.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#notes","text":"You can use an existing SSH key as your Ansible key, or have Ansible create one for you.
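If you prefer to bring your own key instead of letting Ansible generate one, a passphrase-less key can be created ahead of time; the path and key type below are just suggestions.

```bash
# Generate a passphrase-less key for Ansible to use (path and type are a suggestion).
ssh-keygen -t ed25519 -N "" -f ~/.ssh/ansible_ocp_key

# Optionally pre-copy it to a host you plan to manage, e.g. the KVM host.
ssh-copy-id -i ~/.ssh/ansible_ocp_key.pub user@kvm-host
```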
It is highly recommended to use one without a passphrase.","title":"Notes"},{"location":"run-the-playbooks-for-abi/#step-3-setup-kvm-host-playbook-3_setup_kvm_hostyaml","text":"","title":"Step-3: Setup KVM Host Playbook (3_setup_kvm_host.yaml)"},{"location":"run-the-playbooks-for-abi/#overview_1","text":"Configures the RHEL server(s) installed natively on the LPAR(s) to act as virtualization hypervisor(s) to host the virtual machines that make up the eventual cluster.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes_1","text":"Ansible SSH key is copied to all KVM hosts for passwordless authentication. RHEL subscription is auto-attached to all KVM hosts. Software packages specified in group_vars/all.yaml have been installed. Cockpit console enabled for Graphical User Interface via web browser. Go to http://kvm-ip-here:9090 to view it. Libvirt is started and enabled. Logical volume group that was created during kickstart is extended to fill all available space. A macvtap bridge has been created on the host's networking interface.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#notes_1","text":"If you're using a pre-existing LPAR, take a look at roles/configure_storage/tasks/main.yaml to make sure that the commands that will be run to extend the logical volume will work. Storage configurations can vary widely. The values there are the defaults from using autopart during kickstart. Also be aware that if lpar.storage_group_2.auto_config is True, the role roles/configure_storage/tasks/main.yaml will be non-idempotent. Meaning, it will fail if you run it twice.","title":"Notes"},{"location":"run-the-playbooks-for-abi/#step-4-create-bastion-playbook-4_create_bastionyaml","text":"","title":"Step-4: Create Bastion Playbook (4_create_bastion.yaml)"},{"location":"run-the-playbooks-for-abi/#overview_2","text":"Creates the bastion KVM guest on the first KVM host. The bastion hosts essential services for the cluster. If you already have a bastion server, that can be used instead of running this playbook.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes_2","text":"Bastion configs are templated out to the file server. Bastion is booted using virt-install. Bastion is kickstarted for fully automated setup of the operating system.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#notes_2","text":"This can be a particularly sticky part of the process. If any of the variables used in the virt-install or kickstart are off, the bastion won't be able to boot. Recommend watching it come up from the first KVM host's cockpit. Go to http://kvm-ip-here:9090 via web-browser to view it. You'll have to sign in, enable administrative access (top right), and then click on the virtual machines tab on the left-hand toolbar.","title":"Notes"},{"location":"run-the-playbooks-for-abi/#step-5-setup-bastion-playbook-5_setup_bastionyaml","text":"","title":"Step-5: Setup Bastion Playbook (5_setup_bastion.yaml)"},{"location":"run-the-playbooks-for-abi/#overview_3","text":"Configuration of the bastion to host essential infrastructure services for the cluster. Can be first-time setup or use an existing server.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes_3","text":"Ansible SSH key copied to bastion for passwordless authentication. Software packages specified in group_vars/all.yaml have been installed. An OCP-specific SSH key is generated for passing into the install-config (then passed to the nodes). 
Firewall is configured to permit traffic through the necessary ports. Domain Name Server (DNS) configured to resolve the cluster's IP addresses and APIs. Only done if env.bastion.options.dns is true. DNS is checked to make sure all the necessary Fully Qualified Domain Names, including APIs, resolve properly. Also ensures outside access is working. High Availability Proxy (HAProxy) load balancer is configured. Only done if env.bastion.options.loadbalancer.on_bastion is true. If the cluster is to be highly available (meaning spread across more than one LPAR), an OpenVPN server is set up on the bastion to allow the KVM hosts to communicate with each other. OpenVPN clients are configured on the KVM hosts. CoreOS rootfs is pulled to the bastion if not already there. OCP client and installer are pulled down if not there already. oc, kubectl and openshift-install binaries are installed. OCP install-config is templated and backed up. In disconnected mode, if the platform is mirrored (currently only legacy), the image content source policy and additionalTrustBundle are also patched. Manifests are created. OCP install directory found at /root/ocpinst/ is created and populated with necessary files. Ignition files for the bootstrap, control, and compute nodes are transferred to an HTTP-accessible directory for booting nodes.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#notes_3","text":"The stickiest parts are the DNS setup and the get_ocp role at the end.","title":"Notes"},{"location":"run-the-playbooks-for-abi/#step-6-master-playbook-master_playbook_for_abi","text":"","title":"Step-6: Master Playbook (master_playbook_for_abi)"},{"location":"run-the-playbooks-for-abi/#overview_4","text":"Use this playbook to run all 5 required playbooks (0_setup, 3_setup_kvm_host, 4_create_bastion, 5_setup_bastion, create_abi_cluster) at once.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes_4","text":"Same as all the above outcomes for all required playbooks. At the end, you will have an OpenShift cluster deployed and first-time login credentials.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#destroy-abi-cluster","text":"","title":"Destroy ABI Cluster"},{"location":"run-the-playbooks-for-abi/#overview_5","text":"Destroy the ABI Cluster and other resources created as part of the installation","title":"Overview"},{"location":"run-the-playbooks-for-abi/#procedure","text":"Run the playbook destroy_abi_cluster.yaml to destroy all the resources created during installation ansible-playbook playbooks/destroy_abi_cluster.yaml","title":"Procedure"},{"location":"run-the-playbooks-for-abi/#destroy_abi_cluster-playbook","text":"","title":"destroy_abi_cluster Playbook"},{"location":"run-the-playbooks-for-abi/#overview_6","text":"Delete all the resources on the ABI Cluster. Destroy the Bastion, Compute and Control Nodes.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes_5","text":"Monitors deletion of Compute Machines and Control Machines. Destroys VMs of Bastion, Compute and Control.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#test-playbook-testyaml","text":"","title":"Test Playbook (test.yaml)"},{"location":"run-the-playbooks-for-abi/#overview_7","text":"Use this playbook for your testing purposes, if needed.","title":"Overview"},{"location":"run-the-playbooks-for-disconnected/","text":"Run the Playbooks # Overview # For installing disconnected clusters, you will mostly be following the same process as a standard connected cluster.
The main additional steps are mirroring the OCP images to another registry that is accessible to the cluster and, once the cluster is up, applying operator hub manifests such as the image content source policy and catalog source, generated by oc-mirror , to the cluster. The disconnected playbooks are listed below. Please refer to the 4 Run the Playbooks documentation for details on the rest of the playbooks: disconnected_mirror_artifacts.yaml ( code ) - Run before 6_create_nodes.yaml disconnected_apply_operator_manifests.yaml ( code ) - Run after 7_ocp_verification.yaml . Pre-requisites # A running registry where the OCP and operator hub images will be mirrored. If the CA of this registry is not automatically trusted, then keep the CA cert content handy to update in the inventory file. The CA cert is the file with which you do not need to skip TLS verification to access the registry. Make sure you have the required pull secrets handy. You will need 2 pull secrets, one to apply on the cluster and another which will be used for mirroring. The mirroring pull secret MUST have push access to the mirror registry as well as give you access to the Red Hat registries. A good way to create this would be to take the Red Hat pull secret from the Get Info page and do a podman login with credentials that have write access. cp -avrf /path/to/redhat-pull-secrets.json ./mirror-secret.json podman login -u admin -p admin --tls-verify=false --authfile=./mirror-secret.json cat ./mirror-secret.json | jq -r tostring A mirror host. This can be any host that can access the internet (mainly the registry being mirrored from) as well as the registry being mirrored to. The registries being mirrored from would typically be the Red Hat registries (registry.redhat.io, quay.io, etc.). The file server, configured as mentioned below. Appropriately updated variables in your all.yaml . Refer to the variables documentation. File Server # This configuration will take place on the file server mentioned under the File Server section in the overall pre-requisites documentation. The additional configurations are mentioned here. Make sure to have a directory housing the clients. For FTP: sudo mkdir /home//clients or HTTP: sudo mkdir /var/www/html/clients Make sure this directory contains a pre-downloaded oc-mirror binary in tar.gz format. Currently, the supported binary is available for x86_64 on the Red Hat Customer Portal OpenShift downloads page. It can also be found on mirror.openshift.com from 4.14 onwards for other architectures. NOTE # At this stage, only the oc-mirror binary is fetched from the File Server, so it is expected that the LPAR for the disconnected cluster can at least reach mirror.openshift.com to download the other artifacts for cluster installation. The platform-related image content source policy will be baked into the install config as part of the 5 Setup Bastion Playbook . For platform content, mirroring is supported both using the oc-mirror plugin and the legacy way. oc-mirror is used as the default, although it is possible to switch to using the legacy way of mirroring the platform separately as well. NOTE : Only the legacy way supports specifying your own org on the registry for the OCP images. Manifests generated by oc-mirror will be applied to the cluster once it is up. Disconnected Mirror Artifacts Playbook # Overview # Mirror the OCP platform and other necessary images to the mirror registry. Please run this playbook before you run the 6 Create Nodes Playbook and after the 0 Setup Playbook . Outcomes # Download oc and oc-mirror to the mirror host.
Template the mirror pull secret to the mirror host. Add the CA cert to the mirror host anchors if the CA is not trusted. Mirror the platform images using oc adm release mirror if legacy mirroring is enabled. Template the image set to the mirror host and then mirror it using the oc-mirror plugin. Copy the oc-mirror results to the Ansible controller to apply to the cluster in future steps. Notes # The platform can be mirrored both using oc-mirror and the legacy way, using oc adm catalog mirror . oc-mirror is the default method, but you can also use legacy mirroring. oc-mirror manifests will only be applied on the cluster post verification, using the below playbook. This playbook can be run at any stage after the 0 Setup playbook. Make sure to run this before the cluster starts pulling the images from the registry, which typically happens when the Create Nodes Playbook is run. Disconnected apply oc mirror manifests to cluster Playbook # Overview # Post cluster creation, oc-mirror manifests are applied to the cluster. Please run this playbook after the 7 OCP Verification Playbook . Outcomes # Copy the oc-mirror results manifests to the bastion. Apply the copied manifests to the cluster. Disable default content sources.","title":"Run the Playbooks (Disconnected)"},{"location":"run-the-playbooks-for-disconnected/#run-the-playbooks","text":"","title":"Run the Playbooks"},{"location":"run-the-playbooks-for-disconnected/#overview","text":"For installing disconnected clusters, you will mostly be following the same process as a standard connected cluster. The main additional steps are mirroring the OCP images to another registry that is accessible to the cluster and, once the cluster is up, applying operator hub manifests such as the image content source policy and catalog source, generated by oc-mirror , to the cluster. The disconnected playbooks are listed below. Please refer to the 4 Run the Playbooks documentation for details on the rest of the playbooks: disconnected_mirror_artifacts.yaml ( code ) - Run before 6_create_nodes.yaml disconnected_apply_operator_manifests.yaml ( code ) - Run after 7_ocp_verification.yaml .","title":"Overview"},{"location":"run-the-playbooks-for-disconnected/#pre-requisites","text":"A running registry where the OCP and operator hub images will be mirrored. If the CA of this registry is not automatically trusted, then keep the CA cert content handy to update in the inventory file. The CA cert is the file with which you do not need to skip TLS verification to access the registry. Make sure you have the required pull secrets handy. You will need 2 pull secrets, one to apply on the cluster and another which will be used for mirroring. The mirroring pull secret MUST have push access to the mirror registry as well as give you access to the Red Hat registries. A good way to create this would be to take the Red Hat pull secret from the Get Info page and do a podman login with credentials that have write access. cp -avrf /path/to/redhat-pull-secrets.json ./mirror-secret.json podman login -u admin -p admin --tls-verify=false --authfile=./mirror-secret.json cat ./mirror-secret.json | jq -r tostring A mirror host. This can be any host that can access the internet (mainly the registry being mirrored from) as well as the registry being mirrored to. The registries being mirrored from would typically be the Red Hat registries (registry.redhat.io, quay.io, etc.). The file server, configured as mentioned below. Appropriately updated variables in your all.yaml .
Refer to the variables documentation.","title":"Pre-requisites"},{"location":"run-the-playbooks-for-disconnected/#file-server","text":"This configuration will take place on the file server mentioned under the File Server section in the overall pre-requisites documentation. The additional configurations are mentioned here. Make sure to have a directory housing the clients. For FTP: sudo mkdir /home//clients or HTTP: sudo mkdir /var/www/html/clients Make sure this directory contains a pre-downloaded oc-mirror binary in tar.gz format. Currently, the supported binary is available for x86_64 on the Red Hat Customer Portal OpenShift downloads page. It can also be found on mirror.openshift.com from 4.14 onwards for other architectures.","title":"File Server"},{"location":"run-the-playbooks-for-disconnected/#note","text":"At this stage, only the oc-mirror binary is fetched from the File Server, so it is expected that the LPAR for the disconnected cluster can at least reach mirror.openshift.com to download the other artifacts for cluster installation. The platform-related image content source policy will be baked into the install config as part of the 5 Setup Bastion Playbook . For platform content, mirroring is supported both using the oc-mirror plugin and the legacy way. oc-mirror is used as the default, although it is possible to switch to using the legacy way of mirroring the platform separately as well. NOTE : Only the legacy way supports specifying your own org on the registry for the OCP images. Manifests generated by oc-mirror will be applied to the cluster once it is up.","title":"NOTE"},{"location":"run-the-playbooks-for-disconnected/#disconnected-mirror-artifacts-playbook","text":"","title":"Disconnected Mirror Artifacts Playbook"},{"location":"run-the-playbooks-for-disconnected/#overview_1","text":"Mirror the OCP platform and other necessary images to the mirror registry. Please run this playbook before you run the 6 Create Nodes Playbook and after the 0 Setup Playbook .","title":"Overview"},{"location":"run-the-playbooks-for-disconnected/#outcomes","text":"Download oc and oc-mirror to the mirror host. Template the mirror pull secret to the mirror host. Add the CA cert to the mirror host anchors if the CA is not trusted. Mirror the platform images using oc adm release mirror if legacy mirroring is enabled. Template the image set to the mirror host and then mirror it using the oc-mirror plugin. Copy the oc-mirror results to the Ansible controller to apply to the cluster in future steps.","title":"Outcomes"},{"location":"run-the-playbooks-for-disconnected/#notes","text":"The platform can be mirrored both using oc-mirror and the legacy way, using oc adm catalog mirror . oc-mirror is the default method, but you can also use legacy mirroring. oc-mirror manifests will only be applied on the cluster post verification, using the below playbook. This playbook can be run at any stage after the 0 Setup playbook. Make sure to run this before the cluster starts pulling the images from the registry, which typically happens when the Create Nodes Playbook is run.","title":"Notes"},{"location":"run-the-playbooks-for-disconnected/#disconnected-apply-oc-mirror-manifests-to-cluster-playbook","text":"","title":"Disconnected apply oc mirror manifests to cluster Playbook"},{"location":"run-the-playbooks-for-disconnected/#overview_2","text":"Post cluster creation, oc-mirror manifests are applied to the cluster.
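The invocation follows the same pattern as the other playbooks in this repository; a sketch, using the playbook name given above:

```bash
# Apply the oc-mirror manifests once the cluster install has been verified
# (see the note that follows about running this after the verification playbook).
ansible-playbook playbooks/disconnected_apply_operator_manifests.yaml
```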
Please run this playbook after the 7 OCP Verification Playbook .","title":"Overview"},{"location":"run-the-playbooks-for-disconnected/#outcomes_1","text":"Copy the oc-mirror results manifests to the bastion. Apply the copied manifests to the cluster. Disable default content sources.","title":"Outcomes"},{"location":"run-the-playbooks-for-hcp/","text":"Run the Playbooks # Prerequisites # Running OCP Cluster ( Management Cluster ) KVM host with root user access or user with sudo privileges if compute nodes are KVM. zVM host ( bastion ) and nodes if compute nodes are zVM. Network Prerequisites # DNS entry to resolve api.${cluster}.${domain} , api-int.${cluster}.${domain} , *apps.${cluster}.${domain} to a load balancer deployed to redirect incoming traffic to the ingresses pod ( Bastion ). If using dynamic IPs for agents, make sure you have entries in the DHCP server for the MAC addresses you are using in the installation to map to IPv4 addresses; along with this, the DHCP server should point your IPs to the nameserver which you have configured. Note: # As of now, we support only macvtap for Hosted Control Plane Agent Based Installation for KVM compute nodes. Supported network modes for zVM : vswitch, OSA, RoCE, Hipersockets Step-1: Setup Ansible Vault for Management Cluster Credentials # Overview # Creating an encrypted file for storing Management Cluster Credentials and other passwords. Steps: # The ansible-vault create command is used to create the encrypted file. Create an encrypted file in the playbooks directory and set the Vault password (the below command will prompt for setting the Vault password). ansible-vault create playbooks/secrets.yaml Give the credentials of the Management Cluster in the encrypted file (created above) in the following format. kvm_host_password: '' bastion_root_pw: '' api_server: ':' user_name: '' password: '' You can edit the encrypted file using the below command ansible-vault edit playbooks/secrets.yaml Make sure you entered the Management Cluster credentials properly; incorrect credentials will cause problems while logging in to the cluster in further steps. Step-2: Initial Setup for Hosted Control Plane # Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Update variables as per the compute node type (zKVM/zVM) in hcp.yaml ( hcp.yaml.template ) before running the playbooks. The first playbook to be run is setup_for_hcp.yaml , which will create the inventory file for HCP and add the SSH key to the KVM host. Run this shell command: ansible-playbook playbooks/setup_for_hcp.yaml --ask-vault-pass Step-3: Create Hosted Cluster # Run each part step-by-step by running one playbook at a time, or all at once using hcp.yaml . Here's the full list of playbooks to be run in order; full descriptions of each can be found further down the page: create_hosted_cluster.yaml ( code ) create_agents_and_wait_for_install_complete.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/hcp.yaml --ask-vault-pass After installation, you can find the details of the cluster, like the kubeconfig and password, in the installation directory ( $HOME/ansible_workdir/ ) Description for Playbooks # setup_for_hcp Playbook # Overview # First-time setup of the Ansible Controller, the machine running Ansible.
Outcomes # Inventory file for HCP is created. SSH key generated for Ansible passwordless authentication. Ansible SSH key is copied to the KVM host. Notes # You can use an existing SSH key as your Ansible key, or have Ansible create one for you. create_hosted_cluster Playbook # Overview # Creates and configures the bastion Creates AgentServiceConfig, HostedControlPlane, InfraEnv Resources, Downloads Images Outcomes # Install prerequisites on kvm_host Create bastion Configure bastion Log in to Management Cluster Creates AgentServiceConfig resource and required configmaps Deploys HostedControlPlane Creates InfraEnv resource and waits till ISO generation Download required Images to kvm_host (initrd.img and kernel.img) Download rootfs.img and configure httpd on bastion. create_agents_and_wait_for_install_complete Playbook # Overview # Boots the Agents, scales the Nodepool, and monitors all the required resources. Outcomes # Boot Agents Monitor the attachment of agents Approves the agents Scale up the nodepool Monitor agentmachines and machines creation Monitor the worker nodes attachment Configure HAProxy for Hosted workers Monitor the Cluster operators Display Login Credentials for Hosted Cluster Destroy the Hosted Cluster # Overview # Destroy the Hosted Control Plane and other resources created as part of the installation Procedure # Run the playbook destroy_cluster_hcp.yaml to destroy all the resources created during installation ansible-playbook playbooks/destroy_cluster_hcp.yaml --ask-vault-pass destroy_cluster_hcp Playbook # Overview # Delete all the resources on the Hosted Cluster Destroy the Hosted Control Plane Outcomes # Scale in the nodepool to 0 Monitors the deletion of workers, agent machines and machines. Deletes the agents Deletes InfraEnv Resource Destroys the Hosted Control Plane Deletes AgentServiceConfig Deletes the images downloaded on the KVM host Destroys VMs of Bastion and Agents Notes # Overriding OCP Release Image for HCP # If you want to use any other image as the OCP release image for HCP, you can override it with an environment variable. export HCP_RELEASE_IMAGE=\"\"","title":"Run the Playbooks (HostedControlPlane)"},{"location":"run-the-playbooks-for-hcp/#run-the-playbooks","text":"","title":"Run the Playbooks"},{"location":"run-the-playbooks-for-hcp/#prerequisites","text":"Running OCP Cluster ( Management Cluster ) KVM host with root user access or user with sudo privileges if compute nodes are KVM. zVM host ( bastion ) and nodes if compute nodes are zVM.","title":"Prerequisites"},{"location":"run-the-playbooks-for-hcp/#network-prerequisites","text":"DNS entry to resolve api.${cluster}.${domain} , api-int.${cluster}.${domain} , *apps.${cluster}.${domain} to a load balancer deployed to redirect incoming traffic to the ingresses pod ( Bastion ). If using dynamic IPs for agents, make sure you have entries in the DHCP server for the MAC addresses you are using in the installation to map to IPv4 addresses; along with this, the DHCP server should point your IPs to the nameserver which you have configured.","title":"Network Prerequisites"},{"location":"run-the-playbooks-for-hcp/#note","text":"As of now, we support only macvtap for Hosted Control Plane Agent Based Installation for KVM compute nodes.
Supported network modes for zVM : vswitch, OSA, RoCE, Hipersockets","title":"Note:"},{"location":"run-the-playbooks-for-hcp/#step-1-setup-ansible-vault-for-management-cluster-credentials","text":"","title":"Step-1: Setup Ansible Vault for Management Cluster Credentials"},{"location":"run-the-playbooks-for-hcp/#overview","text":"Creating an encrypted file for storing Management Cluster Credentials and other passwords.","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#steps","text":"The ansible-vault create command is used to create the encrypted file. Create an encrypted file in the playbooks directory and set the Vault password (the below command will prompt for setting the Vault password). ansible-vault create playbooks/secrets.yaml Give the credentials of the Management Cluster in the encrypted file (created above) in the following format. kvm_host_password: '' bastion_root_pw: '' api_server: ':' user_name: '' password: '' You can edit the encrypted file using the below command ansible-vault edit playbooks/secrets.yaml Make sure you entered the Management Cluster credentials properly; incorrect credentials will cause problems while logging in to the cluster in further steps.","title":"Steps:"},{"location":"run-the-playbooks-for-hcp/#step-2-initial-setup-for-hosted-control-plane","text":"Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Update variables as per the compute node type (zKVM/zVM) in hcp.yaml ( hcp.yaml.template ) before running the playbooks. The first playbook to be run is setup_for_hcp.yaml , which will create the inventory file for HCP and add the SSH key to the KVM host. Run this shell command: ansible-playbook playbooks/setup_for_hcp.yaml --ask-vault-pass","title":"Step-2: Initial Setup for Hosted Control Plane"},{"location":"run-the-playbooks-for-hcp/#step-3-create-hosted-cluster","text":"Run each part step-by-step by running one playbook at a time, or all at once using hcp.yaml . Here's the full list of playbooks to be run in order; full descriptions of each can be found further down the page: create_hosted_cluster.yaml ( code ) create_agents_and_wait_for_install_complete.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/hcp.yaml --ask-vault-pass After installation, you can find the details of the cluster, like the kubeconfig and password, in the installation directory ( $HOME/ansible_workdir/ )","title":"Step-3: Create Hosted Cluster"},{"location":"run-the-playbooks-for-hcp/#description-for-playbooks","text":"","title":"Description for Playbooks"},{"location":"run-the-playbooks-for-hcp/#setup_for_hcp-playbook","text":"","title":"setup_for_hcp Playbook"},{"location":"run-the-playbooks-for-hcp/#overview_1","text":"First-time setup of the Ansible Controller, the machine running Ansible.","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#outcomes","text":"Inventory file for HCP is created. SSH key generated for Ansible passwordless authentication.
Ansible SSH key is copied to the KVM host.","title":"Outcomes"},{"location":"run-the-playbooks-for-hcp/#notes","text":"You can use an existing SSH key as your Ansible key, or have Ansible create one for you.","title":"Notes"},{"location":"run-the-playbooks-for-hcp/#create_hosted_cluster-playbook","text":"","title":"create_hosted_cluster Playbook"},{"location":"run-the-playbooks-for-hcp/#overview_2","text":"Creates and configures the bastion Creates AgentServiceConfig, HostedControlPlane, InfraEnv Resources, Downloads Images","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#outcomes_1","text":"Install prerequisites on kvm_host Create bastion Configure bastion Log in to Management Cluster Creates AgentServiceConfig resource and required configmaps Deploys HostedControlPlane Creates InfraEnv resource and waits till ISO generation Download required Images to kvm_host (initrd.img and kernel.img) Download rootfs.img and configure httpd on bastion.","title":"Outcomes"},{"location":"run-the-playbooks-for-hcp/#create_agents_and_wait_for_install_complete-playbook","text":"","title":"create_agents_and_wait_for_install_complete Playbook"},{"location":"run-the-playbooks-for-hcp/#overview_3","text":"Boots the Agents, scales the Nodepool, and monitors all the required resources.","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#outcomes_2","text":"Boot Agents Monitor the attachment of agents Approves the agents Scale up the nodepool Monitor agentmachines and machines creation Monitor the worker nodes attachment Configure HAProxy for Hosted workers Monitor the Cluster operators Display Login Credentials for Hosted Cluster","title":"Outcomes"},{"location":"run-the-playbooks-for-hcp/#destroy-the-hosted-cluser","text":"","title":"Destroy the Hosted Cluster"},{"location":"run-the-playbooks-for-hcp/#overview_4","text":"Destroy the Hosted Control Plane and other resources created as part of the installation","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#procedure","text":"Run the playbook destroy_cluster_hcp.yaml to destroy all the resources created during installation ansible-playbook playbooks/destroy_cluster_hcp.yaml --ask-vault-pass","title":"Procedure"},{"location":"run-the-playbooks-for-hcp/#destroy_cluster_hcp-playbook","text":"","title":"destroy_cluster_hcp Playbook"},{"location":"run-the-playbooks-for-hcp/#overview_5","text":"Delete all the resources on the Hosted Cluster Destroy the Hosted Control Plane","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#outcomes_3","text":"Scale in the nodepool to 0 Monitors the deletion of workers, agent machines and machines. Deletes the agents Deletes InfraEnv Resource Destroys the Hosted Control Plane Deletes AgentServiceConfig Deletes the images downloaded on the KVM host Destroys VMs of Bastion and Agents","title":"Outcomes"},{"location":"run-the-playbooks-for-hcp/#notes_1","text":"","title":"Notes"},{"location":"run-the-playbooks-for-hcp/#overriding-ocp-release-image-for-hcp","text":"If you want to use any other image as the OCP release image for HCP, you can override it with an environment variable. export HCP_RELEASE_IMAGE=\"\"","title":"Overriding OCP Release Image for HCP"},{"location":"run-the-playbooks/","text":"Step 4: Run the Playbooks # Overview # Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Run this shell command: ansible-playbook playbooks/0_setup.yaml Run each part step-by-step by running one playbook at a time, or all at once using playbooks/site.yaml .
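Before kicking anything off, a quick sanity check from the repository root can save time; this is only a sketch and assumes the default inventory configured in ansible.cfg.

```bash
# Confirm you are in the repository root and the expected files are present.
ls ansible.cfg playbooks inventories

# Confirm Ansible resolves the inventory you expect to run against.
ansible-inventory --list | head
```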
Here's the full list of playbooks to be run in order; full descriptions of each can be found further down the page: 0_setup.yaml ( code ) 1_create_lpar.yaml ( code ) 2_create_kvm_host.yaml ( code ) 3_setup_kvm_host.yaml ( code ) 4_create_bastion.yaml ( code ) 5_setup_bastion.yaml ( code ) 6_create_nodes.yaml ( code ) 7_ocp_verification.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/site.yaml If the process fails with an error, go through the steps in the troubleshooting page. At the end of the last playbook, follow the printed instructions for first-time login to the cluster. If you make cluster configuration changes in the all.yaml file, like an increased number of nodes or a new bastion setup, after you have successfully installed an OCP cluster, then you just need to run these playbooks in order: 5_setup_bastion.yaml 6_create_nodes.yaml 7_ocp_verification.yaml 0 Setup Playbook # Overview # First-time setup of the Ansible Controller, the machine running Ansible. Outcomes # Packages and Ansible Galaxy collections are confirmed to be installed properly. host_vars files are confirmed to match KVM host(s) hostnames. Ansible inventory is templated out and working properly. SSH key generated for Ansible passwordless authentication. SSH agent is set up on the Ansible Controller. Ansible SSH key is copied to the file server. Notes # You can use an existing SSH key as your Ansible key, or have Ansible create one for you. It is highly recommended to use one without a passphrase. 1 Create LPAR Playbook # Overview # Creation of one to three Logical Partitions (LPARs), depending on your configuration. Uses the Hardware Management Console (HMC) API, so your system must be in Dynamic Partition Manager (DPM) mode. Outcomes # One to three LPARs created. One to two Networking Interface Cards (NICs) attached per LPAR. One to two storage groups attached per LPAR. LPARs are in 'Stopped' state. Notes # Recommend opening the HMC via web-browser to watch the LPARs come up. 2 Create KVM Host Playbook # Overview # First-time start-up of Red Hat Enterprise Linux installed natively on the LPAR(s). Uses the Hardware Management Console (HMC) API, so your system must be in Dynamic Partition Manager (DPM) mode. Configuration files are passed to the file server and RHEL is booted and then kickstarted for fully automated setup. Outcomes # LPAR(s) started up in 'Active' state. Configuration files (cfg, ins, prm) for the KVM host(s) are on the file server in the provided configs directory. Notes # Recommended to open the HMC via web-browser to watch the Operating System Messages for each LPAR as they boot in order to debug any potential problems. 3 Setup KVM Host Playbook # Overview # Configures the RHEL server(s) installed natively on the LPAR(s) to act as virtualization hypervisor(s) to host the virtual machines that make up the eventual cluster. Outcomes # Ansible SSH key is copied to all KVM hosts for passwordless authentication. RHEL subscription is auto-attached to all KVM hosts. Software packages specified in group_vars/all.yaml have been installed. Cockpit console enabled for Graphical User Interface via web browser. Go to http://kvm-ip-here:9090 to view it. Libvirt is started and enabled.
Logical volume group that was created during kickstart is extended to fill all available space. A macvtap bridge has been created on the host's networking interface. Notes # If you're using a pre-existing LPAR, take a look at roles/configure_storage/tasks/main.yaml to make sure that the commands that will be run to extend the logical volume will work. Storage configurations can vary widely. The values there are the defaults from using autopart during kickstart. Also be aware that if lpar.storage_group_2.auto_config is True, the role roles/configure_storage/tasks/main.yaml will be non-idempotent. Meaning, it will fail if you run it twice. 4 Create Bastion Playbook # Overview # Creates the bastion KVM guest on the first KVM host. The bastion hosts essential services for the cluster. If you already have a bastion server, that can be used instead of running this playbook. Outcomes # Bastion configs are templated out to the file server. Bastion is booted using virt-install. Bastion is kickstarted for fully automated setup of the operating system. Notes # This can be a particularly sticky part of the process. If any of the variables used in the virt-install or kickstart are off, the bastion won't be able to boot. Recommend watching it come up from the first KVM host's cockpit. Go to http://kvm-ip-here:9090 via web-browser to view it. You'll have to sign in, enable administrative access (top right), and then click on the virtual machines tab on the left-hand toolbar. 5 Setup Bastion Playbook # Overview # Configuration of the bastion to host essential infrastructure services for the cluster. Can be first-time setup or use an existing server. Outcomes # Ansible SSH key copied to bastion for passwordless authentication. Software packages specified in group_vars/all.yaml have been installed. An OCP-specific SSH key is generated for passing into the install-config (then passed to the nodes). Firewall is configured to permit traffic through the necessary ports. Domain Name Server (DNS) configured to resolve cluster's IP addresses and APIs. Only done if env.bastion.options.dns is true. DNS is checked to make sure all the necessary Fully Qualified Domain Names, including APIs resolve properly. Also ensures outside access is working. High Availability Proxy (HAProxy) load balancer is configured. Only done if env.bastion.options.loadbalancer.on_bastion is true. If the the cluster is to be highly available (meaning spread across more than one LPAR), an OpenVPN server is setup on the bastion to allow for the KVM hosts to communicate between eachother. OpenVPN clients are configured on the KVM hosts. CoreOS roofts is pulled to the bastion if not already there. OCP client and installer are pulled down if not there already. oc, kubectl and openshift-install binaries are installed. OCP install-config is templated and backed up. In disconnected mode, if platform is mirrored (currently only legacy), image content source policy and additionalTrustBundle is also patched. Manfifests are created. OCP install directory found at /root/ocpinst/ is created and populated with necessary files. Ignition files for the bootstrap, control, and compute nodes are transferred to HTTP-accessible directory for booting nodes. Notes # The stickiest part is DNS setup and get_ocp role at the end. 6 Create Nodes Playbook # Overview # OCP cluster's nodes are created and the control plane is bootstrapped. Outcomes # CoreOS initramfs and kernel are pulled down. Control nodes are created and bootstrapped. 
Bootstrap has been created, done its job connecting the control plane, and is then destroyed. Compute nodes are created, as many as is specified in groups_vars/all.yaml. Infra nodes, if defined in group_vars/all.yaml have been created, but are at this point essentially just compute nodes. Notes # To watch the bootstrap do its job connecting the control plane: first, SSH to the bastion, then change to root (sudo -i), from there SSH to the bootstrap node as user 'core' (e.g. ssh core@bootstrap-ip). Once you're in the bootstrap run 'journalctl -b -f -u release-image.service -u bootkube.service'. Expect many errors as the control planes come up. You're waiting for the message 'bootkube.service complete' If the cluster is highly available, the bootstrap node will be created on the last (usually third) KVM host in the group. Since the bastion is on the first host, this was done to spread out the load. 7 OCP Verification Playbook # Overview # Final steps of waiting for and verifying the OpenShift cluster to complete its installation. Outcomes # Certificate Signing Requests (CSRs) have been approved. All nodes are in ready state. All cluster operators are available. OpenShift installation is verified to be complete. Temporary credentials and URL are printed to allow easy first-time login to the cluster. Notes # These steps may take a long time and the tasks are very repetitive because of that. If your cluster has a very large number of compute nodes or insufficient resources, more rounds of approvals and time may be needed for these tasks. If you made it this far, congratulations! To install a new cluster, copy your inventory directory, change the default in the ansible.cfg, change the variables, and start again. With all the customizations to the playbooks you made along the way still intact. Additional Playbooks # Create additional compute nodes (create_compute_node.yaml) and delete compute nodes (delete_compute_node.yaml) # Overview # In case you want to add additional compute nodes in a day-2 operation to your cluster or delete existing compute nodes in your cluster, run these playbooks. Currently we support only env.network_mode macvtap for these two playbooks. We recommand to create a new config file for the additional compute node with such parameters: day2_compute_node: vm_name: control-4 vm_hostname: control-4 vm_ip: 172.192.100.101 hostname: kvm01 host_arch: s390x # rhcos_download_url with '/' at the end ! rhcos_download_url: \"https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.15/4.15.0/\" # RHCOS live image filenames rhcos_live_kernel: \"rhcos-4.15.0-s390x-live-kernel-s390x\" rhcos_live_initrd: \"rhcos-4.15.0-s390x-live-initramfs.s390x.img\" rhcos_live_rootfs: \"rhcos-4.15.0-s390x-live-rootfs.s390x.img\" Make sure that the hostname where you want to create the additional compute node is defined in the inventories/default/hosts file. Now you can execute the add_compute_node playbook with this command and parameter: ansible-playbook playbooks/add_compute_node.yaml --extra-vars \"@compute-node.yaml\" Outcomes # The defind compute node will be added or deleted, depends which playbook you have executed. Master Playbook (site.yaml) # Overview # Use this playbook to run all required playbooks (0-7) all at once. Outcomes # Same as all the above outcomes for all required playbooks. At the end you will have an OpenShift cluster deployed and first-time login credentials. 
Pre-Existing Host Master Playbook (pre-existing_site.yaml) # Overview # Use this version of the master playbook if you are using pre-existing LPAR(s) with RHEL already installed. Outcomes # Same as all the above outcomes for all playbooks excluding 1 & 2. This will not create LPAR(s) nor boot your RHEL KVM host(s). At the end you will have an OpenShift cluster deployed and first-time login credentials. Reinstall Cluster Playbook (reinstall_cluster.yaml) # Overview # In case the cluster needs to be completely reinstalled, run this playbook. It will refresh the ignitions that expire after 24 hours, tear down the nodes and re-create them, and then verify the installation. Outcomes # get_ocp role runs. Delete the folders /var/www/html/bin and /var/www/html/ignition. CoreOS rootfs is pulled to the bastion. OCP client and installer are pulled down. oc, kubectl and openshift-install binaries are installed. OCP install-config is created from scratch, templated and backed up. Manifests are created. OCP install directory found at /root/ocpinst/ is deleted, re-created and populated with necessary files. Ignition files for the bootstrap, control, and compute nodes are transferred to an HTTP-accessible directory for booting nodes. 6 Create Nodes playbook runs, tearing down and recreating cluster nodes. 7 OCP Verification playbook runs, verifying the new deployment. Test Playbook (test.yaml) # Overview # Use this playbook for your testing purposes, if needed.","title":"4 Run the Playbooks"},{"location":"run-the-playbooks/#step-4-run-the-playbooks","text":"","title":"Step 4: Run the Playbooks"},{"location":"run-the-playbooks/#overview","text":"Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Run this shell command: ansible-playbook playbooks/0_setup.yaml Run each part step-by-step by running one playbook at a time, or all at once using playbooks/site.yaml . Here's the full list of playbooks to be run in order; full descriptions of each can be found further down the page: 0_setup.yaml ( code ) 1_create_lpar.yaml ( code ) 2_create_kvm_host.yaml ( code ) 3_setup_kvm_host.yaml ( code ) 4_create_bastion.yaml ( code ) 5_setup_bastion.yaml ( code ) 6_create_nodes.yaml ( code ) 7_ocp_verification.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml. Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/site.yaml If the process fails with an error, go through the steps in the troubleshooting page. At the end of the last playbook, follow the printed instructions for first-time login to the cluster. If you make cluster configuration changes in the all.yaml file after you have successfully installed an OCP cluster, like an increased number of nodes or a new bastion setup, then you just need to run these playbooks in order: 5_setup_bastion.yaml 6_create_nodes.yaml 7_ocp_verification.yaml","title":"Overview"},{"location":"run-the-playbooks/#0-setup-playbook","text":"","title":"0 Setup Playbook"},{"location":"run-the-playbooks/#overview_1","text":"First-time setup of the Ansible Controller, the machine running Ansible.","title":"Overview"},{"location":"run-the-playbooks/#outcomes","text":"Packages and Ansible Galaxy collections are confirmed to be installed properly. host_vars files are confirmed to match KVM host(s) hostnames. 
Ansible inventory is templated out and working properly. SSH key generated for Ansible passwordless authentication. SSH agent is setup on the Ansible Controller. Ansible SSH key is copied to the file server.","title":"Outcomes"},{"location":"run-the-playbooks/#notes","text":"You can use an existing SSH key as your Ansible key, or have Ansible create one for you. It is highly recommended to use one without a passphrase.","title":"Notes"},{"location":"run-the-playbooks/#1-create-lpar-playbook","text":"","title":"1 Create LPAR Playbook"},{"location":"run-the-playbooks/#overview_2","text":"Creation of one to three Logical Partitions (LPARs), depending on your configuration. Uses the Hardware Management Console (HMC) API, so your system must be in Dynamic Partition Manager (DPM) mode.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_1","text":"One to three LPARs created. One to two Networking Interface Cards (NICs) attached per LPAR. One to two storage groups attached per LPAR. LPARs are in 'Stopped' state.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_1","text":"Recommend opening the HMC via web-browser to watch the LPARs come up.","title":"Notes"},{"location":"run-the-playbooks/#2-create-kvm-host-playbook","text":"","title":"2 Create KVM Host Playbook"},{"location":"run-the-playbooks/#overview_3","text":"First-time start-up of Red Hat Enterprise Linux installed natively on the LPAR(s). Uses the Hardware Management Console (HMC) API, so your system must be in Dynamic Partition Manager (DPM) mode. Configuration files are passed to the file server and RHEL is booted and then kickstarted for fully automated setup.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_2","text":"LPAR(s) started up in 'Active' state. Configuration files (cfg, ins, prm) for the KVM host(s) are on the file server in the provided configs directory.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_2","text":"Recommended to open the HMC via web-browser to watch the Operating System Messages for each LPAR as they boot in order to debug any potential problems.","title":"Notes"},{"location":"run-the-playbooks/#3-setup-kvm-host-playbook","text":"","title":"3 Setup KVM Host Playbook"},{"location":"run-the-playbooks/#overview_4","text":"Configures the RHEL server(s) installed natively on the LPAR(s) to act as virtualization hypervisor(s) to host the virtual machines that make up the eventual cluster.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_3","text":"Ansible SSH key is copied to all KVM hosts for passwordless authentication. RHEL subscription is auto-attached to all KVM hosts. Software packages specified in group_vars/all.yaml have been installed. Cockpit console enabled for Graphical User Interface via web browser. Go to http://kvm-ip-here:9090 to view it. Libvirt is started and enabled. Logical volume group that was created during kickstart is extended to fill all available space. A macvtap bridge has been created on the host's networking interface.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_3","text":"If you're using a pre-existing LPAR, take a look at roles/configure_storage/tasks/main.yaml to make sure that the commands that will be run to extend the logical volume will work. Storage configurations can vary widely. The values there are the defaults from using autopart during kickstart. Also be aware that if lpar.storage_group_2.auto_config is True, the role roles/configure_storage/tasks/main.yaml will be non-idempotent. 
That is, it will fail if you run it twice.","title":"Notes"},{"location":"run-the-playbooks/#4-create-bastion-playbook","text":"","title":"4 Create Bastion Playbook"},{"location":"run-the-playbooks/#overview_5","text":"Creates the bastion KVM guest on the first KVM host. The bastion hosts essential services for the cluster. If you already have a bastion server, that can be used instead of running this playbook.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_4","text":"Bastion configs are templated out to the file server. Bastion is booted using virt-install. Bastion is kickstarted for fully automated setup of the operating system.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_4","text":"This can be a particularly sticky part of the process. If any of the variables used in the virt-install or kickstart are off, the bastion won't be able to boot. Recommend watching it come up from the first KVM host's cockpit. Go to http://kvm-ip-here:9090 via web-browser to view it. You'll have to sign in, enable administrative access (top right), and then click on the virtual machines tab on the left-hand toolbar.","title":"Notes"},{"location":"run-the-playbooks/#5-setup-bastion-playbook","text":"","title":"5 Setup Bastion Playbook"},{"location":"run-the-playbooks/#overview_6","text":"Configuration of the bastion to host essential infrastructure services for the cluster. Can be a first-time setup or use an existing server.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_5","text":"Ansible SSH key copied to bastion for passwordless authentication. Software packages specified in group_vars/all.yaml have been installed. An OCP-specific SSH key is generated for passing into the install-config (then passed to the nodes). Firewall is configured to permit traffic through the necessary ports. Domain Name Server (DNS) configured to resolve the cluster's IP addresses and APIs. Only done if env.bastion.options.dns is true. DNS is checked to make sure all the necessary Fully Qualified Domain Names, including APIs, resolve properly. Also ensures outside access is working. High Availability Proxy (HAProxy) load balancer is configured. Only done if env.bastion.options.loadbalancer.on_bastion is true. If the cluster is to be highly available (meaning spread across more than one LPAR), an OpenVPN server is set up on the bastion to allow the KVM hosts to communicate with each other. OpenVPN clients are configured on the KVM hosts. CoreOS rootfs is pulled to the bastion if not already there. OCP client and installer are pulled down if not there already. oc, kubectl and openshift-install binaries are installed. OCP install-config is templated and backed up. In disconnected mode, if the platform is mirrored (currently only legacy), the image content source policy and additionalTrustBundle are also patched. Manifests are created. OCP install directory found at /root/ocpinst/ is created and populated with necessary files. 
Ignition files for the bootstrap, control, and compute nodes are transferred to HTTP-accessible directory for booting nodes.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_5","text":"The stickiest part is DNS setup and get_ocp role at the end.","title":"Notes"},{"location":"run-the-playbooks/#6-create-nodes-playbook","text":"","title":"6 Create Nodes Playbook"},{"location":"run-the-playbooks/#overview_7","text":"OCP cluster's nodes are created and the control plane is bootstrapped.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_6","text":"CoreOS initramfs and kernel are pulled down. Control nodes are created and bootstrapped. Bootstrap has been created, done its job connecting the control plane, and is then destroyed. Compute nodes are created, as many as is specified in groups_vars/all.yaml. Infra nodes, if defined in group_vars/all.yaml have been created, but are at this point essentially just compute nodes.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_6","text":"To watch the bootstrap do its job connecting the control plane: first, SSH to the bastion, then change to root (sudo -i), from there SSH to the bootstrap node as user 'core' (e.g. ssh core@bootstrap-ip). Once you're in the bootstrap run 'journalctl -b -f -u release-image.service -u bootkube.service'. Expect many errors as the control planes come up. You're waiting for the message 'bootkube.service complete' If the cluster is highly available, the bootstrap node will be created on the last (usually third) KVM host in the group. Since the bastion is on the first host, this was done to spread out the load.","title":"Notes"},{"location":"run-the-playbooks/#7-ocp-verification-playbook","text":"","title":"7 OCP Verification Playbook"},{"location":"run-the-playbooks/#overview_8","text":"Final steps of waiting for and verifying the OpenShift cluster to complete its installation.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_7","text":"Certificate Signing Requests (CSRs) have been approved. All nodes are in ready state. All cluster operators are available. OpenShift installation is verified to be complete. Temporary credentials and URL are printed to allow easy first-time login to the cluster.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_7","text":"These steps may take a long time and the tasks are very repetitive because of that. If your cluster has a very large number of compute nodes or insufficient resources, more rounds of approvals and time may be needed for these tasks. If you made it this far, congratulations! To install a new cluster, copy your inventory directory, change the default in the ansible.cfg, change the variables, and start again. With all the customizations to the playbooks you made along the way still intact.","title":"Notes"},{"location":"run-the-playbooks/#additional-playbooks","text":"","title":"Additional Playbooks"},{"location":"run-the-playbooks/#create-additional-compute-nodes-create_compute_nodeyaml-and-delete-compute-nodes-delete_compute_nodeyaml","text":"","title":"Create additional compute nodes (create_compute_node.yaml) and delete compute nodes (delete_compute_node.yaml)"},{"location":"run-the-playbooks/#overview_9","text":"In case you want to add additional compute nodes in a day-2 operation to your cluster or delete existing compute nodes in your cluster, run these playbooks. Currently we support only env.network_mode macvtap for these two playbooks. 
We recommend creating a new config file for the additional compute node with parameters such as the following: day2_compute_node: vm_name: control-4 vm_hostname: control-4 vm_ip: 172.192.100.101 hostname: kvm01 host_arch: s390x # rhcos_download_url with '/' at the end ! rhcos_download_url: \"https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.15/4.15.0/\" # RHCOS live image filenames rhcos_live_kernel: \"rhcos-4.15.0-s390x-live-kernel-s390x\" rhcos_live_initrd: \"rhcos-4.15.0-s390x-live-initramfs.s390x.img\" rhcos_live_rootfs: \"rhcos-4.15.0-s390x-live-rootfs.s390x.img\" Make sure that the hostname where you want to create the additional compute node is defined in the inventories/default/hosts file. Now you can execute the add_compute_node playbook with this command and parameter: ansible-playbook playbooks/add_compute_node.yaml --extra-vars \"@compute-node.yaml\"","title":"Overview"},{"location":"run-the-playbooks/#outcomes_8","text":"The defined compute node will be added or deleted, depending on which playbook you executed.","title":"Outcomes"},{"location":"run-the-playbooks/#master-playbook-siteyaml","text":"","title":"Master Playbook (site.yaml)"},{"location":"run-the-playbooks/#overview_10","text":"Use this playbook to run all required playbooks (0-7) all at once.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_9","text":"Same as all the above outcomes for all required playbooks. At the end you will have an OpenShift cluster deployed and first-time login credentials.","title":"Outcomes"},{"location":"run-the-playbooks/#pre-existing-host-master-playbook-pre-existing_siteyaml","text":"","title":"Pre-Existing Host Master Playbook (pre-existing_site.yaml)"},{"location":"run-the-playbooks/#overview_11","text":"Use this version of the master playbook if you are using pre-existing LPAR(s) with RHEL already installed.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_10","text":"Same as all the above outcomes for all playbooks excluding 1 & 2. This will not create LPAR(s) nor boot your RHEL KVM host(s). At the end you will have an OpenShift cluster deployed and first-time login credentials.","title":"Outcomes"},{"location":"run-the-playbooks/#reinstall-cluster-playbook-reinstall_clusteryaml","text":"","title":"Reinstall Cluster Playbook (reinstall_cluster.yaml)"},{"location":"run-the-playbooks/#overview_12","text":"In case the cluster needs to be completely reinstalled, run this playbook. It will refresh the ignitions that expire after 24 hours, tear down the nodes and re-create them, and then verify the installation.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_11","text":"get_ocp role runs. Delete the folders /var/www/html/bin and /var/www/html/ignition. CoreOS rootfs is pulled to the bastion. OCP client and installer are pulled down. oc, kubectl and openshift-install binaries are installed. OCP install-config is created from scratch, templated and backed up. Manifests are created. OCP install directory found at /root/ocpinst/ is deleted, re-created and populated with necessary files. Ignition files for the bootstrap, control, and compute nodes are transferred to an HTTP-accessible directory for booting nodes. 6 Create Nodes playbook runs, tearing down and recreating cluster nodes. 
7 OCP Verification playbook runs, verifying new deployment.","title":"Outcomes"},{"location":"run-the-playbooks/#test-playbook-testyaml","text":"","title":"Test Playbook (test.yaml)"},{"location":"run-the-playbooks/#overview_13","text":"Use this playbook for your testing purposes, if needed.","title":"Overview"},{"location":"set-variables-group-vars/","text":"Step 2: Set Variables (group_vars) # Overview # In a text editor of your choice, open the template of the environment variables file . Make a copy of it called all.yaml and paste it into the same directory with its template. all.yaml is your master variables file and you will likely reference it many times throughout the process. The default inventory can be found at inventories/default . The variables marked with an X are required to be filled in. Many values are pre-filled or are optional. Optional values are commented out; in order to use them, remove the # and fill them in. This is the most important step in the process. Take the time to make sure everything here is correct. Note on YAML syntax : Only the lowest value in each hierarchicy needs to be filled in. For example, at the top of the variables file env and z don't need to be filled in, but the cpc_name does. There are X's where input is required to help you with this. Scroll the table to the right to see examples for each variable. 1 - Controller # Variable Name Description Example env.installation_type Can be of type kvm or lpar. Some packages will be ignored for installation in case of non lpar based installation. kvm env.controller.sudo_pass The password to the machine running Ansible (localhost). This will only be used for two things. To ensure you've installed the pre-requisite packages if you're on Linux, and to add the login URL to your /etc/hosts file. Pas$w0rd! 2 - LPAR(s) # Variable Name Description Example env.z.high_availability Is this cluster spread across three LPARs? If yes, mark True. If not (just in one LPAR), mark False True env.z.ip_forward This variable specifies if ip forwarding is enabled or not if NAT network is selected. If ip_forwarding is set to 0, the installed OCP cluster will not be able to access external services because using NAT keep the nodes isolated. This parameter will be set via sysctl on the KVM host. The change of the value is instantly active. This setting will be configured during 3_setup_kvm playbook. If NAT will be configured after 3_setup_kvm playbook, the setup needs to be done manually before bastion is being created, configured or reconfigured by running the 3_setup_kvm playbook with parameter: --tags cfg_ip_forward 1 env.z.lpar1.create To have Ansible create an LPAR and install RHEL on it for the KVM host, mark True. If using a pre-existing LPAR with RHEL already installed, mark False. True env.z.lpar1.hostname The hostname of the KVM host. kvm-host-01 env.z.lpar1.ip The IPv4 address of the KVM host. 192.168.10.1 env.z.lpar1.user Username for Linux admin on KVM host 1. Recommended to run as a non-root user with sudo access. admin env.z.lpar1.pass The password for the user that will be created or exists on the KVM host. ch4ngeMe! env.z.lpar2.create To create a second LPAR and install RHEL on it to act as another KVM host, mark True. If using pre-existing LPAR(s) with RHEL already installed, mark False. True env.z.lpar2.hostname (Optional) The hostname of the second KVM host. kvm-host-02 env.z.lpar2.ip (Optional) The IPv4 address of the second KVM host. 192.168.10.2 env.z.lpar2.user Username for Linux admin on KVM host 2. 
Recommended to run as a non-root user with sudo access. admin env.z.lpar2.pass (Optional) The password for the admin user on the second KVM host. ch4ngeMe! env.z.lpar3.create To create a third LPAR and install RHEL on it to act as another KVM host, mark True. If using pre-existing LPAR(s) with RHEL already installed, mark False. True env.z.lpar3.hostname (Optional) The hostname of the third KVM host. kvm-host-03 env.z.lpar3.ip (Optional) The IPv4 address of the third KVM host. 192.168.10.3 env.z.lpar3.user Username for Linux admin on KVM host 3. Recommended to run as a non-root user with sudo access. admin env.z.lpar3.pass (Optional) The password for the admin user on the third KVM host. ch4ngeMe! 3 - File Server # Variable Name Description Example env.file_server.ip IPv4 address for the file server that will be used to pass config files and iso to KVM host LPAR(s) and bastion VM during their first boot. 192.168.10.201 env.file_server.port The port on which the file server is listening. Will be embedded into all download urls. Defaults to protocol default port. Keep empty '' to use default port 10000 env.file_server.user Username to connect to the file server. Must have sudo and SSH access. user1 env.file_server.pass Password to connect to the file server as above user. user1pa$s! env.file_server.protocol Protocol used to serve the files, either 'ftp' or 'http' http env.file_server.iso_os_variant The os variant for the bastion kvm to be created rhel8.8 env.file_server.iso_mount_dir Directory path relative to the HTTP/FTP accessible directory where RHEL ISO is mounted. For example, if the FTP root is at /home/user1 and the ISO is mounted at /home/user1/RHEL/8.7 then this variable would be RHEL/8.7 - no slash before or after. RHEL/8.7 env.file_server.cfgs_dir Directory path relative to to the HTTP/FTP accessible directory where configuration files can be stored. For example, if FTP root is /home/user1 and you would like to store the configs at /home/user1/ocpz-config then this variable would be ocpz-config. No slash before or after. ocpz-config 4 - Red Hat Info # Variable Name Description Example env.redhat.username Red Hat username with a valid license or free trial to Red Hat OpenShift Container Platform (RHOCP), which comes with necessary licenses for Red Hat Enterprise Linux (RHEL) and Red Hat CoreOS (RHCOS). redhat.user env.redhat.password Password to Red Hat above user's account. Used to auto-attach necessary subscriptions to KVM Host, bastion VM, and pull live images for OpenShift. rEdHatPa$s! env.redhat.manage_subscription True or False. Would you like to subscribe the server with Red Hat? True env.redhat.pull_secret Pull secret for OpenShift, comes from Red Hat's Hybrid Cloud Console . Make sure to enclose in 'single quotes'. '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\"}}}' 5 - Bastion # Variable Name Description Example env.bastion.create True or False. Would you like to create a bastion KVM guest to host essential infrastructure services like DNS, load balancer, firewall, etc? Can de-select certain services with the env.bastion.options variables below. True env.bastion.vm_name Name of the bastion VM. Arbitrary value. bastion env.bastion.resources.disk_size How much of the storage pool would you like to allocate to the bastion (in Gigabytes)? Recommended 30 or more. 30 env.bastion.resources.ram How much memory would you like to allocate the bastion (in megabytes)? 
Recommended 4096 or more 4096 env.bastion.resources.swap How much swap storage would you like to allocate the bastion (in megabytes)? Recommended 4096 or more. 4096 env.bastion.resources.vcpu How many virtual CPUs would you like to allocate to the bastion? Recommended 4 or more. 4 env.bastion.resources.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.bastion.networking.ip IPv4 address for the bastion. 192.168.10.3 env.bastion.networking.ipv6 IPv6 address for the bastion if use_ipv6 variable is 'True'. fd00::3 env.bastion.networking.mac MAC address for the bastion if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.bastion.networking.hostname Hostname of the bastion. Will be combined with env.bastion.networking.base_domain to create a Fully Qualified Domain Name (FQDN). ocpz-bastion env.bastion.networking.base_domain Base domain that, when combined with the hostname, creates a fully-qualified domain name (FQDN) for the bastion? ihost.com env.bastion.networking.subnetmask Subnet of the bastion. 255.255.255.0 env.bastion.networking.gateway IPv4 of he bastion's gateway server. 192.168.10.0 env.bastion.networking.ipv6_gateway IPv6 of he bastion's gateway server. fd00::1 env.bastion.networking.ipv6_prefix IPv6 prefix. 64 env.bastion.networking.nameserver1 IPv4 address of the server that resolves the bastion's hostname. 192.168.10.200 env.bastion.networking.nameserver2 (Optional) A second IPv4 address that resolves the bastion's hostname. 192.168.10.201 env.bastion.networking.forwarder What IPv4 address will be used to make external DNS calls for the bastion? Can use 1.1.1.1 or 8.8.8.8 as defaults. 8.8.8.8 env.bastion.networking.interface Name of the networking interface on the bastion from Linux's perspective. Most likely enc1. enc1 env.bastion.access.user What would you like the admin's username to be on the bastion? If root, make pass and root_pass vars the same. admin env.bastion.access.pass The password to the bastion's admin user. If using root, make pass and root_pass vars the same. cH4ngeM3! env.bastion.access.root_pass The root password for the bastion. If using root, make pass and root_pass vars the same. R0OtPa$s! env.bastion.options.dns Would you like the bastion to host the DNS information for the cluster? True or False. If false, resolution must come from elsewhere in your environment. Make sure to add IP addresses for KVM hosts, bastion, bootstrap, control, compute nodes, AND api, api-int and *.apps as described here in section \"User-provisioned DNS Requirements\" Table 5. If True this will be done for you in the dns and check_dns roles. True env.bastion.options.loadbalancer.on_bastion Would you like the bastion to host the load balancer (HAProxy) for the cluster? True or False (boolean). If false, this service must be provided elsewhere in your environment, and public and private IP of the load balancer must be provided in the following two variables. True env.bastion.options.loadbalancer.public_ip (Only required if env.bastion.options.loadbalancer.on_bastion is True). The public IPv4 address for your environment's loadbalancer. api, apps, *.apps must use this. 192.168.10.50 env.bastion.options.loadbalancer.private_ip (Only required if env.bastion.options.loadbalancer.on_bastion is True). The private IPv4 address for your environment's loadbalancer. api-int must use this. 
10.24.17.12 6 - Cluster Networking # Variable Name Description Example env.cluster.networking.metadata_name Name to describe the cluster as a whole, can be anything if DNS will be hosted on the bastion. If DNS is not on the bastion, must match your DNS configuration. Will be combined with the base_domain and hostnames to create Fully Qualified Domain Names (FQDN). ocpz env.cluster.networking.base_domain The site name, where is the cluster being hosted? This will be combined with the metadata_name and hostnames to create FQDNs. host.com env.bastion.networking.ipv6_gateway IPv6 of he bastion's gateway server. fd00::1 env.bastion.networking.ipv6_prefix IPv6 prefix. 64 env.cluster.networking.nameserver1 IPv4 address that the cluster get its hostname resolution from. If env.bastion.options.dns is True, this should be the IP address of the bastion. 192.168.10.200 env.cluster.networking.nameserver2 (Optional) A second IPv4 address will the cluster get its hostname resolution from? If env.bastion.options.dns is True, this should be left commented out. 192.168.10.201 env.cluster.networking.forwarder What IPv4 address will be used to make external DNS calls for the cluster? Can use 1.1.1.1 or 8.8.8.8 as defaults. 8.8.8.8 env.cluster.networking.interface Name of the networking interface on the bastion from Linux's perspective. Most likely enc1. enc1 7 - Bootstrap Node # Variable Name Description Example env.cluster.nodes.bootstrap.disk_size How much disk space do you want to allocate to the bootstrap node (in Gigabytes)? Bootstrap node is temporary and will be brought down automatically when its job completes. 120 or more recommended. 120 env.cluster.nodes.bootstrap.ram How much memory would you like to allocate to the temporary bootstrap node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.bootstrap.vcpu How many virtual CPUs would you like to allocate to the temporary bootstrap node? Recommended 4 or more. 4 env.cluster.nodes.bootstrap.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.bootstrap.vm_name Name of the temporary bootstrap node VM. Arbitrary value. bootstrap env.cluster.nodes.bootstrap.ip IPv4 address of the temporary bootstrap node. 192.168.10.4 env.cluster.nodes.bootstrap.ipv6 IPv6 address for the bootstrap if use_ipv6 variable is 'True'. fd00::4 env.cluster.nodes.bootstrap.mac MAC address for the bootstrap node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.bootstrap.hostname Hostname of the temporary boostrap node. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualififed Domain Name (FQDN). bootstrap-ocpz 8 - Control Nodes # Variable Name Description Example env.cluster.nodes.control.disk_size How much disk space do you want to allocate to each control node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.control.ram How much memory would you like to allocate to the each control node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.control.vcpu How many virtual CPUs would you like to allocate to each control node? Recommended 4 or more. 4 env.cluster.nodes.control.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.control.vm_name Name of the control node VMs. Arbitrary values. Usually no more or less than 3 are used. 
Must match the total number of IP addresses and hostnames for control nodes. Use provided list format. control-1control-2control-3 env.cluster.nodes.control.ip IPv4 address of the control nodes. Use provided list formatting. 192.168.10.5192.168.10.6192.168.10.7 env.cluster.nodes.control.ipv6 IPv6 address for the control nodes. Use iprovided list formatting (if use_ipv6 variable is 'True'). fd00::5fd00::6fd00::7 env.cluster.nodes.control.mac MAC address for the control node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.control.hostname Hostnames for control nodes. Must match the total number of IP addresses for control nodes (usually 3). If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualififed Domain Name (FQDN). control-01control-02control-03 9 - Compute Nodes # Variable Name Description Example env.cluster.nodes.compute.disk_size How much disk space do you want to allocate to each compute node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.compute.ram How much memory would you like to allocate to the each compute node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.compute.vcpu How many virtual CPUs would you like to allocate to each compute node? Recommended 2 or more. 2 env.cluster.nodes.compute.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.compute.vm_name Name of the compute node VMs. Arbitrary values. This list can be expanded to any number of nodes, minimum 2. Must match the total number of IP addresses and hostnames for compute nodes. Use provided list format. compute-1compute-2 env.cluster.nodes.compute.ip IPv4 address of the compute nodes. Must match the total number of VM names and hostnames for compute nodes. Use provided list formatting. 192.168.10.8192.168.10.9 env.cluster.nodes.control.ipv6 IPv6 address for the compute nodes. Use iprovided list formatting (if use_ipv6 variable is 'True'). fd00::8fd00::9 env.cluster.nodes.compute.mac MAC address for the compute node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.compute.hostname Hostnames for compute nodes. Must match the total number of IP addresses and VM names for compute nodes. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualififed Domain Name (FQDN). compute-01compute-02 10 - Infra Nodes # Variable Name Description Example env.cluster.nodes.infra.disk_size (Optional) Set up compute nodes that are made for infrastructure workloads (ingress, monitoring, logging)? How much disk space do you want to allocate to each infra node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.infra.ram (Optional) How much memory would you like to allocate to the each infra node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.infra.vcpu (Optional) How many virtual CPUs would you like to allocate to each infra node? Recommended 2 or more. 2 env.cluster.nodes.infra.vcpu_model_option (Optional) Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.infra.vm_name (Optional) Name of additional infra node VMs. Arbitrary values. This list can be expanded to any number of nodes, minimum 2. 
Must match the total number of IP addresses and hostnames for infra nodes. Use provided list format. infra-1infra-2 env.cluster.nodes.infra.ip (Optional) IPv4 address of the infra nodes. This list can be expanded to any number of nodes, minimum 2. Use provided list formatting. 192.168.10.10192.168.10.11 env.cluster.nodes.infra.ipv6 (Optional) IPv6 address of the infra nodes. iThis list can be expanded to any number of nodes, minimum 2. Use provided list formatting (if use_ipv6 variable is 'True'). fd00::10fd00::11 env.cluster.nodes.infra.hostname (Optional) Hostnames for infra nodes. Must match the total number of IP addresses for infra nodes. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualififed Domain Name (FQDN). infra-01infra-02 11 - (Optional) Packages # Variable Name Description Example env.pkgs.galaxy A list of Ansible Galaxy collections that will be installed during the setup playbook. The collections listed are required. Feel free to add more as needed, just make sure to follow the same list format. community.general env.pkgs.controller A list of packages that will be installed on the machine running Ansible during the setup playbook. Feel free to add more as needed, just make sure to follow the same list format. openssh env.pkgs.kvm A list of packages that will be installed on the KVM Host during the setup_kvm_host playbook. Feel free to add more as needed, just make sure to follow the same list format. qemu-kvm env.pkgs.bastion A list of packages that will be installed on the bastion during the setup_bastion playbook. Feel free to add more as needed, just make sure to follow the same list format. haproxy 12 - OpenShift Settings # Variable Name Description Example env.install_config.api_version Kubernetes API version for the cluster. These install_config variables will be passed to the OCP install_config file. This file is templated in the get_ocp role during the setup_bastion playbook. To make more fine-tuned adjustments to the install_config, you can find it at roles/get_ocp/templates/install-config.yaml.j2 v1 env.install_config.compute.architecture Computing architecture for the compute nodes. Must be s390x for clusters on IBM zSystems. s390x env.install_config.compute.hyperthreading Enable or disable hyperthreading on compute nodes. Recommended enabled. Enabled env.install_config.control.architecture Computing architecture for the control nodes. Must be s390x for clusters on IBM zSystems, amd64 for Intel or AMD systems, and arm64 for ARM servers. s390x env.install_config.control.hyperthreading Enable or disable hyperthreading on control nodes. Recommended enabled. Enabled env.install_config.cluster_network.cidr IPv4 block in Internal cluster networking in Classless Inter-Domain Routing (CIDR) notation. Recommended to keep as is. 10.128.0.0/14 env.install_config.cluster_network.host_prefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. 23 env.install_config.cluster_network.type The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes (default). OVNKubernetes env.install_config.service_network The IP address block for services. The default value is 172.30.0.0/16. 
The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. 172.30.0.0/16 env.install_config.fips True or False (boolean) for whether or not to use the United States' Federal Information Processing Standards (FIPS). Not yet certified on IBM zSystems. Enclosed in 'single quotes'. 'false' 13 - (Optional) Proxy # Variable Name Description Example env.proxy.http (Optional) A proxy URL to use for creating HTTP connections outside the cluster. Will be used in the install-config and applied to other Ansible hosts unless set otherwise in no_proxy below. Must follow this pattern: http://username:pswd>@ip:port http://ocp-admin:Pa$sw0rd@9.72.10.1:80 env.proxy.https (Optional) A proxy URL to use for creating HTTPS connections outside the cluster. Will be used in the install-config and applied to other Ansible hosts unless set otherwise in no_proxy below. Must follow this pattern: https://username:pswd@ip:port https://ocp-admin:Pa$sw0rd@9.72.10.1:80 env.proxy.no (Optional) A comma-separated list (no spaces) of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. When using a proxy, all necessary IPs and domains for your cluster will be added automatically. See roles/get_ocp/templates/install-config.yaml.j2 for more details on the template. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all listed destinations. example.com,192.168.10.1 14 - (Optional) Misc # Variable Name Description Example env.language What language would you like Red Hat Enterprise Linux to use? In UTF-8 language code. Available languages and their corresponding codes can be found here , in the \"Locale\" column of Table 2.1. en_US.UTF-8 env.timezone Which timezone would you like Red Hat Enterprise Linux to use? A list of available timezone options can be found here . America/New_York env.keyboard Which keyboard layout would you like Red Hat Enterprise Linux to use? us env.ansible_key_name (Optional) Name of the SSH key that Ansible will use to connect to hosts. ansible-ocpz env.ocp_key_name Comment to describe the SSH key used for OCP. Arbitrary value. OCPZ-01 key env.vnet_name (Optional) Name of the bridged virtual network that will be created on the KVM host if network mode is not set to NAT. In case of NAT network mode the name of the NAT network definition used to create the nodes(usually it is 'default'). If NAT is being used and a jumphost is needed, the parameters network_mode, jumphost.name, jumphost.user and jumphost.pass must be specified, too. For default (NAT) network verify that the configured IP ranges does not interfere with the IPs defined for the controle and compute nodes. Modify the default network (dhcp range setting) to prevent issues with VMs using dhcp and OCP nodes having fixed IPs. Default is create a bridge network. macvtap-net env.network_mode (Optional) In case the network mode will be NAT and the installation will be executed from remote (e.g. your laptop), a jumphost needs to be defined to let the installation access the bastion host. If macvtap for networking is being used this variable should be empty. NAT env.use_ipv6 If ipv6 addresses should be assigned to the controle and compute nodes, this variable should be true (default) and the matching ipv6 settings should be specified. 
True env.use_dhcp If dhcp service should be used to get an IP address, this variable should be true and the matching mac address must be specified. False env.jumphost.name (Optional) If env.network.mode is set to 'NAT' the name of the jumphost (e.g. the name of KVM host if used as jumphost) should be specified. kvm-host-01 env.jumphost.ip (Optional) The ip of the jumphost. 192.168.10.1 env.jumphost.user (Optional) The user name to login to the jumphost. admin env.jumphost.pass (Optional) The password for user to login to the jumphost. ch4ngeMe! env.jumphost.path_to_keypair (Optional) The absolute path to the public key file on the jumphost to be copied to the bastion. /home/admin/.ssh/id_rsa.pub 15 - OCP and RHCOS (CoreOS) # Variable Name Description Example ocp_download_url Link to the mirror for the OpenShift client and installer from Red Hat. https://mirror.openshift.com/pub/openshift-v4/multi/clients/ocp/4.13.1/s390x/ ocp_client_tgz OpenShift client filename (tar.gz). openshift-client-linux.tar.gz ocp_install_tgz OpenShift installer filename (tar.gz). openshift-install-linux.tar.gz rhcos_download_url Link to the CoreOS files to be used for the bootstrap, control and compute nodes. Feel free to change to a different version. https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.12/4.12.3/ rhcos_os_variant CoreOS base OS. Use the OS string as defined in 'osinfo-query os -f short-id' rhel8.6 rhcos_live_kernel CoreOS kernel filename to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-kernel-s390x rhcos_live_initrd CoreOS initramfs to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-initramfs.s390x.img rhcos_live_rootfs CoreOS rootfs to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-rootfs.s390x.img 16 - (Optional) Disconnected cluster setup # Variable Name Description Example disconnected.enabled True or False, to enable disconnected mode False disconnected.registry.url String containing url of disconnected registry with or without port and without protocol registry.tt.testing:5000 disconnected.registry.pull_secret String containing pull secret of the disconnected registry to be applied on the cluster . Make sure to enclose pull_secret in 'single quotes' and it has appropriate pull access. '{\"auths\":{\"registry.tt..testing:5000\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"test.user@example.com\"}}}' disconnected.registry.mirror_pull_ecret String containing pull secret to use for mirroring. Contains Red Hat secret and registry pull secret. Make sure to enclose pull_secret in 'single quotes' and must be able to push to mirror registry. '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\", \"registry.tt..testing:5000\":...user@example.com\"}}}' disconnected.registry.ca_trusted True or False to indicate that mirror registry CA is implicitly trusted or needs to be made trusted on mirror host and cluster. 
False disconnected.registry.ca_cert Multiline string containing the mirror registry CA bundle -----BEGIN CERTIFICATE-----MIIDqDCCApCgAwIBAgIULL+d1HTYsiP+8jeWnqBis3N4BskwDQYJKoZIhvcNAQEF...-----END CERTIFICATE----- disconnected.mirroring.host.name String containing the hostname of the host, which will be used for mirroring mirror-host-1 disconnected.mirroring.host.ip String containing ip of the host, which will be used for mirroring 192.168.10.99 disconnected.mirroring.host.user String containing the username of the host, which will be used for mirroring mirroruser disconnected.mirroring.host.pass String containing the password of the host, which will be used for mirroring mirrorpassword disconnected.mirroring.file_server.clients_dir Directory path relative to the HTTP/FTP accessible directory on env.file_server where client binary tarballs are kept clients disconnected.mirroring.file_server.oc_mirror_tgz Name of oc-mirror tarball on env.file_server in disconnected.mirroring.file_server.clients_dir oc-mirror.tar.gz disconnected.mirroring.legacy.platform True or False if the platform should be mirrored using oc adm release mirror . False disconnected.mirroring.legacy.ocp_quay_release_image_tag The tag of the release image quay.io/openshift-release-dev/ocp-release to mirror and use 4.13.1-s390x disconnected.mirroring.legacy.ocp_org The org part of the repo on the mirror registry where the release image will be pushed ocp4 disconnected.mirroring.legacy.ocp_repo The repo part of the repo on the mirror registry where the release image will be pushed openshift4 disconnected.mirroring.legacy.ocp_tag The tag part of the repo on the mirror registry where the release image will be pushed. Full image would be as below.: disconnected.registry.url/disconnected.mirroring.legacy.ocp_org/disconnected...ocp_repo:disconnected..ocp_tag v4.13.1 disconnected.mirroring.oc_mirror.release_image_tag The ocp release image tag you want to install the cluster with. Used when legacy platform mirroring is disabled and disconnected.mirroring.oc_mirror.image_set contains platform entries. 4.13.1-multi disconnected.mirroring.oc_mirror.oc_mirror_args.continue_on_error True or False to give --continue-on-error flag to oc-mirror False disconnected.mirroring.oc_mirror.oc_mirror_args.source_skip_tls True or False to give --source-skip-tls flag to oc-mirror False disconnected.mirroring.oc_mirror.post_mirror.mapping.replace.enabled True or False to replace values in mapping.txt generated by oc-mirror. This also does a manual repush of the images in mapping.txt . False disconnected.mirroring.oc_mirror.post_mirror.mapping.replace.list List of regexp and replace where every string/regular expression gets replaced by corresponding replace value. regexp: interal-url.com replace: external-url.com disconnected.mirroring.oc_mirror.image_set YAML fields containing a standard oc-mirror image set with some minor changes to schema. Differences are documented as needed. Used to generate final image set. see template disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.enabled True or False to use registry storage backend for pushing mirrored content directly to the registry. Currently only this backend is supported. True disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.imageURL.org The org part of registry imageURL from standard image set. mirror disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.imageURL.repo The repo part of registry imageURL from standard image set. 
Final imageURL will be as below: disconnected.registry.url/disconnected.mirroring.oc_mirror.image_set.storageConfig .registry.imageURL.org/disconnected...imageURL.repo oc-mirror-metadata disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.skipTLS True of False same purpose served as in standard image set i.e. skip the tls for the registry during mirroring. false disconnected.mirrroing.oc_mirror.image_set.mirror YAML containing a list of what needs to be mirrored. See the oc mirror image set documentation. see oc-mirror image set documentation 17 - (Optional) Create compute node in a day-2 operation # Variable Name Description Example day2_compute_node.vm_name Name of the compute node VM. compute-4 day2_compute_node.vm_hostname Hostnames for compute node. compute-4 day2_compute_node.vm_vm_ip IPv4 address of the compute node. 192.168.10.99 day2_compute_node.vm_vm_ipv6 IPv6 address of the compute node. fd00::99 day2_compute_node.vm_mac MAC address of the compute node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B day2_compute_node.vm_interface The network interface used for given IP addresses of the compute node. enc1 day2_compute_node.hostname The hostname of the KVM host kvm-host-01 day2_compute_node.host_user KVM host user which is used to create the VM root day2_compute_node.host_arch KVM host architecture. s390x 18 - (Optional) Agent Based Installer # Variable Name Description Example abi.flag This is the flag, Will be used to identify during execution. Few checks in the playbook will be depend on this (default value will be False) True abi.ansible_workdir This will be work directory name, it will keep required data that need to be present during or after execution ansible_workdir abi.ocp_installer_version Version will contain value of openshift-installer binary version user desired to be used '4.15.0-rc.8' abi.ocp_installer_url This is the base url of openshift installer binary it will remain same as static value, User Do not need to give value until user wants to change the mirror 'https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/' Hosted Control Plane ( Optional ) # Variable Name Description Example hcp.compute_node_type Select the compute node type for HCP , either zKVM or zVM zvm hcp.mgmt_cluster_nameserver IP Address of Nameserver of Management Cluster 192.168.10.1 hcp.oc_url URL for OC Client that you want to install on the host https://... ..openshift-client-linux-4.13.0-ec.4.tar.gz hcp.ansible_key_name ssh key name ansible-ocpz hcp.pkgs list of packages for different hosts hcp.mce.version version for multicluster-engine Operator 2.4 hcp.mce.instance_name name of the MultiClusterEngine instance engine hcp.mce.delete true or false - deletes mce and related resources while running deletion playbook true hcp.asc.url_for_ocp_release_file Add URL for OCP release.txt File https://... ..../release.txt hcp.asc.db_volume_size DatabaseStorage Volume Size 10Gi hcp.asc.fs_volume_size FileSystem Storage Volume Size 10Gi hcp.asc.ocp_version OCP Version for AgentServiceConfig 4.13.0-ec.4 hcp.asc.iso_url Give URL for ISO image https://... ...s390x-live.s390x.iso hcp.asc.root_fs_url Give URL for rootfs image https://... ... live-rootfs.s390x.img hcp.asc.mce_namespace Namespace where your Multicluster Engine Operator is installed. Recommended Namespace for MCE is 'multicluster-engine'. Change this only if MCE is installed in other namespace. 
multicluster-engine hcp.control_plane.high_availabiliy Availability for Control Plane true hcp.control_plane.clusters_namespace Namespace for Creating Hosted Control Plane clusters hcp.control_plane.hosted_cluster_name Name for the Hosted Cluster hosted0 hcp.control_plane.basedomain Base domain for Hosted Cluster example.com hcp.control_plane.pull_secret_file Path for the pull secret. No need to change this, as the pull secret is copied to the same file /root/ansible_workdir/auth_file /root/ansible_workdir/auth_file hcp.control_plane.ocp_release_image OCP Release version for Hosted Control Cluster and Nodepool 4.13.0-rc.4-multi hcp.control_plane.arch Architecture for InfraEnv and AgentServiceConfig s390x hcp.control_plane.additional_flags Any additional flags for creating hcp ( In hcp create cluster agent command ) --fips hcp.control_plane.pull_secret Pull Secret of Management Cluster Make sure to enclose pull_secret in 'single quotes' '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\"}}}' hcp.bastion_params.create true or false - create bastion with the provided IP true hcp.bastion_params.ip IPv4 address for bastion of Hosted Cluster 192.168.10.1 hcp.bastion_params.user User for bastion of Hosted Cluster root hcp.bastion_params.host IPv4 address of KVM host (kvm host where you want to run all oc commands and create VMs) 192.168.10.1 hcp.bastion_params.host_user User for KVM host root hcp.bastion_params.hostname Hostname for bastion bastion hcp.bastion_params.base_domain DNS base domain for the bastion. ihost.com hcp.bastion_params.nameserver Nameserver for creating bastion 192.168.10.1 hcp.bastion_params.gateway Gateway IP for creating bastion This is how it will be used: ip= :: : 192.168.10.1 hcp.bastion_params.subnet_mask IPv4 address of the subnet mask 255.255.255.0 hcp.bastion_params.interface Interface for bastion enc1 hcp.bastion_params.file_server.ip IPv4 address for the file server that will be used to pass config files and iso to KVM host LPAR(s) and bastion VM during their first boot. 192.168.10.201 hcp.bastion_params.file_server.protocol Protocol used to serve the files, either 'ftp' or 'http' http hcp.bastion_params.file_server.iso_mount_dir Directory path relative to the HTTP/FTP accessible directory where RHEL ISO is mounted. For example, if the FTP root is at /home/user1 and the ISO is mounted at /home/user1/RHEL/8.7 then this variable would be RHEL/8.7 - no slash before or after. RHEL/8.7 hcp.bastion_params.os_variant rhel os variant for creating bastion 8.7 hcp.bastion_params.disk rhel os variant for creating bastion 8.7 hcp.bastion_params.network_name rhel os variant for creating bastion 8.7 hcp.bastion_params.networking_device The network interface card from Linux's perspective. Usually enc and then a number that comes from the dev_num of the network adapter. enc1100 hcp.bastion_params.language What language would you like Red Hat Enterprise Linux to use? In UTF-8 language code. Available languages and their corresponding codes can be found here, in the \"Locale\" column of Table 2.1. en_US.UTF-8 hcp.bastion_params.timezone Which timezone would you like Red Hat Enterprise Linux to use? A list of available timezone options can be found here.
us hcp.data_plane.compute_count Number of agents for the hosted cluster. The same number of compute nodes will be attached to the Hosted Control Plane 2 hcp.data_plane.vcpus vCPUs for compute nodes 4 hcp.data_plane.memory RAM for compute nodes 16384 hcp.data_plane.nameserver Nameserver for compute nodes 192.168.10.1 hcp.data_plane.storage.type Storage type for KVM guests qcow/dasd qcow hcp.data_plane.storage.qcow.disk_size Disk size for kvm guests 100G hcp.data_plane.storage.qcow.pool_path Storage pool path for creating disks /home/images/ hcp.data_plane.storage.dasd dasd disks for kvm guests /disk hcp.data_plane.kvm.ip_params.static_ip.enabled true or false - use static IPs for agents using NMState true hcp.data_plane.kvm.ip_params.static_ip.ip List of IP addresses for agents 192.168.10.1 hcp.data_plane.kvm.ip_params.static_ip.interface Interface for agents for configuring NMStateConfig eth0 hcp.data_plane.kvm.ip_params.mac List of MAC addresses for the agents. Configure in DHCP if you are using dynamic IPs for Agents. - 52:54:00:ba:d3:f7 hcp.data_plane.zvm.network_mode Network mode for zvm nodes Supported modes: vswitch, osa, RoCE vswitch hcp.data_plane.zvm.disk_type Disk type for zvm nodes Supported disk types: fcp, dasd dasd hcp.data_plane.zvm.subnetmask Subnet mask for compute nodes 255.255.255.0 hcp.data_plane.zvm.gateway Gateway for compute nodes 192.168.10.1 hcp.data_plane.zvm.nodes Set of parameters for zvm nodes Give the details of each zvm node here hcp.data_plane.zvm.name Name of the zVM guest m1317002 hcp.data_plane.zvm.nodes.host Host name of the zVM guests, used to log in to the 3270 console boem1317 hcp.data_plane.zvm.nodes.user Username for zVM guests to log in m1317002 hcp.data_plane.zvm.nodes.password Password for the zVM guests to log in password hcp.data_plane.zvm.nodes.interface.ifname Network interface name for zVM guests encbdf0 hcp.data_plane.zvm.nodes.interface.nettype Network type for zVM guests for network connectivity qeth hcp.data_plane.zvm.nodes.interface.subchannels subchannels for zVM guests interfaces 0.0.bdf0,0.0.bdf1,0.0.bdf2 hcp.data_plane.zvm.nodes.interface.options Configuration options layer2=1 hcp.data_plane.zvm.interface.ip IP addresses to be used for zVM nodes 192.168.10.1 hcp.data_plane.zvm.nodes.dasd.disk_id Disk id for dasd disk to be used for zVM node 4404 hcp.data_plane.zvm.nodes.lun Disk details of fcp disk to be used for zVM node 4404","title":"2 Set Variables (group_vars)"},{"location":"set-variables-group-vars/#step-2-set-variables-group_vars","text":"","title":"Step 2: Set Variables (group_vars)"},{"location":"set-variables-group-vars/#overview","text":"In a text editor of your choice, open the template of the environment variables file . Make a copy of it called all.yaml and paste it into the same directory with its template. all.yaml is your master variables file and you will likely reference it many times throughout the process. The default inventory can be found at inventories/default . The variables marked with an X are required to be filled in. Many values are pre-filled or are optional. Optional values are commented out; in order to use them, remove the # and fill them in. This is the most important step in the process. Take the time to make sure everything here is correct. Note on YAML syntax : Only the lowest value in each hierarchy needs to be filled in. For example, at the top of the variables file env and z don't need to be filled in, but the cpc_name does. There are X's where input is required to help you with this.
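To make the YAML nesting concrete, here is a minimal, hypothetical fragment of all.yaml built only from variables documented on this page; the values are illustrative placeholders, not defaults:

env:
  z:
    high_availability: False
    lpar1:
      create: True
      hostname: kvm-host-01
      ip: 192.168.10.1
      user: admin
      pass: ch4ngeMe!

Only the leaf keys (create, hostname, ip, user, pass) carry values; env, z and lpar1 exist purely to provide the hierarchy.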
Scroll the table to the right to see examples for each variable.","title":"Overview"},{"location":"set-variables-group-vars/#1-controller","text":"Variable Name Description Example env.installation_type Can be of type kvm or lpar. Some packages will be ignored for installation in case of non lpar based installation. kvm env.controller.sudo_pass The password to the machine running Ansible (localhost). This will only be used for two things. To ensure you've installed the pre-requisite packages if you're on Linux, and to add the login URL to your /etc/hosts file. Pas$w0rd!","title":"1 - Controller"},{"location":"set-variables-group-vars/#2-lpars","text":"Variable Name Description Example env.z.high_availability Is this cluster spread across three LPARs? If yes, mark True. If not (just in one LPAR), mark False True env.z.ip_forward This variable specifies if ip forwarding is enabled or not if NAT network is selected. If ip_forwarding is set to 0, the installed OCP cluster will not be able to access external services because using NAT keep the nodes isolated. This parameter will be set via sysctl on the KVM host. The change of the value is instantly active. This setting will be configured during 3_setup_kvm playbook. If NAT will be configured after 3_setup_kvm playbook, the setup needs to be done manually before bastion is being created, configured or reconfigured by running the 3_setup_kvm playbook with parameter: --tags cfg_ip_forward 1 env.z.lpar1.create To have Ansible create an LPAR and install RHEL on it for the KVM host, mark True. If using a pre-existing LPAR with RHEL already installed, mark False. True env.z.lpar1.hostname The hostname of the KVM host. kvm-host-01 env.z.lpar1.ip The IPv4 address of the KVM host. 192.168.10.1 env.z.lpar1.user Username for Linux admin on KVM host 1. Recommended to run as a non-root user with sudo access. admin env.z.lpar1.pass The password for the user that will be created or exists on the KVM host. ch4ngeMe! env.z.lpar2.create To create a second LPAR and install RHEL on it to act as another KVM host, mark True. If using pre-existing LPAR(s) with RHEL already installed, mark False. True env.z.lpar2.hostname (Optional) The hostname of the second KVM host. kvm-host-02 env.z.lpar2.ip (Optional) The IPv4 address of the second KVM host. 192.168.10.2 env.z.lpar2.user Username for Linux admin on KVM host 2. Recommended to run as a non-root user with sudo access. admin env.z.lpar2.pass (Optional) The password for the admin user on the second KVM host. ch4ngeMe! env.z.lpar3.create To create a third LPAR and install RHEL on it to act as another KVM host, mark True. If using pre-existing LPAR(s) with RHEL already installed, mark False. True env.z.lpar3.hostname (Optional) The hostname of the third KVM host. kvm-host-03 env.z.lpar3.ip (Optional) The IPv4 address of the third KVM host. 192.168.10.3 env.z.lpar3.user Username for Linux admin on KVM host 3. Recommended to run as a non-root user with sudo access. admin env.z.lpar3.pass (Optional) The password for the admin user on the third KVM host. ch4ngeMe!","title":"2 - LPAR(s)"},{"location":"set-variables-group-vars/#3-file-server","text":"Variable Name Description Example env.file_server.ip IPv4 address for the file server that will be used to pass config files and iso to KVM host LPAR(s) and bastion VM during their first boot. 192.168.10.201 env.file_server.port The port on which the file server is listening. Will be embedded into all download urls. Defaults to protocol default port. 
Keep empty '' to use default port 10000 env.file_server.user Username to connect to the file server. Must have sudo and SSH access. user1 env.file_server.pass Password to connect to the file server as above user. user1pa$s! env.file_server.protocol Protocol used to serve the files, either 'ftp' or 'http' http env.file_server.iso_os_variant The os variant for the bastion kvm to be created rhel8.8 env.file_server.iso_mount_dir Directory path relative to the HTTP/FTP accessible directory where RHEL ISO is mounted. For example, if the FTP root is at /home/user1 and the ISO is mounted at /home/user1/RHEL/8.7 then this variable would be RHEL/8.7 - no slash before or after. RHEL/8.7 env.file_server.cfgs_dir Directory path relative to to the HTTP/FTP accessible directory where configuration files can be stored. For example, if FTP root is /home/user1 and you would like to store the configs at /home/user1/ocpz-config then this variable would be ocpz-config. No slash before or after. ocpz-config","title":"3 - File Server"},{"location":"set-variables-group-vars/#4-red-hat-info","text":"Variable Name Description Example env.redhat.username Red Hat username with a valid license or free trial to Red Hat OpenShift Container Platform (RHOCP), which comes with necessary licenses for Red Hat Enterprise Linux (RHEL) and Red Hat CoreOS (RHCOS). redhat.user env.redhat.password Password to Red Hat above user's account. Used to auto-attach necessary subscriptions to KVM Host, bastion VM, and pull live images for OpenShift. rEdHatPa$s! env.redhat.manage_subscription True or False. Would you like to subscribe the server with Red Hat? True env.redhat.pull_secret Pull secret for OpenShift, comes from Red Hat's Hybrid Cloud Console . Make sure to enclose in 'single quotes'. '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\"}}}'","title":"4 - Red Hat Info"},{"location":"set-variables-group-vars/#5-bastion","text":"Variable Name Description Example env.bastion.create True or False. Would you like to create a bastion KVM guest to host essential infrastructure services like DNS, load balancer, firewall, etc? Can de-select certain services with the env.bastion.options variables below. True env.bastion.vm_name Name of the bastion VM. Arbitrary value. bastion env.bastion.resources.disk_size How much of the storage pool would you like to allocate to the bastion (in Gigabytes)? Recommended 30 or more. 30 env.bastion.resources.ram How much memory would you like to allocate the bastion (in megabytes)? Recommended 4096 or more 4096 env.bastion.resources.swap How much swap storage would you like to allocate the bastion (in megabytes)? Recommended 4096 or more. 4096 env.bastion.resources.vcpu How many virtual CPUs would you like to allocate to the bastion? Recommended 4 or more. 4 env.bastion.resources.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.bastion.networking.ip IPv4 address for the bastion. 192.168.10.3 env.bastion.networking.ipv6 IPv6 address for the bastion if use_ipv6 variable is 'True'. fd00::3 env.bastion.networking.mac MAC address for the bastion if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.bastion.networking.hostname Hostname of the bastion. Will be combined with env.bastion.networking.base_domain to create a Fully Qualified Domain Name (FQDN). ocpz-bastion env.bastion.networking.base_domain Base domain that, when combined with the hostname, creates a fully-qualified domain name (FQDN) for the bastion? 
ihost.com env.bastion.networking.subnetmask Subnet of the bastion. 255.255.255.0 env.bastion.networking.gateway IPv4 of he bastion's gateway server. 192.168.10.0 env.bastion.networking.ipv6_gateway IPv6 of he bastion's gateway server. fd00::1 env.bastion.networking.ipv6_prefix IPv6 prefix. 64 env.bastion.networking.nameserver1 IPv4 address of the server that resolves the bastion's hostname. 192.168.10.200 env.bastion.networking.nameserver2 (Optional) A second IPv4 address that resolves the bastion's hostname. 192.168.10.201 env.bastion.networking.forwarder What IPv4 address will be used to make external DNS calls for the bastion? Can use 1.1.1.1 or 8.8.8.8 as defaults. 8.8.8.8 env.bastion.networking.interface Name of the networking interface on the bastion from Linux's perspective. Most likely enc1. enc1 env.bastion.access.user What would you like the admin's username to be on the bastion? If root, make pass and root_pass vars the same. admin env.bastion.access.pass The password to the bastion's admin user. If using root, make pass and root_pass vars the same. cH4ngeM3! env.bastion.access.root_pass The root password for the bastion. If using root, make pass and root_pass vars the same. R0OtPa$s! env.bastion.options.dns Would you like the bastion to host the DNS information for the cluster? True or False. If false, resolution must come from elsewhere in your environment. Make sure to add IP addresses for KVM hosts, bastion, bootstrap, control, compute nodes, AND api, api-int and *.apps as described here in section \"User-provisioned DNS Requirements\" Table 5. If True this will be done for you in the dns and check_dns roles. True env.bastion.options.loadbalancer.on_bastion Would you like the bastion to host the load balancer (HAProxy) for the cluster? True or False (boolean). If false, this service must be provided elsewhere in your environment, and public and private IP of the load balancer must be provided in the following two variables. True env.bastion.options.loadbalancer.public_ip (Only required if env.bastion.options.loadbalancer.on_bastion is True). The public IPv4 address for your environment's loadbalancer. api, apps, *.apps must use this. 192.168.10.50 env.bastion.options.loadbalancer.private_ip (Only required if env.bastion.options.loadbalancer.on_bastion is True). The private IPv4 address for your environment's loadbalancer. api-int must use this. 10.24.17.12","title":"5 - Bastion"},{"location":"set-variables-group-vars/#6-cluster-networking","text":"Variable Name Description Example env.cluster.networking.metadata_name Name to describe the cluster as a whole, can be anything if DNS will be hosted on the bastion. If DNS is not on the bastion, must match your DNS configuration. Will be combined with the base_domain and hostnames to create Fully Qualified Domain Names (FQDN). ocpz env.cluster.networking.base_domain The site name, where is the cluster being hosted? This will be combined with the metadata_name and hostnames to create FQDNs. host.com env.bastion.networking.ipv6_gateway IPv6 of he bastion's gateway server. fd00::1 env.bastion.networking.ipv6_prefix IPv6 prefix. 64 env.cluster.networking.nameserver1 IPv4 address that the cluster get its hostname resolution from. If env.bastion.options.dns is True, this should be the IP address of the bastion. 192.168.10.200 env.cluster.networking.nameserver2 (Optional) A second IPv4 address will the cluster get its hostname resolution from? If env.bastion.options.dns is True, this should be left commented out. 
192.168.10.201 env.cluster.networking.forwarder What IPv4 address will be used to make external DNS calls for the cluster? Can use 1.1.1.1 or 8.8.8.8 as defaults. 8.8.8.8 env.cluster.networking.interface Name of the networking interface on the bastion from Linux's perspective. Most likely enc1. enc1","title":"6 - Cluster Networking"},{"location":"set-variables-group-vars/#7-bootstrap-node","text":"Variable Name Description Example env.cluster.nodes.bootstrap.disk_size How much disk space do you want to allocate to the bootstrap node (in Gigabytes)? Bootstrap node is temporary and will be brought down automatically when its job completes. 120 or more recommended. 120 env.cluster.nodes.bootstrap.ram How much memory would you like to allocate to the temporary bootstrap node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.bootstrap.vcpu How many virtual CPUs would you like to allocate to the temporary bootstrap node? Recommended 4 or more. 4 env.cluster.nodes.bootstrap.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.bootstrap.vm_name Name of the temporary bootstrap node VM. Arbitrary value. bootstrap env.cluster.nodes.bootstrap.ip IPv4 address of the temporary bootstrap node. 192.168.10.4 env.cluster.nodes.bootstrap.ipv6 IPv6 address for the bootstrap if use_ipv6 variable is 'True'. fd00::4 env.cluster.nodes.bootstrap.mac MAC address for the bootstrap node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.bootstrap.hostname Hostname of the temporary bootstrap node. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualified Domain Name (FQDN). bootstrap-ocpz","title":"7 - Bootstrap Node"},{"location":"set-variables-group-vars/#8-control-nodes","text":"Variable Name Description Example env.cluster.nodes.control.disk_size How much disk space do you want to allocate to each control node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.control.ram How much memory would you like to allocate to each control node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.control.vcpu How many virtual CPUs would you like to allocate to each control node? Recommended 4 or more. 4 env.cluster.nodes.control.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.control.vm_name Name of the control node VMs. Arbitrary values. Usually exactly 3 are used. Must match the total number of IP addresses and hostnames for control nodes. Use provided list format. control-1control-2control-3 env.cluster.nodes.control.ip IPv4 address of the control nodes. Use provided list formatting. 192.168.10.5192.168.10.6192.168.10.7 env.cluster.nodes.control.ipv6 IPv6 address for the control nodes. Use provided list formatting (if use_ipv6 variable is 'True'). fd00::5fd00::6fd00::7 env.cluster.nodes.control.mac MAC address for the control node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.control.hostname Hostnames for control nodes. Must match the total number of IP addresses for control nodes (usually 3). If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualified Domain Name (FQDN).
control-01control-02control-03","title":"8 - Control Nodes"},{"location":"set-variables-group-vars/#9-compute-nodes","text":"Variable Name Description Example env.cluster.nodes.compute.disk_size How much disk space do you want to allocate to each compute node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.compute.ram How much memory would you like to allocate to each compute node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.compute.vcpu How many virtual CPUs would you like to allocate to each compute node? Recommended 2 or more. 2 env.cluster.nodes.compute.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.compute.vm_name Name of the compute node VMs. Arbitrary values. This list can be expanded to any number of nodes, minimum 2. Must match the total number of IP addresses and hostnames for compute nodes. Use provided list format. compute-1compute-2 env.cluster.nodes.compute.ip IPv4 address of the compute nodes. Must match the total number of VM names and hostnames for compute nodes. Use provided list formatting. 192.168.10.8192.168.10.9 env.cluster.nodes.compute.ipv6 IPv6 address for the compute nodes. Use provided list formatting (if use_ipv6 variable is 'True'). fd00::8fd00::9 env.cluster.nodes.compute.mac MAC address for the compute node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.compute.hostname Hostnames for compute nodes. Must match the total number of IP addresses and VM names for compute nodes. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualified Domain Name (FQDN). compute-01compute-02","title":"9 - Compute Nodes"},{"location":"set-variables-group-vars/#10-infra-nodes","text":"Variable Name Description Example env.cluster.nodes.infra.disk_size (Optional) Set up compute nodes that are made for infrastructure workloads (ingress, monitoring, logging)? How much disk space do you want to allocate to each infra node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.infra.ram (Optional) How much memory would you like to allocate to each infra node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.infra.vcpu (Optional) How many virtual CPUs would you like to allocate to each infra node? Recommended 2 or more. 2 env.cluster.nodes.infra.vcpu_model_option (Optional) Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.infra.vm_name (Optional) Name of additional infra node VMs. Arbitrary values. This list can be expanded to any number of nodes, minimum 2. Must match the total number of IP addresses and hostnames for infra nodes. Use provided list format. infra-1infra-2 env.cluster.nodes.infra.ip (Optional) IPv4 address of the infra nodes. This list can be expanded to any number of nodes, minimum 2. Use provided list formatting. 192.168.10.10192.168.10.11 env.cluster.nodes.infra.ipv6 (Optional) IPv6 address of the infra nodes. This list can be expanded to any number of nodes, minimum 2. Use provided list formatting (if use_ipv6 variable is 'True'). fd00::10fd00::11 env.cluster.nodes.infra.hostname (Optional) Hostnames for infra nodes. Must match the total number of IP addresses for infra nodes. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition.
This will be combined with the metadata_name and base_domain to create a Fully Qualififed Domain Name (FQDN). infra-01infra-02","title":"10 - Infra Nodes"},{"location":"set-variables-group-vars/#11-optional-packages","text":"Variable Name Description Example env.pkgs.galaxy A list of Ansible Galaxy collections that will be installed during the setup playbook. The collections listed are required. Feel free to add more as needed, just make sure to follow the same list format. community.general env.pkgs.controller A list of packages that will be installed on the machine running Ansible during the setup playbook. Feel free to add more as needed, just make sure to follow the same list format. openssh env.pkgs.kvm A list of packages that will be installed on the KVM Host during the setup_kvm_host playbook. Feel free to add more as needed, just make sure to follow the same list format. qemu-kvm env.pkgs.bastion A list of packages that will be installed on the bastion during the setup_bastion playbook. Feel free to add more as needed, just make sure to follow the same list format. haproxy","title":"11 - (Optional) Packages"},{"location":"set-variables-group-vars/#12-openshift-settings","text":"Variable Name Description Example env.install_config.api_version Kubernetes API version for the cluster. These install_config variables will be passed to the OCP install_config file. This file is templated in the get_ocp role during the setup_bastion playbook. To make more fine-tuned adjustments to the install_config, you can find it at roles/get_ocp/templates/install-config.yaml.j2 v1 env.install_config.compute.architecture Computing architecture for the compute nodes. Must be s390x for clusters on IBM zSystems. s390x env.install_config.compute.hyperthreading Enable or disable hyperthreading on compute nodes. Recommended enabled. Enabled env.install_config.control.architecture Computing architecture for the control nodes. Must be s390x for clusters on IBM zSystems, amd64 for Intel or AMD systems, and arm64 for ARM servers. s390x env.install_config.control.hyperthreading Enable or disable hyperthreading on control nodes. Recommended enabled. Enabled env.install_config.cluster_network.cidr IPv4 block in Internal cluster networking in Classless Inter-Domain Routing (CIDR) notation. Recommended to keep as is. 10.128.0.0/14 env.install_config.cluster_network.host_prefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. 23 env.install_config.cluster_network.type The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes (default). OVNKubernetes env.install_config.service_network The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. 172.30.0.0/16 env.install_config.fips True or False (boolean) for whether or not to use the United States' Federal Information Processing Standards (FIPS). Not yet certified on IBM zSystems. Enclosed in 'single quotes'. 'false'","title":"12 - OpenShift Settings"},{"location":"set-variables-group-vars/#13-optional-proxy","text":"Variable Name Description Example env.proxy.http (Optional) A proxy URL to use for creating HTTP connections outside the cluster. 
Will be used in the install-config and applied to other Ansible hosts unless set otherwise in no_proxy below. Must follow this pattern: http://username:pswd@ip:port http://ocp-admin:Pa$sw0rd@9.72.10.1:80 env.proxy.https (Optional) A proxy URL to use for creating HTTPS connections outside the cluster. Will be used in the install-config and applied to other Ansible hosts unless set otherwise in no_proxy below. Must follow this pattern: https://username:pswd@ip:port https://ocp-admin:Pa$sw0rd@9.72.10.1:80 env.proxy.no (Optional) A comma-separated list (no spaces) of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. When using a proxy, all necessary IPs and domains for your cluster will be added automatically. See roles/get_ocp/templates/install-config.yaml.j2 for more details on the template. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all listed destinations. example.com,192.168.10.1","title":"13 - (Optional) Proxy"},{"location":"set-variables-group-vars/#14-optional-misc","text":"Variable Name Description Example env.language What language would you like Red Hat Enterprise Linux to use? In UTF-8 language code. Available languages and their corresponding codes can be found here , in the \"Locale\" column of Table 2.1. en_US.UTF-8 env.timezone Which timezone would you like Red Hat Enterprise Linux to use? A list of available timezone options can be found here . America/New_York env.keyboard Which keyboard layout would you like Red Hat Enterprise Linux to use? us env.ansible_key_name (Optional) Name of the SSH key that Ansible will use to connect to hosts. ansible-ocpz env.ocp_key_name Comment to describe the SSH key used for OCP. Arbitrary value. OCPZ-01 key env.vnet_name (Optional) Name of the bridged virtual network that will be created on the KVM host if network mode is not set to NAT. In case of NAT network mode, the name of the NAT network definition used to create the nodes (usually 'default'). If NAT is being used and a jumphost is needed, the parameters network_mode, jumphost.name, jumphost.user and jumphost.pass must be specified, too. For the default (NAT) network, verify that the configured IP ranges do not interfere with the IPs defined for the control and compute nodes. Modify the default network (dhcp range setting) to prevent issues with VMs using dhcp and OCP nodes having fixed IPs. The default is to create a bridge network. macvtap-net env.network_mode (Optional) If the network mode will be NAT and the installation will be executed remotely (e.g. from your laptop), a jumphost needs to be defined so the installation can reach the bastion host. If macvtap is being used for networking, this variable should be empty. NAT env.use_ipv6 If ipv6 addresses should be assigned to the control and compute nodes, this variable should be true (default) and the matching ipv6 settings should be specified. True env.use_dhcp If the dhcp service should be used to get an IP address, this variable should be true and the matching mac address must be specified. False env.jumphost.name (Optional) If env.network_mode is set to 'NAT', the name of the jumphost (e.g. the name of the KVM host if used as jumphost) should be specified. kvm-host-01 env.jumphost.ip (Optional) The IP address of the jumphost. 192.168.10.1 env.jumphost.user (Optional) The user name to log in to the jumphost. admin env.jumphost.pass (Optional) The password for the user to log in to the jumphost. ch4ngeMe!
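As a hedged sketch of how the NAT and jumphost variables documented in this section fit together in all.yaml (the key names follow the entries listed above and below; the values are placeholders, not defaults):

env:
  vnet_name: default        # NAT network definition used to create the nodes
  network_mode: NAT         # leave empty when macvtap networking is used
  use_dhcp: False
  jumphost:
    name: kvm-host-01       # e.g. the KVM host, when it doubles as the jumphost
    ip: 192.168.10.1
    user: admin
    pass: ch4ngeMe!

The remaining jumphost entry continues in the table below.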
env.jumphost.path_to_keypair (Optional) The absolute path to the public key file on the jumphost to be copied to the bastion. /home/admin/.ssh/id_rsa.pub","title":"14 - (Optional) Misc"},{"location":"set-variables-group-vars/#15-ocp-and-rhcos-coreos","text":"Variable Name Description Example ocp_download_url Link to the mirror for the OpenShift client and installer from Red Hat. https://mirror.openshift.com/pub/openshift-v4/multi/clients/ocp/4.13.1/s390x/ ocp_client_tgz OpenShift client filename (tar.gz). openshift-client-linux.tar.gz ocp_install_tgz OpenShift installer filename (tar.gz). openshift-install-linux.tar.gz rhcos_download_url Link to the CoreOS files to be used for the bootstrap, control and compute nodes. Feel free to change to a different version. https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.12/4.12.3/ rhcos_os_variant CoreOS base OS. Use the OS string as defined in 'osinfo-query os -f short-id' rhel8.6 rhcos_live_kernel CoreOS kernel filename to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-kernel-s390x rhcos_live_initrd CoreOS initramfs to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-initramfs.s390x.img rhcos_live_rootfs CoreOS rootfs to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-rootfs.s390x.img","title":"15 - OCP and RHCOS (CoreOS)"},{"location":"set-variables-group-vars/#16-optional-disconnected-cluster-setup","text":"Variable Name Description Example disconnected.enabled True or False, to enable disconnected mode False disconnected.registry.url String containing url of disconnected registry with or without port and without protocol registry.tt.testing:5000 disconnected.registry.pull_secret String containing pull secret of the disconnected registry to be applied on the cluster . Make sure to enclose pull_secret in 'single quotes' and it has appropriate pull access. '{\"auths\":{\"registry.tt..testing:5000\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"test.user@example.com\"}}}' disconnected.registry.mirror_pull_ecret String containing pull secret to use for mirroring. Contains Red Hat secret and registry pull secret. Make sure to enclose pull_secret in 'single quotes' and must be able to push to mirror registry. '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\", \"registry.tt..testing:5000\":...user@example.com\"}}}' disconnected.registry.ca_trusted True or False to indicate that mirror registry CA is implicitly trusted or needs to be made trusted on mirror host and cluster. 
False disconnected.registry.ca_cert Multiline string containing the mirror registry CA bundle -----BEGIN CERTIFICATE-----MIIDqDCCApCgAwIBAgIULL+d1HTYsiP+8jeWnqBis3N4BskwDQYJKoZIhvcNAQEF...-----END CERTIFICATE----- disconnected.mirroring.host.name String containing the hostname of the host, which will be used for mirroring mirror-host-1 disconnected.mirroring.host.ip String containing ip of the host, which will be used for mirroring 192.168.10.99 disconnected.mirroring.host.user String containing the username of the host, which will be used for mirroring mirroruser disconnected.mirroring.host.pass String containing the password of the host, which will be used for mirroring mirrorpassword disconnected.mirroring.file_server.clients_dir Directory path relative to the HTTP/FTP accessible directory on env.file_server where client binary tarballs are kept clients disconnected.mirroring.file_server.oc_mirror_tgz Name of oc-mirror tarball on env.file_server in disconnected.mirroring.file_server.clients_dir oc-mirror.tar.gz disconnected.mirroring.legacy.platform True or False if the platform should be mirrored using oc adm release mirror . False disconnected.mirroring.legacy.ocp_quay_release_image_tag The tag of the release image quay.io/openshift-release-dev/ocp-release to mirror and use 4.13.1-s390x disconnected.mirroring.legacy.ocp_org The org part of the repo on the mirror registry where the release image will be pushed ocp4 disconnected.mirroring.legacy.ocp_repo The repo part of the repo on the mirror registry where the release image will be pushed openshift4 disconnected.mirroring.legacy.ocp_tag The tag part of the repo on the mirror registry where the release image will be pushed. Full image would be as below.: disconnected.registry.url/disconnected.mirroring.legacy.ocp_org/disconnected...ocp_repo:disconnected..ocp_tag v4.13.1 disconnected.mirroring.oc_mirror.release_image_tag The ocp release image tag you want to install the cluster with. Used when legacy platform mirroring is disabled and disconnected.mirroring.oc_mirror.image_set contains platform entries. 4.13.1-multi disconnected.mirroring.oc_mirror.oc_mirror_args.continue_on_error True or False to give --continue-on-error flag to oc-mirror False disconnected.mirroring.oc_mirror.oc_mirror_args.source_skip_tls True or False to give --source-skip-tls flag to oc-mirror False disconnected.mirroring.oc_mirror.post_mirror.mapping.replace.enabled True or False to replace values in mapping.txt generated by oc-mirror. This also does a manual repush of the images in mapping.txt . False disconnected.mirroring.oc_mirror.post_mirror.mapping.replace.list List of regexp and replace where every string/regular expression gets replaced by corresponding replace value. regexp: interal-url.com replace: external-url.com disconnected.mirroring.oc_mirror.image_set YAML fields containing a standard oc-mirror image set with some minor changes to schema. Differences are documented as needed. Used to generate final image set. see template disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.enabled True or False to use registry storage backend for pushing mirrored content directly to the registry. Currently only this backend is supported. True disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.imageURL.org The org part of registry imageURL from standard image set. mirror disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.imageURL.repo The repo part of registry imageURL from standard image set. 
Final imageURL will be as below: disconnected.registry.url/disconnected.mirroring.oc_mirror.image_set.storageConfig .registry.imageURL.org/disconnected...imageURL.repo oc-mirror-metadata disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.skipTLS True of False same purpose served as in standard image set i.e. skip the tls for the registry during mirroring. false disconnected.mirrroing.oc_mirror.image_set.mirror YAML containing a list of what needs to be mirrored. See the oc mirror image set documentation. see oc-mirror image set documentation","title":"16 - (Optional) Disconnected cluster setup"},{"location":"set-variables-group-vars/#17-optional-create-compute-node-in-a-day-2-operation","text":"Variable Name Description Example day2_compute_node.vm_name Name of the compute node VM. compute-4 day2_compute_node.vm_hostname Hostnames for compute node. compute-4 day2_compute_node.vm_vm_ip IPv4 address of the compute node. 192.168.10.99 day2_compute_node.vm_vm_ipv6 IPv6 address of the compute node. fd00::99 day2_compute_node.vm_mac MAC address of the compute node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B day2_compute_node.vm_interface The network interface used for given IP addresses of the compute node. enc1 day2_compute_node.hostname The hostname of the KVM host kvm-host-01 day2_compute_node.host_user KVM host user which is used to create the VM root day2_compute_node.host_arch KVM host architecture. s390x","title":"17 - (Optional) Create compute node in a day-2 operation"},{"location":"set-variables-group-vars/#18-optional-agent-based-installer","text":"Variable Name Description Example abi.flag This is the flag, Will be used to identify during execution. Few checks in the playbook will be depend on this (default value will be False) True abi.ansible_workdir This will be work directory name, it will keep required data that need to be present during or after execution ansible_workdir abi.ocp_installer_version Version will contain value of openshift-installer binary version user desired to be used '4.15.0-rc.8' abi.ocp_installer_url This is the base url of openshift installer binary it will remain same as static value, User Do not need to give value until user wants to change the mirror 'https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/'","title":"18 - (Optional) Agent Based Installer"},{"location":"set-variables-group-vars/#hosted-control-plane-optional","text":"Variable Name Description Example hcp.compute_node_type Select the compute node type for HCP , either zKVM or zVM zvm hcp.mgmt_cluster_nameserver IP Address of Nameserver of Management Cluster 192.168.10.1 hcp.oc_url URL for OC Client that you want to install on the host https://... ..openshift-client-linux-4.13.0-ec.4.tar.gz hcp.ansible_key_name ssh key name ansible-ocpz hcp.pkgs list of packages for different hosts hcp.mce.version version for multicluster-engine Operator 2.4 hcp.mce.instance_name name of the MultiClusterEngine instance engine hcp.mce.delete true or false - deletes mce and related resources while running deletion playbook true hcp.asc.url_for_ocp_release_file Add URL for OCP release.txt File https://... ..../release.txt hcp.asc.db_volume_size DatabaseStorage Volume Size 10Gi hcp.asc.fs_volume_size FileSystem Storage Volume Size 10Gi hcp.asc.ocp_version OCP Version for AgentServiceConfig 4.13.0-ec.4 hcp.asc.iso_url Give URL for ISO image https://... ...s390x-live.s390x.iso hcp.asc.root_fs_url Give URL for rootfs image https://... ... 
live-rootfs.s390x.img hcp.asc.mce_namespace Namespace where your Multicluster Engine Operator is installed. Recommended Namespace for MCE is 'multicluster-engine'. Change this only if MCE is installed in another namespace. multicluster-engine hcp.control_plane.high_availabiliy Availability for Control Plane true hcp.control_plane.clusters_namespace Namespace for Creating Hosted Control Plane clusters hcp.control_plane.hosted_cluster_name Name for the Hosted Cluster hosted0 hcp.control_plane.basedomain Base domain for Hosted Cluster example.com hcp.control_plane.pull_secret_file Path for the pull secret. No need to change this, as the pull secret is copied to the same file /root/ansible_workdir/auth_file /root/ansible_workdir/auth_file hcp.control_plane.ocp_release_image OCP Release version for Hosted Control Cluster and Nodepool 4.13.0-rc.4-multi hcp.control_plane.arch Architecture for InfraEnv and AgentServiceConfig s390x hcp.control_plane.additional_flags Any additional flags for creating hcp ( In hcp create cluster agent command ) --fips hcp.control_plane.pull_secret Pull Secret of Management Cluster Make sure to enclose pull_secret in 'single quotes' '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\"}}}' hcp.bastion_params.create true or false - create bastion with the provided IP true hcp.bastion_params.ip IPv4 address for bastion of Hosted Cluster 192.168.10.1 hcp.bastion_params.user User for bastion of Hosted Cluster root hcp.bastion_params.host IPv4 address of KVM host (kvm host where you want to run all oc commands and create VMs) 192.168.10.1 hcp.bastion_params.host_user User for KVM host root hcp.bastion_params.hostname Hostname for bastion bastion hcp.bastion_params.base_domain DNS base domain for the bastion. ihost.com hcp.bastion_params.nameserver Nameserver for creating bastion 192.168.10.1 hcp.bastion_params.gateway Gateway IP for creating bastion This is how it will be used: ip= :: : 192.168.10.1 hcp.bastion_params.subnet_mask IPv4 address of the subnet mask 255.255.255.0 hcp.bastion_params.interface Interface for bastion enc1 hcp.bastion_params.file_server.ip IPv4 address for the file server that will be used to pass config files and iso to KVM host LPAR(s) and bastion VM during their first boot. 192.168.10.201 hcp.bastion_params.file_server.protocol Protocol used to serve the files, either 'ftp' or 'http' http hcp.bastion_params.file_server.iso_mount_dir Directory path relative to the HTTP/FTP accessible directory where RHEL ISO is mounted. For example, if the FTP root is at /home/user1 and the ISO is mounted at /home/user1/RHEL/8.7 then this variable would be RHEL/8.7 - no slash before or after. RHEL/8.7 hcp.bastion_params.os_variant rhel os variant for creating bastion 8.7 hcp.bastion_params.disk rhel os variant for creating bastion 8.7 hcp.bastion_params.network_name rhel os variant for creating bastion 8.7 hcp.bastion_params.networking_device The network interface card from Linux's perspective. Usually enc and then a number that comes from the dev_num of the network adapter. enc1100 hcp.bastion_params.language What language would you like Red Hat Enterprise Linux to use? In UTF-8 language code. Available languages and their corresponding codes can be found here, in the \"Locale\" column of Table 2.1. en_US.UTF-8 hcp.bastion_params.timezone Which timezone would you like Red Hat Enterprise Linux to use? A list of available timezone options can be found here.
America/New_York hcp.bastion_params.keyboard Which keyboard layout would you like Red Hat Enterprise Linux to use? us hcp.data_plane.compute_count Number of agents for the hosted cluster. The same number of compute nodes will be attached to the Hosted Control Plane 2 hcp.data_plane.vcpus vCPUs for compute nodes 4 hcp.data_plane.memory RAM for compute nodes 16384 hcp.data_plane.nameserver Nameserver for compute nodes 192.168.10.1 hcp.data_plane.storage.type Storage type for KVM guests qcow/dasd qcow hcp.data_plane.storage.qcow.disk_size Disk size for kvm guests 100G hcp.data_plane.storage.qcow.pool_path Storage pool path for creating disks /home/images/ hcp.data_plane.storage.dasd dasd disks for kvm guests /disk hcp.data_plane.kvm.ip_params.static_ip.enabled true or false - use static IPs for agents using NMState true hcp.data_plane.kvm.ip_params.static_ip.ip List of IP addresses for agents 192.168.10.1 hcp.data_plane.kvm.ip_params.static_ip.interface Interface for agents for configuring NMStateConfig eth0 hcp.data_plane.kvm.ip_params.mac List of MAC addresses for the agents. Configure in DHCP if you are using dynamic IPs for Agents. - 52:54:00:ba:d3:f7 hcp.data_plane.zvm.network_mode Network mode for zvm nodes Supported modes: vswitch, osa, RoCE vswitch hcp.data_plane.zvm.disk_type Disk type for zvm nodes Supported disk types: fcp, dasd dasd hcp.data_plane.zvm.subnetmask Subnet mask for compute nodes 255.255.255.0 hcp.data_plane.zvm.gateway Gateway for compute nodes 192.168.10.1 hcp.data_plane.zvm.nodes Set of parameters for zvm nodes Give the details of each zvm node here hcp.data_plane.zvm.name Name of the zVM guest m1317002 hcp.data_plane.zvm.nodes.host Host name of the zVM guests, used to log in to the 3270 console boem1317 hcp.data_plane.zvm.nodes.user Username for zVM guests to log in m1317002 hcp.data_plane.zvm.nodes.password Password for the zVM guests to log in password hcp.data_plane.zvm.nodes.interface.ifname Network interface name for zVM guests encbdf0 hcp.data_plane.zvm.nodes.interface.nettype Network type for zVM guests for network connectivity qeth hcp.data_plane.zvm.nodes.interface.subchannels subchannels for zVM guests interfaces 0.0.bdf0,0.0.bdf1,0.0.bdf2 hcp.data_plane.zvm.nodes.interface.options Configuration options layer2=1 hcp.data_plane.zvm.interface.ip IP addresses to be used for zVM nodes 192.168.10.1 hcp.data_plane.zvm.nodes.dasd.disk_id Disk id for dasd disk to be used for zVM node 4404 hcp.data_plane.zvm.nodes.lun Disk details of fcp disk to be used for zVM node 4404","title":"Hosted Control Plane ( Optional )"},{"location":"set-variables-host-vars/","text":"Step 3: Set Variables (host_vars) # Overview # Similar to the group_vars file, the host_vars files for each LPAR (KVM host) must be filled in. For each KVM host to be acted upon with Ansible, you must have a corresponding host_vars file named after that host with a .yaml extension (e.g. ocpz1.yaml, ocpz2.yaml, ocpz3.yaml), so you must copy and rename the templates found in the host_vars folder accordingly. The variables marked with an X are required to be filled in. Many values are pre-filled or are optional. Optional values are commented out; in order to use them, remove the # and fill them in. Many of the variables in these host_vars files are only required if you are NOT using pre-existing LPARs with RHEL installed. See the Important Note below this first section for more details. This is the most important step in the process. Take the time to make sure everything here is correct.
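For orientation, a hypothetical host_vars file for the first KVM host (for example inventories/default/host_vars/ocpz1.yaml, named to match that host) might contain only the leaf values listed in section 1 below; all values shown are illustrative, taken from the examples in the table:

networking:
  hostname: kvm-host-01
  ip: 192.168.10.2
  subnetmask: 255.255.255.0
  gateway: 192.168.10.0
  nameserver1: 192.168.10.200
  device1: enc100
storage:
  pool_path: /home/kvm_admin/VirtualMachines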
Note on YAML syntax : Only the lowest value in each hierarchy needs to be filled in. For example, at the top of the variables file networking does not need to be filled in, but the hostname does. There are X's where input is required to help you with this. Scroll the table to the right to see examples for each variable. 1 - KVM Host # Variable Name Description Example networking.hostname The hostname of the LPAR with RHEL installed natively (the KVM host). kvm-host-01 networking.ip The IPv4 address of the LPAR with RHEL installed natively (the KVM host). 192.168.10.2 networking.ipv6 IPv6 address for the bastion if use_ipv6 variable is 'True'. fd00::3 networking.subnetmask The subnet that the LPAR resides in within your network. 255.255.255.0 networking.gateway The IPv4 address of the gateway to the network where the KVM host resides. 192.168.10.0 networking.ipv6_gateway IPv6 of the bastion's gateway server. fd00::1 networking.ipv6_prefix IPv6 prefix. 64 networking.nameserver1 The IPv4 address from which the KVM host gets its hostname resolved. 192.168.10.200 networking.nameserver2 (Optional) A second IPv4 address from which the KVM host can get its hostname resolved. Used for high availability. 192.168.10.201 networking.device1 The network interface card from Linux's perspective. Usually enc and then a number that comes from the dev_num of the network adapter. enc100 networking.device2 (Optional) Another Linux network interface card. Usually enc and then a number that comes from the dev_num of the second network adapter. enc1 storage.pool_path The absolute path to a directory on your KVM host that will be used to store qcow2 images for the cluster and other installation artifacts. A sub-directory that matches your cluster's metadata name will be created here and will act as the cluster's libvirt storage pool directory. Note: all directories present in this path will be made executable for the 'qemu' group, as is required. /home/kvm_admin/VirtualMachines Important Note # You can skip the rest of the variables on this page IF you are using existing LPAR(s) that have RHEL already installed. If you are installing an LPAR based cluster then the information below must be provided and is not optional. You must create a host file corresponding to each LPAR node. Since this is how most production deployments on-prem are done on IBM zSystems, these variables have been marked as optional. With pre-existing LPARs with RHEL installed, you can also skip the 1_create_lpar.yaml and 2_create_kvm_host.yaml playbooks. Make sure to still do 0_setup.yaml first though, then skip to 3_setup_kvm_host.yaml . In the scenario of LPAR based installation you can skip 1_create_lpar.yaml and 2_create_kvm_host.yaml . You can also optionally skip 3_setup_kvm_host.yaml and 4_create_bastion.yaml unless you are planning on having the bastion on the same host. In case of LPAR based installation one is expected to have a tessia live disk accessible by the LPAR nodes for network boot; the details are to be filled in in section #7 below. The steps to create a tessia livedisk can be found here . 2 - (Optional) CPC & HMC # Variable Name Description Example cpc_name The name of the IBM zSystems / LinuxONE mainframe that you are creating a Red Hat OpenShift Container Platform cluster on. Can be found under the \"Systems Management\" tab of the Hardware Management Console (HMC).
SYS1 hmc.host The IPv4 address of the HMC you will be connecting to in order to create a Logical Partition (LPAR) on which will act as the Kernel-based Virtual Machine (KVM) host aftering installing and setting up Red Hat Enterprise Linux (RHEL). 192.168.10.1 hmc.user The username that the HMC API call will use to connect to the HMC. Must have access to create LPARs, attach storage groups and networking cards. hmc-user hmc.pass The password that the HMC API call will use to connect to the HMC. Must have access to create LPARs, attach storage groups and networking cards. hmcPas$w0rd! 3 - (Optional) LPAR # Variable Name Description Example lpar.name The name of the Logical Partition (LPAR) that you would like to create/target for the creation of your cluster. This LPAR will act as the KVM host, with RHEL installed natively. OCPKVM1 lpar.description A short description of what this LPAR will be used for, will only be displayed in the HMC next to the LPAR name for identification purposes. KVM host LPAR for RHOCP cluster. lpar.access.user The username that will be created in RHEL when it is installed on the LPAR (the KVM host). kvm-admin lpar.access.pass The password for the user that will be created in RHEL when it is installed on the LPAR (the KVM host). ch4ngeMe! lpar.root_pass The root password for RHEL installed on the LPAR (the KVM host). $ecureP4ass! 4 - (Optional) IFL & Memory # Variable Name Description Example lpar.ifl.count Number of Integrated Facilities for Linux (IFL) processors will be assigned to this LPAR. 6 or more recommended. 6 lpar.ifl.initial memory Initial memory allocation for LPAR to have at start-up (in megabytes). 55000 lpar.ifl.max_memory The most amount of memory this LPAR can be using at any one time (in megabytes). 99000 lpar.ifl.initial_weight For LPAR load balancing purposes, the processing weight this LPAR will have at start-up (1-999). 100 lpar.ifl.min_weight For LPAR load balancing purposes, the minimum weight that this LPAR can have at any one time (1-999). 50 lpar.ifl.max_weight For LPAR load balancing purposes, the maximum weight that this LPAR can have at any one time (1-999). 500 5 - (Optional) Networking # Variable Name Description Example lpar.networking.subnet_cidr The same value as the above variable but in Classless Inter-Domain Routing (CIDR) notation. 23 lpar.networking.nic.card1.name The logical name of the Network Interface Card (NIC) within the HMC. An arbitrary value that is human-readable that points to the NIC. SYS-NIC-01 lpar.networking.nic.card1.adapter The physical adapter name reference to the logical adapter for the LPAR. 10Gb-A lpar.networking.nic.card1.port The port number for the NIC. 0 lpar.networking.nic.card1.dev_num The logical device number for the NIC. In hex format. 0x0100 lpar.networking.nic.card2.name (Optional) The logical name of a second Network Interface Card (NIC) within the HMC. An arbitrary value that is human-readable that points to the NIC. SYS-NIC-02 lpar.networking.nic.card2.adapter (Optional) The physical adapter name of a second NIC. 10Gb-B lpar.networking.nic.card2.port (Optional) The port number for a second NIC. 1 lpar.networking.nic.card2.dev_num (Optional) The logical device number for a second NIC. In hex format. 0x0001 6 - (Optional) Storage # Variable Name Description Example lpar.storage_group_1.name The name of the storage group that will be attached to the LPAR. OCP-storage-01 lpar.storage_group_1.type Storage type. FCP is the only tested type as of now. 
fcp lpar.storage_group_1.storage_wwpn World-wide port numbers for storage group. Use provided list formatting. 500708680235c3f0 500708680235c3f1 500708680235c3f2 500708680235c3f3 lpar.storage_group_1.dev_num The logical device number of the Host Bus Adapter (HBA) for the storage group. C001 lpar.storage_group_1.lun_name The Logical Unit Numbers (LUN) that points to a specific virtual disk behind the WWPN. 4200569309ahhd240000000000000c001 lpar.storage_group_2.name (Optional) The name of the storage group that will be attached to the LPAR. OCP-storage-01 lpar.storage_group_2.auto_config (Optional) Attempt to automate the addition of the disk space to the existing logical volume. Check out roles/configure_storage/tasks/main.yaml to ensure this will work properly with your setup. True lpar.storage_group_2.type (Optional) Storage type. FCP is the only tested type as of now. fcp lpar.storage_group_2_.storage_wwpn (Optional) World-wide port numbers for storage group. Use provided list formatting. 500708680235c3f0 500708680235c3f1 500708680235c3f2 500708680235c3f3 lpar.storage_group_2_.dev_num (Optional) The logical device number of the Host Bus Adapter (HBA) for the storage group. C001 lpar.storage_group_2_.lun_name (Optional) The Logical Unit Numbers (LUN) that points to a specific virtual disk behind the WWPN. 4200569309ahhd240000000000000c001 7 - (Optional) Livedisk info # Variable Name Description Example lpar.livedisk.livedisktype (Optional) Storage type. DASD and SCSI are tested types as of now. dasd/scsi lpar.livedisk.lun (Required if livedisktype is scsi) The Lunid of the disk when the livedisktype is SCSI. 4003402b00000000 lpar.livedisk.wwpn (Required if livedisktype is scsi) World-wide port number when livedisktype is SCSI. 500507630a1b50a4 lpar.livedisk.devicenr (Optional) the device no of the live disk c6h1 lpar.livedisk.livedisk_root_pass (Optional) root password for the livedisk p@ssword","title":"3 Set Variables (host_vars)"},{"location":"set-variables-host-vars/#step-3-set-variables-host_vars","text":"","title":"Step 3: Set Variables (host_vars)"},{"location":"set-variables-host-vars/#overview","text":"Similar to the group_vars file, the host_vars files for each LPAR (KVM host) must be filled in. For each KVM host to be acted upon with Ansible, you must have a corresponding host_vars file named .yaml (i.e. ocpz1.yaml, ocpz2.yaml, ocpz3.yaml), so you must copy and rename the templates found in the host_vars folder accordingly. The variables marked with an X are required to be filled in. Many values are pre-filled or are optional. Optional values are commented out; in order to use them, remove the # and fill them in. Many of the variables in these host_vars files are only required if you are NOT using pre-existing LPARs with RHEL installed. See the Important Note below this first section for more details. This is the most important step in the process. Take the time to make sure everything here is correct. Note on YAML syntax : Only the lowest value in each hierarchicy needs to be filled in. For example, at the top of the variables file networking does not need to be filled in, but the hostname does. There are X's where input is required to help you with this. Scroll the table to the right to see examples for each variable.","title":"Overview"},{"location":"set-variables-host-vars/#1-kvm-host","text":"Variable Name Description Example networking.hostname The hostname of the LPAR with RHEL installed natively (the KVM host). 
kvm-host-01 networking.ip The IPv4 address of the LPAR with RHEL installed natively (the KVM host). 192.168.10.2 networking.ipv6 IPv6 address for the bastion if use_ipv6 variable is 'True'. fd00::3 networking.subnetmask The subnet that the LPAR resides in within your network. 255.255.255.0 networking.gateway The IPv4 address of the gateway to the network where the KVM host resides. 192.168.10.0 networking.ipv6_gateway IPv6 of he bastion's gateway server. fd00::1 networking.ipv6_prefix IPv6 prefix. 64 networking.nameserver1 The IPv4 address from which the KVM host gets its hostname resolved. 192.168.10.200 networking.nameserver2 (Optional) A second IPv4 address from which the KVM host can get its hostname resolved. Used for high availability. 192.168.10.201 networking.device1 The network interface card from Linux's perspective. Usually enc and then a number that comes from the dev_num of the network adapter. enc100 networking.device2 (Optional) Another Linux network interface card. Usually enc and then a number that comes from the dev_num of the second network adapter. enc1 storage.pool_path The absolute path to a directory on your KVM host that will be used to store qcow2 images for the cluster and other installation artifacts. A sub-directory will be created here that matches your clsuter's metadata name that will act as the cluster's libvirt storage pool directory. Note: all directories present in this path will be made executable for the 'qemu' group, as is required. /home/kvm_admin/VirtualMachines","title":"1 - KVM Host"},{"location":"set-variables-host-vars/#important-note","text":"You can skip the rest of the variables on this page IF you are using existing LPAR(s) that has RHEL already installed. If you are installing an LPAR based cluster then the information below must be provided and are not optional. You must create a host file corresponding to each lpar node. Since this is how most production deployments on-prem are done on IBM zSystems, these variables have been marked as optional. With pre-existing LPARs with RHEL installed, you can also skip 1_create_lpar.yaml and 2_create_kvm_host.yaml playbooks. Make sure to still do 0_setup.yaml first though, then skip to 3_setup_kvm_host.yaml In the scenario of lpar based installation you can skip 1_create_lpar.yaml and 2_create_kvm_host.yaml . You can also optionally skip 3_setup_kvm_host.yaml and 4_create_bastion.yaml unless you are planning on having the bastion on the same host. In case of lpar based installation one is expected to have a tessia live disk accessible by the lpar nodes for network boot. The details of which are to be filled in section #7 below. The steps to create a tessia livedisk can be found here .","title":"Important Note"},{"location":"set-variables-host-vars/#2-optional-cpc-hmc","text":"Variable Name Description Example cpc_name The name of the IBM zSystems / LinuxONE mainframe that you are creating a Red Hat OpenShift Container Platform cluster on. Can be found under the \"Systems Management\" tab of the Hardware Management Console (HMC). SYS1 hmc.host The IPv4 address of the HMC you will be connecting to in order to create a Logical Partition (LPAR) on which will act as the Kernel-based Virtual Machine (KVM) host aftering installing and setting up Red Hat Enterprise Linux (RHEL). 192.168.10.1 hmc.user The username that the HMC API call will use to connect to the HMC. Must have access to create LPARs, attach storage groups and networking cards. 
hmc-user hmc.pass The password that the HMC API call will use to connect to the HMC. Must have access to create LPARs, attach storage groups and networking cards. hmcPas$w0rd!","title":"2 - (Optional) CPC & HMC"},{"location":"set-variables-host-vars/#3-optional-lpar","text":"Variable Name Description Example lpar.name The name of the Logical Partition (LPAR) that you would like to create/target for the creation of your cluster. This LPAR will act as the KVM host, with RHEL installed natively. OCPKVM1 lpar.description A short description of what this LPAR will be used for, will only be displayed in the HMC next to the LPAR name for identification purposes. KVM host LPAR for RHOCP cluster. lpar.access.user The username that will be created in RHEL when it is installed on the LPAR (the KVM host). kvm-admin lpar.access.pass The password for the user that will be created in RHEL when it is installed on the LPAR (the KVM host). ch4ngeMe! lpar.root_pass The root password for RHEL installed on the LPAR (the KVM host). $ecureP4ass!","title":"3 - (Optional) LPAR"},{"location":"set-variables-host-vars/#4-optional-ifl-memory","text":"Variable Name Description Example lpar.ifl.count Number of Integrated Facilities for Linux (IFL) processors will be assigned to this LPAR. 6 or more recommended. 6 lpar.ifl.initial memory Initial memory allocation for LPAR to have at start-up (in megabytes). 55000 lpar.ifl.max_memory The most amount of memory this LPAR can be using at any one time (in megabytes). 99000 lpar.ifl.initial_weight For LPAR load balancing purposes, the processing weight this LPAR will have at start-up (1-999). 100 lpar.ifl.min_weight For LPAR load balancing purposes, the minimum weight that this LPAR can have at any one time (1-999). 50 lpar.ifl.max_weight For LPAR load balancing purposes, the maximum weight that this LPAR can have at any one time (1-999). 500","title":"4 - (Optional) IFL & Memory"},{"location":"set-variables-host-vars/#5-optional-networking","text":"Variable Name Description Example lpar.networking.subnet_cidr The same value as the above variable but in Classless Inter-Domain Routing (CIDR) notation. 23 lpar.networking.nic.card1.name The logical name of the Network Interface Card (NIC) within the HMC. An arbitrary value that is human-readable that points to the NIC. SYS-NIC-01 lpar.networking.nic.card1.adapter The physical adapter name reference to the logical adapter for the LPAR. 10Gb-A lpar.networking.nic.card1.port The port number for the NIC. 0 lpar.networking.nic.card1.dev_num The logical device number for the NIC. In hex format. 0x0100 lpar.networking.nic.card2.name (Optional) The logical name of a second Network Interface Card (NIC) within the HMC. An arbitrary value that is human-readable that points to the NIC. SYS-NIC-02 lpar.networking.nic.card2.adapter (Optional) The physical adapter name of a second NIC. 10Gb-B lpar.networking.nic.card2.port (Optional) The port number for a second NIC. 1 lpar.networking.nic.card2.dev_num (Optional) The logical device number for a second NIC. In hex format. 0x0001","title":"5 - (Optional) Networking"},{"location":"set-variables-host-vars/#6-optional-storage","text":"Variable Name Description Example lpar.storage_group_1.name The name of the storage group that will be attached to the LPAR. OCP-storage-01 lpar.storage_group_1.type Storage type. FCP is the only tested type as of now. fcp lpar.storage_group_1.storage_wwpn World-wide port numbers for storage group. Use provided list formatting. 
500708680235c3f0 500708680235c3f1 500708680235c3f2 500708680235c3f3 lpar.storage_group_1.dev_num The logical device number of the Host Bus Adapter (HBA) for the storage group. C001 lpar.storage_group_1.lun_name The Logical Unit Numbers (LUN) that points to a specific virtual disk behind the WWPN. 4200569309ahhd240000000000000c001 lpar.storage_group_2.name (Optional) The name of the storage group that will be attached to the LPAR. OCP-storage-01 lpar.storage_group_2.auto_config (Optional) Attempt to automate the addition of the disk space to the existing logical volume. Check out roles/configure_storage/tasks/main.yaml to ensure this will work properly with your setup. True lpar.storage_group_2.type (Optional) Storage type. FCP is the only tested type as of now. fcp lpar.storage_group_2_.storage_wwpn (Optional) World-wide port numbers for storage group. Use provided list formatting. 500708680235c3f0 500708680235c3f1 500708680235c3f2 500708680235c3f3 lpar.storage_group_2_.dev_num (Optional) The logical device number of the Host Bus Adapter (HBA) for the storage group. C001 lpar.storage_group_2_.lun_name (Optional) The Logical Unit Numbers (LUN) that points to a specific virtual disk behind the WWPN. 4200569309ahhd240000000000000c001","title":"6 - (Optional) Storage"},{"location":"set-variables-host-vars/#7-optional-livedisk-info","text":"Variable Name Description Example lpar.livedisk.livedisktype (Optional) Storage type. DASD and SCSI are tested types as of now. dasd/scsi lpar.livedisk.lun (Required if livedisktype is scsi) The Lunid of the disk when the livedisktype is SCSI. 4003402b00000000 lpar.livedisk.wwpn (Required if livedisktype is scsi) World-wide port number when livedisktype is SCSI. 500507630a1b50a4 lpar.livedisk.devicenr (Optional) the device no of the live disk c6h1 lpar.livedisk.livedisk_root_pass (Optional) root password for the livedisk p@ssword","title":"7 - (Optional) Livedisk info"},{"location":"troubleshooting/","text":"Troubleshooting # If you encounter errors while running the main playbook, there are a few things you can do: Double check your variables. Inspect the part that failed by opening the playbook or role at roles/role-name/tasks/main.yaml Google the specific error message. Re-run the role with the verbosity '-v' option to get more debugging information (more v's give more info). For example: ansible-playbook playbooks/setup_bastion.yaml -vvv Use tags To be more selective with what parts of a playbook are run, use tags. To determine what part of a playbook you would like to run, open the playbook you'd like to run and find the roles parameter. Each role has a corresponding tag. There are also occasionally tags for sections of a playbook or within the role themselves. This is especially helpful for troubleshooting. You can add in tags under the name parameter for individual tasks you'd like to run. Here's an example of using a tag: ansible-playbook playbooks/setup_kvm_host.yaml --tags \"section_2,section_3\" This runs only the parts of the setup_kvm_host playbook marked with tags section_2 and section_3. To use more than one tag, they must be quoted (single or double) and comma-separated (with or without spaces between). E-mail Jacob Emery at jacob.emery@ibm.com If it's a problem with an OpenShift verification step: Open the cockpit to monitor the VMs. In a web browser, go to https://kvm-host-IP-here:9090 Sign-in with your credentials set in the variables file Enable administrative access in the top right. 
Open the 'Virtual Machines' tab from the left side toolbar. Sometimes it just takes a while, especially if it's lacking resources. Give it some time and then re-reun the playbook/role with tags. If that doesn't work, SSH into the bastion as root (\"ssh root@\\\") and then run, \"export KUBECONFIG=/root/ocpinst/auth/kubeconfig\" and then \"oc whoami\" and make sure it ouputs \"system:admin\". Then run the shell command from the role you would like to check on manually: i.e. 'oc get nodes', 'oc get co', etc. Open the .openshift_install.log file for information on what happened and try to debug the issue.","title":"Troubleshooting"},{"location":"troubleshooting/#troubleshooting","text":"If you encounter errors while running the main playbook, there are a few things you can do: Double check your variables. Inspect the part that failed by opening the playbook or role at roles/role-name/tasks/main.yaml Google the specific error message. Re-run the role with the verbosity '-v' option to get more debugging information (more v's give more info). For example: ansible-playbook playbooks/setup_bastion.yaml -vvv Use tags To be more selective with what parts of a playbook are run, use tags. To determine what part of a playbook you would like to run, open the playbook you'd like to run and find the roles parameter. Each role has a corresponding tag. There are also occasionally tags for sections of a playbook or within the role themselves. This is especially helpful for troubleshooting. You can add in tags under the name parameter for individual tasks you'd like to run. Here's an example of using a tag: ansible-playbook playbooks/setup_kvm_host.yaml --tags \"section_2,section_3\" This runs only the parts of the setup_kvm_host playbook marked with tags section_2 and section_3. To use more than one tag, they must be quoted (single or double) and comma-separated (with or without spaces between). E-mail Jacob Emery at jacob.emery@ibm.com If it's a problem with an OpenShift verification step: Open the cockpit to monitor the VMs. In a web browser, go to https://kvm-host-IP-here:9090 Sign-in with your credentials set in the variables file Enable administrative access in the top right. Open the 'Virtual Machines' tab from the left side toolbar. Sometimes it just takes a while, especially if it's lacking resources. Give it some time and then re-reun the playbook/role with tags. If that doesn't work, SSH into the bastion as root (\"ssh root@\\\") and then run, \"export KUBECONFIG=/root/ocpinst/auth/kubeconfig\" and then \"oc whoami\" and make sure it ouputs \"system:admin\". Then run the shell command from the role you would like to check on manually: i.e. 'oc get nodes', 'oc get co', etc. Open the .openshift_install.log file for information on what happened and try to debug the issue.","title":"Troubleshooting"}]} \ No newline at end of file +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Ansible-Automated OpenShift Provisioning on KVM on IBM zSystems / LinuxONE # Overview # These Ansible Playbooks automate the setup and deployment of a Red Hat OpenShift Container Platform (RHOCP) cluster on IBM zSystems / LinuxONE with Kernel Virtual Machine (KVM) as the hypervisor. Ready to Start? # Use the left-hand panel to navigate the site. Start with the Before You Begin page. Need Help? 
# Contact Jacob Emery at jacob.emery@ibm.com","title":"Home"},{"location":"#ansible-automated-openshift-provisioning-on-kvm-on-ibm-zsystems-linuxone","text":"","title":"Ansible-Automated OpenShift Provisioning on KVM on IBM zSystems / LinuxONE"},{"location":"#overview","text":"These Ansible Playbooks automate the setup and deployment of a Red Hat OpenShift Container Platform (RHOCP) cluster on IBM zSystems / LinuxONE with Kernel Virtual Machine (KVM) as the hypervisor.","title":"Overview"},{"location":"#ready-to-start","text":"Use the left-hand panel to navigate the site. Start with the Before You Begin page.","title":"Ready to Start?"},{"location":"#need-help","text":"Contact Jacob Emery at jacob.emery@ibm.com","title":"Need Help?"},{"location":"acknowledgements/","text":"Phillip Wilson Filipe Miranda Patrick Fruth Wasif Mohammad Stuart Tener Fred Bader Ken Morse Nico Boehr Trevor Vardeman Matt Mondics Klaus Smolin Amadeus Podvratnik Miao Zhang-Cohen","title":"Acknowledgements"},{"location":"before-you-begin/","text":"Before You Begin # Description # This project automates the User-Provisioned Infrastructure (UPI) method for deploying Red Hat OpenShift Container Platform (RHOCP) on IBM zSystems / LinuxONE using Kernel-based Virtual Machine (KVM) as the hypervisor. Support # This is an unofficial project created by IBMers. This installation method is not officially supported by either Red Hat or IBM. However, once installation is complete, the resulting cluster is supported by Red Hat. UPI is the only supported method for RHOCP on IBM zSystems. Difficulty # This process is much easier than doing so manually, but still not an easy task. You will likely encounter errors, but you will reach those errors quicker and understand the problem faster than if you were doing this process manually. After using these playbooks once, successive deployments will be much easier. A very basic understanding of what Ansible does is recommended. Advanced understanding is helpful for further customization of the playbooks. A basic understanding of the command-line is required. A basic understanding of git is recommended, especially for creating your organization's own fork of the repository for further customization. An advanced understanding of your computing environment is required for setting the environment variables. These Ansible Playbooks automate a User-Provisioned Infrastructure (UPI) deployment of Red Hat OpenShift Container Platform (RHOCP). This process, when done manually, is extremely tedious, time-consuming, and requires high levels of Linux AND IBM zSystems expertise. UPI is currently the only supported method for deploying RHOCP on IBM zSystems. Why Free and Open-Source? # Trust : IBM zSystems run some of the most highly-secure workloads in the world. Trust is paramount. Developing and using code transparently builds trust between developers and users, so that users feel safe using it on their highly sensitive systems. Customization : IBM zSystems exist in environments that can be highly complex and vary drastically from one datacenter to another. Using code that isn't in a proprietary black box allows you to see exactly what is being done so that you can change any part of it to meet your specific needs. Collaboration : If users encounter a problem, or have a feature request, they can get in contact with the developers directly. Submit an issue or pull request on GitHub or email jacob.emery@ibm.com. Collaboration is highly encouraged! 
Lower Barriers to Entry : The easier it is to get RHOCP on IBM zSystems up and running, the better - for you, IBM and Red Hat! It is free because RHOCP is an incredible product that should have the least amount of barriers to entry as possible. The world needs open-source, private, and hybrid cloud.","title":"Before You Begin"},{"location":"before-you-begin/#before-you-begin","text":"","title":"Before You Begin"},{"location":"before-you-begin/#description","text":"This project automates the User-Provisioned Infrastructure (UPI) method for deploying Red Hat OpenShift Container Platform (RHOCP) on IBM zSystems / LinuxONE using Kernel-based Virtual Machine (KVM) as the hypervisor.","title":"Description"},{"location":"before-you-begin/#support","text":"This is an unofficial project created by IBMers. This installation method is not officially supported by either Red Hat or IBM. However, once installation is complete, the resulting cluster is supported by Red Hat. UPI is the only supported method for RHOCP on IBM zSystems.","title":"Support"},{"location":"before-you-begin/#difficulty","text":"This process is much easier than doing so manually, but still not an easy task. You will likely encounter errors, but you will reach those errors quicker and understand the problem faster than if you were doing this process manually. After using these playbooks once, successive deployments will be much easier. A very basic understanding of what Ansible does is recommended. Advanced understanding is helpful for further customization of the playbooks. A basic understanding of the command-line is required. A basic understanding of git is recommended, especially for creating your organization's own fork of the repository for further customization. An advanced understanding of your computing environment is required for setting the environment variables. These Ansible Playbooks automate a User-Provisioned Infrastructure (UPI) deployment of Red Hat OpenShift Container Platform (RHOCP). This process, when done manually, is extremely tedious, time-consuming, and requires high levels of Linux AND IBM zSystems expertise. UPI is currently the only supported method for deploying RHOCP on IBM zSystems.","title":"Difficulty"},{"location":"before-you-begin/#why-free-and-open-source","text":"Trust : IBM zSystems run some of the most highly-secure workloads in the world. Trust is paramount. Developing and using code transparently builds trust between developers and users, so that users feel safe using it on their highly sensitive systems. Customization : IBM zSystems exist in environments that can be highly complex and vary drastically from one datacenter to another. Using code that isn't in a proprietary black box allows you to see exactly what is being done so that you can change any part of it to meet your specific needs. Collaboration : If users encounter a problem, or have a feature request, they can get in contact with the developers directly. Submit an issue or pull request on GitHub or email jacob.emery@ibm.com. Collaboration is highly encouraged! Lower Barriers to Entry : The easier it is to get RHOCP on IBM zSystems up and running, the better - for you, IBM and Red Hat! It is free because RHOCP is an incredible product that should have the least amount of barriers to entry as possible. 
The world needs open-source, private, and hybrid cloud.","title":"Why Free and Open-Source?"},{"location":"get-info/","text":"Step 1: Get Info # Get Repository # Open the terminal Navigate to a folder (AKA directory) where you would like to store this project. Either do so graphically, or use the command-line. Here are some helpful commands for doing so: pwd to see what directory you're currently in ls to list child directories cd to change directories ( cd .. to go up to the parent directory) mkdir to create a new directory Copy/paste the following and hit enter: git clone https://github.com/IBM/Ansible-OpenShift-Provisioning.git Change into the newly created directory The commands and output should resemble the following example: $ pwd /Users/example-user $ mkdir ansible-project $ cd ansible-project/ $ git clone https://github.com/IBM/Ansible-OpenShift-Provisioning.git Cloning into 'Ansible-OpenShift-Provisioning'... remote: Enumerating objects: 3472, done. remote: Counting objects: 100% (200/200), done. remote: Compressing objects: 100% (57/57), done. remote: Total 3472 (delta 152), reused 143 (delta 143), pack-reused 3272 Receiving objects: 100% (3472/3472), 506.29 KiB | 1.27 MiB/s, done. Resolving deltas: 100% (1699/1699), done. $ ls Ansible-OpenShift-Provisioning $ cd Ansible-OpenShift-Provisioning/ $ ls CHANGELOG.md README.md docs mkdocs.yaml roles LICENSE ansible.cfg inventories playbooks Get Pull Secret # In a web browser, navigate to Red Hat's Hybrid Cloud Console , click the text that says 'Copy pull secret' and save it for the next step. Gather Environment Information # You will need a lot of information about the environment this cluster will be set-up in. You will need the help of at least your IBM zSystems infrastructure team so they can provision you a storage group. You'll also need them to provide you with IP address range, hostnames, subnet, gateway, how much disk space you have to work with, etc. A full list of variables needed are found on the next page. Many of them are filled in with defaults or are optional. Please take your time. I would recommend having someone on stand-by in case you need more information or need to ask a question about the environment.","title":"1 Get Info"},{"location":"get-info/#step-1-get-info","text":"","title":"Step 1: Get Info"},{"location":"get-info/#get-repository","text":"Open the terminal Navigate to a folder (AKA directory) where you would like to store this project. Either do so graphically, or use the command-line. Here are some helpful commands for doing so: pwd to see what directory you're currently in ls to list child directories cd to change directories ( cd .. to go up to the parent directory) mkdir to create a new directory Copy/paste the following and hit enter: git clone https://github.com/IBM/Ansible-OpenShift-Provisioning.git Change into the newly created directory The commands and output should resemble the following example: $ pwd /Users/example-user $ mkdir ansible-project $ cd ansible-project/ $ git clone https://github.com/IBM/Ansible-OpenShift-Provisioning.git Cloning into 'Ansible-OpenShift-Provisioning'... remote: Enumerating objects: 3472, done. remote: Counting objects: 100% (200/200), done. remote: Compressing objects: 100% (57/57), done. remote: Total 3472 (delta 152), reused 143 (delta 143), pack-reused 3272 Receiving objects: 100% (3472/3472), 506.29 KiB | 1.27 MiB/s, done. Resolving deltas: 100% (1699/1699), done. 
$ ls Ansible-OpenShift-Provisioning $ cd Ansible-OpenShift-Provisioning/ $ ls CHANGELOG.md README.md docs mkdocs.yaml roles LICENSE ansible.cfg inventories playbooks","title":"Get Repository"},{"location":"get-info/#get-pull-secret","text":"In a web browser, navigate to Red Hat's Hybrid Cloud Console , click the text that says 'Copy pull secret' and save it for the next step.","title":"Get Pull Secret"},{"location":"get-info/#gather-environment-information","text":"You will need a lot of information about the environment this cluster will be set-up in. You will need the help of at least your IBM zSystems infrastructure team so they can provision you a storage group. You'll also need them to provide you with IP address range, hostnames, subnet, gateway, how much disk space you have to work with, etc. A full list of variables needed are found on the next page. Many of them are filled in with defaults or are optional. Please take your time. I would recommend having someone on stand-by in case you need more information or need to ask a question about the environment.","title":"Gather Environment Information"},{"location":"prerequisites/","text":"Prerequisites # Red Hat # Account ( Sign Up ) License or free trial of Red Hat OpenShift Container Platform for IBM Z systems - s390x architecture (comes with the required licenses for Red Hat Enterprise Linux (RHEL) and CoreOS) IBM zSystems # Hardware Management Console (HMC) access on IBM zSystems or LinuxONE In order to use the playbook that automates the creation of the KVM host Dynamic Partition Manager (DPM) mode is required. If DPM mode is not an option for your environment, that playbook can be skipped, but a bare-metal RHEL server must be set-up on an LPAR manually (Filipe Miranda's how-to article ) before moving on. Once that is done, continue with the playbook 3 that sets up the KVM host. For a minimum installation, at least: 6 Integrated Facilities for Linux (IFLs) with SMT2 enabled 85 GB of RAM An FCP storage group created with 1 TB of disk space 8 IPv4 addresses File Server # A file server accessible from your IBM zSystems / LinuxONE server. Either FTP or HTTP service configured and active. Once a RHEL server is installed natively on the LPAR, pre-existing or configured by this automation, (i.e. the KVM host), you can use that as the file server. If you are not using a pre-existing KVM host(s) and need to create them using this automation, you must use an FTP server because the HMC does not support HTTP. A user with sudo and SSH access on that server. A DVD ISO file of Red Hat Enterprise Linux (RHEL) 8 for s390x architecture mounted in an accessible folder (e.g. /home/ /rhel/ for FTP or /var/www/html/rhel for HTTP) If you do not have RHEL for s390x yet, go to the Red Hat Customer Portal and download it. Under 'Product Variant' use the drop-down menu to select 'Red Hat Enterprise Linux for IBM z Systems' Double-check it's for version 8 and for s390x architecture Then scroll down to Red Hat Enterprise Linux 8.x Binary DVD and click on the 'Download Now' button. To pull the image directly from the command-line of your file server, copy the link for the 'Download Now' button and use wget to pull it down. wget \"https://access.cdn.redhat.com/content/origin/files/sha256/13/13[...]40/rhel-8.7-s390x-dvd.iso?user=6[...]e\" Don't forget to mount it too: FTP: mount /home//rhel or HTTP: mount /var/www/html/rhel A folder created to store config files (e.g. 
/home/user/ocp-config for FTP or /var/www/html/ocp-config for http) For FTP: sudo mkdir /home//ocp-config or HTTP: sudo mkdir /var/www/html/ocp-config Ansible Controller # The computer/virtual machine running Ansible, sometimes referred to as localhost. Must be running on with MacOS or Linux operating systems. Network access to your IBM zSystems / LinuxONE hardware All you need to run Ansible is a terminal and a text editor. However, an IDE like VS Code is highly recommended for an integrated, user-friendly experience with helpful extensions like YAML . Python3 installed: MacOS, first install Homebrew package manager: /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\" then install Python3 brew install python3 #MacOS Fedora: sudo dnf install python3 #Fedora Debian: sudo apt install python3 #Debian Once Python3 is installed, you also need Ansible version 2.9 or above: pip3 install ansible Once Ansible is installed, you will need a few collections from Ansible Galaxy: ansible-galaxy collection install community.general community.crypto ansible.posix community.libvirt If you will be using these playbooks to automate the creation of the LPAR(s) that will act as KVM host(s) for the cluster, you will also need: ansible-galaxy collection install ibm.ibm_zhmc If you are using MacOS, you also need to have Xcode : xcode-select --install Jumphost for NAT network # If for KVM network NAT is used, instead of macvtap, a ssh tunnel using a jumphost is required to access the OCP cluster. To configure the ssh tunnel expect is required on the jumphost. Expect will be installed during the setup of the bastion (4_setup_bastion.yaml playbook). In case of missing access to install additional packages, install it manually on the jumphost by executing following command: yum install expect In addition make sure that python3 is installed on the jumphost otherwise ansible might fail to run the tasks. You can install python3 manually by executing the following command: yum install python3","title":"Prerequisites"},{"location":"prerequisites/#prerequisites","text":"","title":"Prerequisites"},{"location":"prerequisites/#red-hat","text":"Account ( Sign Up ) License or free trial of Red Hat OpenShift Container Platform for IBM Z systems - s390x architecture (comes with the required licenses for Red Hat Enterprise Linux (RHEL) and CoreOS)","title":"Red Hat"},{"location":"prerequisites/#ibm-zsystems","text":"Hardware Management Console (HMC) access on IBM zSystems or LinuxONE In order to use the playbook that automates the creation of the KVM host Dynamic Partition Manager (DPM) mode is required. If DPM mode is not an option for your environment, that playbook can be skipped, but a bare-metal RHEL server must be set-up on an LPAR manually (Filipe Miranda's how-to article ) before moving on. Once that is done, continue with the playbook 3 that sets up the KVM host. For a minimum installation, at least: 6 Integrated Facilities for Linux (IFLs) with SMT2 enabled 85 GB of RAM An FCP storage group created with 1 TB of disk space 8 IPv4 addresses","title":"IBM zSystems"},{"location":"prerequisites/#file-server","text":"A file server accessible from your IBM zSystems / LinuxONE server. Either FTP or HTTP service configured and active. Once a RHEL server is installed natively on the LPAR, pre-existing or configured by this automation, (i.e. the KVM host), you can use that as the file server. 
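For orientation, the FTP file-server layout implied by the example paths above looks roughly like this (the user name is a placeholder and the paths are illustrative, not values from a real environment):
/home/<user>/rhel/        <- mounted RHEL 8 s390x DVD ISO contents
/home/<user>/ocp-config/  <- cluster configuration files served to the nodes
An HTTP-based server would use /var/www/html/rhel and /var/www/html/ocp-config instead.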
If you are not using a pre-existing KVM host(s) and need to create them using this automation, you must use an FTP server because the HMC does not support HTTP. A user with sudo and SSH access on that server. A DVD ISO file of Red Hat Enterprise Linux (RHEL) 8 for s390x architecture mounted in an accessible folder (e.g. /home/ /rhel/ for FTP or /var/www/html/rhel for HTTP) If you do not have RHEL for s390x yet, go to the Red Hat Customer Portal and download it. Under 'Product Variant' use the drop-down menu to select 'Red Hat Enterprise Linux for IBM z Systems' Double-check it's for version 8 and for s390x architecture Then scroll down to Red Hat Enterprise Linux 8.x Binary DVD and click on the 'Download Now' button. To pull the image directly from the command-line of your file server, copy the link for the 'Download Now' button and use wget to pull it down. wget \"https://access.cdn.redhat.com/content/origin/files/sha256/13/13[...]40/rhel-8.7-s390x-dvd.iso?user=6[...]e\" Don't forget to mount it too: FTP: mount /home//rhel or HTTP: mount /var/www/html/rhel A folder created to store config files (e.g. /home/user/ocp-config for FTP or /var/www/html/ocp-config for http) For FTP: sudo mkdir /home//ocp-config or HTTP: sudo mkdir /var/www/html/ocp-config","title":"File Server"},{"location":"prerequisites/#ansible-controller","text":"The computer/virtual machine running Ansible, sometimes referred to as localhost. Must be running on with MacOS or Linux operating systems. Network access to your IBM zSystems / LinuxONE hardware All you need to run Ansible is a terminal and a text editor. However, an IDE like VS Code is highly recommended for an integrated, user-friendly experience with helpful extensions like YAML . Python3 installed: MacOS, first install Homebrew package manager: /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\" then install Python3 brew install python3 #MacOS Fedora: sudo dnf install python3 #Fedora Debian: sudo apt install python3 #Debian Once Python3 is installed, you also need Ansible version 2.9 or above: pip3 install ansible Once Ansible is installed, you will need a few collections from Ansible Galaxy: ansible-galaxy collection install community.general community.crypto ansible.posix community.libvirt If you will be using these playbooks to automate the creation of the LPAR(s) that will act as KVM host(s) for the cluster, you will also need: ansible-galaxy collection install ibm.ibm_zhmc If you are using MacOS, you also need to have Xcode : xcode-select --install","title":"Ansible Controller"},{"location":"prerequisites/#jumphost-for-nat-network","text":"If for KVM network NAT is used, instead of macvtap, a ssh tunnel using a jumphost is required to access the OCP cluster. To configure the ssh tunnel expect is required on the jumphost. Expect will be installed during the setup of the bastion (4_setup_bastion.yaml playbook). In case of missing access to install additional packages, install it manually on the jumphost by executing following command: yum install expect In addition make sure that python3 is installed on the jumphost otherwise ansible might fail to run the tasks. You can install python3 manually by executing the following command: yum install python3","title":"Jumphost for NAT network"},{"location":"run-the-playbooks-for-abi/","text":"Run the Playbooks # Prerequisites # KVM host with root user access or user with sudo privileges. Note: # This playbook only support for single node cluster (SNO) on KVM using ABI. 
As of now we are supporting only macvtap for Agent based installation (ABI) on KVM Steps: # Step-1: Initial Setup for ABI # Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Update variables in Section (1 - 9) and Section 12 - OpenShift Settings Update variables in Section - 19 ( Agent Based Installer ) in all.yaml before running the playbooks. In case of SNO Section 9 ( Compute Nodes ) need to be comment or remove First playbook to be run is 0_setup.yaml which will create inventory file for ABI and will add ssh key to the kvm host. Run this shell command: ansible-playbook playbooks/0_setup.yaml Run each part step-by-step by running one playbook at a time, or all at once using playbooks/master_playbook_for_abi.yaml . Here's the full list of playbooks to be run in order, full descriptions of each can be found further down the page: 0_setup.yaml ( code ) 3_setup_kvm_host.yaml ( code ) 4_create_bastion.yaml ( code ) 5_setup_bastion.yaml ( code ) create_abi_cluster.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/master_playbook_for_abi.yaml If the process fails in error, go through the steps in the troubleshooting page. Step-2: Setup Playbook (0_setup.yaml) # Overview # First-time setup of the Ansible Controller, the machine running Ansible. Outcomes # Packages and Ansible Galaxy collections are confirmed to be installed properly. host_vars files are confirmed to match KVM host(s) hostnames. Ansible inventory is templated out and working properly. SSH key generated for Ansible passwordless authentication. SSH agent is setup on the Ansible Controller. Ansible SSH key is copied to the file server. Notes # You can use an existing SSH key as your Ansible key, or have Ansible create one for you. It is highly recommended to use one without a passphrase. Step-3: Setup KVM Host Playbook (3_setup_kvm_host.yaml) # Overview # Configures the RHEL server(s) installed natively on the LPAR(s) to act as virtualization hypervisor(s) to host the virtual machines that make up the eventual cluster. Outcomes # Ansible SSH key is copied to all KVM hosts for passwordless authentication. RHEL subscription is auto-attached to all KVM hosts. Software packages specified in group_vars/all.yaml have been installed. Cockpit console enabled for Graphical User Interface via web browser. Go to http://kvm-ip-here:9090 to view it. Libvirt is started and enabled. Logical volume group that was created during kickstart is extended to fill all available space. A macvtap bridge has been created on the host's networking interface. Notes # If you're using a pre-existing LPAR, take a look at roles/configure_storage/tasks/main.yaml to make sure that the commands that will be run to extend the logical volume will work. Storage configurations can vary widely. The values there are the defaults from using autopart during kickstart. Also be aware that if lpar.storage_group_2.auto_config is True, the role roles/configure_storage/tasks/main.yaml will be non-idempotent. Meaning, it will fail if you run it twice. Step-4: Create Bastion Playbook (4_create_bastion.yaml) # Overview # Creates the bastion KVM guest on the first KVM host. The bastion hosts essential services for the cluster. 
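As a reference for the step-by-step option described above, running the ABI flow one playbook at a time amounts to the following sequence of commands (the same steps the master playbook wraps):
ansible-playbook playbooks/0_setup.yaml
ansible-playbook playbooks/3_setup_kvm_host.yaml
ansible-playbook playbooks/4_create_bastion.yaml
ansible-playbook playbooks/5_setup_bastion.yaml
ansible-playbook playbooks/create_abi_cluster.yaml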
If you already have a bastion server, that can be used instead of running this playbook. Outcomes # Bastion configs are templated out to the file server. Bastion is booted using virt-install. Bastion is kickstarted for fully automated setup of the operating system. Notes # This can be a particularly sticky part of the process. If any of the variables used in the virt-install or kickstart are off, the bastion won't be able to boot. Recommend watching it come up from the first KVM host's cockpit. Go to http://kvm-ip-here:9090 via web-browser to view it. You'll have to sign in, enable administrative access (top right), and then click on the virtual machines tab on the left-hand toolbar. Step-5: Setup Bastion Playbook (5_setup_bastion.yaml) # Overview # Configuration of the bastion to host essential infrastructure services for the cluster. Can be first-time setup or use an existing server. Outcomes # Ansible SSH key copied to bastion for passwordless authentication. Software packages specified in group_vars/all.yaml have been installed. An OCP-specific SSH key is generated for passing into the install-config (then passed to the nodes). Firewall is configured to permit traffic through the necessary ports. Domain Name Server (DNS) configured to resolve cluster's IP addresses and APIs. Only done if env.bastion.options.dns is true. DNS is checked to make sure all the necessary Fully Qualified Domain Names, including APIs resolve properly. Also ensures outside access is working. High Availability Proxy (HAProxy) load balancer is configured. Only done if env.bastion.options.loadbalancer.on_bastion is true. If the the cluster is to be highly available (meaning spread across more than one LPAR), an OpenVPN server is setup on the bastion to allow for the KVM hosts to communicate between eachother. OpenVPN clients are configured on the KVM hosts. CoreOS roofts is pulled to the bastion if not already there. OCP client and installer are pulled down if not there already. oc, kubectl and openshift-install binaries are installed. OCP install-config is templated and backed up. In disconnected mode, if platform is mirrored (currently only legacy), image content source policy and additionalTrustBundle is also patched. Manfifests are created. OCP install directory found at /root/ocpinst/ is created and populated with necessary files. Ignition files for the bootstrap, control, and compute nodes are transferred to HTTP-accessible directory for booting nodes. Notes # The stickiest part is DNS setup and get_ocp role at the end. Step-6: Master Playbook (master_playbook_for_abi) # Overview # Use this playbook to run all required 5 playbooks (0_setup, 3_setup_kvm_host,4_create_bastion, 5_setup_bastion, create_abi_cluster) at once. Outcomes # Same as all the above outcomes for all required playbooks. At the end you will have an OpenShift cluster deployed and first-time login credentials. Destroy ABI Cluster # Overview # Destroy the ABI Cluster and other resources created as part of installation Procedure # Run the playbook destroy_abi_cluster.yaml to destroy all the resources created while installation ansible-playbook playbooks/destroy_abi_cluster.yaml destroy_abi_cluster Playbook # Overview # Delete all the resources on ABI Cluster. Destroy the Bastion, Compute and Control Nodes. Outcomes # Monitors Deletion Of Compute Machines and Control Machines. Destroys VMs of Bastion and Compute and Control. 
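If you want to confirm the teardown by hand, listing the remaining libvirt guests on the KVM host is a quick check (this assumes shell access to the KVM host; virsh ships with the libvirt packages these playbooks already use):
virsh list --all
After a successful destroy run, the bastion and the control/compute guests should no longer appear in the output.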
Test Playbook (test.yaml) # Overview # Use this playbook for your testing purposes, if needed.","title":"Run the Playbooks"},{"location":"run-the-playbooks-for-abi/#run-the-playbooks","text":"","title":"Run the Playbooks"},{"location":"run-the-playbooks-for-abi/#prerequisites","text":"KVM host with root user access or user with sudo privileges.","title":"Prerequisites"},{"location":"run-the-playbooks-for-abi/#note","text":"This playbook only support for single node cluster (SNO) on KVM using ABI. As of now we are supporting only macvtap for Agent based installation (ABI) on KVM","title":"Note:"},{"location":"run-the-playbooks-for-abi/#steps","text":"","title":"Steps:"},{"location":"run-the-playbooks-for-abi/#step-1-initial-setup-for-abi","text":"Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Update variables in Section (1 - 9) and Section 12 - OpenShift Settings Update variables in Section - 19 ( Agent Based Installer ) in all.yaml before running the playbooks. In case of SNO Section 9 ( Compute Nodes ) need to be comment or remove First playbook to be run is 0_setup.yaml which will create inventory file for ABI and will add ssh key to the kvm host. Run this shell command: ansible-playbook playbooks/0_setup.yaml Run each part step-by-step by running one playbook at a time, or all at once using playbooks/master_playbook_for_abi.yaml . Here's the full list of playbooks to be run in order, full descriptions of each can be found further down the page: 0_setup.yaml ( code ) 3_setup_kvm_host.yaml ( code ) 4_create_bastion.yaml ( code ) 5_setup_bastion.yaml ( code ) create_abi_cluster.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/master_playbook_for_abi.yaml If the process fails in error, go through the steps in the troubleshooting page.","title":"Step-1: Initial Setup for ABI"},{"location":"run-the-playbooks-for-abi/#step-2-setup-playbook-0_setupyaml","text":"","title":"Step-2: Setup Playbook (0_setup.yaml)"},{"location":"run-the-playbooks-for-abi/#overview","text":"First-time setup of the Ansible Controller, the machine running Ansible.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes","text":"Packages and Ansible Galaxy collections are confirmed to be installed properly. host_vars files are confirmed to match KVM host(s) hostnames. Ansible inventory is templated out and working properly. SSH key generated for Ansible passwordless authentication. SSH agent is setup on the Ansible Controller. Ansible SSH key is copied to the file server.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#notes","text":"You can use an existing SSH key as your Ansible key, or have Ansible create one for you. 
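If you prefer to supply your own key rather than let Ansible generate one, a dedicated key can be created along these lines (the file name is illustrative; -N '' creates it without a passphrase, in line with the recommendation that follows):
ssh-keygen -t ed25519 -N '' -f ~/.ssh/ansible-ocp-key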
It is highly recommended to use one without a passphrase.","title":"Notes"},{"location":"run-the-playbooks-for-abi/#step-3-setup-kvm-host-playbook-3_setup_kvm_hostyaml","text":"","title":"Step-3: Setup KVM Host Playbook (3_setup_kvm_host.yaml)"},{"location":"run-the-playbooks-for-abi/#overview_1","text":"Configures the RHEL server(s) installed natively on the LPAR(s) to act as virtualization hypervisor(s) to host the virtual machines that make up the eventual cluster.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes_1","text":"Ansible SSH key is copied to all KVM hosts for passwordless authentication. RHEL subscription is auto-attached to all KVM hosts. Software packages specified in group_vars/all.yaml have been installed. Cockpit console enabled for Graphical User Interface via web browser. Go to http://kvm-ip-here:9090 to view it. Libvirt is started and enabled. Logical volume group that was created during kickstart is extended to fill all available space. A macvtap bridge has been created on the host's networking interface.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#notes_1","text":"If you're using a pre-existing LPAR, take a look at roles/configure_storage/tasks/main.yaml to make sure that the commands that will be run to extend the logical volume will work. Storage configurations can vary widely. The values there are the defaults from using autopart during kickstart. Also be aware that if lpar.storage_group_2.auto_config is True, the role roles/configure_storage/tasks/main.yaml will be non-idempotent. Meaning, it will fail if you run it twice.","title":"Notes"},{"location":"run-the-playbooks-for-abi/#step-4-create-bastion-playbook-4_create_bastionyaml","text":"","title":"Step-4: Create Bastion Playbook (4_create_bastion.yaml)"},{"location":"run-the-playbooks-for-abi/#overview_2","text":"Creates the bastion KVM guest on the first KVM host. The bastion hosts essential services for the cluster. If you already have a bastion server, that can be used instead of running this playbook.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes_2","text":"Bastion configs are templated out to the file server. Bastion is booted using virt-install. Bastion is kickstarted for fully automated setup of the operating system.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#notes_2","text":"This can be a particularly sticky part of the process. If any of the variables used in the virt-install or kickstart are off, the bastion won't be able to boot. Recommend watching it come up from the first KVM host's cockpit. Go to http://kvm-ip-here:9090 via web-browser to view it. You'll have to sign in, enable administrative access (top right), and then click on the virtual machines tab on the left-hand toolbar.","title":"Notes"},{"location":"run-the-playbooks-for-abi/#step-5-setup-bastion-playbook-5_setup_bastionyaml","text":"","title":"Step-5: Setup Bastion Playbook (5_setup_bastion.yaml)"},{"location":"run-the-playbooks-for-abi/#overview_3","text":"Configuration of the bastion to host essential infrastructure services for the cluster. Can be first-time setup or use an existing server.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes_3","text":"Ansible SSH key copied to bastion for passwordless authentication. Software packages specified in group_vars/all.yaml have been installed. An OCP-specific SSH key is generated for passing into the install-config (then passed to the nodes). 
Firewall is configured to permit traffic through the necessary ports. Domain Name Server (DNS) configured to resolve cluster's IP addresses and APIs. Only done if env.bastion.options.dns is true. DNS is checked to make sure all the necessary Fully Qualified Domain Names, including APIs resolve properly. Also ensures outside access is working. High Availability Proxy (HAProxy) load balancer is configured. Only done if env.bastion.options.loadbalancer.on_bastion is true. If the the cluster is to be highly available (meaning spread across more than one LPAR), an OpenVPN server is setup on the bastion to allow for the KVM hosts to communicate between eachother. OpenVPN clients are configured on the KVM hosts. CoreOS roofts is pulled to the bastion if not already there. OCP client and installer are pulled down if not there already. oc, kubectl and openshift-install binaries are installed. OCP install-config is templated and backed up. In disconnected mode, if platform is mirrored (currently only legacy), image content source policy and additionalTrustBundle is also patched. Manfifests are created. OCP install directory found at /root/ocpinst/ is created and populated with necessary files. Ignition files for the bootstrap, control, and compute nodes are transferred to HTTP-accessible directory for booting nodes.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#notes_3","text":"The stickiest part is DNS setup and get_ocp role at the end.","title":"Notes"},{"location":"run-the-playbooks-for-abi/#step-6-master-playbook-master_playbook_for_abi","text":"","title":"Step-6: Master Playbook (master_playbook_for_abi)"},{"location":"run-the-playbooks-for-abi/#overview_4","text":"Use this playbook to run all required 5 playbooks (0_setup, 3_setup_kvm_host,4_create_bastion, 5_setup_bastion, create_abi_cluster) at once.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes_4","text":"Same as all the above outcomes for all required playbooks. At the end you will have an OpenShift cluster deployed and first-time login credentials.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#destroy-abi-cluster","text":"","title":"Destroy ABI Cluster"},{"location":"run-the-playbooks-for-abi/#overview_5","text":"Destroy the ABI Cluster and other resources created as part of installation","title":"Overview"},{"location":"run-the-playbooks-for-abi/#procedure","text":"Run the playbook destroy_abi_cluster.yaml to destroy all the resources created while installation ansible-playbook playbooks/destroy_abi_cluster.yaml","title":"Procedure"},{"location":"run-the-playbooks-for-abi/#destroy_abi_cluster-playbook","text":"","title":"destroy_abi_cluster Playbook"},{"location":"run-the-playbooks-for-abi/#overview_6","text":"Delete all the resources on ABI Cluster. Destroy the Bastion, Compute and Control Nodes.","title":"Overview"},{"location":"run-the-playbooks-for-abi/#outcomes_5","text":"Monitors Deletion Of Compute Machines and Control Machines. Destroys VMs of Bastion and Compute and Control.","title":"Outcomes"},{"location":"run-the-playbooks-for-abi/#test-playbook-testyaml","text":"","title":"Test Playbook (test.yaml)"},{"location":"run-the-playbooks-for-abi/#overview_7","text":"Use this playbook for your testing purposes, if needed.","title":"Overview"},{"location":"run-the-playbooks-for-disconnected/","text":"Run the Playbooks # Overview # For installing disconnected clusters, you will mostly be following rhe same process as a standard connected cluster. 
The main additional step is mirroring the OCP images to another registry that is accessible to the cluster and, once the cluster comes up, applying the operator hub manifests generated by oc-mirror , such as the image content source policy and catalog source, to the cluster. The disconnected playbooks are listed below. Please refer to the 4 Run the Playbooks documentation for details of the rest of the playbooks: disconnected_mirror_artifacts.yaml ( code ) - Run before 6_create_nodes.yaml disconnected_apply_operator_manifests.yaml ( code ) - Run after 7_ocp_verification.yaml . Pre-requisites # A running registry where the OCP and operator hub images will be mirrored. If the CA of this registry is not automatically trusted, keep the CA cert content handy to update in the inventory file. The CA cert is the file with which you do not need to skip TLS to access the registry. Make sure you have the required pull secrets handy. You will need 2 pull secrets: one to apply on the cluster and another that will be used for mirroring. The mirroring pull secret MUST have push access to the mirror registry as well as give you access to the Red Hat registries. A good way to create this is to take the Red Hat pull secret from the Get Info page and do a podman login with credentials that have write access. cp -avrf /path/to/redhat-pull-secrets.json ./mirror-secret.json podman login -u admin -p admin --tls-verify=false --authfile=./mirror-secret.json cat ./mirror-secret.json | jq -r tostring A mirror host. This can be any host that can access the internet (mainly the registry being mirrored from) as well as the registry being mirrored to. The registries being mirrored from would typically be the Red Hat registries (registry.redhat.io, quay.io, etc.) The file server, configured as mentioned below. Appropriately updated variables in your all.yaml . Refer to the variables documentation. File Server # This configuration takes place on the file server mentioned under the File Server section in the overall pre-requisites documentation. The additional configurations are mentioned here. Make sure to have a directory housing the clients. For FTP: sudo mkdir /home//clients or HTTP: sudo mkdir /var/www/html/clients Make sure this directory contains a pre-downloaded oc-mirror binary in tar.gz format. Currently the supported binary is available for x86_64 on the Red Hat Customer Portal OpenShift downloads page. It can also be found on mirror.openshift.com from 4.14 onwards for other architectures. NOTE # At this stage, only the oc-mirror binary is fetched from the File Server, so it is expected that the LPAR for the disconnected cluster can at least reach mirror.openshift.com to download the other artifacts for cluster installation. The platform-related image content source policy will be baked into the install config as part of the 5 Setup Bastion Playbook . For platform content, mirroring is supported both using the oc-mirror plugin and the legacy way. oc-mirror is used as the default, although it is possible to switch to the legacy way of mirroring the platform separately as well. NOTE : Only the legacy way supports specifying your own org on the registry for the OCP images. Manifests generated by oc-mirror will be applied to the cluster once it is up. Disconnected Mirror Artifacts Playbook # Overview # Mirror the OCP platform and other necessary images to the mirror registry. Please run this playbook before you run the 6 Create Nodes Playbook and after the 0 Setup Playbook . Outcomes # Download oc and oc-mirror to the mirror host.
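As background for the image-set templating step listed in these outcomes, a minimal oc-mirror ImageSetConfiguration that mirrors only the platform release generally looks something like this (the metadata registry URL and channel name are illustrative, not values taken from this project's templates):
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  registry:
    imageURL: registry.example.com:5000/mirror/oc-mirror-metadata
mirror:
  platform:
    channels:
      - name: stable-4.14
It would then be pushed with something along the lines of oc-mirror --config imageset-config.yaml docker://registry.example.com:5000/ocp (again, illustrative values).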
Template the mirror pull secret to the mirror host. Add the ca cert to the mirror host anchors if ca is not trusted. Mirror the platform images using oc adm release mirror if legacy mirroring is enabled. Template the image set to mirror host and then mirror it using oc-mirror plogin. Copy the results on the oc-mirror to ansible controller to apply to cluster in future steps. Notes # Platform can be mirrored both using oc-mirror as well as legacy way, using oc adm catalog mirror . oc-mirror is default method but you can also use legacy mirroring. oc-mirror manifests will be only be applied on the cluster, post verification using below playbook. This playbook can be run at any stage after the 0 Setup playbook. Make sure to run this before the cluster starts pulling at the images from the registry which typically happens where the Create Nodes Playbook is run. Disconnected apply oc mirror manifests to cluster Playbook # Overview # Post cluster creation, oc-mirror manifests are applied to the cluster. Please run this playbook after 7 OCP Verification Playbook . Outcomes # Copy the oc-mirror results manifests to the bastion. Apply the copied manifests to the cluster. Disable default content sources.","title":"Run the Playbooks (Disconnected)"},{"location":"run-the-playbooks-for-disconnected/#run-the-playbooks","text":"","title":"Run the Playbooks"},{"location":"run-the-playbooks-for-disconnected/#overview","text":"For installing disconnected clusters, you will mostly be following rhe same process as a standard connected cluster. The main additional steps we would be doing is mirroring the OCP images to another registry which is accessible to the cluster and post the cluster coming up, we will be applying operator hub manifests such as image content source policy and catalog source, generated by oc-mirror , to the cluster. Disconnected playbook are mentioned below. Please refer the 4 Run the Playbooks documentation for details of rest of the playbooks: disconnected_mirror_artifacts.yaml ( code ) - Run before 6_create_nodes.yaml disconnected_apply_operator_manifests.yaml ( code ) - Run after 7_ocp_verification.yaml .","title":"Overview"},{"location":"run-the-playbooks-for-disconnected/#pre-requisites","text":"A running registry where the OCP and operator hub images will be mirrored. If the CA of this registry is not automatically trusted, then keep the CA cert content handy to update in inventory file. The CA cert is the file with which, do dont need to skip tls to access the registry. Make sure you have required pull secrets handy. You will need 2 pull secrets, one to apply on the cluster and another which will be used for mirroring. The mirroring pull secret MUST have push access to the mirror registry as well as must give you access to Red Hat registries. A good way to create this would be take the Red Hat pull secret from Get Info page and do a podman login with creds having write access. cp -avrf /path/to/redhat-pull-secrets.json ./mirror-secret.json podman login -u admin -p admin --tls-verify=false --authfile=./mirror-secret.json cat ./mirror-secret.json | jq -r tostring A mirror host. This can be any host that can access the internet (mainly the registry being mirrored from) as well as the registry being mirrored to. This registries being mirrored from would typically be the Red Hat registries (registry.redhat.io, quay.io etc) The file server, configured mentioned below. Appropriately updated variables in your all.yaml . 
","title":"Pre-requisites"},{"location":"run-the-playbooks-for-disconnected/#file-server","text":"This configuration takes place on the file server mentioned under the File Server section in the overall pre-requisites documentation. The additional configurations are described here. Make sure to have a directory housing the clients. For FTP: sudo mkdir /home/<user>/clients or HTTP: sudo mkdir /var/www/html/clients Make sure this directory contains a pre-downloaded oc-mirror binary in tar.gz format. Currently the supported binary is available for x86_64 on the Red Hat Customer Portal OpenShift downloads page. It can also be found on mirror.openshift.com from 4.14 onwards for other architectures.","title":"File Server"},{"location":"run-the-playbooks-for-disconnected/#note","text":"At this stage, only the oc-mirror binary is fetched from the File Server, so it is expected that the LPAR for the disconnected cluster can at least reach mirror.openshift.com to download the other artifacts for cluster installation. The platform-related image content source policy will be baked into the install config as part of the 5 Setup Bastion Playbook . For platform content, mirroring is supported both using the oc-mirror plugin and the legacy way. oc-mirror is used by default, although it is possible to switch to the legacy way of mirroring the platform separately as well. NOTE : Only the legacy way supports specifying your own org on the registry for the OCP images. Manifests generated by oc-mirror will be applied to the cluster once it is up.","title":"NOTE"},{"location":"run-the-playbooks-for-disconnected/#disconnected-mirror-artifacts-playbook","text":"","title":"Disconnected Mirror Artifacts Playbook"},{"location":"run-the-playbooks-for-disconnected/#overview_1","text":"Mirror the OCP platform and other necessary images to the mirror registry. Please run this playbook before you run the 6 Create Nodes Playbook and after the 0 Setup Playbook .","title":"Overview"},{"location":"run-the-playbooks-for-disconnected/#outcomes","text":"Download oc and oc-mirror to the mirror host. Template the mirror pull secret to the mirror host. Add the CA cert to the mirror host anchors if the CA is not trusted. Mirror the platform images using oc adm release mirror if legacy mirroring is enabled. Template the image set to the mirror host and then mirror it using the oc-mirror plugin. Copy the results of the oc-mirror run to the Ansible controller to apply to the cluster in later steps.","title":"Outcomes"},{"location":"run-the-playbooks-for-disconnected/#notes","text":"The platform can be mirrored both using oc-mirror and the legacy way, using oc adm release mirror . oc-mirror is the default method but you can also use legacy mirroring. oc-mirror manifests will only be applied on the cluster after verification, using the playbook below. This playbook can be run at any stage after the 0 Setup playbook. Make sure to run this before the cluster starts pulling the images from the registry, which typically happens when the Create Nodes Playbook is run.","title":"Notes"},{"location":"run-the-playbooks-for-disconnected/#disconnected-apply-oc-mirror-manifests-to-cluster-playbook","text":"","title":"Disconnected apply oc mirror manifests to cluster Playbook"},{"location":"run-the-playbooks-for-disconnected/#overview_2","text":"Post cluster creation, oc-mirror manifests are applied to the cluster. Please run this playbook after the 7 OCP Verification Playbook .
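For reference, if you ever need to apply or re-apply the generated manifests by hand (for example while debugging), the manual equivalent of this playbook looks roughly like the following; the results directory name is the oc-mirror default and may differ in your run:
# oc-mirror leaves its generated manifests (ImageContentSourcePolicy,
# CatalogSource) in a results directory:
ls oc-mirror-workspace/results-*/
# Apply them against the freshly installed cluster:
oc apply -f oc-mirror-workspace/results-*/
# Disable the default OperatorHub sources so only the mirrored catalogs are used:
oc patch OperatorHub cluster --type json \
  -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'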
","title":"Overview"},{"location":"run-the-playbooks-for-disconnected/#outcomes_1","text":"Copy the oc-mirror results manifests to the bastion. Apply the copied manifests to the cluster. Disable default content sources.","title":"Outcomes"},{"location":"run-the-playbooks-for-hcp/","text":"Run the Playbooks # Prerequisites # Running OCP Cluster ( Management Cluster ) KVM host with root user access or user with sudo privileges if compute nodes are KVM. zVM host ( bastion ) and nodes if compute nodes are zVM. Network Prerequisites # DNS entry to resolve api.${cluster}.${domain} , api-int.${cluster}.${domain} , *.apps.${cluster}.${domain} to a load balancer deployed to redirect incoming traffic to the ingress pod ( Bastion ). If using dynamic IPs for agents, make sure you have entries in the DHCP server for the MAC addresses you are using in the installation so they map to IPv4 addresses, and the DHCP server should also point those IPs to the nameserver you have configured. Note: # As of now we are supporting only macvtap for Hosted Control Plane Agent based installation for KVM compute nodes. Supported network modes for zVM : vswitch, OSA, RoCE, Hipersockets Step-1: Setup Ansible Vault for Management Cluster Credentials # Overview # Creating an encrypted file for storing Management Cluster credentials and other passwords. Steps: # The ansible-vault create command is used to create the encrypted file. Create an encrypted file in the playbooks directory and set the Vault password (the command below will prompt for the Vault password). ansible-vault create playbooks/secrets.yaml Give the credentials of the Management Cluster in the encrypted file (created above) in the following format: kvm_host_password: '<kvm-host-password>' bastion_root_pw: '<bastion-root-password>' api_server: '<api-server>:<port>' user_name: '<username>' password: '<password>' You can edit the encrypted file using the command below: ansible-vault edit playbooks/secrets.yaml Make sure you entered the Management Cluster credentials properly; incorrect credentials will cause problems while logging in to the cluster in further steps. Step-2: Initial Setup for Hosted Control Plane # Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Update the variables as per the compute node type (zKVM/zVM) in hcp.yaml ( hcp.yaml.template ) before running the playbooks. The first playbook to be run is setup_for_hcp.yaml , which will create the inventory file for HCP and add the SSH key to the KVM host. Run this shell command: ansible-playbook playbooks/setup_for_hcp.yaml --ask-vault-pass Step-3: Create Hosted Cluster # Run each part step-by-step by running one playbook at a time, or all at once using hcp.yaml . Here's the full list of playbooks to be run in order, full descriptions of each can be found further down the page: create_hosted_cluster.yaml ( code ) create_agents_and_wait_for_install_complete.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/hcp.yaml --ask-vault-pass After installation, you can find the details of the cluster, such as the kubeconfig and password, in the installation directory ( $HOME/ansible_workdir/ ). Description for Playbooks # setup_for_hcp Playbook # Overview # First-time setup of the Ansible Controller, the machine running Ansible.
Outcomes # Inventory file for hcp to be created. SSH key generated for Ansible passwordless authentication. Ansible SSH key is copied to kvm host. Notes # You can use an existing SSH key as your Ansible key, or have Ansible create one for you. create_hosted_cluster Playbook # Overview # Creates and configures bastion Creating AgentServiceConfig, HostedControlPlane, InfraEnv Resources, Download Images Outcomes # Install prerequisites on kvm_host Create bastion Configure bastion Log in to Management Cluster Creates AgentServiceConfig resource and required configmaps Deploys HostedControlPlane Creates InfraEnv resource and wait till ISO generation Download required Images to kvm_host (initrd.img and kernel.img) Download rootfs.img and configure httpd on bastion. create_agents_and_wait_for_install_complete Playbook # Overview # Boots the Agents Scale and Nodepool and monitor all the resources required. Outcomes # Boot Agents Monitor the attachment of agents Approves the agents Scale up the nodepool Monitor agentmachines and machines creation Monitor the worker nodes attachment Configure HAProxy for Hosted workers Monitor the Cluster operators Display Login Credentials for Hosted Cluster Destroy the Hosted Cluser # Overview # Destroy the Hosted Control Plane and other resources created as part of installation Procedure # Run the playbook destroy_cluster_hcp.yaml to destroy all the resources created while installation ansible-playbook playbooks/destroy_cluster_hcp.yaml --ask-vault-pass destroy_cluster_hcp Playbook # Overview # Delete all the resources on Hosted Cluster Destroy the Hosted Control Plane Outcomes # Scale in the nodepool to 0 Monitors the deletion of workers, agent machines and machines. Deletes the agents Deletes InfraEnv Resource Destroys the Hosted Control Plane Deletes AgentServiceConfig Deletes the images downloaded on kvm host Destroys VMs of Bastion and Agents Notes # Overriding OCP Release Image for HCP # If you want to use any other image as OCP release image for HCP , you can override it by environment variable. export HCP_RELEASE_IMAGE=\"\"","title":"Run the Playbooks (HostedControlPlane)"},{"location":"run-the-playbooks-for-hcp/#run-the-playbooks","text":"","title":"Run the Playbooks"},{"location":"run-the-playbooks-for-hcp/#prerequisites","text":"Running OCP Cluster ( Management Cluster ) KVM host with root user access or user with sudo privileges if compute nodes are KVM. zvm host ( bastion ) and nodes if compute nodes are zVM.","title":"Prerequisites"},{"location":"run-the-playbooks-for-hcp/#network-prerequisites","text":"DNS entry to resolve api.${cluster}.${domain} , api-int.${cluster}.${domain} , *apps.${cluster}.${domain} to a load balancer deployed to redirect incoming traffic to the ingresses pod ( Bastion ). If using dynamic IP for agents, make sure you have entries in DHCP Server for macaddresses you are using in installation to map to IPv4 addresses and along with this DHCP server should make your IPs to use nameserver which you have configured.","title":"Network Prerequisites"},{"location":"run-the-playbooks-for-hcp/#note","text":"As of now we are supporting only macvtap for Hosted Control Plane Agent based installation for KVM compute nodes. 
Supported network modes for zVM : vswitch, OSA, RoCE, Hipersockets","title":"Note:"},{"location":"run-the-playbooks-for-hcp/#step-1-setup-ansible-vault-for-management-cluster-credentials","text":"","title":"Step-1: Setup Ansible Vault for Management Cluster Credentials"},{"location":"run-the-playbooks-for-hcp/#overview","text":"Creating an encrypted file for storing Management Cluster Credentials and other passwords.","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#steps","text":"The ansible-vault create command is used to create the encrypted file. Create an encrypted file in playbooks directory and set the Vault password ( Below command will prompt for setting Vault password). ansible-vault create playbooks/secrets.yaml Give the credentials of Management Cluster in the encrypted file (created above) in following format. kvm_host_password: '' bastion_root_pw: '' api_server: ':' user_name: '' password: '' You can edit the encrypted file using below command ansible-vault edit playbooks/secrets.yaml Make sure you entered Manamegement cluster credenitails properly ,incorrect credentails will cause problem while logging in to the cluster in further steps.","title":"Steps:"},{"location":"run-the-playbooks-for-hcp/#step-2-initial-setup-for-hosted-control-plane","text":"Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Update variables as per the compute node type (zKVM /zVM) in hcp.yaml ( hcp.yaml.template )before running the playbooks. First playbook to be run is setup_for_hcp.yaml which will create inventory file for HCP and will add ssh key to the kvm host. Run this shell command: ansible-playbook playbooks/setup_for_hcp.yaml --ask-vault-pass","title":"Step-2: Initial Setup for Hosted Control Plane"},{"location":"run-the-playbooks-for-hcp/#step-3-create-hosted-cluster","text":"Run each part step-by-step by running one playbook at a time, or all at once using hcp.yaml . Here's the full list of playbooks to be run in order, full descriptions of each can be found further down the page: create_hosted_cluster.yaml ( code ) create_agents_and_wait_for_install_complete.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: After installation , you can find the details of cluster like kubeconfig and password in the installation directory ( $HOME/ansible_workdir/ ) ansible-playbook playbooks/hcp.yaml --ask-vault-pass","title":"Step-3: Create Hosted Cluster"},{"location":"run-the-playbooks-for-hcp/#description-for-playbooks","text":"","title":"Description for Playbooks"},{"location":"run-the-playbooks-for-hcp/#setup_for_hcp-playbook","text":"","title":"setup_for_hcp Playbook"},{"location":"run-the-playbooks-for-hcp/#overview_1","text":"First-time setup of the Ansible Controller,the machine running Ansible.","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#outcomes","text":"Inventory file for hcp to be created. SSH key generated for Ansible passwordless authentication. 
Ansible SSH key is copied to kvm host.","title":"Outcomes"},{"location":"run-the-playbooks-for-hcp/#notes","text":"You can use an existing SSH key as your Ansible key, or have Ansible create one for you.","title":"Notes"},{"location":"run-the-playbooks-for-hcp/#create_hosted_cluster-playbook","text":"","title":"create_hosted_cluster Playbook"},{"location":"run-the-playbooks-for-hcp/#overview_2","text":"Creates and configures bastion Creating AgentServiceConfig, HostedControlPlane, InfraEnv Resources, Download Images","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#outcomes_1","text":"Install prerequisites on kvm_host Create bastion Configure bastion Log in to Management Cluster Creates AgentServiceConfig resource and required configmaps Deploys HostedControlPlane Creates InfraEnv resource and wait till ISO generation Download required Images to kvm_host (initrd.img and kernel.img) Download rootfs.img and configure httpd on bastion.","title":"Outcomes"},{"location":"run-the-playbooks-for-hcp/#create_agents_and_wait_for_install_complete-playbook","text":"","title":"create_agents_and_wait_for_install_complete Playbook"},{"location":"run-the-playbooks-for-hcp/#overview_3","text":"Boots the Agents Scale and Nodepool and monitor all the resources required.","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#outcomes_2","text":"Boot Agents Monitor the attachment of agents Approves the agents Scale up the nodepool Monitor agentmachines and machines creation Monitor the worker nodes attachment Configure HAProxy for Hosted workers Monitor the Cluster operators Display Login Credentials for Hosted Cluster","title":"Outcomes"},{"location":"run-the-playbooks-for-hcp/#destroy-the-hosted-cluser","text":"","title":"Destroy the Hosted Cluser"},{"location":"run-the-playbooks-for-hcp/#overview_4","text":"Destroy the Hosted Control Plane and other resources created as part of installation","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#procedure","text":"Run the playbook destroy_cluster_hcp.yaml to destroy all the resources created while installation ansible-playbook playbooks/destroy_cluster_hcp.yaml --ask-vault-pass","title":"Procedure"},{"location":"run-the-playbooks-for-hcp/#destroy_cluster_hcp-playbook","text":"","title":"destroy_cluster_hcp Playbook"},{"location":"run-the-playbooks-for-hcp/#overview_5","text":"Delete all the resources on Hosted Cluster Destroy the Hosted Control Plane","title":"Overview"},{"location":"run-the-playbooks-for-hcp/#outcomes_3","text":"Scale in the nodepool to 0 Monitors the deletion of workers, agent machines and machines. Deletes the agents Deletes InfraEnv Resource Destroys the Hosted Control Plane Deletes AgentServiceConfig Deletes the images downloaded on kvm host Destroys VMs of Bastion and Agents","title":"Outcomes"},{"location":"run-the-playbooks-for-hcp/#notes_1","text":"","title":"Notes"},{"location":"run-the-playbooks-for-hcp/#overriding-ocp-release-image-for-hcp","text":"If you want to use any other image as OCP release image for HCP , you can override it by environment variable. export HCP_RELEASE_IMAGE=\"\"","title":"Overriding OCP Release Image for HCP"},{"location":"run-the-playbooks/","text":"Step 4: Run the Playbooks # Overview # Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Run this shell command: ansible-playbook playbooks/0_setup.yaml Run each part step-by-step by running one playbook at a time, or all at once using playbooks/site.yaml . 
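For instance, the step-by-step route continues from 0_setup.yaml like this; add -v to any invocation if you want more detailed task output while troubleshooting:
ansible-playbook playbooks/1_create_lpar.yaml
ansible-playbook playbooks/2_create_kvm_host.yaml
ansible-playbook playbooks/3_setup_kvm_host.yaml
ansible-playbook playbooks/4_create_bastion.yaml
ansible-playbook playbooks/5_setup_bastion.yaml
ansible-playbook playbooks/6_create_nodes.yaml
ansible-playbook playbooks/7_ocp_verification.yaml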
Here's the full list of playbooks to be run in order, full descriptions of each can be found further down the page: 0_setup.yaml ( code ) 1_create_lpar.yaml ( code ) 2_create_kvm_host.yaml ( code ) 3_setup_kvm_host.yaml ( code ) 4_create_bastion.yaml ( code ) 5_setup_bastion.yaml ( code ) 6_create_nodes.yaml ( code ) 7_ocp_verification.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/site.yaml If the process fails in error, go through the steps in the troubleshooting page. At the end of the the last playbook, follow the printed instructions for first-time login to the cluster. If you make cluster configuration changes in all.yaml file, like increased number of nodes or a new bastion setup, after you have successfully installed a OCP cluster, then you just need to run these playbooks in order: 5_setup_bastion.yaml 6_create_nodes.yaml 7_ocp_verification.yaml 0 Setup Playbook # Overview # First-time setup of the Ansible Controller, the machine running Ansible. Outcomes # Packages and Ansible Galaxy collections are confirmed to be installed properly. host_vars files are confirmed to match KVM host(s) hostnames. Ansible inventory is templated out and working properly. SSH key generated for Ansible passwordless authentication. SSH agent is setup on the Ansible Controller. Ansible SSH key is copied to the file server. Notes # You can use an existing SSH key as your Ansible key, or have Ansible create one for you. It is highly recommended to use one without a passphrase. 1 Create LPAR Playbook # Overview # Creation of one to three Logical Partitions (LPARs), depending on your configuration. Uses the Hardware Management Console (HMC) API, so your system must be in Dynamic Partition Manager (DPM) mode. Outcomes # One to three LPARs created. One to two Networking Interface Cards (NICs) attached per LPAR. One to two storage groups attached per LPAR. LPARs are in 'Stopped' state. Notes # Recommend opening the HMC via web-browser to watch the LPARs come up. 2 Create KVM Host Playbook # Overview # First-time start-up of Red Hat Enterprise Linux installed natively on the LPAR(s). Uses the Hardware Management Console (HMC) API, so your system must be in Dynamic Partition Manager (DPM) mode. Configuration files are passed to the file server and RHEL is booted and then kickstarted for fully automated setup. Outcomes # LPAR(s) started up in 'Active' state. Configuration files (cfg, ins, prm) for the KVM host(s) are on the file server in the provided configs directory. Notes # Recommended to open the HMC via web-browser to watch the Operating System Messages for each LPAR as they boot in order to debug any potential problems. 3 Setup KVM Host Playbook # Overview # Configures the RHEL server(s) installed natively on the LPAR(s) to act as virtualization hypervisor(s) to host the virtual machines that make up the eventual cluster. Outcomes # Ansible SSH key is copied to all KVM hosts for passwordless authentication. RHEL subscription is auto-attached to all KVM hosts. Software packages specified in group_vars/all.yaml have been installed. Cockpit console enabled for Graphical User Interface via web browser. Go to http://kvm-ip-here:9090 to view it. Libvirt is started and enabled. 
Logical volume group that was created during kickstart is extended to fill all available space. A macvtap bridge has been created on the host's networking interface. Notes # If you're using a pre-existing LPAR, take a look at roles/configure_storage/tasks/main.yaml to make sure that the commands that will be run to extend the logical volume will work. Storage configurations can vary widely. The values there are the defaults from using autopart during kickstart. Also be aware that if lpar.storage_group_2.auto_config is True, the role roles/configure_storage/tasks/main.yaml will be non-idempotent, meaning it will fail if you run it twice. 4 Create Bastion Playbook # Overview # Creates the bastion KVM guest on the first KVM host. The bastion hosts essential services for the cluster. If you already have a bastion server, that can be used instead of running this playbook. Outcomes # Bastion configs are templated out to the file server. Bastion is booted using virt-install. Bastion is kickstarted for fully automated setup of the operating system. Notes # This can be a particularly sticky part of the process. If any of the variables used in the virt-install or kickstart are off, the bastion won't be able to boot. Recommend watching it come up from the first KVM host's cockpit. Go to http://kvm-ip-here:9090 via web-browser to view it. You'll have to sign in, enable administrative access (top right), and then click on the virtual machines tab on the left-hand toolbar. 5 Setup Bastion Playbook # Overview # Configuration of the bastion to host essential infrastructure services for the cluster. Can be first-time setup or use an existing server. Outcomes # Ansible SSH key copied to bastion for passwordless authentication. Software packages specified in group_vars/all.yaml have been installed. An OCP-specific SSH key is generated for passing into the install-config (then passed to the nodes). Firewall is configured to permit traffic through the necessary ports. Domain Name Server (DNS) is configured to resolve the cluster's IP addresses and APIs. Only done if env.bastion.options.dns is true. DNS is checked to make sure all the necessary Fully Qualified Domain Names, including the APIs, resolve properly. Also ensures outside access is working. High Availability Proxy (HAProxy) load balancer is configured. Only done if env.bastion.options.loadbalancer.on_bastion is true. If the cluster is to be highly available (meaning spread across more than one LPAR), an OpenVPN server is set up on the bastion to allow the KVM hosts to communicate with each other. OpenVPN clients are configured on the KVM hosts. CoreOS rootfs is pulled to the bastion if not already there. OCP client and installer are pulled down if not there already. oc, kubectl and openshift-install binaries are installed. OCP install-config is templated and backed up. In disconnected mode, if the platform is mirrored (currently only legacy), the image content source policy and additionalTrustBundle are also patched. Manifests are created. OCP install directory found at /root/ocpinst/ is created and populated with necessary files. Ignition files for the bootstrap, control, and compute nodes are transferred to an HTTP-accessible directory for booting nodes. Notes # The stickiest part is the DNS setup and the get_ocp role at the end. 6 Create Nodes Playbook # Overview # OCP cluster's nodes are created and the control plane is bootstrapped. Outcomes # CoreOS initramfs and kernel are pulled down. Control nodes are created and bootstrapped.
Bootstrap has been created, done its job connecting the control plane, and is then destroyed. Compute nodes are created, as many as is specified in groups_vars/all.yaml. Infra nodes, if defined in group_vars/all.yaml have been created, but are at this point essentially just compute nodes. Notes # To watch the bootstrap do its job connecting the control plane: first, SSH to the bastion, then change to root (sudo -i), from there SSH to the bootstrap node as user 'core' (e.g. ssh core@bootstrap-ip). Once you're in the bootstrap run 'journalctl -b -f -u release-image.service -u bootkube.service'. Expect many errors as the control planes come up. You're waiting for the message 'bootkube.service complete' If the cluster is highly available, the bootstrap node will be created on the last (usually third) KVM host in the group. Since the bastion is on the first host, this was done to spread out the load. 7 OCP Verification Playbook # Overview # Final steps of waiting for and verifying the OpenShift cluster to complete its installation. Outcomes # Certificate Signing Requests (CSRs) have been approved. All nodes are in ready state. All cluster operators are available. OpenShift installation is verified to be complete. Temporary credentials and URL are printed to allow easy first-time login to the cluster. Notes # These steps may take a long time and the tasks are very repetitive because of that. If your cluster has a very large number of compute nodes or insufficient resources, more rounds of approvals and time may be needed for these tasks. If you made it this far, congratulations! To install a new cluster, copy your inventory directory, change the default in the ansible.cfg, change the variables, and start again. With all the customizations to the playbooks you made along the way still intact. Additional Playbooks # Create additional compute nodes (create_compute_node.yaml) and delete compute nodes (delete_compute_node.yaml) # Overview # In case you want to add additional compute nodes in a day-2 operation to your cluster or delete existing compute nodes in your cluster, run these playbooks. Currently we support only env.network_mode macvtap for these two playbooks. We recommand to create a new config file for the additional compute node with such parameters: day2_compute_node: vm_name: control-4 vm_hostname: control-4 vm_ip: 172.192.100.101 hostname: kvm01 host_arch: s390x # rhcos_download_url with '/' at the end ! rhcos_download_url: \"https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.15/4.15.0/\" # RHCOS live image filenames rhcos_live_kernel: \"rhcos-4.15.0-s390x-live-kernel-s390x\" rhcos_live_initrd: \"rhcos-4.15.0-s390x-live-initramfs.s390x.img\" rhcos_live_rootfs: \"rhcos-4.15.0-s390x-live-rootfs.s390x.img\" Make sure that the hostname where you want to create the additional compute node is defined in the inventories/default/hosts file. Now you can execute the add_compute_node playbook with this command and parameter: ansible-playbook playbooks/add_compute_node.yaml --extra-vars \"@compute-node.yaml\" Outcomes # The defind compute node will be added or deleted, depends which playbook you have executed. Master Playbook (site.yaml) # Overview # Use this playbook to run all required playbooks (0-7) all at once. Outcomes # Same as all the above outcomes for all required playbooks. At the end you will have an OpenShift cluster deployed and first-time login credentials. 
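As a quick sanity check once a master playbook finishes, you can log in with the generated kubeconfig and confirm the cluster is healthy. A minimal sketch, assuming the default install directory used by these playbooks:
# The kubeconfig is created under the OCP install directory on the bastion:
export KUBECONFIG=/root/ocpinst/auth/kubeconfig
oc get nodes              # all nodes should report Ready
oc get clusteroperators   # all operators should be Available
oc whoami --show-console  # prints the web console URL for first-time login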
Pre-Existing Host Master Playbook (pre-existing_site.yaml) # Overview # Use this version of the master playbook if you are using a pre-existing LPAR(s) with RHEL already installed. Outcomes # Same as all the above outcomes for all playbooks excluding 1 & 2. This will not create LPAR(s) nor boot your RHEL KVM host(s). At the end you will have an OpenShift cluster deployed and first-time login credentials. Reinstall Cluster Playbook (reinstall_cluster.yaml) # Overview # In case the cluster needs to be completely reinstalled, run this playbook. It will refresh the ingitions that expire after 24 hours, teardown the nodes and re-create them, and then verify the installation. Outcomes # get_ocp role runs. Delete the folders /var/www/html/bin and /var/www/html/ignition. CoreOS roofts is pulled to the bastion. OCP client and installer are pulled down. oc, kubectl and openshift-install binaries are installed. OCP install-config is created from scratch, templated and backed up. Manfifests are created. OCP install directory found at /root/ocpinst/ is deleted, re-created and populated with necessary files. Ignition files for the bootstrap, control, and compute nodes are transferred to HTTP-accessible directory for booting nodes. 6 Create Nodes playbook runs, tearing down and recreating cluster nodes. 7 OCP Verification playbook runs, verifying new deployment. Test Playbook (test.yaml) # Overview # Use this playbook for your testing purposes, if needed.","title":"4 Run the Playbooks"},{"location":"run-the-playbooks/#step-4-run-the-playbooks","text":"","title":"Step 4: Run the Playbooks"},{"location":"run-the-playbooks/#overview","text":"Navigate to the root folder of the cloned Git repository in your terminal ( ls should show ansible.cfg ). Run this shell command: ansible-playbook playbooks/0_setup.yaml Run each part step-by-step by running one playbook at a time, or all at once using playbooks/site.yaml . Here's the full list of playbooks to be run in order, full descriptions of each can be found further down the page: 0_setup.yaml ( code ) 1_create_lpar.yaml ( code ) 2_create_kvm_host.yaml ( code ) 3_setup_kvm_host.yaml ( code ) 4_create_bastion.yaml ( code ) 5_setup_bastion.yaml ( code ) 6_create_nodes.yaml ( code ) 7_ocp_verification.yaml ( code ) Watch Ansible as it completes the installation, correcting errors if they arise. To look at what tasks are running in detail, open the playbook or roles/role-name/tasks/main.yaml Alternatively, to run all the playbooks at once, start the master playbook by running this shell command: ansible-playbook playbooks/site.yaml If the process fails in error, go through the steps in the troubleshooting page. At the end of the the last playbook, follow the printed instructions for first-time login to the cluster. If you make cluster configuration changes in all.yaml file, like increased number of nodes or a new bastion setup, after you have successfully installed a OCP cluster, then you just need to run these playbooks in order: 5_setup_bastion.yaml 6_create_nodes.yaml 7_ocp_verification.yaml","title":"Overview"},{"location":"run-the-playbooks/#0-setup-playbook","text":"","title":"0 Setup Playbook"},{"location":"run-the-playbooks/#overview_1","text":"First-time setup of the Ansible Controller, the machine running Ansible.","title":"Overview"},{"location":"run-the-playbooks/#outcomes","text":"Packages and Ansible Galaxy collections are confirmed to be installed properly. host_vars files are confirmed to match KVM host(s) hostnames. 
Ansible inventory is templated out and working properly. SSH key generated for Ansible passwordless authentication. SSH agent is setup on the Ansible Controller. Ansible SSH key is copied to the file server.","title":"Outcomes"},{"location":"run-the-playbooks/#notes","text":"You can use an existing SSH key as your Ansible key, or have Ansible create one for you. It is highly recommended to use one without a passphrase.","title":"Notes"},{"location":"run-the-playbooks/#1-create-lpar-playbook","text":"","title":"1 Create LPAR Playbook"},{"location":"run-the-playbooks/#overview_2","text":"Creation of one to three Logical Partitions (LPARs), depending on your configuration. Uses the Hardware Management Console (HMC) API, so your system must be in Dynamic Partition Manager (DPM) mode.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_1","text":"One to three LPARs created. One to two Networking Interface Cards (NICs) attached per LPAR. One to two storage groups attached per LPAR. LPARs are in 'Stopped' state.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_1","text":"Recommend opening the HMC via web-browser to watch the LPARs come up.","title":"Notes"},{"location":"run-the-playbooks/#2-create-kvm-host-playbook","text":"","title":"2 Create KVM Host Playbook"},{"location":"run-the-playbooks/#overview_3","text":"First-time start-up of Red Hat Enterprise Linux installed natively on the LPAR(s). Uses the Hardware Management Console (HMC) API, so your system must be in Dynamic Partition Manager (DPM) mode. Configuration files are passed to the file server and RHEL is booted and then kickstarted for fully automated setup.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_2","text":"LPAR(s) started up in 'Active' state. Configuration files (cfg, ins, prm) for the KVM host(s) are on the file server in the provided configs directory.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_2","text":"Recommended to open the HMC via web-browser to watch the Operating System Messages for each LPAR as they boot in order to debug any potential problems.","title":"Notes"},{"location":"run-the-playbooks/#3-setup-kvm-host-playbook","text":"","title":"3 Setup KVM Host Playbook"},{"location":"run-the-playbooks/#overview_4","text":"Configures the RHEL server(s) installed natively on the LPAR(s) to act as virtualization hypervisor(s) to host the virtual machines that make up the eventual cluster.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_3","text":"Ansible SSH key is copied to all KVM hosts for passwordless authentication. RHEL subscription is auto-attached to all KVM hosts. Software packages specified in group_vars/all.yaml have been installed. Cockpit console enabled for Graphical User Interface via web browser. Go to http://kvm-ip-here:9090 to view it. Libvirt is started and enabled. Logical volume group that was created during kickstart is extended to fill all available space. A macvtap bridge has been created on the host's networking interface.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_3","text":"If you're using a pre-existing LPAR, take a look at roles/configure_storage/tasks/main.yaml to make sure that the commands that will be run to extend the logical volume will work. Storage configurations can vary widely. The values there are the defaults from using autopart during kickstart. Also be aware that if lpar.storage_group_2.auto_config is True, the role roles/configure_storage/tasks/main.yaml will be non-idempotent. 
Meaning, it will fail if you run it twice.","title":"Notes"},{"location":"run-the-playbooks/#4-create-bastion-playbook","text":"","title":"4 Create Bastion Playbook"},{"location":"run-the-playbooks/#overview_5","text":"Creates the bastion KVM guest on the first KVM host. The bastion hosts essential services for the cluster. If you already have a bastion server, that can be used instead of running this playbook.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_4","text":"Bastion configs are templated out to the file server. Bastion is booted using virt-install. Bastion is kickstarted for fully automated setup of the operating system.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_4","text":"This can be a particularly sticky part of the process. If any of the variables used in the virt-install or kickstart are off, the bastion won't be able to boot. Recommend watching it come up from the first KVM host's cockpit. Go to http://kvm-ip-here:9090 via web-browser to view it. You'll have to sign in, enable administrative access (top right), and then click on the virtual machines tab on the left-hand toolbar.","title":"Notes"},{"location":"run-the-playbooks/#5-setup-bastion-playbook","text":"","title":"5 Setup Bastion Playbook"},{"location":"run-the-playbooks/#overview_6","text":"Configuration of the bastion to host essential infrastructure services for the cluster. Can be first-time setup or use an existing server.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_5","text":"Ansible SSH key copied to bastion for passwordless authentication. Software packages specified in group_vars/all.yaml have been installed. An OCP-specific SSH key is generated for passing into the install-config (then passed to the nodes). Firewall is configured to permit traffic through the necessary ports. Domain Name Server (DNS) configured to resolve cluster's IP addresses and APIs. Only done if env.bastion.options.dns is true. DNS is checked to make sure all the necessary Fully Qualified Domain Names, including APIs resolve properly. Also ensures outside access is working. High Availability Proxy (HAProxy) load balancer is configured. Only done if env.bastion.options.loadbalancer.on_bastion is true. If the the cluster is to be highly available (meaning spread across more than one LPAR), an OpenVPN server is setup on the bastion to allow for the KVM hosts to communicate between eachother. OpenVPN clients are configured on the KVM hosts. CoreOS roofts is pulled to the bastion if not already there. OCP client and installer are pulled down if not there already. oc, kubectl and openshift-install binaries are installed. OCP install-config is templated and backed up. In disconnected mode, if platform is mirrored (currently only legacy), image content source policy and additionalTrustBundle is also patched. Manfifests are created. OCP install directory found at /root/ocpinst/ is created and populated with necessary files. 
Ignition files for the bootstrap, control, and compute nodes are transferred to HTTP-accessible directory for booting nodes.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_5","text":"The stickiest part is DNS setup and get_ocp role at the end.","title":"Notes"},{"location":"run-the-playbooks/#6-create-nodes-playbook","text":"","title":"6 Create Nodes Playbook"},{"location":"run-the-playbooks/#overview_7","text":"OCP cluster's nodes are created and the control plane is bootstrapped.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_6","text":"CoreOS initramfs and kernel are pulled down. Control nodes are created and bootstrapped. Bootstrap has been created, done its job connecting the control plane, and is then destroyed. Compute nodes are created, as many as is specified in groups_vars/all.yaml. Infra nodes, if defined in group_vars/all.yaml have been created, but are at this point essentially just compute nodes.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_6","text":"To watch the bootstrap do its job connecting the control plane: first, SSH to the bastion, then change to root (sudo -i), from there SSH to the bootstrap node as user 'core' (e.g. ssh core@bootstrap-ip). Once you're in the bootstrap run 'journalctl -b -f -u release-image.service -u bootkube.service'. Expect many errors as the control planes come up. You're waiting for the message 'bootkube.service complete' If the cluster is highly available, the bootstrap node will be created on the last (usually third) KVM host in the group. Since the bastion is on the first host, this was done to spread out the load.","title":"Notes"},{"location":"run-the-playbooks/#7-ocp-verification-playbook","text":"","title":"7 OCP Verification Playbook"},{"location":"run-the-playbooks/#overview_8","text":"Final steps of waiting for and verifying the OpenShift cluster to complete its installation.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_7","text":"Certificate Signing Requests (CSRs) have been approved. All nodes are in ready state. All cluster operators are available. OpenShift installation is verified to be complete. Temporary credentials and URL are printed to allow easy first-time login to the cluster.","title":"Outcomes"},{"location":"run-the-playbooks/#notes_7","text":"These steps may take a long time and the tasks are very repetitive because of that. If your cluster has a very large number of compute nodes or insufficient resources, more rounds of approvals and time may be needed for these tasks. If you made it this far, congratulations! To install a new cluster, copy your inventory directory, change the default in the ansible.cfg, change the variables, and start again. With all the customizations to the playbooks you made along the way still intact.","title":"Notes"},{"location":"run-the-playbooks/#additional-playbooks","text":"","title":"Additional Playbooks"},{"location":"run-the-playbooks/#create-additional-compute-nodes-create_compute_nodeyaml-and-delete-compute-nodes-delete_compute_nodeyaml","text":"","title":"Create additional compute nodes (create_compute_node.yaml) and delete compute nodes (delete_compute_node.yaml)"},{"location":"run-the-playbooks/#overview_9","text":"In case you want to add additional compute nodes in a day-2 operation to your cluster or delete existing compute nodes in your cluster, run these playbooks. Currently we support only env.network_mode macvtap for these two playbooks. 
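For the delete direction, the invocation is assumed to mirror the add flow shown below, reusing the same extra-vars file; check the playbook itself for the exact variables it expects:
ansible-playbook playbooks/delete_compute_node.yaml --extra-vars "@compute-node.yaml"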
We recommend creating a new config file for the additional compute node with parameters such as: day2_compute_node: vm_name: control-4 vm_hostname: control-4 vm_ip: 172.192.100.101 hostname: kvm01 host_arch: s390x # rhcos_download_url with '/' at the end ! rhcos_download_url: \"https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.15/4.15.0/\" # RHCOS live image filenames rhcos_live_kernel: \"rhcos-4.15.0-s390x-live-kernel-s390x\" rhcos_live_initrd: \"rhcos-4.15.0-s390x-live-initramfs.s390x.img\" rhcos_live_rootfs: \"rhcos-4.15.0-s390x-live-rootfs.s390x.img\" Make sure that the hostname where you want to create the additional compute node is defined in the inventories/default/hosts file. Now you can execute the add_compute_node playbook with this command and parameter: ansible-playbook playbooks/add_compute_node.yaml --extra-vars \"@compute-node.yaml\"","title":"Overview"},{"location":"run-the-playbooks/#outcomes_8","text":"The defined compute node will be added or deleted, depending on which playbook you have executed.","title":"Outcomes"},{"location":"run-the-playbooks/#master-playbook-siteyaml","text":"","title":"Master Playbook (site.yaml)"},{"location":"run-the-playbooks/#overview_10","text":"Use this playbook to run all required playbooks (0-7) all at once.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_9","text":"Same as all the above outcomes for all required playbooks. At the end you will have an OpenShift cluster deployed and first-time login credentials.","title":"Outcomes"},{"location":"run-the-playbooks/#pre-existing-host-master-playbook-pre-existing_siteyaml","text":"","title":"Pre-Existing Host Master Playbook (pre-existing_site.yaml)"},{"location":"run-the-playbooks/#overview_11","text":"Use this version of the master playbook if you are using pre-existing LPAR(s) with RHEL already installed.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_10","text":"Same as all the above outcomes for all playbooks excluding 1 & 2. This will not create LPAR(s) nor boot your RHEL KVM host(s). At the end you will have an OpenShift cluster deployed and first-time login credentials.","title":"Outcomes"},{"location":"run-the-playbooks/#reinstall-cluster-playbook-reinstall_clusteryaml","text":"","title":"Reinstall Cluster Playbook (reinstall_cluster.yaml)"},{"location":"run-the-playbooks/#overview_12","text":"In case the cluster needs to be completely reinstalled, run this playbook. It will refresh the ignitions that expire after 24 hours, tear down the nodes and re-create them, and then verify the installation.","title":"Overview"},{"location":"run-the-playbooks/#outcomes_11","text":"get_ocp role runs. Delete the folders /var/www/html/bin and /var/www/html/ignition. CoreOS rootfs is pulled to the bastion. OCP client and installer are pulled down. oc, kubectl and openshift-install binaries are installed. OCP install-config is created from scratch, templated and backed up. Manifests are created. OCP install directory found at /root/ocpinst/ is deleted, re-created and populated with necessary files. Ignition files for the bootstrap, control, and compute nodes are transferred to an HTTP-accessible directory for booting nodes. 6 Create Nodes playbook runs, tearing down and recreating cluster nodes.
7 OCP Verification playbook runs, verifying new deployment.","title":"Outcomes"},{"location":"run-the-playbooks/#test-playbook-testyaml","text":"","title":"Test Playbook (test.yaml)"},{"location":"run-the-playbooks/#overview_13","text":"Use this playbook for your testing purposes, if needed.","title":"Overview"},{"location":"set-variables-group-vars/","text":"Step 2: Set Variables (group_vars) # Overview # In a text editor of your choice, open the template of the environment variables file . Make a copy of it called all.yaml and paste it into the same directory with its template. all.yaml is your master variables file and you will likely reference it many times throughout the process. The default inventory can be found at inventories/default . The variables marked with an X are required to be filled in. Many values are pre-filled or are optional. Optional values are commented out; in order to use them, remove the # and fill them in. This is the most important step in the process. Take the time to make sure everything here is correct. Note on YAML syntax : Only the lowest value in each hierarchicy needs to be filled in. For example, at the top of the variables file env and z don't need to be filled in, but the cpc_name does. There are X's where input is required to help you with this. Scroll the table to the right to see examples for each variable. 1 - Controller # Variable Name Description Example env.installation_type Can be of type kvm or lpar. Some packages will be ignored for installation in case of non lpar based installation. kvm env.controller.sudo_pass The password to the machine running Ansible (localhost). This will only be used for two things. To ensure you've installed the pre-requisite packages if you're on Linux, and to add the login URL to your /etc/hosts file. Pas$w0rd! 2 - LPAR(s) # Variable Name Description Example env.z.high_availability Is this cluster spread across three LPARs? If yes, mark True. If not (just in one LPAR), mark False True env.z.ip_forward This variable specifies if ip forwarding is enabled or not if NAT network is selected. If ip_forwarding is set to 0, the installed OCP cluster will not be able to access external services because using NAT keep the nodes isolated. This parameter will be set via sysctl on the KVM host. The change of the value is instantly active. This setting will be configured during 3_setup_kvm playbook. If NAT will be configured after 3_setup_kvm playbook, the setup needs to be done manually before bastion is being created, configured or reconfigured by running the 3_setup_kvm playbook with parameter: --tags cfg_ip_forward 1 env.z.lpar1.create To have Ansible create an LPAR and install RHEL on it for the KVM host, mark True. If using a pre-existing LPAR with RHEL already installed, mark False. True env.z.lpar1.hostname The hostname of the KVM host. kvm-host-01 env.z.lpar1.ip The IPv4 address of the KVM host. 192.168.10.1 env.z.lpar1.user Username for Linux admin on KVM host 1. Recommended to run as a non-root user with sudo access. admin env.z.lpar1.pass The password for the user that will be created or exists on the KVM host. ch4ngeMe! env.z.lpar2.create To create a second LPAR and install RHEL on it to act as another KVM host, mark True. If using pre-existing LPAR(s) with RHEL already installed, mark False. True env.z.lpar2.hostname (Optional) The hostname of the second KVM host. kvm-host-02 env.z.lpar2.ip (Optional) The IPv4 address of the second KVM host. 192.168.10.2 env.z.lpar2.user Username for Linux admin on KVM host 2. 
Recommended to run as a non-root user with sudo access. admin env.z.lpar2.pass (Optional) The password for the admin user on the second KVM host. ch4ngeMe! env.z.lpar3.create To create a third LPAR and install RHEL on it to act as another KVM host, mark True. If using pre-existing LPAR(s) with RHEL already installed, mark False. True env.z.lpar3.hostname (Optional) The hostname of the third KVM host. kvm-host-03 env.z.lpar3.ip (Optional) The IPv4 address of the third KVM host. 192.168.10.3 env.z.lpar3.user Username for Linux admin on KVM host 3. Recommended to run as a non-root user with sudo access. admin env.z.lpar3.pass (Optional) The password for the admin user on the third KVM host. ch4ngeMe! 3 - File Server # Variable Name Description Example env.file_server.ip IPv4 address for the file server that will be used to pass config files and iso to KVM host LPAR(s) and bastion VM during their first boot. 192.168.10.201 env.file_server.port The port on which the file server is listening. Will be embedded into all download urls. Defaults to protocol default port. Keep empty '' to use default port 10000 env.file_server.user Username to connect to the file server. Must have sudo and SSH access. user1 env.file_server.pass Password to connect to the file server as above user. user1pa$s! env.file_server.protocol Protocol used to serve the files, either 'ftp' or 'http' http env.file_server.iso_os_variant The os variant for the bastion kvm to be created rhel8.8 env.file_server.iso_mount_dir Directory path relative to the HTTP/FTP accessible directory where RHEL ISO is mounted. For example, if the FTP root is at /home/user1 and the ISO is mounted at /home/user1/RHEL/8.7 then this variable would be RHEL/8.7 - no slash before or after. RHEL/8.7 env.file_server.cfgs_dir Directory path relative to to the HTTP/FTP accessible directory where configuration files can be stored. For example, if FTP root is /home/user1 and you would like to store the configs at /home/user1/ocpz-config then this variable would be ocpz-config. No slash before or after. ocpz-config 4 - Red Hat Info # Variable Name Description Example env.redhat.username Red Hat username with a valid license or free trial to Red Hat OpenShift Container Platform (RHOCP), which comes with necessary licenses for Red Hat Enterprise Linux (RHEL) and Red Hat CoreOS (RHCOS). redhat.user env.redhat.password Password to Red Hat above user's account. Used to auto-attach necessary subscriptions to KVM Host, bastion VM, and pull live images for OpenShift. rEdHatPa$s! env.redhat.manage_subscription True or False. Would you like to subscribe the server with Red Hat? True env.redhat.pull_secret Pull secret for OpenShift, comes from Red Hat's Hybrid Cloud Console . Make sure to enclose in 'single quotes'. '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\"}}}' 5 - Bastion # Variable Name Description Example env.bastion.create True or False. Would you like to create a bastion KVM guest to host essential infrastructure services like DNS, load balancer, firewall, etc? Can de-select certain services with the env.bastion.options variables below. True env.bastion.vm_name Name of the bastion VM. Arbitrary value. bastion env.bastion.resources.disk_size How much of the storage pool would you like to allocate to the bastion (in Gigabytes)? Recommended 30 or more. 30 env.bastion.resources.ram How much memory would you like to allocate the bastion (in megabytes)? 
Recommended 4096 or more 4096 env.bastion.resources.swap How much swap storage would you like to allocate the bastion (in megabytes)? Recommended 4096 or more. 4096 env.bastion.resources.vcpu How many virtual CPUs would you like to allocate to the bastion? Recommended 4 or more. 4 env.bastion.resources.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.bastion.networking.ip IPv4 address for the bastion. 192.168.10.3 env.bastion.networking.ipv6 IPv6 address for the bastion if use_ipv6 variable is 'True'. fd00::3 env.bastion.networking.mac MAC address for the bastion if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.bastion.networking.hostname Hostname of the bastion. Will be combined with env.bastion.networking.base_domain to create a Fully Qualified Domain Name (FQDN). ocpz-bastion env.bastion.networking.base_domain Base domain that, when combined with the hostname, creates a fully-qualified domain name (FQDN) for the bastion? ihost.com env.bastion.networking.subnetmask Subnet of the bastion. 255.255.255.0 env.bastion.networking.gateway IPv4 of he bastion's gateway server. 192.168.10.0 env.bastion.networking.ipv6_gateway IPv6 of he bastion's gateway server. fd00::1 env.bastion.networking.ipv6_prefix IPv6 prefix. 64 env.bastion.networking.nameserver1 IPv4 address of the server that resolves the bastion's hostname. 192.168.10.200 env.bastion.networking.nameserver2 (Optional) A second IPv4 address that resolves the bastion's hostname. 192.168.10.201 env.bastion.networking.forwarder What IPv4 address will be used to make external DNS calls for the bastion? Can use 1.1.1.1 or 8.8.8.8 as defaults. 8.8.8.8 env.bastion.networking.interface Name of the networking interface on the bastion from Linux's perspective. Most likely enc1. enc1 env.bastion.access.user What would you like the admin's username to be on the bastion? If root, make pass and root_pass vars the same. admin env.bastion.access.pass The password to the bastion's admin user. If using root, make pass and root_pass vars the same. cH4ngeM3! env.bastion.access.root_pass The root password for the bastion. If using root, make pass and root_pass vars the same. R0OtPa$s! env.bastion.options.dns Would you like the bastion to host the DNS information for the cluster? True or False. If false, resolution must come from elsewhere in your environment. Make sure to add IP addresses for KVM hosts, bastion, bootstrap, control, compute nodes, AND api, api-int and *.apps as described here in section \"User-provisioned DNS Requirements\" Table 5. If True this will be done for you in the dns and check_dns roles. True env.bastion.options.loadbalancer.on_bastion Would you like the bastion to host the load balancer (HAProxy) for the cluster? True or False (boolean). If false, this service must be provided elsewhere in your environment, and public and private IP of the load balancer must be provided in the following two variables. True env.bastion.options.loadbalancer.public_ip (Only required if env.bastion.options.loadbalancer.on_bastion is True). The public IPv4 address for your environment's loadbalancer. api, apps, *.apps must use this. 192.168.10.50 env.bastion.options.loadbalancer.private_ip (Only required if env.bastion.options.loadbalancer.on_bastion is True). The private IPv4 address for your environment's loadbalancer. api-int must use this. 
10.24.17.12 6 - Cluster Networking # Variable Name Description Example env.cluster.networking.metadata_name Name to describe the cluster as a whole, can be anything if DNS will be hosted on the bastion. If DNS is not on the bastion, must match your DNS configuration. Will be combined with the base_domain and hostnames to create Fully Qualified Domain Names (FQDN). ocpz env.cluster.networking.base_domain The site name, where is the cluster being hosted? This will be combined with the metadata_name and hostnames to create FQDNs. host.com env.bastion.networking.ipv6_gateway IPv6 of he bastion's gateway server. fd00::1 env.bastion.networking.ipv6_prefix IPv6 prefix. 64 env.cluster.networking.nameserver1 IPv4 address that the cluster get its hostname resolution from. If env.bastion.options.dns is True, this should be the IP address of the bastion. 192.168.10.200 env.cluster.networking.nameserver2 (Optional) A second IPv4 address will the cluster get its hostname resolution from? If env.bastion.options.dns is True, this should be left commented out. 192.168.10.201 env.cluster.networking.forwarder What IPv4 address will be used to make external DNS calls for the cluster? Can use 1.1.1.1 or 8.8.8.8 as defaults. 8.8.8.8 env.cluster.networking.interface Name of the networking interface on the bastion from Linux's perspective. Most likely enc1. enc1 7 - Bootstrap Node # Variable Name Description Example env.cluster.nodes.bootstrap.disk_size How much disk space do you want to allocate to the bootstrap node (in Gigabytes)? Bootstrap node is temporary and will be brought down automatically when its job completes. 120 or more recommended. 120 env.cluster.nodes.bootstrap.ram How much memory would you like to allocate to the temporary bootstrap node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.bootstrap.vcpu How many virtual CPUs would you like to allocate to the temporary bootstrap node? Recommended 4 or more. 4 env.cluster.nodes.bootstrap.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.bootstrap.vm_name Name of the temporary bootstrap node VM. Arbitrary value. bootstrap env.cluster.nodes.bootstrap.ip IPv4 address of the temporary bootstrap node. 192.168.10.4 env.cluster.nodes.bootstrap.ipv6 IPv6 address for the bootstrap if use_ipv6 variable is 'True'. fd00::4 env.cluster.nodes.bootstrap.mac MAC address for the bootstrap node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.bootstrap.hostname Hostname of the temporary boostrap node. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualififed Domain Name (FQDN). bootstrap-ocpz 8 - Control Nodes # Variable Name Description Example env.cluster.nodes.control.disk_size How much disk space do you want to allocate to each control node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.control.ram How much memory would you like to allocate to the each control node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.control.vcpu How many virtual CPUs would you like to allocate to each control node? Recommended 4 or more. 4 env.cluster.nodes.control.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.control.vm_name Name of the control node VMs. Arbitrary values. Usually no more or less than 3 are used. 
Must match the total number of IP addresses and hostnames for control nodes. Use provided list format. control-1control-2control-3 env.cluster.nodes.control.ip IPv4 address of the control nodes. Use provided list formatting. 192.168.10.5192.168.10.6192.168.10.7 env.cluster.nodes.control.ipv6 IPv6 address for the control nodes. Use iprovided list formatting (if use_ipv6 variable is 'True'). fd00::5fd00::6fd00::7 env.cluster.nodes.control.mac MAC address for the control node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.control.hostname Hostnames for control nodes. Must match the total number of IP addresses for control nodes (usually 3). If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualififed Domain Name (FQDN). control-01control-02control-03 9 - Compute Nodes # Variable Name Description Example env.cluster.nodes.compute.disk_size How much disk space do you want to allocate to each compute node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.compute.ram How much memory would you like to allocate to the each compute node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.compute.vcpu How many virtual CPUs would you like to allocate to each compute node? Recommended 2 or more. 2 env.cluster.nodes.compute.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.compute.vm_name Name of the compute node VMs. Arbitrary values. This list can be expanded to any number of nodes, minimum 2. Must match the total number of IP addresses and hostnames for compute nodes. Use provided list format. compute-1compute-2 env.cluster.nodes.compute.ip IPv4 address of the compute nodes. Must match the total number of VM names and hostnames for compute nodes. Use provided list formatting. 192.168.10.8192.168.10.9 env.cluster.nodes.control.ipv6 IPv6 address for the compute nodes. Use iprovided list formatting (if use_ipv6 variable is 'True'). fd00::8fd00::9 env.cluster.nodes.compute.mac MAC address for the compute node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.compute.hostname Hostnames for compute nodes. Must match the total number of IP addresses and VM names for compute nodes. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualififed Domain Name (FQDN). compute-01compute-02 10 - Infra Nodes # Variable Name Description Example env.cluster.nodes.infra.disk_size (Optional) Set up compute nodes that are made for infrastructure workloads (ingress, monitoring, logging)? How much disk space do you want to allocate to each infra node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.infra.ram (Optional) How much memory would you like to allocate to the each infra node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.infra.vcpu (Optional) How many virtual CPUs would you like to allocate to each infra node? Recommended 2 or more. 2 env.cluster.nodes.infra.vcpu_model_option (Optional) Configure the CPU model and CPU features exposed to the guest --cpu host env.cluster.nodes.infra.vm_name (Optional) Name of additional infra node VMs. Arbitrary values. This list can be expanded to any number of nodes, minimum 2. 
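Several of the node variables above and below ask for the "provided list format" with matching counts of VM names, IP addresses and hostnames. A minimal sketch of what such aligned lists might look like for the control nodes, using the example values from this table:

env:
  cluster:
    nodes:
      control:
        vm_name:
          - control-1
          - control-2
          - control-3
        ip:
          - 192.168.10.5
          - 192.168.10.6
          - 192.168.10.7
        hostname:
          - control-01
          - control-02
          - control-03
# The three lists must have the same length so that entry N of each list
# describes the same control node.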
Must match the total number of IP addresses and hostnames for infra nodes. Use provided list format. infra-1infra-2 env.cluster.nodes.infra.ip (Optional) IPv4 address of the infra nodes. This list can be expanded to any number of nodes, minimum 2. Use provided list formatting. 192.168.10.10192.168.10.11 env.cluster.nodes.infra.ipv6 (Optional) IPv6 address of the infra nodes. iThis list can be expanded to any number of nodes, minimum 2. Use provided list formatting (if use_ipv6 variable is 'True'). fd00::10fd00::11 env.cluster.nodes.infra.hostname (Optional) Hostnames for infra nodes. Must match the total number of IP addresses for infra nodes. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualififed Domain Name (FQDN). infra-01infra-02 11 - (Optional) Packages # Variable Name Description Example env.pkgs.galaxy A list of Ansible Galaxy collections that will be installed during the setup playbook. The collections listed are required. Feel free to add more as needed, just make sure to follow the same list format. community.general env.pkgs.controller A list of packages that will be installed on the machine running Ansible during the setup playbook. Feel free to add more as needed, just make sure to follow the same list format. openssh env.pkgs.kvm A list of packages that will be installed on the KVM Host during the setup_kvm_host playbook. Feel free to add more as needed, just make sure to follow the same list format. qemu-kvm env.pkgs.bastion A list of packages that will be installed on the bastion during the setup_bastion playbook. Feel free to add more as needed, just make sure to follow the same list format. haproxy 12 - OpenShift Settings # Variable Name Description Example env.install_config.api_version Kubernetes API version for the cluster. These install_config variables will be passed to the OCP install_config file. This file is templated in the get_ocp role during the setup_bastion playbook. To make more fine-tuned adjustments to the install_config, you can find it at roles/get_ocp/templates/install-config.yaml.j2 v1 env.install_config.compute.architecture Computing architecture for the compute nodes. Must be s390x for clusters on IBM zSystems. s390x env.install_config.compute.hyperthreading Enable or disable hyperthreading on compute nodes. Recommended enabled. Enabled env.install_config.control.architecture Computing architecture for the control nodes. Must be s390x for clusters on IBM zSystems, amd64 for Intel or AMD systems, and arm64 for ARM servers. s390x env.install_config.control.hyperthreading Enable or disable hyperthreading on control nodes. Recommended enabled. Enabled env.install_config.cluster_network.cidr IPv4 block in Internal cluster networking in Classless Inter-Domain Routing (CIDR) notation. Recommended to keep as is. 10.128.0.0/14 env.install_config.cluster_network.host_prefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. 23 env.install_config.cluster_network.type The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes (default). OVNKubernetes env.install_config.service_network The IP address block for services. The default value is 172.30.0.0/16. 
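The hostPrefix arithmetic above can be checked directly: a /23 per node leaves 32 - 23 = 9 host bits, i.e. 2^9 - 2 = 510 usable pod addresses. A sketch of the corresponding install_config networking values (the nesting is inferred from the dotted names; the values are the defaults quoted above):

env:
  install_config:
    cluster_network:
      cidr: 10.128.0.0/14        # internal pod network
      host_prefix: 23            # each node gets a /23: 2^(32-23) - 2 = 510 pod IPs
      type: OVNKubernetes        # CNI plug-in (or OpenShiftSDN)
    service_network: 172.30.0.0/16   # rendered into install-config as a single CIDR block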
The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. 172.30.0.0/16 env.install_config.machine_network The IP address block for Nodes IP Pool. The default value is 192.168.122.0/24 For NAT Network Mode. In case of MacvTap it will be depend on Inteface IP assignment. An array with an IP address block in CIDR format. 192.168.122.0/24 env.install_config.fips True or False (boolean) for whether or not to use the United States' Federal Information Processing Standards (FIPS). Not yet certified on IBM zSystems. Enclosed in 'single quotes'. 'false' 13 - (Optional) Proxy # Variable Name Description Example env.proxy.http (Optional) A proxy URL to use for creating HTTP connections outside the cluster. Will be used in the install-config and applied to other Ansible hosts unless set otherwise in no_proxy below. Must follow this pattern: http://username:pswd>@ip:port http://ocp-admin:Pa$sw0rd@9.72.10.1:80 env.proxy.https (Optional) A proxy URL to use for creating HTTPS connections outside the cluster. Will be used in the install-config and applied to other Ansible hosts unless set otherwise in no_proxy below. Must follow this pattern: https://username:pswd@ip:port https://ocp-admin:Pa$sw0rd@9.72.10.1:80 env.proxy.no (Optional) A comma-separated list (no spaces) of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. When using a proxy, all necessary IPs and domains for your cluster will be added automatically. See roles/get_ocp/templates/install-config.yaml.j2 for more details on the template. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all listed destinations. example.com,192.168.10.1 14 - (Optional) Misc # Variable Name Description Example env.language What language would you like Red Hat Enterprise Linux to use? In UTF-8 language code. Available languages and their corresponding codes can be found here , in the \"Locale\" column of Table 2.1. en_US.UTF-8 env.timezone Which timezone would you like Red Hat Enterprise Linux to use? A list of available timezone options can be found here . America/New_York env.keyboard Which keyboard layout would you like Red Hat Enterprise Linux to use? us env.ansible_key_name (Optional) Name of the SSH key that Ansible will use to connect to hosts. ansible-ocpz env.ocp_key_name Comment to describe the SSH key used for OCP. Arbitrary value. OCPZ-01 key env.vnet_name (Optional) Name of the bridged virtual network that will be created on the KVM host if network mode is not set to NAT. In case of NAT network mode the name of the NAT network definition used to create the nodes(usually it is 'default'). If NAT is being used and a jumphost is needed, the parameters network_mode, jumphost.name, jumphost.user and jumphost.pass must be specified, too. For default (NAT) network verify that the configured IP ranges does not interfere with the IPs defined for the controle and compute nodes. Modify the default network (dhcp range setting) to prevent issues with VMs using dhcp and OCP nodes having fixed IPs. Default is create a bridge network. macvtap-net env.network_mode (Optional) In case the network mode will be NAT and the installation will be executed from remote (e.g. your laptop), a jumphost needs to be defined to let the installation access the bastion host. If macvtap for networking is being used this variable should be empty. 
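If the optional proxy settings described above are needed, the filled-in block might look like the following sketch; the URLs and the exclusion list are the examples from this table, not defaults, and the key nesting simply follows the dotted variable names:

env:
  proxy:
    http: 'http://ocp-admin:Pa$sw0rd@9.72.10.1:80'
    https: 'https://ocp-admin:Pa$sw0rd@9.72.10.1:80'
    no: 'example.com,192.168.10.1'   # comma-separated, no spaces; .y.com matches subdomains only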
NAT env.use_ipv6 If ipv6 addresses should be assigned to the controle and compute nodes, this variable should be true (default) and the matching ipv6 settings should be specified. True env.use_dhcp If dhcp service should be used to get an IP address, this variable should be true and the matching mac address must be specified. False env.jumphost.name (Optional) If env.network.mode is set to 'NAT' the name of the jumphost (e.g. the name of KVM host if used as jumphost) should be specified. kvm-host-01 env.jumphost.ip (Optional) The ip of the jumphost. 192.168.10.1 env.jumphost.user (Optional) The user name to login to the jumphost. admin env.jumphost.pass (Optional) The password for user to login to the jumphost. ch4ngeMe! env.jumphost.path_to_keypair (Optional) The absolute path to the public key file on the jumphost to be copied to the bastion. /home/admin/.ssh/id_rsa.pub 15 - OCP and RHCOS (CoreOS) # Variable Name Description Example ocp_download_url Link to the mirror for the OpenShift client and installer from Red Hat. https://mirror.openshift.com/pub/openshift-v4/multi/clients/ocp/4.13.1/s390x/ ocp_client_tgz OpenShift client filename (tar.gz). openshift-client-linux.tar.gz ocp_install_tgz OpenShift installer filename (tar.gz). openshift-install-linux.tar.gz rhcos_download_url Link to the CoreOS files to be used for the bootstrap, control and compute nodes. Feel free to change to a different version. https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.12/4.12.3/ rhcos_os_variant CoreOS base OS. Use the OS string as defined in 'osinfo-query os -f short-id' rhel8.6 rhcos_live_kernel CoreOS kernel filename to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-kernel-s390x rhcos_live_initrd CoreOS initramfs to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-initramfs.s390x.img rhcos_live_rootfs CoreOS rootfs to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-rootfs.s390x.img 16 - (Optional) Disconnected cluster setup # Variable Name Description Example disconnected.enabled True or False, to enable disconnected mode False disconnected.registry.url String containing url of disconnected registry with or without port and without protocol registry.tt.testing:5000 disconnected.registry.pull_secret String containing pull secret of the disconnected registry to be applied on the cluster . Make sure to enclose pull_secret in 'single quotes' and it has appropriate pull access. '{\"auths\":{\"registry.tt..testing:5000\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"test.user@example.com\"}}}' disconnected.registry.mirror_pull_ecret String containing pull secret to use for mirroring. Contains Red Hat secret and registry pull secret. Make sure to enclose pull_secret in 'single quotes' and must be able to push to mirror registry. '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\", \"registry.tt..testing:5000\":...user@example.com\"}}}' disconnected.registry.ca_trusted True or False to indicate that mirror registry CA is implicitly trusted or needs to be made trusted on mirror host and cluster. 
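For the NAT network mode with a remote Ansible controller, the jumphost variables above might be filled in as in this sketch; all values are the examples from this table and the nesting is inferred from the dotted names:

env:
  network_mode: NAT                    # leave empty when macvtap is used
  jumphost:
    name: kvm-host-01                  # e.g. the KVM host doubling as jumphost
    ip: 192.168.10.1
    user: admin
    pass: ch4ngeMe!
    path_to_keypair: /home/admin/.ssh/id_rsa.pub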
False disconnected.registry.ca_cert Multiline string containing the mirror registry CA bundle -----BEGIN CERTIFICATE-----MIIDqDCCApCgAwIBAgIULL+d1HTYsiP+8jeWnqBis3N4BskwDQYJKoZIhvcNAQEF...-----END CERTIFICATE----- disconnected.mirroring.host.name String containing the hostname of the host, which will be used for mirroring mirror-host-1 disconnected.mirroring.host.ip String containing ip of the host, which will be used for mirroring 192.168.10.99 disconnected.mirroring.host.user String containing the username of the host, which will be used for mirroring mirroruser disconnected.mirroring.host.pass String containing the password of the host, which will be used for mirroring mirrorpassword disconnected.mirroring.file_server.clients_dir Directory path relative to the HTTP/FTP accessible directory on env.file_server where client binary tarballs are kept clients disconnected.mirroring.file_server.oc_mirror_tgz Name of oc-mirror tarball on env.file_server in disconnected.mirroring.file_server.clients_dir oc-mirror.tar.gz disconnected.mirroring.legacy.platform True or False if the platform should be mirrored using oc adm release mirror . False disconnected.mirroring.legacy.ocp_quay_release_image_tag The tag of the release image quay.io/openshift-release-dev/ocp-release to mirror and use 4.13.1-s390x disconnected.mirroring.legacy.ocp_org The org part of the repo on the mirror registry where the release image will be pushed ocp4 disconnected.mirroring.legacy.ocp_repo The repo part of the repo on the mirror registry where the release image will be pushed openshift4 disconnected.mirroring.legacy.ocp_tag The tag part of the repo on the mirror registry where the release image will be pushed. Full image would be as below.: disconnected.registry.url/disconnected.mirroring.legacy.ocp_org/disconnected...ocp_repo:disconnected..ocp_tag v4.13.1 disconnected.mirroring.oc_mirror.release_image_tag The ocp release image tag you want to install the cluster with. Used when legacy platform mirroring is disabled and disconnected.mirroring.oc_mirror.image_set contains platform entries. 4.13.1-multi disconnected.mirroring.oc_mirror.oc_mirror_args.continue_on_error True or False to give --continue-on-error flag to oc-mirror False disconnected.mirroring.oc_mirror.oc_mirror_args.source_skip_tls True or False to give --source-skip-tls flag to oc-mirror False disconnected.mirroring.oc_mirror.post_mirror.mapping.replace.enabled True or False to replace values in mapping.txt generated by oc-mirror. This also does a manual repush of the images in mapping.txt . False disconnected.mirroring.oc_mirror.post_mirror.mapping.replace.list List of regexp and replace where every string/regular expression gets replaced by corresponding replace value. regexp: interal-url.com replace: external-url.com disconnected.mirroring.oc_mirror.image_set YAML fields containing a standard oc-mirror image set with some minor changes to schema. Differences are documented as needed. Used to generate final image set. see template disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.enabled True or False to use registry storage backend for pushing mirrored content directly to the registry. Currently only this backend is supported. True disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.imageURL.org The org part of registry imageURL from standard image set. mirror disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.imageURL.repo The repo part of registry imageURL from standard image set. 
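To make the release-image composition for legacy mirroring explicit, here is a sketch that only restates the values and the composition rule given above; with the example values the full image reference works out to registry.tt.testing:5000/ocp4/openshift4:v4.13.1:

disconnected:
  registry:
    url: 'registry.tt.testing:5000'
  mirroring:
    legacy:
      platform: True                        # set to True to mirror the platform with 'oc adm release mirror'
      ocp_quay_release_image_tag: 4.13.1-s390x
      ocp_org: ocp4
      ocp_repo: openshift4
      ocp_tag: v4.13.1
# Composed target: <registry.url>/<ocp_org>/<ocp_repo>:<ocp_tag>
#                = registry.tt.testing:5000/ocp4/openshift4:v4.13.1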
Final imageURL will be as below: disconnected.registry.url/disconnected.mirroring.oc_mirror.image_set.storageConfig .registry.imageURL.org/disconnected...imageURL.repo oc-mirror-metadata disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.skipTLS True of False same purpose served as in standard image set i.e. skip the tls for the registry during mirroring. false disconnected.mirrroing.oc_mirror.image_set.mirror YAML containing a list of what needs to be mirrored. See the oc mirror image set documentation. see oc-mirror image set documentation 17 - (Optional) Create compute node in a day-2 operation # Variable Name Description Example day2_compute_node.vm_name Name of the compute node VM. compute-4 day2_compute_node.vm_hostname Hostnames for compute node. compute-4 day2_compute_node.vm_vm_ip IPv4 address of the compute node. 192.168.10.99 day2_compute_node.vm_vm_ipv6 IPv6 address of the compute node. fd00::99 day2_compute_node.vm_mac MAC address of the compute node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B day2_compute_node.vm_interface The network interface used for given IP addresses of the compute node. enc1 day2_compute_node.hostname The hostname of the KVM host kvm-host-01 day2_compute_node.host_user KVM host user which is used to create the VM root day2_compute_node.host_arch KVM host architecture. s390x 18 - (Optional) Agent Based Installer # Variable Name Description Example abi.flag This is the flag, Will be used to identify during execution. Few checks in the playbook will be depend on this (default value will be False) True abi.ansible_workdir This will be work directory name, it will keep required data that need to be present during or after execution ansible_workdir abi.ocp_installer_version Version will contain value of openshift-installer binary version user desired to be used '4.15.0-rc.8' abi.ocp_installer_url This is the base url of openshift installer binary it will remain same as static value, User Do not need to give value until user wants to change the mirror 'https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/' Hosted Control Plane ( Optional ) # Variable Name Description Example hcp.compute_node_type Select the compute node type for HCP , either zKVM or zVM zvm hcp.mgmt_cluster_nameserver IP Address of Nameserver of Management Cluster 192.168.10.1 hcp.oc_url URL for OC Client that you want to install on the host https://... ..openshift-client-linux-4.13.0-ec.4.tar.gz hcp.ansible_key_name ssh key name ansible-ocpz hcp.pkgs list of packages for different hosts hcp.mce.version version for multicluster-engine Operator 2.4 hcp.mce.instance_name name of the MultiClusterEngine instance engine hcp.mce.delete true or false - deletes mce and related resources while running deletion playbook true hcp.asc.url_for_ocp_release_file Add URL for OCP release.txt File https://... ..../release.txt hcp.asc.db_volume_size DatabaseStorage Volume Size 10Gi hcp.asc.fs_volume_size FileSystem Storage Volume Size 10Gi hcp.asc.ocp_version OCP Version for AgentServiceConfig 4.13.0-ec.4 hcp.asc.iso_url Give URL for ISO image https://... ...s390x-live.s390x.iso hcp.asc.root_fs_url Give URL for rootfs image https://... ... live-rootfs.s390x.img hcp.asc.mce_namespace Namespace where your Multicluster Engine Operator is installed. Recommended Namespace for MCE is 'multicluster-engine'. Change this only if MCE is installed in other namespace. 
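The day-2 compute node variables above map onto a small top-level block; this sketch just restates the documented names and example values in YAML form (the nesting is inferred from the dotted names):

day2_compute_node:
  vm_name: compute-4
  vm_hostname: compute-4
  vm_vm_ip: 192.168.10.99          # variable name as documented
  vm_vm_ipv6: 'fd00::99'
  vm_mac: '52:54:00:18:1A:2B'      # only relevant when use_dhcp is 'True'
  vm_interface: enc1
  hostname: kvm-host-01            # KVM host that will run the new VM
  host_user: root
  host_arch: s390x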
multicluster-engine hcp.control_plane.high_availabiliy Availability for Control Plane true hcp.control_plane.clusters_namespace Namespace for Creating Hosted Control Plane clusters hcp.control_plane.hosted_cluster_name Name for the Hosted Cluster hosted0 hcp.control_plane.basedomain Base domain for Hosted Cluster example.com hcp.control_plane.pull_secret_file Path for the pull secret No need to change this as we are copying the pullsecret to same file /root/ansible_workdir/auth_file /root/ansible_workdir/auth_file hcp.control_plane.ocp_release_image OCP Release version for Hosted Control Cluster and Nodepool 4.13.0-rc.4-multi hcp.control_plane.arch Architecture for InfraEnv and AgentServiceConfig\" s390x hcp.control_plane.additional_flags Any additional flags for creating hcp ( In hcp create cluster agent command ) --fips hcp.control_plane.pull_secret Pull Secret of Management Cluster Make sure to enclose pull_secret in 'single quotes' '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\"}}}' hcp.bastion_params.create true or false - create bastion with the provided IP true hcp.bastion_params.ip IPv4 address for bastion of Hosted Cluster 192.168.10.1 hcp.bastion_params.user User for bastion of Hosted Cluster root hcp.bastion_params.host IPv4 address of KVM host (kvm host where you want to run all oc commands and create VMs) 192.168.10.1 hcp.bastion_params.host_user User for KVM host root hcp.bastion_params.hostname Hostname for bastion bastion hcp.bastion_params.base_domain DNS base domain for the bastion. ihost.com hcp.bastion_params.nameserver Nameserver for creating bastion 192.168.10.1 hhcp.bastion_params.gateway Gateway IP for creating bastion This is how it well be used ip= :: : 192.168.10.1 hcp.bastion_params.subnet_mask IPv4 address of subnetmask 255.255.255.0 hcp.bastion_params.interface Interface for bastion enc1 hcp.bastion_params.file_server.ip IPv4 address for the file server that will be used to pass config files and iso to KVM host LPAR(s) and bastion VM during their first boot. 192.168.10.201 hcp.bastion_params.file_server.protocol Protocol used to serve the files, either 'ftp' or 'http' http hcp.bastion_params.file_server.iso_mount_dir Directory path relative to the HTTP/FTP accessible directory where RHEL ISO is mounted. For example, if the FTP root is at /home/user1 and the ISO is mounted at /home/user1/RHEL/8.7 then this variable would be RHEL/8.7 - no slash before or after. RHEL/8.7 hcp.bastion_params.os_variant rhel os variant for creating bastion 8.7 hcp.bastion_params.disk rhel os variant for creating bastion 8.7 hcp.bastion_params.network_name rhel os variant for creating bastion 8.7 hcp.bastion_params.networking_device The network interface card from Linux's perspective. Usually enc and then a number that comes from the dev_num of the network adapter. enc1100 hcp.bastion_params.language What language would you like Red Hat Enterprise Linux to use? In UTF-8 language code. Available languages and their corresponding codes can be found here, in the \"Locale\" column of Table 2.1. en_US.UTF-8 hcp.bastion_params.timezone Which timezone would you like Red Hat Enterprise Linux to use? A list of available timezone options can be found here. America/New_York hcp.bastion_params.keyboard Which keyboard layout would you like Red Hat Enterprise Linux to use? 
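A hedged sketch of the hcp.control_plane block described above, using only the example values from this table; keys with no documented example are left commented out, and the key spellings are kept exactly as documented:

hcp:
  control_plane:
    high_availabiliy: true                  # key spelling as documented
    # clusters_namespace:                   # no example value given above
    hosted_cluster_name: hosted0
    basedomain: example.com
    pull_secret_file: /root/ansible_workdir/auth_file
    ocp_release_image: 4.13.0-rc.4-multi
    arch: s390x
    additional_flags: '--fips'              # extra args for 'hcp create cluster agent'
    pull_secret: '{"auths": ... }'          # management cluster pull secret, single-quoted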
us hcp.data_plane.compute_count Number of agents for the hosted cluster The same number of compute nodes will be attached to Hosted Cotrol Plane 2 hcp.data_plane.vcpus vCPUs for compute nodes 4 hcp.data_plane.memory RAM for compute nodes 16384 hcp.data_plane.nameserver Nameserver for compute nodes 192.168.10.1 hcp.data_plane.storage.type Storage type for KVM guests qcow/dasd qcow hcp.data_plane.storage.qcow.disk_size Disk size for kvm guests 100G hcp.data_plane.storage.qcow.pool_path Storage pool path for creating disks /home/images/ hcp.data_plane.storage.dasd dasd disks for kvm guests /disk hcp.data_plane.kvm.ip_params.static_ip.enabled true or false - use static IPs for agents using NMState true hcp.data_plane.kvm.ip_params.static_ip.ip List of IP addresses for agents 192.168.10.1 hcp.data_plane.kvm.ip_params.static_ip.interface Interface for agents for configuring NMStateConfig eth0 hcp.data_plane.kvm.ip_params.mac List of macaddresses for the agents. Configure in DHCP if you are using dynamic IPs for Agents. - 52:54:00:ba:d3:f7 hcp.data_plane.zvm.network_mode Network mode for zvm nodes Supported modes: vswitch,osa, RoCE vswitch hcp.data_plane.zvm.disk_type Disk type for zvm nodes Supported disk types: fcp, dasd dasd hcp.data_plane.zvm.subnetmask Subnet mask for compute nodes 255.255.255.0 hcp.data_plane.zvm.gateway Gateway for compute nodes 192.168.10.1 hcp.data_plane.zvm.nodes Set of parameters for zvm nodes Give the details of each zvm node here hcp.data_plane.zvm.name Name of the zVM guest m1317002 hcp.data_plane.zvm.nodes.host Host name of the zVM guests which we use to login 3270 console boem1317 hcp.data_plane.zvmnodes.user Username for zVM guests to login m1317002 hcp.data_plane.zvm.nodes.password password for the zVM guests to login password hcp.data_plane.zvm.nodes.interface.ifname Network interface name for zVM guests encbdf0 hcp.data_plane.zvm.nodes.interface.nettype Network type for zVM guests for network connectivity qeth hcp.data_plane.zvm.nodes.interface.subchannels subchannels for zVM guests interfaces 0.0.bdf0,0.0.bdf1,0.0.bdf2 hcp.data_plane.zvm.nodes.interface.options Configurations options layer2=1 hcp.data_plane.zvm.interface.ip IP addresses for to be used for zVM nodes 192.168.10.1 hcp.data_plane.zvm.nodes.dasd.disk_id Disk id for dasd disk to be used for zVM node 4404 hcp.data_plane.zvm.nodes.lun Disk details of fcp disk to be used for zVM node 4404","title":"2 Set Variables (group_vars)"},{"location":"set-variables-group-vars/#step-2-set-variables-group_vars","text":"","title":"Step 2: Set Variables (group_vars)"},{"location":"set-variables-group-vars/#overview","text":"In a text editor of your choice, open the template of the environment variables file . Make a copy of it called all.yaml and paste it into the same directory with its template. all.yaml is your master variables file and you will likely reference it many times throughout the process. The default inventory can be found at inventories/default . The variables marked with an X are required to be filled in. Many values are pre-filled or are optional. Optional values are commented out; in order to use them, remove the # and fill them in. This is the most important step in the process. Take the time to make sure everything here is correct. Note on YAML syntax : Only the lowest value in each hierarchicy needs to be filled in. For example, at the top of the variables file env and z don't need to be filled in, but the cpc_name does. There are X's where input is required to help you with this. 
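As a concrete illustration of the YAML syntax note above: only the lowest-level (leaf) keys take values, while parent keys such as env and z are just structure. A minimal hedged sketch using variables from the tables that follow:

env:                          # no value needed here
  z:                          # no value needed here either
    high_availability: False
    lpar1:
      create: True
      hostname: kvm-host-01   # leaf values are the ones to fill in (the X's)
      ip: 192.168.10.1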
Scroll the table to the right to see examples for each variable.","title":"Overview"},{"location":"set-variables-group-vars/#1-controller","text":"Variable Name Description Example env.installation_type Can be of type kvm or lpar. Some packages will be ignored for installation in case of non lpar based installation. kvm env.controller.sudo_pass The password to the machine running Ansible (localhost). This will only be used for two things. To ensure you've installed the pre-requisite packages if you're on Linux, and to add the login URL to your /etc/hosts file. Pas$w0rd!","title":"1 - Controller"},{"location":"set-variables-group-vars/#2-lpars","text":"Variable Name Description Example env.z.high_availability Is this cluster spread across three LPARs? If yes, mark True. If not (just in one LPAR), mark False True env.z.ip_forward This variable specifies if ip forwarding is enabled or not if NAT network is selected. If ip_forwarding is set to 0, the installed OCP cluster will not be able to access external services because using NAT keep the nodes isolated. This parameter will be set via sysctl on the KVM host. The change of the value is instantly active. This setting will be configured during 3_setup_kvm playbook. If NAT will be configured after 3_setup_kvm playbook, the setup needs to be done manually before bastion is being created, configured or reconfigured by running the 3_setup_kvm playbook with parameter: --tags cfg_ip_forward 1 env.z.lpar1.create To have Ansible create an LPAR and install RHEL on it for the KVM host, mark True. If using a pre-existing LPAR with RHEL already installed, mark False. True env.z.lpar1.hostname The hostname of the KVM host. kvm-host-01 env.z.lpar1.ip The IPv4 address of the KVM host. 192.168.10.1 env.z.lpar1.user Username for Linux admin on KVM host 1. Recommended to run as a non-root user with sudo access. admin env.z.lpar1.pass The password for the user that will be created or exists on the KVM host. ch4ngeMe! env.z.lpar2.create To create a second LPAR and install RHEL on it to act as another KVM host, mark True. If using pre-existing LPAR(s) with RHEL already installed, mark False. True env.z.lpar2.hostname (Optional) The hostname of the second KVM host. kvm-host-02 env.z.lpar2.ip (Optional) The IPv4 address of the second KVM host. 192.168.10.2 env.z.lpar2.user Username for Linux admin on KVM host 2. Recommended to run as a non-root user with sudo access. admin env.z.lpar2.pass (Optional) The password for the admin user on the second KVM host. ch4ngeMe! env.z.lpar3.create To create a third LPAR and install RHEL on it to act as another KVM host, mark True. If using pre-existing LPAR(s) with RHEL already installed, mark False. True env.z.lpar3.hostname (Optional) The hostname of the third KVM host. kvm-host-03 env.z.lpar3.ip (Optional) The IPv4 address of the third KVM host. 192.168.10.3 env.z.lpar3.user Username for Linux admin on KVM host 3. Recommended to run as a non-root user with sudo access. admin env.z.lpar3.pass (Optional) The password for the admin user on the third KVM host. ch4ngeMe!","title":"2 - LPAR(s)"},{"location":"set-variables-group-vars/#3-file-server","text":"Variable Name Description Example env.file_server.ip IPv4 address for the file server that will be used to pass config files and iso to KVM host LPAR(s) and bastion VM during their first boot. 192.168.10.201 env.file_server.port The port on which the file server is listening. Will be embedded into all download urls. Defaults to protocol default port. 
Keep empty '' to use default port 10000 env.file_server.user Username to connect to the file server. Must have sudo and SSH access. user1 env.file_server.pass Password to connect to the file server as above user. user1pa$s! env.file_server.protocol Protocol used to serve the files, either 'ftp' or 'http' http env.file_server.iso_os_variant The os variant for the bastion kvm to be created rhel8.8 env.file_server.iso_mount_dir Directory path relative to the HTTP/FTP accessible directory where RHEL ISO is mounted. For example, if the FTP root is at /home/user1 and the ISO is mounted at /home/user1/RHEL/8.7 then this variable would be RHEL/8.7 - no slash before or after. RHEL/8.7 env.file_server.cfgs_dir Directory path relative to to the HTTP/FTP accessible directory where configuration files can be stored. For example, if FTP root is /home/user1 and you would like to store the configs at /home/user1/ocpz-config then this variable would be ocpz-config. No slash before or after. ocpz-config","title":"3 - File Server"},{"location":"set-variables-group-vars/#4-red-hat-info","text":"Variable Name Description Example env.redhat.username Red Hat username with a valid license or free trial to Red Hat OpenShift Container Platform (RHOCP), which comes with necessary licenses for Red Hat Enterprise Linux (RHEL) and Red Hat CoreOS (RHCOS). redhat.user env.redhat.password Password to Red Hat above user's account. Used to auto-attach necessary subscriptions to KVM Host, bastion VM, and pull live images for OpenShift. rEdHatPa$s! env.redhat.manage_subscription True or False. Would you like to subscribe the server with Red Hat? True env.redhat.pull_secret Pull secret for OpenShift, comes from Red Hat's Hybrid Cloud Console . Make sure to enclose in 'single quotes'. '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\"}}}'","title":"4 - Red Hat Info"},{"location":"set-variables-group-vars/#5-bastion","text":"Variable Name Description Example env.bastion.create True or False. Would you like to create a bastion KVM guest to host essential infrastructure services like DNS, load balancer, firewall, etc? Can de-select certain services with the env.bastion.options variables below. True env.bastion.vm_name Name of the bastion VM. Arbitrary value. bastion env.bastion.resources.disk_size How much of the storage pool would you like to allocate to the bastion (in Gigabytes)? Recommended 30 or more. 30 env.bastion.resources.ram How much memory would you like to allocate the bastion (in megabytes)? Recommended 4096 or more 4096 env.bastion.resources.swap How much swap storage would you like to allocate the bastion (in megabytes)? Recommended 4096 or more. 4096 env.bastion.resources.vcpu How many virtual CPUs would you like to allocate to the bastion? Recommended 4 or more. 4 env.bastion.resources.vcpu_model_option Configure the CPU model and CPU features exposed to the guest --cpu host env.bastion.networking.ip IPv4 address for the bastion. 192.168.10.3 env.bastion.networking.ipv6 IPv6 address for the bastion if use_ipv6 variable is 'True'. fd00::3 env.bastion.networking.mac MAC address for the bastion if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.bastion.networking.hostname Hostname of the bastion. Will be combined with env.bastion.networking.base_domain to create a Fully Qualified Domain Name (FQDN). ocpz-bastion env.bastion.networking.base_domain Base domain that, when combined with the hostname, creates a fully-qualified domain name (FQDN) for the bastion? 
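A sketch of the env.file_server block described above, using the example values from this table; note that iso_mount_dir and cfgs_dir are given relative to the served root, with no leading or trailing slash:

env:
  file_server:
    ip: 192.168.10.201
    port: ''                  # keep empty to use the protocol's default port
    user: user1
    pass: 'user1pa$s!'
    protocol: http            # 'ftp' or 'http'
    iso_os_variant: rhel8.8
    iso_mount_dir: RHEL/8.7   # e.g. ISO mounted at <served root>/RHEL/8.7
    cfgs_dir: ocpz-config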
ihost.com env.bastion.networking.subnetmask Subnet of the bastion. 255.255.255.0 env.bastion.networking.gateway IPv4 address of the bastion's gateway server. 192.168.10.0 env.bastion.networking.ipv6_gateway IPv6 address of the bastion's gateway server. fd00::1 env.bastion.networking.ipv6_prefix IPv6 prefix. 64 env.bastion.networking.nameserver1 IPv4 address of the server that resolves the bastion's hostname. 192.168.10.200 env.bastion.networking.nameserver2 (Optional) A second IPv4 address that resolves the bastion's hostname. 192.168.10.201 env.bastion.networking.forwarder What IPv4 address will be used to make external DNS calls for the bastion? Can use 1.1.1.1 or 8.8.8.8 as defaults. 8.8.8.8 env.bastion.networking.interface Name of the networking interface on the bastion from Linux's perspective. Most likely enc1. enc1 env.bastion.access.user What would you like the admin's username to be on the bastion? If root, make pass and root_pass vars the same. admin env.bastion.access.pass The password to the bastion's admin user. If using root, make pass and root_pass vars the same. cH4ngeM3! env.bastion.access.root_pass The root password for the bastion. If using root, make pass and root_pass vars the same. R0OtPa$s! env.bastion.options.dns Would you like the bastion to host the DNS information for the cluster? True or False. If false, resolution must come from elsewhere in your environment. Make sure to add IP addresses for KVM hosts, bastion, bootstrap, control, compute nodes, AND api, api-int and *.apps as described here in section \"User-provisioned DNS Requirements\" Table 5. If True this will be done for you in the dns and check_dns roles. True env.bastion.options.loadbalancer.on_bastion Would you like the bastion to host the load balancer (HAProxy) for the cluster? True or False (boolean). If false, this service must be provided elsewhere in your environment, and the public and private IP of the load balancer must be provided in the following two variables. True env.bastion.options.loadbalancer.public_ip (Only required if env.bastion.options.loadbalancer.on_bastion is True). The public IPv4 address for your environment's load balancer. api, apps, *.apps must use this. 192.168.10.50 env.bastion.options.loadbalancer.private_ip (Only required if env.bastion.options.loadbalancer.on_bastion is True). The private IPv4 address for your environment's load balancer. api-int must use this. 10.24.17.12","title":"5 - Bastion"},{"location":"set-variables-group-vars/#6-cluster-networking","text":"Variable Name Description Example env.cluster.networking.metadata_name Name to describe the cluster as a whole, can be anything if DNS will be hosted on the bastion. If DNS is not on the bastion, must match your DNS configuration. Will be combined with the base_domain and hostnames to create Fully Qualified Domain Names (FQDN). ocpz env.cluster.networking.base_domain The site name, i.e. where the cluster is being hosted. This will be combined with the metadata_name and hostnames to create FQDNs. host.com env.bastion.networking.ipv6_gateway IPv6 address of the bastion's gateway server. fd00::1 env.bastion.networking.ipv6_prefix IPv6 prefix. 64 env.cluster.networking.nameserver1 IPv4 address that the cluster gets its hostname resolution from. If env.bastion.options.dns is True, this should be the IP address of the bastion. 192.168.10.200 env.cluster.networking.nameserver2 (Optional) A second IPv4 address from which the cluster gets its hostname resolution. If env.bastion.options.dns is True, this should be left commented out.
192.168.10.201 env.cluster.networking.forwarder What IPv4 address will be used to make external DNS calls for the cluster? Can use 1.1.1.1 or 8.8.8.8 as defaults. 8.8.8.8 env.cluster.networking.interface Name of the networking interface on the bastion from Linux's perspective. Most likely enc1. enc1","title":"6 - Cluster Networking"},{"location":"set-variables-group-vars/#7-bootstrap-node","text":"Variable Name Description Example env.cluster.nodes.bootstrap.disk_size How much disk space do you want to allocate to the bootstrap node (in Gigabytes)? Bootstrap node is temporary and will be brought down automatically when its job completes. 120 or more recommended. 120 env.cluster.nodes.bootstrap.ram How much memory would you like to allocate to the temporary bootstrap node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.bootstrap.vcpu How many virtual CPUs would you like to allocate to the temporary bootstrap node? Recommended 4 or more. 4 env.cluster.nodes.bootstrap.vcpu_model_option Configure the CPU model and CPU features exposed to the guest. --cpu host env.cluster.nodes.bootstrap.vm_name Name of the temporary bootstrap node VM. Arbitrary value. bootstrap env.cluster.nodes.bootstrap.ip IPv4 address of the temporary bootstrap node. 192.168.10.4 env.cluster.nodes.bootstrap.ipv6 IPv6 address for the bootstrap if use_ipv6 variable is 'True'. fd00::4 env.cluster.nodes.bootstrap.mac MAC address for the bootstrap node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.bootstrap.hostname Hostname of the temporary bootstrap node. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualified Domain Name (FQDN). bootstrap-ocpz","title":"7 - Bootstrap Node"},{"location":"set-variables-group-vars/#8-control-nodes","text":"Variable Name Description Example env.cluster.nodes.control.disk_size How much disk space do you want to allocate to each control node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.control.ram How much memory would you like to allocate to each control node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.control.vcpu How many virtual CPUs would you like to allocate to each control node? Recommended 4 or more. 4 env.cluster.nodes.control.vcpu_model_option Configure the CPU model and CPU features exposed to the guest. --cpu host env.cluster.nodes.control.vm_name Name of the control node VMs. Arbitrary values. Usually exactly 3 are used. Must match the total number of IP addresses and hostnames for control nodes. Use provided list format. control-1control-2control-3 env.cluster.nodes.control.ip IPv4 address of the control nodes. Use provided list formatting. 192.168.10.5192.168.10.6192.168.10.7 env.cluster.nodes.control.ipv6 IPv6 address for the control nodes. Use provided list formatting (if use_ipv6 variable is 'True'). fd00::5fd00::6fd00::7 env.cluster.nodes.control.mac MAC address for the control node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.control.hostname Hostnames for control nodes. Must match the total number of IP addresses for control nodes (usually 3). If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualified Domain Name (FQDN).
control-01control-02control-03","title":"8 - Control Nodes"},{"location":"set-variables-group-vars/#9-compute-nodes","text":"Variable Name Description Example env.cluster.nodes.compute.disk_size How much disk space do you want to allocate to each compute node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.compute.ram How much memory would you like to allocate to each compute node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.compute.vcpu How many virtual CPUs would you like to allocate to each compute node? Recommended 2 or more. 2 env.cluster.nodes.compute.vcpu_model_option Configure the CPU model and CPU features exposed to the guest. --cpu host env.cluster.nodes.compute.vm_name Name of the compute node VMs. Arbitrary values. This list can be expanded to any number of nodes, minimum 2. Must match the total number of IP addresses and hostnames for compute nodes. Use provided list format. compute-1compute-2 env.cluster.nodes.compute.ip IPv4 address of the compute nodes. Must match the total number of VM names and hostnames for compute nodes. Use provided list formatting. 192.168.10.8192.168.10.9 env.cluster.nodes.compute.ipv6 IPv6 address for the compute nodes. Use provided list formatting (if use_ipv6 variable is 'True'). fd00::8fd00::9 env.cluster.nodes.compute.mac MAC address for the compute node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B env.cluster.nodes.compute.hostname Hostnames for compute nodes. Must match the total number of IP addresses and VM names for compute nodes. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition. This will be combined with the metadata_name and base_domain to create a Fully Qualified Domain Name (FQDN). compute-01compute-02","title":"9 - Compute Nodes"},{"location":"set-variables-group-vars/#10-infra-nodes","text":"Variable Name Description Example env.cluster.nodes.infra.disk_size (Optional) Set up compute nodes that are made for infrastructure workloads (ingress, monitoring, logging)? How much disk space do you want to allocate to each infra node (in Gigabytes)? 120 or more recommended. 120 env.cluster.nodes.infra.ram (Optional) How much memory would you like to allocate to each infra node (in megabytes)? Recommended 16384 or more. 16384 env.cluster.nodes.infra.vcpu (Optional) How many virtual CPUs would you like to allocate to each infra node? Recommended 2 or more. 2 env.cluster.nodes.infra.vcpu_model_option (Optional) Configure the CPU model and CPU features exposed to the guest. --cpu host env.cluster.nodes.infra.vm_name (Optional) Name of additional infra node VMs. Arbitrary values. This list can be expanded to any number of nodes, minimum 2. Must match the total number of IP addresses and hostnames for infra nodes. Use provided list format. infra-1infra-2 env.cluster.nodes.infra.ip (Optional) IPv4 address of the infra nodes. This list can be expanded to any number of nodes, minimum 2. Use provided list formatting. 192.168.10.10192.168.10.11 env.cluster.nodes.infra.ipv6 (Optional) IPv6 address of the infra nodes. This list can be expanded to any number of nodes, minimum 2. Use provided list formatting (if use_ipv6 variable is 'True'). fd00::10fd00::11 env.cluster.nodes.infra.hostname (Optional) Hostnames for infra nodes. Must match the total number of IP addresses for infra nodes. If DNS is hosted on the bastion, this can be anything. If DNS is hosted elsewhere, this must match DNS definition.
This will be combined with the metadata_name and base_domain to create a Fully Qualififed Domain Name (FQDN). infra-01infra-02","title":"10 - Infra Nodes"},{"location":"set-variables-group-vars/#11-optional-packages","text":"Variable Name Description Example env.pkgs.galaxy A list of Ansible Galaxy collections that will be installed during the setup playbook. The collections listed are required. Feel free to add more as needed, just make sure to follow the same list format. community.general env.pkgs.controller A list of packages that will be installed on the machine running Ansible during the setup playbook. Feel free to add more as needed, just make sure to follow the same list format. openssh env.pkgs.kvm A list of packages that will be installed on the KVM Host during the setup_kvm_host playbook. Feel free to add more as needed, just make sure to follow the same list format. qemu-kvm env.pkgs.bastion A list of packages that will be installed on the bastion during the setup_bastion playbook. Feel free to add more as needed, just make sure to follow the same list format. haproxy","title":"11 - (Optional) Packages"},{"location":"set-variables-group-vars/#12-openshift-settings","text":"Variable Name Description Example env.install_config.api_version Kubernetes API version for the cluster. These install_config variables will be passed to the OCP install_config file. This file is templated in the get_ocp role during the setup_bastion playbook. To make more fine-tuned adjustments to the install_config, you can find it at roles/get_ocp/templates/install-config.yaml.j2 v1 env.install_config.compute.architecture Computing architecture for the compute nodes. Must be s390x for clusters on IBM zSystems. s390x env.install_config.compute.hyperthreading Enable or disable hyperthreading on compute nodes. Recommended enabled. Enabled env.install_config.control.architecture Computing architecture for the control nodes. Must be s390x for clusters on IBM zSystems, amd64 for Intel or AMD systems, and arm64 for ARM servers. s390x env.install_config.control.hyperthreading Enable or disable hyperthreading on control nodes. Recommended enabled. Enabled env.install_config.cluster_network.cidr IPv4 block in Internal cluster networking in Classless Inter-Domain Routing (CIDR) notation. Recommended to keep as is. 10.128.0.0/14 env.install_config.cluster_network.host_prefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. 23 env.install_config.cluster_network.type The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes (default). OVNKubernetes env.install_config.service_network The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. 172.30.0.0/16 env.install_config.machine_network The IP address block for Nodes IP Pool. The default value is 192.168.122.0/24 For NAT Network Mode. In case of MacvTap it will be depend on Inteface IP assignment. An array with an IP address block in CIDR format. 192.168.122.0/24 env.install_config.fips True or False (boolean) for whether or not to use the United States' Federal Information Processing Standards (FIPS). Not yet certified on IBM zSystems. 
Enclosed in 'single quotes'. 'false'","title":"12 - OpenShift Settings"},{"location":"set-variables-group-vars/#13-optional-proxy","text":"Variable Name Description Example env.proxy.http (Optional) A proxy URL to use for creating HTTP connections outside the cluster. Will be used in the install-config and applied to other Ansible hosts unless set otherwise in no_proxy below. Must follow this pattern: http://username:pswd@ip:port http://ocp-admin:Pa$sw0rd@9.72.10.1:80 env.proxy.https (Optional) A proxy URL to use for creating HTTPS connections outside the cluster. Will be used in the install-config and applied to other Ansible hosts unless set otherwise in no_proxy below. Must follow this pattern: https://username:pswd@ip:port https://ocp-admin:Pa$sw0rd@9.72.10.1:80 env.proxy.no (Optional) A comma-separated list (no spaces) of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. When using a proxy, all necessary IPs and domains for your cluster will be added automatically. See roles/get_ocp/templates/install-config.yaml.j2 for more details on the template. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all listed destinations. example.com,192.168.10.1","title":"13 - (Optional) Proxy"},{"location":"set-variables-group-vars/#14-optional-misc","text":"Variable Name Description Example env.language What language would you like Red Hat Enterprise Linux to use? In UTF-8 language code. Available languages and their corresponding codes can be found here , in the \"Locale\" column of Table 2.1. en_US.UTF-8 env.timezone Which timezone would you like Red Hat Enterprise Linux to use? A list of available timezone options can be found here . America/New_York env.keyboard Which keyboard layout would you like Red Hat Enterprise Linux to use? us env.ansible_key_name (Optional) Name of the SSH key that Ansible will use to connect to hosts. ansible-ocpz env.ocp_key_name Comment to describe the SSH key used for OCP. Arbitrary value. OCPZ-01 key env.vnet_name (Optional) Name of the bridged virtual network that will be created on the KVM host if network mode is not set to NAT. In case of NAT network mode, the name of the NAT network definition used to create the nodes (usually it is 'default'). If NAT is being used and a jumphost is needed, the parameters network_mode, jumphost.name, jumphost.user and jumphost.pass must be specified, too. For the default (NAT) network, verify that the configured IP ranges do not interfere with the IPs defined for the control and compute nodes. Modify the default network (dhcp range setting) to prevent issues with VMs using dhcp and OCP nodes having fixed IPs. The default is to create a bridge network. macvtap-net env.network_mode (Optional) In case the network mode will be NAT and the installation will be executed from remote (e.g. your laptop), a jumphost needs to be defined to let the installation access the bastion host. If macvtap is being used for networking, this variable should be empty. NAT env.use_ipv6 If IPv6 addresses should be assigned to the control and compute nodes, this variable should be true (default) and the matching IPv6 settings should be specified. True env.use_dhcp If a DHCP service should be used to get an IP address, this variable should be true and the matching MAC address must be specified. False env.jumphost.name (Optional) If env.network_mode is set to 'NAT' the name of the jumphost (e.g.
the name of KVM host if used as jumphost) should be specified. kvm-host-01 env.jumphost.ip (Optional) The ip of the jumphost. 192.168.10.1 env.jumphost.user (Optional) The user name to login to the jumphost. admin env.jumphost.pass (Optional) The password for user to login to the jumphost. ch4ngeMe! env.jumphost.path_to_keypair (Optional) The absolute path to the public key file on the jumphost to be copied to the bastion. /home/admin/.ssh/id_rsa.pub","title":"14 - (Optional) Misc"},{"location":"set-variables-group-vars/#15-ocp-and-rhcos-coreos","text":"Variable Name Description Example ocp_download_url Link to the mirror for the OpenShift client and installer from Red Hat. https://mirror.openshift.com/pub/openshift-v4/multi/clients/ocp/4.13.1/s390x/ ocp_client_tgz OpenShift client filename (tar.gz). openshift-client-linux.tar.gz ocp_install_tgz OpenShift installer filename (tar.gz). openshift-install-linux.tar.gz rhcos_download_url Link to the CoreOS files to be used for the bootstrap, control and compute nodes. Feel free to change to a different version. https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.12/4.12.3/ rhcos_os_variant CoreOS base OS. Use the OS string as defined in 'osinfo-query os -f short-id' rhel8.6 rhcos_live_kernel CoreOS kernel filename to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-kernel-s390x rhcos_live_initrd CoreOS initramfs to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-initramfs.s390x.img rhcos_live_rootfs CoreOS rootfs to be used for the bootstrap, control and compute nodes. rhcos-4.12.3-s390x-live-rootfs.s390x.img","title":"15 - OCP and RHCOS (CoreOS)"},{"location":"set-variables-group-vars/#16-optional-disconnected-cluster-setup","text":"Variable Name Description Example disconnected.enabled True or False, to enable disconnected mode False disconnected.registry.url String containing url of disconnected registry with or without port and without protocol registry.tt.testing:5000 disconnected.registry.pull_secret String containing pull secret of the disconnected registry to be applied on the cluster . Make sure to enclose pull_secret in 'single quotes' and it has appropriate pull access. '{\"auths\":{\"registry.tt..testing:5000\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"test.user@example.com\"}}}' disconnected.registry.mirror_pull_ecret String containing pull secret to use for mirroring. Contains Red Hat secret and registry pull secret. Make sure to enclose pull_secret in 'single quotes' and must be able to push to mirror registry. '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\", \"registry.tt..testing:5000\":...user@example.com\"}}}' disconnected.registry.ca_trusted True or False to indicate that mirror registry CA is implicitly trusted or needs to be made trusted on mirror host and cluster. 
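The OCP and RHCOS download variables above appear to be flat, top-level keys (no env. prefix), judging by their names. Filled in with the example values from this table they might look like the following sketch:

ocp_download_url: https://mirror.openshift.com/pub/openshift-v4/multi/clients/ocp/4.13.1/s390x/
ocp_client_tgz: openshift-client-linux.tar.gz
ocp_install_tgz: openshift-install-linux.tar.gz
rhcos_download_url: https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.12/4.12.3/
rhcos_os_variant: rhel8.6                # as reported by 'osinfo-query os -f short-id'
rhcos_live_kernel: rhcos-4.12.3-s390x-live-kernel-s390x
rhcos_live_initrd: rhcos-4.12.3-s390x-live-initramfs.s390x.img
rhcos_live_rootfs: rhcos-4.12.3-s390x-live-rootfs.s390x.img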
False disconnected.registry.ca_cert Multiline string containing the mirror registry CA bundle -----BEGIN CERTIFICATE-----MIIDqDCCApCgAwIBAgIULL+d1HTYsiP+8jeWnqBis3N4BskwDQYJKoZIhvcNAQEF...-----END CERTIFICATE----- disconnected.mirroring.host.name String containing the hostname of the host, which will be used for mirroring mirror-host-1 disconnected.mirroring.host.ip String containing ip of the host, which will be used for mirroring 192.168.10.99 disconnected.mirroring.host.user String containing the username of the host, which will be used for mirroring mirroruser disconnected.mirroring.host.pass String containing the password of the host, which will be used for mirroring mirrorpassword disconnected.mirroring.file_server.clients_dir Directory path relative to the HTTP/FTP accessible directory on env.file_server where client binary tarballs are kept clients disconnected.mirroring.file_server.oc_mirror_tgz Name of oc-mirror tarball on env.file_server in disconnected.mirroring.file_server.clients_dir oc-mirror.tar.gz disconnected.mirroring.legacy.platform True or False if the platform should be mirrored using oc adm release mirror . False disconnected.mirroring.legacy.ocp_quay_release_image_tag The tag of the release image quay.io/openshift-release-dev/ocp-release to mirror and use 4.13.1-s390x disconnected.mirroring.legacy.ocp_org The org part of the repo on the mirror registry where the release image will be pushed ocp4 disconnected.mirroring.legacy.ocp_repo The repo part of the repo on the mirror registry where the release image will be pushed openshift4 disconnected.mirroring.legacy.ocp_tag The tag part of the repo on the mirror registry where the release image will be pushed. Full image would be as below.: disconnected.registry.url/disconnected.mirroring.legacy.ocp_org/disconnected...ocp_repo:disconnected..ocp_tag v4.13.1 disconnected.mirroring.oc_mirror.release_image_tag The ocp release image tag you want to install the cluster with. Used when legacy platform mirroring is disabled and disconnected.mirroring.oc_mirror.image_set contains platform entries. 4.13.1-multi disconnected.mirroring.oc_mirror.oc_mirror_args.continue_on_error True or False to give --continue-on-error flag to oc-mirror False disconnected.mirroring.oc_mirror.oc_mirror_args.source_skip_tls True or False to give --source-skip-tls flag to oc-mirror False disconnected.mirroring.oc_mirror.post_mirror.mapping.replace.enabled True or False to replace values in mapping.txt generated by oc-mirror. This also does a manual repush of the images in mapping.txt . False disconnected.mirroring.oc_mirror.post_mirror.mapping.replace.list List of regexp and replace where every string/regular expression gets replaced by corresponding replace value. regexp: interal-url.com replace: external-url.com disconnected.mirroring.oc_mirror.image_set YAML fields containing a standard oc-mirror image set with some minor changes to schema. Differences are documented as needed. Used to generate final image set. see template disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.enabled True or False to use registry storage backend for pushing mirrored content directly to the registry. Currently only this backend is supported. True disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.imageURL.org The org part of registry imageURL from standard image set. mirror disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.imageURL.repo The repo part of registry imageURL from standard image set. 
Final imageURL will be as below: disconnected.registry.url/disconnected.mirroring.oc_mirror.image_set.storageConfig .registry.imageURL.org/disconnected...imageURL.repo oc-mirror-metadata disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.skipTLS True of False same purpose served as in standard image set i.e. skip the tls for the registry during mirroring. false disconnected.mirrroing.oc_mirror.image_set.mirror YAML containing a list of what needs to be mirrored. See the oc mirror image set documentation. see oc-mirror image set documentation","title":"16 - (Optional) Disconnected cluster setup"},{"location":"set-variables-group-vars/#17-optional-create-compute-node-in-a-day-2-operation","text":"Variable Name Description Example day2_compute_node.vm_name Name of the compute node VM. compute-4 day2_compute_node.vm_hostname Hostnames for compute node. compute-4 day2_compute_node.vm_vm_ip IPv4 address of the compute node. 192.168.10.99 day2_compute_node.vm_vm_ipv6 IPv6 address of the compute node. fd00::99 day2_compute_node.vm_mac MAC address of the compute node if use_dhcp variable is 'True'. 52:54:00:18:1A:2B day2_compute_node.vm_interface The network interface used for given IP addresses of the compute node. enc1 day2_compute_node.hostname The hostname of the KVM host kvm-host-01 day2_compute_node.host_user KVM host user which is used to create the VM root day2_compute_node.host_arch KVM host architecture. s390x","title":"17 - (Optional) Create compute node in a day-2 operation"},{"location":"set-variables-group-vars/#18-optional-agent-based-installer","text":"Variable Name Description Example abi.flag This is the flag, Will be used to identify during execution. Few checks in the playbook will be depend on this (default value will be False) True abi.ansible_workdir This will be work directory name, it will keep required data that need to be present during or after execution ansible_workdir abi.ocp_installer_version Version will contain value of openshift-installer binary version user desired to be used '4.15.0-rc.8' abi.ocp_installer_url This is the base url of openshift installer binary it will remain same as static value, User Do not need to give value until user wants to change the mirror 'https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/'","title":"18 - (Optional) Agent Based Installer"},{"location":"set-variables-group-vars/#hosted-control-plane-optional","text":"Variable Name Description Example hcp.compute_node_type Select the compute node type for HCP , either zKVM or zVM zvm hcp.mgmt_cluster_nameserver IP Address of Nameserver of Management Cluster 192.168.10.1 hcp.oc_url URL for OC Client that you want to install on the host https://... ..openshift-client-linux-4.13.0-ec.4.tar.gz hcp.ansible_key_name ssh key name ansible-ocpz hcp.pkgs list of packages for different hosts hcp.mce.version version for multicluster-engine Operator 2.4 hcp.mce.instance_name name of the MultiClusterEngine instance engine hcp.mce.delete true or false - deletes mce and related resources while running deletion playbook true hcp.asc.url_for_ocp_release_file Add URL for OCP release.txt File https://... ..../release.txt hcp.asc.db_volume_size DatabaseStorage Volume Size 10Gi hcp.asc.fs_volume_size FileSystem Storage Volume Size 10Gi hcp.asc.ocp_version OCP Version for AgentServiceConfig 4.13.0-ec.4 hcp.asc.iso_url Give URL for ISO image https://... ...s390x-live.s390x.iso hcp.asc.root_fs_url Give URL for rootfs image https://... ... 
live-rootfs.s390x.img hcp.asc.mce_namespace Namespace where your Multicluster Engine Operator is installed. Recommended Namespace for MCE is 'multicluster-engine'. Change this only if MCE is installed in other namespace. multicluster-engine hcp.control_plane.high_availabiliy Availability for Control Plane true hcp.control_plane.clusters_namespace Namespace for Creating Hosted Control Plane clusters hcp.control_plane.hosted_cluster_name Name for the Hosted Cluster hosted0 hcp.control_plane.basedomain Base domain for Hosted Cluster example.com hcp.control_plane.pull_secret_file Path for the pull secret No need to change this as we are copying the pullsecret to same file /root/ansible_workdir/auth_file /root/ansible_workdir/auth_file hcp.control_plane.ocp_release_image OCP Release version for Hosted Control Cluster and Nodepool 4.13.0-rc.4-multi hcp.control_plane.arch Architecture for InfraEnv and AgentServiceConfig\" s390x hcp.control_plane.additional_flags Any additional flags for creating hcp ( In hcp create cluster agent command ) --fips hcp.control_plane.pull_secret Pull Secret of Management Cluster Make sure to enclose pull_secret in 'single quotes' '{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3Blb...4yQQ==\",\"email\":\"redhat.user@gmail.com\"}}}' hcp.bastion_params.create true or false - create bastion with the provided IP true hcp.bastion_params.ip IPv4 address for bastion of Hosted Cluster 192.168.10.1 hcp.bastion_params.user User for bastion of Hosted Cluster root hcp.bastion_params.host IPv4 address of KVM host (kvm host where you want to run all oc commands and create VMs) 192.168.10.1 hcp.bastion_params.host_user User for KVM host root hcp.bastion_params.hostname Hostname for bastion bastion hcp.bastion_params.base_domain DNS base domain for the bastion. ihost.com hcp.bastion_params.nameserver Nameserver for creating bastion 192.168.10.1 hhcp.bastion_params.gateway Gateway IP for creating bastion This is how it well be used ip= :: : 192.168.10.1 hcp.bastion_params.subnet_mask IPv4 address of subnetmask 255.255.255.0 hcp.bastion_params.interface Interface for bastion enc1 hcp.bastion_params.file_server.ip IPv4 address for the file server that will be used to pass config files and iso to KVM host LPAR(s) and bastion VM during their first boot. 192.168.10.201 hcp.bastion_params.file_server.protocol Protocol used to serve the files, either 'ftp' or 'http' http hcp.bastion_params.file_server.iso_mount_dir Directory path relative to the HTTP/FTP accessible directory where RHEL ISO is mounted. For example, if the FTP root is at /home/user1 and the ISO is mounted at /home/user1/RHEL/8.7 then this variable would be RHEL/8.7 - no slash before or after. RHEL/8.7 hcp.bastion_params.os_variant rhel os variant for creating bastion 8.7 hcp.bastion_params.disk rhel os variant for creating bastion 8.7 hcp.bastion_params.network_name rhel os variant for creating bastion 8.7 hcp.bastion_params.networking_device The network interface card from Linux's perspective. Usually enc and then a number that comes from the dev_num of the network adapter. enc1100 hcp.bastion_params.language What language would you like Red Hat Enterprise Linux to use? In UTF-8 language code. Available languages and their corresponding codes can be found here, in the \"Locale\" column of Table 2.1. en_US.UTF-8 hcp.bastion_params.timezone Which timezone would you like Red Hat Enterprise Linux to use? A list of available timezone options can be found here. 
America/New_York hcp.bastion_params.keyboard Which keyboard layout would you like Red Hat Enterprise Linux to use? us hcp.data_plane.compute_count Number of agents for the hosted cluster. The same number of compute nodes will be attached to the Hosted Control Plane 2 hcp.data_plane.vcpus vCPUs for compute nodes 4 hcp.data_plane.memory RAM for compute nodes 16384 hcp.data_plane.nameserver Nameserver for compute nodes 192.168.10.1 hcp.data_plane.storage.type Storage type for KVM guests (qcow/dasd) qcow hcp.data_plane.storage.qcow.disk_size Disk size for KVM guests 100G hcp.data_plane.storage.qcow.pool_path Storage pool path for creating disks /home/images/ hcp.data_plane.storage.dasd DASD disks for KVM guests /disk hcp.data_plane.kvm.ip_params.static_ip.enabled true or false - use static IPs for agents using NMState true hcp.data_plane.kvm.ip_params.static_ip.ip List of IP addresses for agents 192.168.10.1 hcp.data_plane.kvm.ip_params.static_ip.interface Interface for agents for configuring NMStateConfig eth0 hcp.data_plane.kvm.ip_params.mac List of MAC addresses for the agents. Configure in DHCP if you are using dynamic IPs for agents. - 52:54:00:ba:d3:f7 hcp.data_plane.zvm.network_mode Network mode for zVM nodes. Supported modes: vswitch, osa, RoCE vswitch hcp.data_plane.zvm.disk_type Disk type for zVM nodes. Supported disk types: fcp, dasd dasd hcp.data_plane.zvm.subnetmask Subnet mask for compute nodes 255.255.255.0 hcp.data_plane.zvm.gateway Gateway for compute nodes 192.168.10.1 hcp.data_plane.zvm.nodes Set of parameters for zVM nodes. Give the details of each zVM node here hcp.data_plane.zvm.name Name of the zVM guest m1317002 hcp.data_plane.zvm.nodes.host Host name of the zVM guest, used to log in to the 3270 console boem1317 hcp.data_plane.zvm.nodes.user Username for zVM guests to login m1317002 hcp.data_plane.zvm.nodes.password Password for the zVM guests to login password hcp.data_plane.zvm.nodes.interface.ifname Network interface name for zVM guests encbdf0 hcp.data_plane.zvm.nodes.interface.nettype Network type for zVM guests for network connectivity qeth hcp.data_plane.zvm.nodes.interface.subchannels Subchannels for zVM guest interfaces 0.0.bdf0,0.0.bdf1,0.0.bdf2 hcp.data_plane.zvm.nodes.interface.options Configuration options layer2=1 hcp.data_plane.zvm.interface.ip IP addresses to be used for zVM nodes 192.168.10.1 hcp.data_plane.zvm.nodes.dasd.disk_id Disk ID of the DASD disk to be used for the zVM node 4404 hcp.data_plane.zvm.nodes.lun Disk details of the FCP disk to be used for the zVM node 4404","title":"Hosted Control Plane ( Optional )"},{"location":"set-variables-host-vars/","text":"Step 3: Set Variables (host_vars) # Overview # Similar to the group_vars file, the host_vars files for each LPAR (KVM host) must be filled in. For each KVM host to be acted upon with Ansible, you must have a corresponding host_vars file named .yaml (e.g. ocpz1.yaml, ocpz2.yaml, ocpz3.yaml), so you must copy and rename the templates found in the host_vars folder accordingly. The variables marked with an X are required to be filled in. Many values are pre-filled or are optional. Optional values are commented out; in order to use them, remove the # and fill them in. Many of the variables in these host_vars files are only required if you are NOT using pre-existing LPARs with RHEL installed. See the Important Note below this first section for more details. This is the most important step in the process. Take the time to make sure everything here is correct. 
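To make the variable table below more concrete, here is a minimal sketch of what a host_vars file (for example, ocpz1.yaml) might look like for a pre-existing KVM host with RHEL installed. The nesting follows the dotted variable names documented below; all values are placeholders, and the templates in the host_vars folder remain the authoritative reference. ```yaml
# Minimal host_vars sketch for a pre-existing KVM host (placeholder values).
# Copy the template from the host_vars folder for the full, authoritative structure.
networking:
  hostname: kvm-host-01
  ip: 192.168.10.2
  subnetmask: 255.255.255.0
  gateway: 192.168.10.1
  nameserver1: 192.168.10.200
  device1: enc100
storage:
  pool_path: /home/kvm_admin/VirtualMachines
``` With a pre-existing LPAR, the optional CPC, HMC, LPAR, IFL, networking, storage and live disk sections described further down can stay commented out.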
Note on YAML syntax : Only the lowest value in each hierarchy needs to be filled in. For example, at the top of the variables file networking does not need to be filled in, but the hostname does. There are X's where input is required to help you with this. Scroll the table to the right to see examples for each variable. 1 - KVM Host # Variable Name Description Example networking.hostname The hostname of the LPAR with RHEL installed natively (the KVM host). kvm-host-01 networking.ip The IPv4 address of the LPAR with RHEL installed natively (the KVM host). 192.168.10.2 networking.ipv6 IPv6 address for the bastion if use_ipv6 variable is 'True'. fd00::3 networking.subnetmask The subnet that the LPAR resides in within your network. 255.255.255.0 networking.gateway The IPv4 address of the gateway to the network where the KVM host resides. 192.168.10.0 networking.ipv6_gateway IPv6 address of the bastion's gateway server. fd00::1 networking.ipv6_prefix IPv6 prefix. 64 networking.nameserver1 The IPv4 address from which the KVM host gets its hostname resolved. 192.168.10.200 networking.nameserver2 (Optional) A second IPv4 address from which the KVM host can get its hostname resolved. Used for high availability. 192.168.10.201 networking.device1 The network interface card from Linux's perspective. Usually enc and then a number that comes from the dev_num of the network adapter. enc100 networking.device2 (Optional) Another Linux network interface card. Usually enc and then a number that comes from the dev_num of the second network adapter. enc1 storage.pool_path The absolute path to a directory on your KVM host that will be used to store qcow2 images for the cluster and other installation artifacts. A sub-directory that matches your cluster's metadata name will be created here and will act as the cluster's libvirt storage pool directory. Note: all directories present in this path will be made executable for the 'qemu' group, as is required. /home/kvm_admin/VirtualMachines Important Note # You can skip the rest of the variables on this page IF you are using existing LPAR(s) that have RHEL already installed. If you are installing an LPAR-based cluster, then the information below must be provided and is not optional. You must create a host file corresponding to each LPAR node. Since this is how most production deployments on-prem are done on IBM zSystems, these variables have been marked as optional. With pre-existing LPARs with RHEL installed, you can also skip the 1_create_lpar.yaml and 2_create_kvm_host.yaml playbooks. Make sure to still do 0_setup.yaml first though, then skip to 3_setup_kvm_host.yaml . In the scenario of an LPAR-based installation you can skip 1_create_lpar.yaml and 2_create_kvm_host.yaml . You can also optionally skip 3_setup_kvm_host.yaml and 4_create_bastion.yaml unless you are planning on having the bastion on the same host. In case of an LPAR-based installation, one is expected to have a tessia live disk accessible by the LPAR nodes for network boot; the details are to be filled in in section #7 below. The steps to create a tessia livedisk can be found here . 2 - (Optional) CPC & HMC # Variable Name Description Example cpc_name The name of the IBM zSystems / LinuxONE mainframe that you are creating a Red Hat OpenShift Container Platform cluster on. Can be found under the \"Systems Management\" tab of the Hardware Management Console (HMC). 
SYS1 hmc.host The IPv4 address of the HMC you will be connecting to in order to create a Logical Partition (LPAR) on which will act as the Kernel-based Virtual Machine (KVM) host aftering installing and setting up Red Hat Enterprise Linux (RHEL). 192.168.10.1 hmc.user The username that the HMC API call will use to connect to the HMC. Must have access to create LPARs, attach storage groups and networking cards. hmc-user hmc.pass The password that the HMC API call will use to connect to the HMC. Must have access to create LPARs, attach storage groups and networking cards. hmcPas$w0rd! 3 - (Optional) LPAR # Variable Name Description Example lpar.name The name of the Logical Partition (LPAR) that you would like to create/target for the creation of your cluster. This LPAR will act as the KVM host, with RHEL installed natively. OCPKVM1 lpar.description A short description of what this LPAR will be used for, will only be displayed in the HMC next to the LPAR name for identification purposes. KVM host LPAR for RHOCP cluster. lpar.access.user The username that will be created in RHEL when it is installed on the LPAR (the KVM host). kvm-admin lpar.access.pass The password for the user that will be created in RHEL when it is installed on the LPAR (the KVM host). ch4ngeMe! lpar.root_pass The root password for RHEL installed on the LPAR (the KVM host). $ecureP4ass! 4 - (Optional) IFL & Memory # Variable Name Description Example lpar.ifl.count Number of Integrated Facilities for Linux (IFL) processors will be assigned to this LPAR. 6 or more recommended. 6 lpar.ifl.initial memory Initial memory allocation for LPAR to have at start-up (in megabytes). 55000 lpar.ifl.max_memory The most amount of memory this LPAR can be using at any one time (in megabytes). 99000 lpar.ifl.initial_weight For LPAR load balancing purposes, the processing weight this LPAR will have at start-up (1-999). 100 lpar.ifl.min_weight For LPAR load balancing purposes, the minimum weight that this LPAR can have at any one time (1-999). 50 lpar.ifl.max_weight For LPAR load balancing purposes, the maximum weight that this LPAR can have at any one time (1-999). 500 5 - (Optional) Networking # Variable Name Description Example lpar.networking.subnet_cidr The same value as the above variable but in Classless Inter-Domain Routing (CIDR) notation. 23 lpar.networking.nic.card1.name The logical name of the Network Interface Card (NIC) within the HMC. An arbitrary value that is human-readable that points to the NIC. SYS-NIC-01 lpar.networking.nic.card1.adapter The physical adapter name reference to the logical adapter for the LPAR. 10Gb-A lpar.networking.nic.card1.port The port number for the NIC. 0 lpar.networking.nic.card1.dev_num The logical device number for the NIC. In hex format. 0x0100 lpar.networking.nic.card2.name (Optional) The logical name of a second Network Interface Card (NIC) within the HMC. An arbitrary value that is human-readable that points to the NIC. SYS-NIC-02 lpar.networking.nic.card2.adapter (Optional) The physical adapter name of a second NIC. 10Gb-B lpar.networking.nic.card2.port (Optional) The port number for a second NIC. 1 lpar.networking.nic.card2.dev_num (Optional) The logical device number for a second NIC. In hex format. 0x0001 6 - (Optional) Storage # Variable Name Description Example lpar.storage_group_1.name The name of the storage group that will be attached to the LPAR. OCP-storage-01 lpar.storage_group_1.type Storage type. FCP is the only tested type as of now. 
fcp lpar.storage_group_1.storage_wwpn World-wide port numbers for storage group. Use provided list formatting. 500708680235c3f0 500708680235c3f1 500708680235c3f2 500708680235c3f3 lpar.storage_group_1.dev_num The logical device number of the Host Bus Adapter (HBA) for the storage group. C001 lpar.storage_group_1.lun_name The Logical Unit Numbers (LUN) that points to a specific virtual disk behind the WWPN. 4200569309ahhd240000000000000c001 lpar.storage_group_2.name (Optional) The name of the storage group that will be attached to the LPAR. OCP-storage-01 lpar.storage_group_2.auto_config (Optional) Attempt to automate the addition of the disk space to the existing logical volume. Check out roles/configure_storage/tasks/main.yaml to ensure this will work properly with your setup. True lpar.storage_group_2.type (Optional) Storage type. FCP is the only tested type as of now. fcp lpar.storage_group_2_.storage_wwpn (Optional) World-wide port numbers for storage group. Use provided list formatting. 500708680235c3f0 500708680235c3f1 500708680235c3f2 500708680235c3f3 lpar.storage_group_2_.dev_num (Optional) The logical device number of the Host Bus Adapter (HBA) for the storage group. C001 lpar.storage_group_2_.lun_name (Optional) The Logical Unit Numbers (LUN) that points to a specific virtual disk behind the WWPN. 4200569309ahhd240000000000000c001 7 - (Optional) Livedisk info # Variable Name Description Example lpar.livedisk.livedisktype (Optional) Storage type. DASD and SCSI are tested types as of now. dasd/scsi lpar.livedisk.lun (Required if livedisktype is scsi) The Lunid of the disk when the livedisktype is SCSI. 4003402b00000000 lpar.livedisk.wwpn (Required if livedisktype is scsi) World-wide port number when livedisktype is SCSI. 500507630a1b50a4 lpar.livedisk.devicenr (Optional) the device no of the live disk c6h1 lpar.livedisk.livedisk_root_pass (Optional) root password for the livedisk p@ssword","title":"3 Set Variables (host_vars)"},{"location":"set-variables-host-vars/#step-3-set-variables-host_vars","text":"","title":"Step 3: Set Variables (host_vars)"},{"location":"set-variables-host-vars/#overview","text":"Similar to the group_vars file, the host_vars files for each LPAR (KVM host) must be filled in. For each KVM host to be acted upon with Ansible, you must have a corresponding host_vars file named .yaml (i.e. ocpz1.yaml, ocpz2.yaml, ocpz3.yaml), so you must copy and rename the templates found in the host_vars folder accordingly. The variables marked with an X are required to be filled in. Many values are pre-filled or are optional. Optional values are commented out; in order to use them, remove the # and fill them in. Many of the variables in these host_vars files are only required if you are NOT using pre-existing LPARs with RHEL installed. See the Important Note below this first section for more details. This is the most important step in the process. Take the time to make sure everything here is correct. Note on YAML syntax : Only the lowest value in each hierarchicy needs to be filled in. For example, at the top of the variables file networking does not need to be filled in, but the hostname does. There are X's where input is required to help you with this. Scroll the table to the right to see examples for each variable.","title":"Overview"},{"location":"set-variables-host-vars/#1-kvm-host","text":"Variable Name Description Example networking.hostname The hostname of the LPAR with RHEL installed natively (the KVM host). 
kvm-host-01 networking.ip The IPv4 address of the LPAR with RHEL installed natively (the KVM host). 192.168.10.2 networking.ipv6 IPv6 address for the bastion if use_ipv6 variable is 'True'. fd00::3 networking.subnetmask The subnet that the LPAR resides in within your network. 255.255.255.0 networking.gateway The IPv4 address of the gateway to the network where the KVM host resides. 192.168.10.0 networking.ipv6_gateway IPv6 of he bastion's gateway server. fd00::1 networking.ipv6_prefix IPv6 prefix. 64 networking.nameserver1 The IPv4 address from which the KVM host gets its hostname resolved. 192.168.10.200 networking.nameserver2 (Optional) A second IPv4 address from which the KVM host can get its hostname resolved. Used for high availability. 192.168.10.201 networking.device1 The network interface card from Linux's perspective. Usually enc and then a number that comes from the dev_num of the network adapter. enc100 networking.device2 (Optional) Another Linux network interface card. Usually enc and then a number that comes from the dev_num of the second network adapter. enc1 storage.pool_path The absolute path to a directory on your KVM host that will be used to store qcow2 images for the cluster and other installation artifacts. A sub-directory will be created here that matches your clsuter's metadata name that will act as the cluster's libvirt storage pool directory. Note: all directories present in this path will be made executable for the 'qemu' group, as is required. /home/kvm_admin/VirtualMachines","title":"1 - KVM Host"},{"location":"set-variables-host-vars/#important-note","text":"You can skip the rest of the variables on this page IF you are using existing LPAR(s) that has RHEL already installed. If you are installing an LPAR based cluster then the information below must be provided and are not optional. You must create a host file corresponding to each lpar node. Since this is how most production deployments on-prem are done on IBM zSystems, these variables have been marked as optional. With pre-existing LPARs with RHEL installed, you can also skip 1_create_lpar.yaml and 2_create_kvm_host.yaml playbooks. Make sure to still do 0_setup.yaml first though, then skip to 3_setup_kvm_host.yaml In the scenario of lpar based installation you can skip 1_create_lpar.yaml and 2_create_kvm_host.yaml . You can also optionally skip 3_setup_kvm_host.yaml and 4_create_bastion.yaml unless you are planning on having the bastion on the same host. In case of lpar based installation one is expected to have a tessia live disk accessible by the lpar nodes for network boot. The details of which are to be filled in section #7 below. The steps to create a tessia livedisk can be found here .","title":"Important Note"},{"location":"set-variables-host-vars/#2-optional-cpc-hmc","text":"Variable Name Description Example cpc_name The name of the IBM zSystems / LinuxONE mainframe that you are creating a Red Hat OpenShift Container Platform cluster on. Can be found under the \"Systems Management\" tab of the Hardware Management Console (HMC). SYS1 hmc.host The IPv4 address of the HMC you will be connecting to in order to create a Logical Partition (LPAR) on which will act as the Kernel-based Virtual Machine (KVM) host aftering installing and setting up Red Hat Enterprise Linux (RHEL). 192.168.10.1 hmc.user The username that the HMC API call will use to connect to the HMC. Must have access to create LPARs, attach storage groups and networking cards. 
hmc-user hmc.pass The password that the HMC API call will use to connect to the HMC. Must have access to create LPARs, attach storage groups and networking cards. hmcPas$w0rd!","title":"2 - (Optional) CPC & HMC"},{"location":"set-variables-host-vars/#3-optional-lpar","text":"Variable Name Description Example lpar.name The name of the Logical Partition (LPAR) that you would like to create/target for the creation of your cluster. This LPAR will act as the KVM host, with RHEL installed natively. OCPKVM1 lpar.description A short description of what this LPAR will be used for, will only be displayed in the HMC next to the LPAR name for identification purposes. KVM host LPAR for RHOCP cluster. lpar.access.user The username that will be created in RHEL when it is installed on the LPAR (the KVM host). kvm-admin lpar.access.pass The password for the user that will be created in RHEL when it is installed on the LPAR (the KVM host). ch4ngeMe! lpar.root_pass The root password for RHEL installed on the LPAR (the KVM host). $ecureP4ass!","title":"3 - (Optional) LPAR"},{"location":"set-variables-host-vars/#4-optional-ifl-memory","text":"Variable Name Description Example lpar.ifl.count Number of Integrated Facilities for Linux (IFL) processors will be assigned to this LPAR. 6 or more recommended. 6 lpar.ifl.initial memory Initial memory allocation for LPAR to have at start-up (in megabytes). 55000 lpar.ifl.max_memory The most amount of memory this LPAR can be using at any one time (in megabytes). 99000 lpar.ifl.initial_weight For LPAR load balancing purposes, the processing weight this LPAR will have at start-up (1-999). 100 lpar.ifl.min_weight For LPAR load balancing purposes, the minimum weight that this LPAR can have at any one time (1-999). 50 lpar.ifl.max_weight For LPAR load balancing purposes, the maximum weight that this LPAR can have at any one time (1-999). 500","title":"4 - (Optional) IFL & Memory"},{"location":"set-variables-host-vars/#5-optional-networking","text":"Variable Name Description Example lpar.networking.subnet_cidr The same value as the above variable but in Classless Inter-Domain Routing (CIDR) notation. 23 lpar.networking.nic.card1.name The logical name of the Network Interface Card (NIC) within the HMC. An arbitrary value that is human-readable that points to the NIC. SYS-NIC-01 lpar.networking.nic.card1.adapter The physical adapter name reference to the logical adapter for the LPAR. 10Gb-A lpar.networking.nic.card1.port The port number for the NIC. 0 lpar.networking.nic.card1.dev_num The logical device number for the NIC. In hex format. 0x0100 lpar.networking.nic.card2.name (Optional) The logical name of a second Network Interface Card (NIC) within the HMC. An arbitrary value that is human-readable that points to the NIC. SYS-NIC-02 lpar.networking.nic.card2.adapter (Optional) The physical adapter name of a second NIC. 10Gb-B lpar.networking.nic.card2.port (Optional) The port number for a second NIC. 1 lpar.networking.nic.card2.dev_num (Optional) The logical device number for a second NIC. In hex format. 0x0001","title":"5 - (Optional) Networking"},{"location":"set-variables-host-vars/#6-optional-storage","text":"Variable Name Description Example lpar.storage_group_1.name The name of the storage group that will be attached to the LPAR. OCP-storage-01 lpar.storage_group_1.type Storage type. FCP is the only tested type as of now. fcp lpar.storage_group_1.storage_wwpn World-wide port numbers for storage group. Use provided list formatting. 
500708680235c3f0 500708680235c3f1 500708680235c3f2 500708680235c3f3 lpar.storage_group_1.dev_num The logical device number of the Host Bus Adapter (HBA) for the storage group. C001 lpar.storage_group_1.lun_name The Logical Unit Numbers (LUN) that points to a specific virtual disk behind the WWPN. 4200569309ahhd240000000000000c001 lpar.storage_group_2.name (Optional) The name of the storage group that will be attached to the LPAR. OCP-storage-01 lpar.storage_group_2.auto_config (Optional) Attempt to automate the addition of the disk space to the existing logical volume. Check out roles/configure_storage/tasks/main.yaml to ensure this will work properly with your setup. True lpar.storage_group_2.type (Optional) Storage type. FCP is the only tested type as of now. fcp lpar.storage_group_2_.storage_wwpn (Optional) World-wide port numbers for storage group. Use provided list formatting. 500708680235c3f0 500708680235c3f1 500708680235c3f2 500708680235c3f3 lpar.storage_group_2_.dev_num (Optional) The logical device number of the Host Bus Adapter (HBA) for the storage group. C001 lpar.storage_group_2_.lun_name (Optional) The Logical Unit Numbers (LUN) that points to a specific virtual disk behind the WWPN. 4200569309ahhd240000000000000c001","title":"6 - (Optional) Storage"},{"location":"set-variables-host-vars/#7-optional-livedisk-info","text":"Variable Name Description Example lpar.livedisk.livedisktype (Optional) Storage type. DASD and SCSI are tested types as of now. dasd/scsi lpar.livedisk.lun (Required if livedisktype is scsi) The Lunid of the disk when the livedisktype is SCSI. 4003402b00000000 lpar.livedisk.wwpn (Required if livedisktype is scsi) World-wide port number when livedisktype is SCSI. 500507630a1b50a4 lpar.livedisk.devicenr (Optional) the device no of the live disk c6h1 lpar.livedisk.livedisk_root_pass (Optional) root password for the livedisk p@ssword","title":"7 - (Optional) Livedisk info"},{"location":"troubleshooting/","text":"Troubleshooting # If you encounter errors while running the main playbook, there are a few things you can do: Double check your variables. Inspect the part that failed by opening the playbook or role at roles/role-name/tasks/main.yaml Google the specific error message. Re-run the role with the verbosity '-v' option to get more debugging information (more v's give more info). For example: ansible-playbook playbooks/setup_bastion.yaml -vvv Use tags To be more selective with what parts of a playbook are run, use tags. To determine what part of a playbook you would like to run, open the playbook you'd like to run and find the roles parameter. Each role has a corresponding tag. There are also occasionally tags for sections of a playbook or within the role themselves. This is especially helpful for troubleshooting. You can add in tags under the name parameter for individual tasks you'd like to run. Here's an example of using a tag: ansible-playbook playbooks/setup_kvm_host.yaml --tags \"section_2,section_3\" This runs only the parts of the setup_kvm_host playbook marked with tags section_2 and section_3. To use more than one tag, they must be quoted (single or double) and comma-separated (with or without spaces between). E-mail Jacob Emery at jacob.emery@ibm.com If it's a problem with an OpenShift verification step: Open the cockpit to monitor the VMs. In a web browser, go to https://kvm-host-IP-here:9090 Sign-in with your credentials set in the variables file Enable administrative access in the top right. 
Open the 'Virtual Machines' tab from the left side toolbar. Sometimes it just takes a while, especially if it's lacking resources. Give it some time and then re-run the playbook/role with tags. If that doesn't work, SSH into the bastion as root (\"ssh root@\\\") and then run \"export KUBECONFIG=/root/ocpinst/auth/kubeconfig\" and then \"oc whoami\" and make sure it outputs \"system:admin\". Then run the shell command from the role you would like to check on manually, e.g. 'oc get nodes', 'oc get co', etc. Open the .openshift_install.log file for information on what happened and try to debug the issue.","title":"Troubleshooting"},{"location":"troubleshooting/#troubleshooting","text":"If you encounter errors while running the main playbook, there are a few things you can do: Double check your variables. Inspect the part that failed by opening the playbook or role at roles/role-name/tasks/main.yaml Google the specific error message. Re-run the role with the verbosity '-v' option to get more debugging information (more v's give more info). For example: ansible-playbook playbooks/setup_bastion.yaml -vvv Use tags To be more selective with what parts of a playbook are run, use tags. To determine what part of a playbook you would like to run, open the playbook you'd like to run and find the roles parameter. Each role has a corresponding tag. There are also occasionally tags for sections of a playbook or within the role themselves. This is especially helpful for troubleshooting. You can add in tags under the name parameter for individual tasks you'd like to run. Here's an example of using a tag: ansible-playbook playbooks/setup_kvm_host.yaml --tags \"section_2,section_3\" This runs only the parts of the setup_kvm_host playbook marked with tags section_2 and section_3. To use more than one tag, they must be quoted (single or double) and comma-separated (with or without spaces between). E-mail Jacob Emery at jacob.emery@ibm.com If it's a problem with an OpenShift verification step: Open the cockpit to monitor the VMs. In a web browser, go to https://kvm-host-IP-here:9090 Sign in with your credentials set in the variables file Enable administrative access in the top right. Open the 'Virtual Machines' tab from the left side toolbar. Sometimes it just takes a while, especially if it's lacking resources. Give it some time and then re-run the playbook/role with tags. If that doesn't work, SSH into the bastion as root (\"ssh root@\\\") and then run \"export KUBECONFIG=/root/ocpinst/auth/kubeconfig\" and then \"oc whoami\" and make sure it outputs \"system:admin\". Then run the shell command from the role you would like to check on manually, e.g. 'oc get nodes', 'oc get co', etc. Open the .openshift_install.log file for information on what happened and try to debug the issue.","title":"Troubleshooting"}]} \ No newline at end of file diff --git a/set-variables-group-vars/index.html b/set-variables-group-vars/index.html index 50678ba9..776438d0 100644 --- a/set-variables-group-vars/index.html +++ b/set-variables-group-vars/index.html @@ -864,6 +864,11 @@

12 - OpenShift Settings172.30.0.0/16 +env.install_config.machine_network +The IP address block for Nodes IP Pool. The default value is 192.168.122.0/24 For NAT Network Mode. In case of MacvTap it will be depend on Inteface IP assignment. An array with an IP address block in CIDR format. +192.168.122.0/24 + + env.install_config.fips True or False (boolean) for whether or not to use the United States' Federal Information Processing Standards (FIPS). Not yet certified on IBM zSystems. Enclosed in 'single quotes'. 'false' diff --git a/sitemap.xml b/sitemap.xml index 28172d04..6b3579b5 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,62 +2,62 @@ https://ibm.github.io/Ansible-OpenShift-Provisioning/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/acknowledgements/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/before-you-begin/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/get-info/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/prerequisites/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/run-the-playbooks-for-abi/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/run-the-playbooks-for-disconnected/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/run-the-playbooks-for-hcp/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/run-the-playbooks/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/set-variables-group-vars/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/set-variables-host-vars/ - 2024-07-12 + 2024-08-14 daily https://ibm.github.io/Ansible-OpenShift-Provisioning/troubleshooting/ - 2024-07-12 + 2024-08-14 daily \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index d9f8153dcfefe9534a9d12f2035f3c0c64cbdd79..4a83db050fe4d004dec7720ebac1dc3071e22898 100644 GIT binary patch literal 349 zcmV-j0iymNiwFpS^SfpO|8r?{Wo=<_E_iKh0Ns|sZo?o9hVOfdD0f0y_0Uaes$F*7 zq-_sChyzv=oMF{I87}&v&%m} zUc@%v*ALa?dxnt3V#mAOhA=)!InQ&!7+_E%7vzm(ZR3LFH7U#DF|Qx87>mGVc8;ni zT`wveWw03vgTYQsfh3Ee414qt^(S0zQEDCICntghG4|@z5FJ>=4ic)tyYSH>YjO^Bfc7p7Sv{uiBQgH? vii)H1sA3HZ<7pa~b=o#8aY)fch{wMeutecsJ{RX9o?ZO|(Ut-}cnJUiViKX; literal 349 zcmV-j0iymNiwFn+a*$>M|8r?{Wo=<_E_iKh0Ns{NZo?oDh4(pyqi(uw zrM3sacnqW1jLYD-xqWdGRXIRaiOdEV4Nu=ggP_`d4}Eq3&q!{YuZuFzz-emHnQi|0 z@gg4bUA?L%-!p_P7CYYNHiYp>%6Xm(#sGsFxgc*OYa16VFG*Pzk9obyVk`pZ*(s`? zbiJr>l)+{w3?S209O4b*r#HR`M!yMwGvde@3hH-qGAcYhVBuI5GSNI`SZnG>HZby2S7e z1c9!j#IQ#XQGdea7NyoPesUsM5M!^74bh%O>>!~Uya^vIvL@$12WW4?kkw=AJ`&@P vuc$aEk1E!%FrKDyS*LBo5{DFBgn0am0ZSDA<#TZw;@Q
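As a companion to the newly documented env.install_config.machine_network variable, here is a hedged sketch of how it might be set in the group_vars file. The env/install_config nesting is inferred from the dotted variable name and should be verified against the group_vars template before use; the value shown is the documented default for NAT network mode. ```yaml
# Sketch only: nesting inferred from the variable name env.install_config.machine_network.
# Verify against the group_vars template before relying on this layout.
env:
  install_config:
    machine_network: 192.168.122.0/24  # default for NAT network mode; for MacvTap it depends on the interface IP assignment
```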