diff --git a/docs/run-the-playbooks-for-hypershift.md b/docs/run-the-playbooks-for-hcp.md
similarity index 79%
rename from docs/run-the-playbooks-for-hypershift.md
rename to docs/run-the-playbooks-for-hcp.md
index 0206f77b..427cc527 100644
--- a/docs/run-the-playbooks-for-hypershift.md
+++ b/docs/run-the-playbooks-for-hcp.md
@@ -8,7 +8,7 @@
* DNS entries to resolve api.${cluster}.${domain}, api-int.${cluster}.${domain} and *.apps.${cluster}.${domain} to a load balancer deployed to redirect incoming traffic to the ingress pods (Bastion).
* If using dynamic IPs for agents, make sure the DHCP server has entries mapping the MAC addresses used in the installation to IPv4 addresses, and that it points those IPs at the nameserver you have configured.
## Note:
-* As of now we are supporting only macvtap for hypershift Agent based installation for KVM compute nodes.
+* As of now, only macvtap is supported for Hosted Control Plane Agent based installation with KVM compute nodes.
* Supported network modes for zVM: vswitch, OSA, RoCE, Hipersockets

## Step-1: Setup Ansible Vault for Management Cluster Credentials
@@ -36,18 +36,18 @@
ansible-vault edit playbooks/secrets.yaml
```
* Make sure you enter the Management cluster credentials properly; incorrect credentials will cause problems when logging in to the cluster in later steps.

-## Step-2: Initial Setup for Hypershift
+## Step-2: Initial Setup for Hosted Control Plane
* Navigate to the [root folder of the cloned Git repository](https://github.com/IBM/Ansible-OpenShift-Provisioning) in your terminal (`ls` should show [ansible.cfg](https://github.com/IBM/Ansible-OpenShift-Provisioning/blob/main/ansible.cfg)).
-* Update variables as per the compute node type (zKVM /zVM) in Section-16 ( Hypershift ) and Section-3 ( File Server : ip , protocol and iso_mount_dir ) in [all.yaml](https://github.com/veera-damisetti/Ansible-OpenShift-Provisioning/blob/main/inventories/default/group_vars/all.yaml.template) before running the playbooks.
-* First playbook to be run is setup_for_hypershift.yaml which will create inventory file for hypershift and will add ssh key to the kvm host.
+* Update variables as per the compute node type (zKVM/zVM) in [hcp.yaml](https://github.com/veera-damisetti/Ansible-OpenShift-Provisioning/blob/main/inventories/default/group_vars/hcp.yaml.template) ( hcp.yaml.template ) before running the playbooks.
+* First playbook to be run is setup_for_hcp.yaml, which creates the inventory file for HCP and adds the SSH key to the KVM host.
* Run this shell command:
```
-ansible-playbook playbooks/setup_for_hypershift.yaml --ask-vault-pass
+ansible-playbook playbooks/setup_for_hcp.yaml --ask-vault-pass
```

## Step-3: Create Hosted Cluster
-* Run each part step-by-step by running one playbook at a time, or all at once using [hypershift.yaml](https://github.com/veera-damisetti/Ansible-OpenShift-Provisioning/blob/main/playbooks/hypershift.yaml).
+* Run each part step-by-step by running one playbook at a time, or all at once using [hcp.yaml](https://github.com/veera-damisetti/Ansible-OpenShift-Provisioning/blob/main/playbooks/hcp.yaml).
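+* For example, to run only the first playbook from the list below on its own, reusing the vault password from Step-1:
```
ansible-playbook playbooks/create_hosted_cluster.yaml --ask-vault-pass
```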
* Here's the full list of playbooks to be run in order; full descriptions of each can be found further down the page:
* create_hosted_cluster.yaml ([code](https://github.com/IBM/Ansible-OpenShift-Provisioning/blob/main/playbooks/create_hosted_cluster.yaml))
* create_agents_and_wait_for_install_complete.yaml ([code](https://github.com/IBM/Ansible-OpenShift-Provisioning/blob/main/playbooks/create_agents_and_wait_for_install_complete.yaml))
@@ -57,16 +57,16 @@ ansible-playbook playbooks/setup_for_hypershift.yaml --ask-vault-pass
* Alternatively, to run all the playbooks at once, start the master playbook by running this shell command:
* After installation, you can find cluster details such as the kubeconfig and password in the installation directory ( $HOME/ansible_workdir/ )
```
-ansible-playbook playbooks/hypershift.yaml --ask-vault-pass
+ansible-playbook playbooks/hcp.yaml --ask-vault-pass
```

# Description for Playbooks

-## setup_for_hypershift Playbook
+## setup_for_hcp Playbook
### Overview
* First-time setup of the Ansible Controller, the machine running Ansible.
### Outcomes
-* Inventory file for hypershift to be created.
+* Inventory file for HCP created.
* SSH key generated for Ansible passwordless authentication.
* Ansible SSH key is copied to the KVM host.
### Notes
@@ -110,12 +110,12 @@ ansible-playbook playbooks/hypershift.yaml --ask-vault-pass
* Destroy the Hosted Control Plane and other resources created as part of installation
### Procedure
-* Run the playbook [destroy_cluster_hypershift.yaml](https://github.com/veera-damisetti/Ansible-OpenShift-Provisioning/blob/main/playbooks/destroy_cluster_hypershift.yaml) to destroy all the resources created while installation
+* Run the playbook [destroy_cluster_hcp.yaml](https://github.com/veera-damisetti/Ansible-OpenShift-Provisioning/blob/main/playbooks/destroy_cluster_hcp.yaml) to destroy all the resources created during installation
```
-ansible-playbook playbooks/destroy_cluster_hypershift.yaml --ask-vault-pass
+ansible-playbook playbooks/destroy_cluster_hcp.yaml --ask-vault-pass
```
-## destroy_cluster_hypershift Playbook
+## destroy_cluster_hcp Playbook
### Overview
* Delete all the resources on the Hosted Cluster
* Destroy the Hosted Control Plane
diff --git a/docs/set-variables-group-vars.md b/docs/set-variables-group-vars.md
index 210d754d..06ee0cad 100644
--- a/docs/set-variables-group-vars.md
+++ b/docs/set-variables-group-vars.md
@@ -208,76 +208,7 @@
**rhcos_live_initrd** | CoreOS initramfs to be used for the bootstrap, control and compute nodes. | rhcos-4.12.3-s390x-live-initramfs.s390x.img
**rhcos_live_rootfs** | CoreOS rootfs to be used for the bootstrap, control and compute nodes. | rhcos-4.12.3-s390x-live-rootfs.s390x.img

-## 16 - Hypershift ( Optional )
-**Variable Name** | **Description** | **Example**
-:--- | :--- | :---
-**hypershift.compute_node_type** | Select the compute node type for HCP , either zKVM or zVM | zvm
-**hypershift.kvm_host** | IPv4 address of KVM host for hypershift (kvm host where you want to run all oc commands and create VMs)| 192.168.10.1
-**hypershift.kvm_host_user** | User for KVM host | root
-**hypershift.bastion_hypershift** | IPv4 address for bastion of Hosted Cluster | 192.168.10.1
-**hypershift.bastion_hypershift_user** | User for bastion of Hosted Cluster | root
-**hypershift.create_bastion** | true or false - create bastion with the provided IP (hypershift.bastion_hypershift) | true
-**hypershift.networking_device** | The network interface card from Linux's perspective.
Usually enc and then a number that comes from the dev_num of the network adapter. | enc1100 -**hypershift.gateway** | IPv4 Address for gateway from where the kvm_host and bastion are reachable This for adding ip route from kvm_host to bastion through gateway | 192.168.10.1 -**hypershift.bastion_parms.interface** | Interface for bastion | enc1 -**hypershift.bastion_parms.hostname** | Hostname for bastion | bastion -**hypershift.bastion_parms.base_domain** | DNS base domain for the bastion. | ihost.com -**hypershift.bastion_parms.os_variant** | rhel os variant for creating bastion | 8.7 -**hypershift.bastion_parms.nameserver** | Nameserver for creating bastion | 192.168.10.1 -**hypershift.bastion_parms.gateway** | Gateway IP for creating bastion This is how it well be used ip=::: | 192.168.10.1 -**hypershift.bastion_parms.subnet_mask** | IPv4 address of subnetmask | 255.255.255.0 -**hypershift.mgmt_cluster_nameserver** | IP Address of Nameserver of Management Cluster | 192.168.10.1 -**hypershift.oc_url** | URL for OC Client that you want to install on the host | https://... ..openshift-client-linux-4.13.0-ec.4.tar.gz -**hypershift.hcp.clusters_namespace** | Namespace for Creating Hosted Control Plane | clusters -**hypershift.hcp.hosted_cluster_name** | Name for the Hosted Cluster | hosted0 -**hypershift.hcp.basedomain** | Base domain for Hosted Cluster | example.com -**hypershift.hcp.pull_secret_file** | Path for the pull secret No need to change this as we are copying the pullsecret to same file /root/ansible_workdir/auth_file | /root/ansible_workdir/auth_file -**hypershift.hcp.ocp_release** | OCP Release version for Hosted Control Cluster and Nodepool | 4.13.0-rc.4-multi -**hypershift.hcp.machine_cidr** | Machines CIDR for Hosted Cluster | 192.168.122.0/24 -**hypershift.hcp.arch** | Architecture for InfraEnv and AgentServiceConfig" | s390x -**hypershift.hcp.additional_flags** | Any additional flags for creating hcp ( In hcp create cluster agent command ) | --fips -**hypershift.hcp.pull_secret** | Pull Secret of Management Cluster Make sure to enclose pull_secret in 'single quotes' | '{"auths":{"cloud.openshift.com":{"auth":"b3Blb...4yQQ==","email":"redhat.user@gmail.com"}}}' -**hypershift.mce.version** | version for multicluster-engine Operator | 2.4 -**hypershift.mce.instance_name** | name of the MultiClusterEngine instance | engine -**hypershift.mce.delete** | true or false - deletes mce and related resources while running deletion playbook | true -**hypershift.asc.url_for_ocp_release_file** | Add URL for OCP release.txt File | https://... ..../release.txt -**hypershift.asc.db_volume_size** | DatabaseStorage Volume Size | 10Gi -**hypershift.asc.fs_volume_size** | FileSystem Storage Volume Size | 10Gi -**hypershift.asc.ocp_version** | OCP Version for AgentServiceConfig | 4.13.0-ec.4 -**hypershift.asc.iso_url** | Give URL for ISO image | https://... ...s390x-live.s390x.iso -**hypershift.asc.root_fs_url** | Give URL for rootfs image | https://... ... live-rootfs.s390x.img -**hypershift.asc.mce_namespace** | Namespace where your Multicluster Engine Operator is installed. Recommended Namespace for MCE is 'multicluster-engine'. Change this only if MCE is installed in other namespace. 
| multicluster-engine
-**hypershift.agents_parms.agents_count** | Number of agents for the hosted cluster The same number of compute nodes will be attached to Hosted Cotrol Plane | 2
-**hypershift.agents_parms.static_ip_parms.static_ip** | true or false - use static IPs for agents using NMState | true
-**hypershift.agents_parms.static_ip_parms.ip** | List of IP addresses for agents | 192.168.10.1
-**hypershift.agents_parms.static_ip_parms.interface** | Interface for agents for configuring NMStateConfig | eth0
-**hypershift.agents_parms.agent_mac_addr** | List of macaddresses for the agents. Configure in DHCP if you are using dynamic IPs for Agents. | - 52:54:00:ba:d3:f7
-**hypershift.agents_parms.disk_size** | Disk size for agents | 100G
-**hypershift.agents_parms.ram** | RAM for agents | 16384
-**hypershift.agents_parms.vcpus** | vCPUs for agents | 4
-**hypershift.agents_parms.nameserver** | Nameserver to be used for agents | 192.168.10.1
-**hypershift.agents_parms.zvm_parameters.network_mode** | Network mode for zvm nodes Supported modes: vswitch,osa, RoCE | vswitch
-**hypershift.agents_parms.zvm_parameters.disk_type** | Disk type for zvm nodes Supported disk types: fcp, dasd | dasd
-**hypershift.agents_parms.zvm_parameters.vcpus** | CPUs for each zvm node | 4
-**hypershift.agents_parms.zvm_parameters.memory** | RAM for each zvm node | 16384
-**hypershift.agents_parms.zvm_parameters.nameserver** | Nameserver for compute nodes | 192.168.10.1
-**hypershift.agents_parms.zvm_parameters.subnetmask** | Subnet mask for compute nodes | 255.255.255.0
-**hypershift.agents_parms.zvm_parameters.gateway** | Gateway for compute nodes | 192.168.10.1
-**hypershift.agents_parms.zvm_parameters.nodes** | Set of parameters for zvm nodes Give the details of each zvm node here |
-**hypershift.agents_parms.zvm_parameters.nodes.name** | Name of the zVM guest | m1317002
-**hypershift.agents_parms.zvm_parameters.nodes.host** | Host name of the zVM guests which we use to login 3270 console | boem1317
-**hypershift.agents_parms.zvm_parameters.nodes.user** | Username for zVM guests to login | m1317002
-**hypershift.agents_parms.zvm_parameters.nodes.password** | password for the zVM guests to login | password
-**hypershift.agents_parms.zvm_parameters.nodes.interface.ifname** | Network interface name for zVM guests | encbdf0
-**hypershift.agents_parms.zvm_parameters.nodes.interface.nettype** | Network type for zVM guests for network connectivity | qeth
-**hypershift.agents_parms.zvm_parameters.nodes.interface.subchannels** | subchannels for zVM guests interfaces | 0.0.bdf0,0.0.bdf1,0.0.bdf2
-**hypershift.agents_parms.zvm_parameters.nodes.interface.options** | Configurations options | layer2=1
-**hypershift.agents_parms.zvm_parameters.nodes.interface.ip** | IP addresses for to be used for zVM nodes | 192.168.10.1
-**hypershift.agents_parms.zvm_parameters.nodes.dasd.disk_id** | Disk id for dasd disk to be used for zVM node | 4404
-**hypershift.agents_parms.zvm_parameters.nodes.lun** | Disk details of fcp disk to be used for zVM node | 4404
-
-
-## 17 - (Optional) Disconnected cluster setup
+## 16 - (Optional) Disconnected cluster setup

**Variable Name** | **Description** | **Example**
:--- | :--- | :---
**disconnected.enabled** | True or False, to enable disconnected mode | False
@@ -309,7 +240,7 @@
**disconnected.mirroring.oc_mirror.image_set.storageConfig.registry.skipTLS** | True or False; serves the same purpose as in the standard image set, i.e.
skip TLS verification for the registry during mirroring. | false
**disconnected.mirroring.oc_mirror.image_set.mirror** | YAML containing a list of what needs to be mirrored. See the oc mirror image set documentation. | see oc-mirror [image set](https://docs.openshift.com/container-platform/latest/installing/disconnected_install/installing-mirroring-disconnected.html#oc-mirror-creating-image-set-config_installing-mirroring-disconnected) documentation

-## 18 - (Optional) Create compute node in a day-2 operation
+## 17 - (Optional) Create compute node in a day-2 operation

**Variable Name** | **Description** | **Example**
:--- | :--- | :---
@@ -323,7 +254,7 @@
**day2_compute_node.host_user** | KVM host user which is used to create the VM | root
**day2_compute_node.host_arch** | KVM host architecture. | s390x

-## 19 - (Optional) Agent Based Installer
+## 18 - (Optional) Agent Based Installer

**Variable Name** | **Description** | **Example**
:--- | :--- | :---
@@ -331,3 +262,81 @@
**abi.ansible_workdir** | This will be the work directory name; it holds the data required during and after execution | ansible_workdir
**abi.ocp_installer_version** | Version of the openshift-installer binary that you want to use | '4.15.0-rc.8'
**abi.ocp_installer_url** | Base URL of the openshift-installer binary. This is a static value; you do not need to change it unless you want to use a different mirror | 'https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/'
+
+
+## Hosted Control Plane (Optional)
+**Variable Name** | **Description** | **Example**
+:--- | :--- | :---
+**hcp.compute_node_type** | Compute node type for HCP, either zKVM or zVM | zvm
+**hcp.mgmt_cluster_nameserver** | IP address of the nameserver of the Management Cluster | 192.168.10.1
+**hcp.oc_url** | URL for the OC client that you want to install on the host | https://... ..openshift-client-linux-4.13.0-ec.4.tar.gz
+**hcp.ansible_key_name** | SSH key name | ansible-ocpz
+**hcp.pkgs** | List of packages for the different hosts |
+**hcp.mce.version** | Version of the multicluster-engine operator | 2.4
+**hcp.mce.instance_name** | Name of the MultiClusterEngine instance | engine
+**hcp.mce.delete** | true or false - delete MCE and related resources when running the deletion playbook | true
+**hcp.asc.url_for_ocp_release_file** | URL for the OCP release.txt file | https://... ..../release.txt
+**hcp.asc.db_volume_size** | DatabaseStorage volume size | 10Gi
+**hcp.asc.fs_volume_size** | FileSystem storage volume size | 10Gi
+**hcp.asc.ocp_version** | OCP version for AgentServiceConfig | 4.13.0-ec.4
+**hcp.asc.iso_url** | URL for the ISO image | https://... ...s390x-live.s390x.iso
+**hcp.asc.root_fs_url** | URL for the rootfs image | https://... ... live-rootfs.s390x.img
+**hcp.asc.mce_namespace** | Namespace where your Multicluster Engine operator is installed. Recommended namespace for MCE is 'multicluster-engine'. Change this only if MCE is installed in another namespace.
| multicluster-engine
+**hcp.control_plane.high_availabiliy** | High availability for the Hosted Control Plane | true
+**hcp.control_plane.clusters_namespace** | Namespace for creating the Hosted Control Plane | clusters
+**hcp.control_plane.hosted_cluster_name** | Name for the Hosted Cluster | hosted0
+**hcp.control_plane.basedomain** | Base domain for the Hosted Cluster | example.com
+**hcp.control_plane.pull_secret_file** | Path for the pull secret. No need to change this, as the pull secret is copied to /root/ansible_workdir/auth_file | /root/ansible_workdir/auth_file
+**hcp.control_plane.ocp_release_image** | OCP release image for the Hosted Cluster and NodePool | 4.13.0-rc.4-multi
+**hcp.control_plane.arch** | Architecture for InfraEnv and AgentServiceConfig | s390x
+**hcp.control_plane.additional_flags** | Any additional flags for the hcp create cluster agent command | --fips
+**hcp.control_plane.pull_secret** | Pull secret of the Management Cluster. Make sure to enclose pull_secret in 'single quotes' | '{"auths":{"cloud.openshift.com":{"auth":"b3Blb...4yQQ==","email":"redhat.user@gmail.com"}}}'
+**hcp.bastion_params.create** | true or false - create the bastion with the provided IP | true
+**hcp.bastion_params.ip** | IPv4 address for the bastion of the Hosted Cluster | 192.168.10.1
+**hcp.bastion_params.user** | User for the bastion of the Hosted Cluster | root
+**hcp.bastion_params.host** | IPv4 address of the KVM host (the host where you want to run all oc commands and create VMs) | 192.168.10.1
+**hcp.bastion_params.host_user** | User for the KVM host | root
+**hcp.bastion_params.hostname** | Hostname for the bastion | bastion
+**hcp.bastion_params.base_domain** | DNS base domain for the bastion. | ihost.com
+**hcp.bastion_params.nameserver** | Nameserver for creating the bastion | 192.168.10.1
+**hcp.bastion_params.gateway** | Gateway IP for creating the bastion. This is how it will be used: ip=::: | 192.168.10.1
+**hcp.bastion_params.subnet_mask** | IPv4 subnet mask | 255.255.255.0
+**hcp.bastion_params.interface** | Interface for the bastion | enc1
+**hcp.bastion_params.file_server.ip** | IPv4 address for the file server that will be used to pass config files and iso to KVM host LPAR(s) and bastion VM during their first boot. | 192.168.10.201
+**hcp.bastion_params.file_server.protocol** | Protocol used to serve the files, either 'ftp' or 'http' | http
+**hcp.bastion_params.file_server.iso_mount_dir** | Directory path relative to the HTTP/FTP accessible directory where RHEL ISO is mounted. For example, if the FTP root is at /home/user1 and the ISO is mounted at /home/user1/RHEL/8.7 then this variable would be RHEL/8.7 - no slash before or after. | RHEL/8.7
+**hcp.bastion_params.os_variant** | RHEL OS variant for creating the bastion | 8.7
+**hcp.bastion_params.disk** | Disk to be used for the bastion VM when the KVM storage type is dasd | /disk
+**hcp.bastion_params.network_name** | Name of the network used for the bastion and agents | macvtap
+**hcp.bastion_params.networking_device** | The network interface card from Linux's perspective. Usually enc and then a number that comes from the dev_num of the network adapter. | enc1100
+**hcp.bastion_params.language** | What language would you like Red Hat Enterprise Linux to use? In UTF-8 language code. Available languages and their corresponding codes can be found here, in the "Locale" column of Table 2.1. | en_US.UTF-8
+**hcp.bastion_params.timezone** | Which timezone would you like Red Hat Enterprise Linux to use? A list of available timezone options can be found here.
| America/New_York
+**hcp.bastion_params.keyboard** | Which keyboard layout would you like Red Hat Enterprise Linux to use? | us
+**hcp.data_plane.compute_count** | Number of agents for the Hosted Cluster. The same number of compute nodes will be attached to the Hosted Control Plane | 2
+**hcp.data_plane.vcpus** | vCPUs for compute nodes | 4
+**hcp.data_plane.memory** | RAM for compute nodes | 16384
+**hcp.data_plane.nameserver** | Nameserver for compute nodes | 192.168.10.1
+**hcp.data_plane.kvm.storage.type** | Storage type for KVM guests, qcow/dasd | qcow
+**hcp.data_plane.kvm.storage.qcow.disk_size** | Disk size for KVM guests | 100G
+**hcp.data_plane.kvm.storage.qcow.pool_path** | Storage pool path for creating disks | /home/images/
+**hcp.data_plane.kvm.storage.dasd** | dasd disks for KVM guests | /disk
+**hcp.data_plane.kvm.ip_params.static_ip.enabled** | true or false - use static IPs for agents using NMState | true
+**hcp.data_plane.kvm.ip_params.static_ip.ip** | List of IP addresses for agents | 192.168.10.1
+**hcp.data_plane.kvm.ip_params.static_ip.interface** | Interface for agents for configuring NMStateConfig | eth0
+**hcp.data_plane.kvm.ip_params.mac** | List of MAC addresses for the agents. Configure these in DHCP if you are using dynamic IPs for the agents. | - 52:54:00:ba:d3:f7
+**hcp.data_plane.zvm.network_mode** | Network mode for zVM nodes. Supported modes: vswitch, osa, RoCE, Hipersockets | vswitch
+**hcp.data_plane.zvm.disk_type** | Disk type for zVM nodes. Supported disk types: fcp, dasd | dasd
+**hcp.data_plane.zvm.subnetmask** | Subnet mask for compute nodes | 255.255.255.0
+**hcp.data_plane.zvm.gateway** | Gateway for compute nodes | 192.168.10.1
+**hcp.data_plane.zvm.nodes** | Set of parameters for the zVM nodes. Give the details of each zVM node here. |
+**hcp.data_plane.zvm.nodes.name** | Name of the zVM guest | m1317002
+**hcp.data_plane.zvm.nodes.host** | Hostname of the zVM guest, used to log in to the 3270 console | boem1317
+**hcp.data_plane.zvm.nodes.user** | Username to log in to the zVM guest | m1317002
+**hcp.data_plane.zvm.nodes.password** | Password to log in to the zVM guest | password
+**hcp.data_plane.zvm.nodes.interface.ifname** | Network interface name for the zVM guest | encbdf0
+**hcp.data_plane.zvm.nodes.interface.nettype** | Network type for the zVM guest's network connectivity | qeth
+**hcp.data_plane.zvm.nodes.interface.subchannels** | Subchannels for the zVM guest's interface | 0.0.bdf0,0.0.bdf1,0.0.bdf2
+**hcp.data_plane.zvm.nodes.interface.options** | Configuration options | layer2=1
+**hcp.data_plane.zvm.nodes.interface.ip** | IP address to be used for the zVM node | 192.168.10.1
+**hcp.data_plane.zvm.nodes.dasd.disk_id** | Disk ID of the dasd disk to be used for the zVM node | 4404
+**hcp.data_plane.zvm.nodes.lun** | Details of the fcp disk to be used for the zVM node | 4404
\ No newline at end of file
diff --git a/inventories/default/group_vars/.gitignore b/inventories/default/group_vars/.gitignore
index bb409d93..677aa271 100644
--- a/inventories/default/group_vars/.gitignore
+++ b/inventories/default/group_vars/.gitignore
@@ -1,3 +1,4 @@
/*
!.gitignore
-!all.yaml.template
\ No newline at end of file
+!all.yaml.template
+!hcp.yaml.template
\ No newline at end of file
diff --git a/inventories/default/group_vars/all.yaml.template b/inventories/default/group_vars/all.yaml.template
index 729250b0..3b09bf73 100644
--- a/inventories/default/group_vars/all.yaml.template
+++ b/inventories/default/group_vars/all.yaml.template
@@ -192,7 +192,6 @@ env:
  controller: [ openssh, expect, sshuttle ]
  kvm: [ libguestfs,
libvirt-client, libvirt-daemon-config-network, libvirt-daemon-kvm, cockpit-machines, libvirt-devel, virt-top, qemu-kvm, python3-lxml, cockpit, lvm2 ] bastion: [ haproxy, httpd, bind, bind-utils, expect, firewalld, mod_ssl, python3-policycoreutils, rsync ] - hypershift: [ make, jq, git, virt-install ] zvm: [ git, python3-pip, python3-devel, openssl-devel, rust, cargo, libffi-devel, wget, tar, jq, gcc, make, x3270, python39 ] # Section 12 - OpenShift Settings @@ -257,143 +256,7 @@ rhcos_live_kernel: "rhcos-4.12.3-s390x-live-kernel-s390x" rhcos_live_initrd: "rhcos-4.12.3-s390x-live-initramfs.s390x.img" rhcos_live_rootfs: "rhcos-4.12.3-s390x-live-rootfs.s390x.img" -# Section 16 - Hypershift ( Optional ) - -hypershift: - compute_node_type: # KVM or zVM - - kvm_host: - kvm_host_user: - bastion_hypershift: - bastion_hypershift_user: - - create_bastion: true - networking_device: enc1100 # Following set of parameters required only if create_bastion is true - gateway: - - bastion_parms: - interface: - hostname: - base_domain: - os_variant: - nameserver: - gateway: - subnet_mask: - - - # Parameters for oc login - - mgmt_cluster_nameserver: - oc_url: - - #Hosted Control Plane Parameters - - hcp: - high_availabiliy: true - clusters_namespace: - hosted_cluster_name: - basedomain: - pull_secret_file: /root/ansible_workdir/auth_file - ocp_release: - machine_cidr: 192.168.122.0/24 - arch: - additional_flags: - # Make sure to enclose pull_secret in 'single quotes' - pull_secret: - - # MultiClusterEngine Parameters - mce: - version: - instance_name: engine - delete: false - - # AgentServiceConfig Parameters - - asc: - db_volume_size: "10Gi" - fs_volume_size: "10Gi" - ocp_version: - iso_url: - mce_namespace: multicluster-engine # This is the Recommended Namespace for Multicluster Engine operator - - agents_parms: - agents_count: - - # KVM specific parameters - KVM on s390x - - static_ip_parms: - static_ip: true - ip: # Required only if static_ip is true - #- - #- - interface: eth0 - # If you want to use specific mac addresses, provide them here - agent_mac_addr: - #- - disk_size: 100G - ram: 16384 - vcpus: 4 - nameserver: - storage: - pool_path: "/var/lib/libvirt/images/" - - - # zVM specific parameters - s390x - - zvm_parameters: - network_mode: vswitch # Supported modes: vswitch,osa, RoCE, Hipersockets - disk_type: # Supported modes: fcp , dasd - vcpus: 4 - memory: 16384 - nameserver: - subnetmask: - gateway: - - nodes: - - name: - host: - user: - password: - interface: - ifname: encbdf0 - nettype: qeth - subchannels: 0.0.bdf0,0.0.bdf1,0.0.bdf2 - options: layer2=1 - ip: - - # Required if disk_type is dasd - dasd: - disk_id: - - # Required if disk_type is fcp - lun: - - id: - paths: - - wwpn: - fcp: - - - name: - host: - user: - password: - interface: - ifname: encbdf0 - nettype: qeth - subchannels: 0.0.bdf0,0.0.bdf1,0.0.bdf2 - options: layer2=1 - ip: - - dasd: - disk_id: - - lun: - - id: - paths: - - wwpn: - fcp: - - -# Section 17 - (Optional) Setup disconnected clusters +# Section 16 - (Optional) Setup disconnected clusters # Warning: currently, the oc-mirror plugin is officially downloadable to amd64 only. 
disconnected:
  enabled: False
@@ -466,7 +329,7 @@ disconnected:
      - name: registry.redhat.io/ubi8/ubi:latest
    helm: {}

-# Section 18 - (Optional) Create additional compute node in a day-2 operation
+# Section 17 - (Optional) Create additional compute node in a day-2 operation

day2_compute_node:
  vm_name:
@@ -480,7 +343,7 @@ day2_compute_node:
  host_arch:


-# Section 19 - Agent Based Installer ( Optional )
+# Section 18 - Agent Based Installer ( Optional )
abi:
  flag: False
  ansible_workdir: 'ansible_workdir'
diff --git a/inventories/default/group_vars/hcp.yaml.template b/inventories/default/group_vars/hcp.yaml.template
new file mode 100644
index 00000000..1dc0ece9
--- /dev/null
+++ b/inventories/default/group_vars/hcp.yaml.template
@@ -0,0 +1,141 @@
+hcp:
+  compute_node_type: # KVM or zVM
+
+  # Parameters for oc login
+  mgmt_cluster_nameserver:
+  oc_url:
+
+  ansible_key_name: ansible-ocpz
+  pkgs:
+    kvm: [ libguestfs, libvirt-client, libvirt-daemon-config-network, libvirt-daemon-kvm, cockpit-machines, libvirt-devel, virt-top, qemu-kvm, python3-lxml, cockpit, lvm2 ]
+    bastion: [ haproxy, httpd, bind, bind-utils, expect, firewalld, mod_ssl, python3-policycoreutils, rsync ]
+    hcp: [ make, jq, git, virt-install ]
+    zvm: [ git, python3-pip, python3-devel, openssl-devel, rust, cargo, libffi-devel, wget, tar, jq, gcc, make, x3270, python39 ]
+
+  # MultiClusterEngine Parameters
+  mce:
+    version:
+    instance_name: engine
+    delete: false
+
+  # AgentServiceConfig Parameters
+
+  asc:
+    db_volume_size: "10Gi"
+    fs_volume_size: "10Gi"
+    ocp_version:
+    iso_url:
+    mce_namespace: multicluster-engine # This is the Recommended Namespace for Multicluster Engine operator
+
+  # Hosted Control Plane Parameters
+  control_plane:
+    high_availabiliy: true
+    clusters_namespace:
+    hosted_cluster_name:
+    basedomain:
+    ocp_release_image:
+    arch: s390x
+    additional_flags:
+    # Make sure to enclose pull_secret in 'single quotes'
+    pull_secret:
+
+  bastion_params:
+    create: true
+    ip:
+    user:
+    host:
+    host_user:
+    hostname:
+    base_domain:
+    nameserver:
+    gateway:
+    subnet_mask:
+    interface:
+    file_server:
+      ip:
+      protocol: http
+      iso_mount_dir:
+    os_variant:
+    disk:
+    network_name: macvtap
+    networking_device: enc1100 # Device for macvtap network
+
+    language: en_US.UTF-8
+    timezone: America/New_York
+    keyboard: us
+
+
+
+  data_plane:
+    compute_count:
+    vcpus: 4
+    memory: 16384
+    nameserver:
+
+    kvm:
+      storage:
+        type: qcow # Supported types: qcow, dasd
+        qcow:
+          disk_size: 100G
+          pool_path: "/var/lib/libvirt/images/"
+        dasd:
+        #-
+      ip_params:
+        static_ip:
+          enabled: true
+          interface: eth0
+          ip: # Required only if static_ip is true
+          #-
+          #-
+        mac: # If you want to use specific mac addresses, provide them here
+        #-
+
+
+    zvm:
+      network_mode: vswitch # Supported modes: vswitch, osa, RoCE, Hipersockets
+      disk_type: # Supported types: fcp, dasd
+      subnetmask:
+      gateway:
+
+      nodes:
+        - name:
+          host:
+          user:
+          password:
+          interface:
+            ifname: encbdf0
+            nettype: qeth
+            subchannels: 0.0.bdf0,0.0.bdf1,0.0.bdf2
+            options: layer2=1
+            ip:
+
+          # Required if disk_type is dasd
+          dasd:
+            disk_id:
+
+          # Required if disk_type is fcp
+          lun:
+            - id:
+              paths:
+                - wwpn:
+                  fcp:
+
+        - name:
+          host:
+          user:
+          password:
+          interface:
+            ifname: encbdf0
+            nettype: qeth
+            subchannels: 0.0.bdf0,0.0.bdf1,0.0.bdf2
+            options: layer2=1
+            ip:
+
+          dasd:
+            disk_id:
+
+          lun:
+            - id:
+              paths:
+                - wwpn:
+                  fcp:
diff --git a/mkdocs.yaml b/mkdocs.yaml
index b7408824..49a543e3 100644
--- a/mkdocs.yaml
+++ b/mkdocs.yaml
@@ -14,7 +14,7 @@ nav:
  - 3 Set Variables (host_vars):
'set-variables-host-vars.md'
  - 4 Run the Playbooks: 'run-the-playbooks.md'
  - Run the Playbooks (Disconnected): 'run-the-playbooks-for-disconnected.md'
-  - Run the Playbooks (HyperShift): 'run-the-playbooks-for-hypershift.md'
+  - Run the Playbooks (Hosted Control Plane): 'run-the-playbooks-for-hcp.md'
  - Misc:
    - Troubleshooting: 'troubleshooting.md'
    - Acknowledgements: 'acknowledgements.md'
diff --git a/playbooks/create_agents_and_wait_for_install_complete.yaml b/playbooks/create_agents_and_wait_for_install_complete.yaml
index e855318c..1601fdd1 100644
--- a/playbooks/create_agents_and_wait_for_install_complete.yaml
+++ b/playbooks/create_agents_and_wait_for_install_complete.yaml
@@ -1,29 +1,36 @@
- name: Create Agents
-  hosts: kvm_host_hypershift
+  hosts: kvm_host_hcp
  become: true
  roles:
-    - boot_agents_hypershift
+    - boot_agents_hcp

- name: Boot zvm nodes
-  hosts: bastion_hypershift
+  hosts: bastion_hcp
  tasks:
+    - name: Getting packages for zvm
+      set_fact:
+        env:
+          pkgs:
+            zvm: "{{ hcp.pkgs.zvm }}"
+      when: hcp.compute_node_type | lower == 'zvm'
+
    - name: Install tessia baselib
      import_role:
        name: install_tessia_baselib
-      when: hypershift.compute_node_type | lower == 'zvm'
+      when: hcp.compute_node_type | lower == 'zvm'

    - name: Start zvm nodes
-      include_tasks: ../roles/boot_zvm_nodes_hypershift/tasks/main.yaml
-      loop: "{{ range(hypershift.agents_parms.agents_count | int) | list }}"
-      when: hypershift.compute_node_type | lower == 'zvm'
+      include_tasks: ../roles/boot_zvm_nodes_hcp/tasks/main.yaml
+      loop: "{{ range(hcp.data_plane.compute_count | int) | list }}"
+      when: hcp.compute_node_type | lower == 'zvm'

- name: Scale Nodepool & Configure Haproxy on bastion for hosted workers
-  hosts: bastion_hypershift
+  hosts: bastion_hcp
  roles:
-    - scale_nodepool_and_wait_for_compute_hypershift
-    - add_hc_workers_to_haproxy_hypershift
+    - scale_nodepool_and_wait_for_compute_hcp
+    - add_hc_workers_to_haproxy_hcp

- name: Wait for all Console operators to come up
-  hosts: bastion_hypershift
+  hosts: bastion_hcp
  roles:
-    - wait_for_hc_to_complete_hypershift
+    - wait_for_hc_to_complete_hcp
diff --git a/playbooks/create_hosted_cluster.yaml b/playbooks/create_hosted_cluster.yaml
index 36a05527..f18d7cbd 100644
--- a/playbooks/create_hosted_cluster.yaml
+++ b/playbooks/create_hosted_cluster.yaml
@@ -1,87 +1,91 @@
---
- name: Install Prerequisites on kvm_host
-  hosts: kvm_host_hypershift
+  hosts: kvm_host_hcp
  become: true
  vars_files:
  - "{{playbook_dir}}/secrets.yaml"
  tasks:
  - name: Setting host
    set_fact:
-      host: 'kvm_host_hypershift'
-    when: hypershift.compute_node_type | lower != 'zvm'
+      host: 'kvm_host_hcp'
+    when: hcp.compute_node_type | lower != 'zvm'

  - name: Install Prereqs on host
    import_role:
-      name: install_prerequisites_host_hypershift
-    when: hypershift.compute_node_type | lower != 'zvm'
+      name: install_prerequisites_host_hcp
+    when: hcp.compute_node_type | lower != 'zvm'

- name: Create macvtap network
-  hosts: kvm_host_hypershift
+  hosts: kvm_host_hcp
  become: true
  tasks:
  - name: Setting interface name
    set_fact:
      networking:
-        device1: "{{ hypershift.networking_device }}"
-    when: hypershift.compute_node_type | lower != 'zvm'
+        device1: "{{ hcp.bastion_params.networking_device }}"
+      env:
+        vnet_name: macvtap
+    when: hcp.compute_node_type | lower != 'zvm'

  - name: Creating macvtap network
    import_role:
      name: macvtap
-    when: hypershift.compute_node_type | lower != 'zvm'
+    when: hcp.compute_node_type | lower != 'zvm'

-- name: Create bastion for hypershift
-  hosts: kvm_host_hypershift
+- name: Create bastion for hcp
+ hosts: kvm_host_hcp become: true vars_files: - "{{playbook_dir}}/secrets.yaml" tasks: - name: Creating Bastion include_role: - name: create_bastion_hypershift + name: create_bastion_hcp when: - - hypershift.create_bastion == true - - hypershift.compute_node_type | lower != 'zvm' + - hcp.bastion_params.create == true + - hcp.compute_node_type | lower != 'zvm' - name: Configuring Bastion - hosts: bastion_hypershift + hosts: bastion_hcp become: true vars_files: - "{{playbook_dir}}/secrets.yaml" tasks: - name: Setting host set_fact: - host: 'bastion_hypershift' + host: 'bastion_hcp' + env: + ansible_key_name: "{{ hcp.ansible_key_name }}" - name: Install Prereqs import_role: - name: install_prerequisites_host_hypershift + name: install_prerequisites_host_hcp - name: Configure Bastion import_role: - name: install_prereqs_bastion_hypershift + name: install_prereqs_bastion_hcp - name: Add ansible SSH key to ssh-agent import_role: name: ssh_agent - name: Create AgentServiceConfig Hosted Control Plane and InfraEnv - hosts: bastion_hypershift + hosts: bastion_hcp vars_files: - "{{playbook_dir}}/secrets.yaml" roles: - install_mce_operator - - create_agentserviceconfig_hypershift - - create_hcp_InfraEnv_hypershift + - create_agentserviceconfig_hcp + - create_hcp_InfraEnv - name: Download Required images for booting Agents - hosts: "{{ 'kvm_host_hypershift' if 'kvm_host_hypershift' in groups['all'] else 'bastion_hypershift' }}" + hosts: "{{ 'kvm_host_hcp' if 'kvm_host_hcp' in groups['all'] else 'bastion_hcp' }}" become: true roles: - - setup_for_agents_hypershift + - setup_for_agents_hcp - name: Configure httpd on bastion for hosting rootfs - hosts: bastion_hypershift + hosts: bastion_hcp roles: - - download_rootfs_hypershift + - download_rootfs_hcp diff --git a/playbooks/destroy_cluster_hypershift.yaml b/playbooks/destroy_cluster_hcp.yaml similarity index 53% rename from playbooks/destroy_cluster_hypershift.yaml rename to playbooks/destroy_cluster_hcp.yaml index 01fd03a6..e8a0095b 100644 --- a/playbooks/destroy_cluster_hypershift.yaml +++ b/playbooks/destroy_cluster_hcp.yaml @@ -1,12 +1,12 @@ - name: Delete Cluster Resources - hosts: bastion_hypershift + hosts: bastion_hcp vars_files: - "{{playbook_dir}}/secrets.yaml" roles: - - delete_resources_bastion_hypershift + - delete_resources_bastion_hcp - name: Delete Resources on kvm host - hosts: kvm_host_hypershift + hosts: kvm_host_hcp become: true roles: - - delete_resources_kvm_host_hypershift + - delete_resources_kvm_host_hcp diff --git a/playbooks/hypershift.yaml b/playbooks/hcp.yaml similarity index 100% rename from playbooks/hypershift.yaml rename to playbooks/hcp.yaml diff --git a/playbooks/setup_for_hypershift.yaml b/playbooks/setup_for_hcp.yaml similarity index 53% rename from playbooks/setup_for_hypershift.yaml rename to playbooks/setup_for_hcp.yaml index 2e68fd99..226a5656 100644 --- a/playbooks/setup_for_hypershift.yaml +++ b/playbooks/setup_for_hcp.yaml @@ -4,5 +4,6 @@ hosts: localhost vars_files: - "{{playbook_dir}}/secrets.yaml" + - "{{playbook_dir}}/../inventories/default/group_vars/hcp.yaml" roles: - - create_inventory_setup_hypershift + - create_inventory_setup_hcp diff --git a/roles/add_hc_workers_to_haproxy_hypershift/tasks/main.yaml b/roles/add_hc_workers_to_haproxy_hcp/tasks/main.yaml similarity index 60% rename from roles/add_hc_workers_to_haproxy_hypershift/tasks/main.yaml rename to roles/add_hc_workers_to_haproxy_hcp/tasks/main.yaml index 08a5416f..95f5425c 100644 --- a/roles/add_hc_workers_to_haproxy_hypershift/tasks/main.yaml 
+++ b/roles/add_hc_workers_to_haproxy_hcp/tasks/main.yaml @@ -8,17 +8,17 @@ blockinfile: path: /etc/haproxy/haproxy.cfg block: | - listen {{ hypershift.hcp.hosted_cluster_name }}-console + listen {{ hcp.control_plane.hosted_cluster_name }}-console mode tcp - bind {{ hypershift.bastion_hypershift }}:443 - bind {{ hypershift.bastion_hypershift }}:80 + bind {{ hcp.bastion_params.ip }}:443 + bind {{ hcp.bastion_params.ip }}:80 marker: "# console" - name: Add Hosted Cluster Worker IPs to Haproxy lineinfile: path: /etc/haproxy/haproxy.cfg - line: " server {{ hypershift.hcp.hosted_cluster_name }}-worker-{{item}} {{ hosted_workers.stdout_lines[item]}}" - loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}" + line: " server {{ hcp.control_plane.hosted_cluster_name }}-worker-{{item}} {{ hosted_workers.stdout_lines[item]}}" + loop: "{{ range(hcp.data_plane.compute_count|int) | list }}" - name: restart haproxy service: diff --git a/roles/boot_agents_hcp/tasks/main.yaml b/roles/boot_agents_hcp/tasks/main.yaml new file mode 100644 index 00000000..cae8f3be --- /dev/null +++ b/roles/boot_agents_hcp/tasks/main.yaml @@ -0,0 +1,38 @@ +--- +- name: Create qemu image for agents + command: "qemu-img create -f qcow2 {{ hcp.data_plane.kvm.storage.qcow.pool_path }}{{ hcp.control_plane.hosted_cluster_name }}-agent{{ item }}.qcow2 {{ hcp.data_plane.kvm.storage.qcow.disk_size }}" + loop: "{{ range(hcp.data_plane.compute_count|int) | list }}" + when: hcp.data_plane.kvm.storage.type != 'dasd' + +- name: Boot Agents + shell: | + {% if hcp.data_plane.kvm.ip_params.static_ip.enabled == true %} + mac_address=$(oc get NmStateConfig static-ip-nmstate-config-{{ hcp.control_plane.hosted_cluster_name }}-{{ item }} -n {{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }} -o json | jq -r '.spec.interfaces[] | .macAddress') + {% else %} + mac_address="{{ hcp.data_plane.kvm.ip_params.mac[item] }}" + {% endif %} + {% if hcp.data_plane.kvm.storage.type != "dasd" %} + disk_param="{{ hcp.data_plane.kvm.storage.qcow.pool_path }}{{ hcp.control_plane.hosted_cluster_name }}-agent{{ item }}.qcow2" + {% else %} + disk_param="{{ hcp.data_plane.kvm.storage.dasd[item] }}" + {% endif %} + + virt-install \ + --name "{{ hcp.control_plane.hosted_cluster_name }}-agent-{{ item }}" \ + --osinfo detect=on,require=off \ + --autostart \ + --ram="{{ hcp.data_plane.memory }}" \ + --cpu host \ + --vcpus="{{ hcp.data_plane.vcpus }}" \ + --location "/var/lib/libvirt/images/pxeboot/,kernel=kernel.img,initrd=initrd.img" \ + --disk $disk_param \ + --network network:{{ hcp.bastion_params.network_name }},mac=$mac_address \ + --graphics none \ + --noautoconsole \ + --wait=-1 \ + --extra-args "rd.neednet=1 nameserver={{ hcp.data_plane.nameserver }}" \ + --extra-args "coreos.live.rootfs_url=http://{{ hcp.bastion_params.ip }}:8080/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal" \ + --extra-args "console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" + async: 3600 + poll: 0 + loop: "{{ range(hcp.data_plane.compute_count|int) | list }}" \ No newline at end of file diff --git a/roles/boot_agents_hypershift/tasks/main.yaml b/roles/boot_agents_hypershift/tasks/main.yaml deleted file mode 100644 index 35da5b4e..00000000 --- a/roles/boot_agents_hypershift/tasks/main.yaml +++ /dev/null @@ -1,32 +0,0 @@ ---- -- name: Create qemu image for agents - command: "qemu-img create -f qcow2 {{ hypershift.agents_parms.storage.pool_path 
}}{{ hypershift.hcp.hosted_cluster_name }}-agent{{ item }}.qcow2 {{ hypershift.agents_parms.disk_size }}"
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
-
-- name: Boot Agents
-  shell: |
-    {% if hypershift.agents_parms.static_ip_parms.static_ip == true %}
-    mac_address=$(oc get NmStateConfig static-ip-nmstate-config-{{ hypershift.hcp.hosted_cluster_name }}-{{ item }} -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} -o json | jq -r '.spec.interfaces[] | .macAddress')
-    {% else %}
-    mac_address="{{ hypershift.agents_parms.agent_mac_addr[item] }}"
-    {% endif %}
-
-    virt-install \
-    --name "{{ hypershift.hcp.hosted_cluster_name }}-agent-{{ item }}" \
-    --osinfo detect=on,require=off \
-    --autostart \
-    --ram="{{ hypershift.agents_parms.ram }}" \
-    --cpu host \
-    --vcpus="{{ hypershift.agents_parms.vcpus }}" \
-    --location "/var/lib/libvirt/images/pxeboot/,kernel=kernel.img,initrd=initrd.img" \
-    --disk {{ hypershift.agents_parms.storage.pool_path }}{{ hypershift.hcp.hosted_cluster_name }}-agent{{ item }}.qcow2 \
-    --network network:{{ env.vnet_name }},mac=$mac_address \
-    --graphics none \
-    --noautoconsole \
-    --wait=-1 \
-    --extra-args "rd.neednet=1 nameserver={{ hypershift.agents_parms.nameserver }}" \
-    --extra-args "coreos.live.rootfs_url=http://{{ hypershift.bastion_hypershift }}:8080/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal" \
-    --extra-args "console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8"
-  async: 3600
-  poll: 0
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
\ No newline at end of file
diff --git a/roles/boot_zvm_nodes_hcp/tasks/main.yaml b/roles/boot_zvm_nodes_hcp/tasks/main.yaml
new file mode 100644
index 00000000..3e808c62
--- /dev/null
+++ b/roles/boot_zvm_nodes_hcp/tasks/main.yaml
@@ -0,0 +1,48 @@
+---
+- name: Creating agents
+  block:
+    - name: Getting script for booting
+      template:
+        src: ../templates/boot_nodes.py
+        dest: /root/ansible_workdir/boot_nodes.py
+
+    - name: Debug
+      debug:
+        msg: "Booting agent-{{ item }}"
+
+    - name: Booting zvm node
+      shell: |
+        python /root/ansible_workdir/boot_nodes.py \
+        --zvmname "{{ hcp.data_plane.zvm.nodes[item].name }}" \
+        --zvmhost "{{ hcp.data_plane.zvm.nodes[item].host }}" \
+        --zvmuser "{{ hcp.data_plane.zvm.nodes[item].user }}" \
+        --zvmpass "{{ hcp.data_plane.zvm.nodes[item].password }}" \
+        --cpu "{{ hcp.data_plane.vcpus }}" \
+        --memory "{{ hcp.data_plane.memory }}" \
+        --kernel 'file:///var/lib/libvirt/images/pxeboot/kernel.img' \
+        --initrd 'file:///var/lib/libvirt/images/pxeboot/initrd.img' \
+        --cmdline "$(cat /root/ansible_workdir/agent-{{ item }}.parm)" \
+        --network "{{ hcp.data_plane.zvm.network_mode }}"
+
+    - name: Attaching dasd disk
+      shell: vmcp attach {{ hcp.data_plane.zvm.nodes[item].dasd.disk_id }} to {{ hcp.data_plane.zvm.nodes[item].name }}
+      when: hcp.data_plane.zvm.disk_type | lower == 'dasd'
+
+    - name: Attaching fcp disks
+      shell: vmcp attach {{ hcp.data_plane.zvm.nodes[item].lun[0].paths[0].fcp.split('.')[-1] }} to {{ hcp.data_plane.zvm.nodes[item].name }}
+      when: hcp.data_plane.zvm.disk_type | lower == 'fcp'
+
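+    # At this point each booted node downloads the live image and registers
+    # with the InfraEnv, showing up as an unapproved Agent resource in the
+    # hosted control plane namespace. The tasks below wait for the new agent
+    # to appear, then approve it and set its hostname.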
+    - name: Wait for the agent to come up
+      shell: oc get agents -n "{{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}" --no-headers -o custom-columns=NAME:.metadata.name,APPROVED:.spec.approved | awk '$2 == "false"' | wc -l
+      register: agent_count
+      until: agent_count.stdout | int == 1
+      retries: 40
+      delay: 10
+
+    - name: Get the name of agent
+      shell: oc get agents -n {{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }} --no-headers -o custom-columns=NAME:.metadata.name,APPROVED:.spec.approved | awk '$2 == "false"'
+      register: agent_name
+
+    - name: Approve agents
+      shell: oc -n {{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }} patch agent {{ agent_name.stdout.split(' ')[0] }} -p '{"spec":{"approved":true,"hostname":"compute-{{ item }}.{{ hcp.control_plane.hosted_cluster_name }}.{{ hcp.control_plane.basedomain }}"}}' --type merge
+
diff --git a/roles/boot_zvm_nodes_hypershift/templates/boot_nodes.py b/roles/boot_zvm_nodes_hcp/templates/boot_nodes.py
similarity index 85%
rename from roles/boot_zvm_nodes_hypershift/templates/boot_nodes.py
rename to roles/boot_zvm_nodes_hcp/templates/boot_nodes.py
index 0b4858ec..9d14eb65 100644
--- a/roles/boot_zvm_nodes_hypershift/templates/boot_nodes.py
+++ b/roles/boot_zvm_nodes_hcp/templates/boot_nodes.py
@@ -23,10 +23,10 @@
interfaces=[]
if args.network.lower() == 'osa' or args.network.lower() == 'hipersockets':
-    interfaces=[{ "type": "osa", "id": "{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.subchannels.split(',') | map('regex_replace', '0.0.', '') | join(',') }}"}]
+    interfaces=[{ "type": "osa", "id": "{{ hcp.data_plane.zvm.nodes[item].interface.subchannels.split(',') | map('regex_replace', '0.0.', '') | join(',') }}"}]
elif args.network.lower() == 'roce':
-    interfaces=[{ "type": "pci", "id": "{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.ifname }}"}]
+    interfaces=[{ "type": "pci", "id": "{{ hcp.data_plane.zvm.nodes[item].interface.ifname }}"}]

guest_parameters = {
    "boot_method": "network",
diff --git a/roles/boot_zvm_nodes_hypershift/tasks/main.yaml b/roles/boot_zvm_nodes_hypershift/tasks/main.yaml
deleted file mode 100644
index d4a4e21c..00000000
--- a/roles/boot_zvm_nodes_hypershift/tasks/main.yaml
+++ /dev/null
@@ -1,48 +0,0 @@
----
-- name: Creating agents
-  block:
-    - name: Getting script for booting
-      template:
-        src: ../templates/boot_nodes.py
-        dest: /root/ansible_workdir/boot_nodes.py
-
-    - name: Debug
-      debug:
-        msg: "Booting agent-{{ item }}"
-
-    - name: Booting zvm node
-      shell: |
-        python /root/ansible_workdir/boot_nodes.py \
-        --zvmname "{{ hypershift.agents_parms.zvm_parameters.nodes[item].name }}" \
-        --zvmhost "{{ hypershift.agents_parms.zvm_parameters.nodes[item].host }}" \
-        --zvmuser "{{ hypershift.agents_parms.zvm_parameters.nodes[item].user }}" \
-        --zvmpass "{{ hypershift.agents_parms.zvm_parameters.nodes[item].password }}" \
-        --cpu "{{ hypershift.agents_parms.zvm_parameters.vcpus }}" \
-        --memory "{{ hypershift.agents_parms.zvm_parameters.memory }}" \
-        --kernel 'file:///var/lib/libvirt/images/pxeboot/kernel.img' \
-        --initrd 'file:///var/lib/libvirt/images/pxeboot/initrd.img' \
-        --cmdline "$(cat /root/ansible_workdir/agent-{{ item }}.parm)" \
-        --network "{{ hypershift.agents_parms.zvm_parameters.network_mode }}"
-
-    - name: Attaching dasd disk
-      shell: vmcp attach {{ hypershift.agents_parms.zvm_parameters.nodes[item].dasd.disk_id }} to {{ hypershift.agents_parms.zvm_parameters.nodes[item].name }}
-      when: "{{ hypershift.agents_parms.zvm_parameters.disk_type | lower == 'dasd' }}"
-
-    - name: Attaching fcp disks
-      shell: vmcp attach {{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].fcp.split('.')[-1] }} to {{
hypershift.agents_parms.zvm_parameters.nodes[item].name }} - when: "{{ hypershift.agents_parms.zvm_parameters.disk_type | lower == 'fcp' }}" - - - name: Wait for the agent to come up - shell: oc get agents -n "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}" --no-headers -o custom-columns=NAME:.metadata.name,APPROVED:.spec.approved | awk '$2 == "false"' | wc -l - register: agent_count - until: agent_count.stdout | int == 1 - retries: 40 - delay: 10 - - - name: Get the name of agent - shell: oc get agents -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} --no-headers -o custom-columns=NAME:.metadata.name,APPROVED:.spec.approved | awk '$2 == "false"' - register: agent_name - - - name: Approve agents - shell: oc -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} patch agent {{ agent_name.stdout.split(' ')[0] }} -p '{"spec":{"approved":true,"hostname":"compute-{{ item }}.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}"}}' --type merge - diff --git a/roles/create_agentserviceconfig_hypershift/tasks/main.yaml b/roles/create_agentserviceconfig_hcp/tasks/main.yaml similarity index 97% rename from roles/create_agentserviceconfig_hypershift/tasks/main.yaml rename to roles/create_agentserviceconfig_hcp/tasks/main.yaml index cb85d2f1..c98ca98f 100644 --- a/roles/create_agentserviceconfig_hypershift/tasks/main.yaml +++ b/roles/create_agentserviceconfig_hcp/tasks/main.yaml @@ -10,7 +10,7 @@ - name: Downloading ISO for fetching RHCOS version get_url: - url: "{{ hypershift.asc.iso_url }}" + url: "{{ hcp.asc.iso_url }}" dest: /root/ansible_workdir/s390x.iso - name: Mounting ISO diff --git a/roles/create_agentserviceconfig_hypershift/templates/agent_service_config.yaml.j2 b/roles/create_agentserviceconfig_hcp/templates/agent_service_config.yaml.j2 similarity index 60% rename from roles/create_agentserviceconfig_hypershift/templates/agent_service_config.yaml.j2 rename to roles/create_agentserviceconfig_hcp/templates/agent_service_config.yaml.j2 index 482e9864..4990271f 100644 --- a/roles/create_agentserviceconfig_hypershift/templates/agent_service_config.yaml.j2 +++ b/roles/create_agentserviceconfig_hcp/templates/agent_service_config.yaml.j2 @@ -10,15 +10,15 @@ spec: - ReadWriteOnce resources: requests: - storage: "{{ hypershift.asc.db_volume_size}}" + storage: "{{ hcp.asc.db_volume_size}}" filesystemStorage: accessModes: - ReadWriteOnce resources: requests: - storage: "{{ hypershift.asc.fs_volume_size }}" + storage: "{{ hcp.asc.fs_volume_size }}" osImages: - - openshiftVersion: "{{ hypershift.asc.ocp_version }}" + - openshiftVersion: "{{ hcp.asc.ocp_version }}" version: "{{ ocp_release_version.stdout_lines[0] }}" - url: "{{ hypershift.asc.iso_url }}" - cpuArchitecture: "{{ hypershift.hcp.arch }}" + url: "{{ hcp.asc.iso_url }}" + cpuArchitecture: "{{ hcp.control_plane.arch }}" diff --git a/roles/create_agentserviceconfig_hypershift/templates/mirror-config.yml.j2 b/roles/create_agentserviceconfig_hcp/templates/mirror-config.yml.j2 similarity index 86% rename from roles/create_agentserviceconfig_hypershift/templates/mirror-config.yml.j2 rename to roles/create_agentserviceconfig_hcp/templates/mirror-config.yml.j2 index 2e64ac6f..e51df5b9 100644 --- a/roles/create_agentserviceconfig_hypershift/templates/mirror-config.yml.j2 +++ b/roles/create_agentserviceconfig_hcp/templates/mirror-config.yml.j2 @@ -2,7 +2,7 @@ apiVersion: v1 kind: ConfigMap metadata: name: mirror-config - namespace: "{{ 
hypershift.asc.mce_namespace }}" # please verify that this namespace is where MCE is installed. + namespace: "{{ hcp.asc.mce_namespace }}" labels: app: assisted-service data: diff --git a/roles/create_bastion_hcp/tasks/main.yaml b/roles/create_bastion_hcp/tasks/main.yaml new file mode 100644 index 00000000..d269ed18 --- /dev/null +++ b/roles/create_bastion_hcp/tasks/main.yaml @@ -0,0 +1,97 @@ +--- +- name: Get ssh key of local host + ansible.builtin.shell: cat {{ lookup('env', 'HOME') }}/.ssh/{{ hcp.ansible_key_name }}.pub + register: ssh_output + delegate_to: localhost + +- name: Load ssh_key into a variable + set_fact: + ssh_key: "{{ ssh_output.stdout_lines[0] }}" + +- name: Create Directory for images and bastion.ks + file: + path: /home/libvirt/images/ + recurse: true + state: directory + +- name: Setting vars for bstion.ks file creation + set_fact: + env: + language: "{{ hcp.bastion_params.language }}" + timezone: "{{ hcp.bastion_params.timezone }}" + keyboard: "{{ hcp.bastion_params.keyboard }}" + use_ipv6: False + install_config: + control: + architecture: "s390x" + cluster: + networking: + base_domain: "{{ hcp.bastion_params.base_domain }}" + bastion: + resources: + swap: 4096 + networking: + interface: "{{ hcp.bastion_params.interface }}" + ip: "{{ hcp.bastion_params.ip }}" + gateway: "{{ hcp.bastion_params.gateway }}" + hostname: "{{ hcp.bastion_params.hostname }}" + subnetmask: "{{ hcp.bastion_params.subnet_mask }}" + nameserver1: "{{ hcp.bastion_params.nameserver }}" + file_server: + ip: "{{ hcp.bastion_params.file_server.ip }}" + protocol: "{{ hcp.bastion_params.file_server.protocol }}" + iso_mount_dir: "{{ hcp.bastion_params.file_server.iso_mount_dir }}" + +- name: Create bastion.ks file + template: + src: ../create_bastion/templates/bastion-ks.cfg.j2 + dest: /home/libvirt/bastion.ks + +- name: Adding root password for bastion to bastion.ks + lineinfile: + path: /home/libvirt/bastion.ks + insertafter: '^lang.*' + line: "rootpw {{ bastion_root_pw }}" + +- name: Adding ssh key to bastion + blockinfile: + path: /home/libvirt/bastion.ks + insertafter: '^echo.*' + block: | + mkdir -p /root/.ssh + echo "{{ ssh_key }}" > /root/.ssh/authorized_keys + chmod 0700 /root/.ssh + chmod 0600 /root/.ssh/authorized_keys + +- name: Create qemu image for bastion + command: qemu-img create -f qcow2 {{ hcp.data_plane.kvm.storage.qcow.pool_path }}{{ hcp.control_plane.hosted_cluster_name }}-bastion.qcow2 100G + when: hcp.data_plane.kvm.storage.type != 'dasd' + +- name: Create bastion + shell: | + {% if hcp.data_plane.kvm.storage.type != "dasd" %} + disk_param="{{ hcp.data_plane.kvm.storage.qcow.pool_path }}{{ hcp.control_plane.hosted_cluster_name }}-bastion.qcow2,format=qcow2,bus=virtio,cache=none" + {% else %} + disk_param="{{ hcp.bastion_params.disk }}" + {% endif %} + + virt-install \ + --name {{ hcp.control_plane.hosted_cluster_name }}-bastion \ + --memory 4096 \ + --vcpus sockets=1,cores=4,threads=1 \ + --disk $disk_param \ + --os-variant "rhel{{hcp.bastion_params.os_variant}}" \ + --network network:{{ hcp.bastion_params.network_name }} \ + --location '{{ env.file_server.protocol }}://{{ env.file_server.user + ':' + env.file_server.pass + '@' if env.file_server.protocol == 'ftp' else '' }}{{ env.file_server.ip }}{{ ':' + env.file_server.port if env.file_server.port | default('') | length > 0 else '' }}/{{ env.file_server.iso_mount_dir }}/' \ + --rng=/dev/urandom --initrd-inject /home/libvirt/bastion.ks \ + --extra-args="inst.ks=file:/bastion.ks ip={{ hcp.bastion_params.ip }}::{{ 
hcp.bastion_params.gateway }}:{{hcp.bastion_params.subnet_mask}}:{{ hcp.bastion_params.hostname }}.{{ hcp.bastion_params.base_domain }}:{{ hcp.bastion_params.interface }}:none console=ttysclp0 nameserver={{ hcp.bastion_params.nameserver }}" \
+    --noautoconsole \
+    --wait -1
+
+- name: Waiting 1 minute for automated bastion installation and configuration to complete
+  ansible.builtin.pause:
+    minutes: 1
+
+- name: Add route to bastion from kvm_host
+  command: "ip route add {{ hcp.bastion_params.ip }} via {{ hcp.bastion_params.gateway }}"
+  ignore_errors: yes
diff --git a/roles/create_bastion_hypershift/tasks/main.yaml b/roles/create_bastion_hypershift/tasks/main.yaml
deleted file mode 100644
index f19186ca..00000000
--- a/roles/create_bastion_hypershift/tasks/main.yaml
+++ /dev/null
@@ -1,74 +0,0 @@
----
-- name: Get ssh key of local host
-  ansible.builtin.shell: cat {{ lookup('env', 'HOME') }}/.ssh/{{ env.ansible_key_name }}.pub
-  register: ssh_output
-  delegate_to: localhost
-
-- name: Load ssh_key into a variable
-  set_fact:
-    ssh_key: "{{ ssh_output.stdout_lines[0] }}"
-
-- name: Create Directory for images and bastion.ks
-  file:
-    path: /home/libvirt/images/
-    recurse: true
-    state: directory
-
-- name: Removing network configurations
-  lineinfile:
-    path: ../create_bastion/templates/bastion-ks.cfg.j2
-    state: absent
-    regexp: '^network*'
-
-- name: Create bastion.ks file
-  template:
-    src: ../create_bastion/templates/bastion-ks.cfg.j2
-    dest: /home/libvirt/bastion.ks
-
-- name: Removing network configurations
-  lineinfile:
-    path: /home/libvirt/bastion.ks
-    state: absent
-    regexp: '^network*'
-
-- name: Adding root password for bastion to bastion.ks
-  lineinfile:
-    path: /home/libvirt/bastion.ks
-    insertafter: '^lang.*'
-    line: "rootpw {{ bastion_root_pw }}"
-
-- name: Adding ssh key to bastion
-  blockinfile:
-    path: /home/libvirt/bastion.ks
-    insertafter: '^echo.*'
-    block: |
-      mkdir -p /root/.ssh
-      echo "{{ ssh_key }}" > /root/.ssh/authorized_keys
-      chmod 0700 /root/.ssh
-      chmod 0600 /root/.ssh/authorized_keys
-
-- name: Create qemu image for bastion
-  command: qemu-img create -f qcow2 {{ hypershift.agents_parms.storage.pool_path }}{{ hypershift.hcp.hosted_cluster_name }}-bastion.qcow2 100G
-
-- name: Create bastion
-  shell: |
-    virt-install \
-    --name {{ hypershift.hcp.hosted_cluster_name }}-bastion \
-    --memory 4096 \
-    --vcpus sockets=1,cores=4,threads=1 \
-    --disk {{ hypershift.agents_parms.storage.pool_path }}{{ hypershift.hcp.hosted_cluster_name }}-bastion.qcow2,format=qcow2,bus=virtio,cache=none \
-    --os-variant "rhel{{hypershift.bastion_parms.os_variant}}" \
-    --network network:{{ env.vnet_name }} \
-    --location '{{ env.file_server.protocol }}://{{ env.file_server.user + ':' + env.file_server.pass + '@' if env.file_server.protocol == 'ftp' else '' }}{{ env.file_server.ip }}{{ ':' + env.file_server.port if env.file_server.port | default('') | length > 0 else '' }}/{{ env.file_server.iso_mount_dir }}/' \
-    --rng=/dev/urandom --initrd-inject /home/libvirt/bastion.ks \
-    --extra-args="ks=file:/bastion.ks ip={{ hypershift.bastion_hypershift }}::{{hypershift.bastion_parms.gateway}}:{{hypershift.bastion_parms.subnet_mask}}:{{ hypershift.bastion_parms.hostname }}.{{ hypershift.bastion_parms.base_domain }}:{{ hypershift.bastion_parms.interface }}:none console=ttysclp0 nameserver={{hypershift.bastion_parms.nameserver}}" \
-    --noautoconsole \
-    --wait -1
-
-- name: Waiting 1 minute for automated bastion installation and configuration to complete
-  ansible.builtin.pause:
-    minutes: 1
-
-- name: Add route to bastion from kvm_host
-  command: "ip route add {{ hypershift.bastion_hypershift }} via {{ hypershift.gateway }}"
-  ignore_errors: yes
diff --git a/roles/create_hcp_InfraEnv_hypershift/tasks/main.yaml b/roles/create_hcp_InfraEnv/tasks/main.yaml
similarity index 64%
rename from roles/create_hcp_InfraEnv_hypershift/tasks/main.yaml
rename to roles/create_hcp_InfraEnv/tasks/main.yaml
index 95d699fd..dd67df5e 100644
--- a/roles/create_hcp_InfraEnv_hypershift/tasks/main.yaml
+++ b/roles/create_hcp_InfraEnv/tasks/main.yaml
@@ -1,7 +1,7 @@
 ---
 - name: Getting Hosted Control Plane Namespace
   set_fact:
-    hosted_control_plane_namespace: "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}"
+    hosted_control_plane_namespace: "{{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}"
 
 - name: Check if Hosted Control Plane Namespace exists
   k8s_info:
@@ -20,7 +20,7 @@
   when: namespace_check.resources | length == 0
 
 - name: Get ssh key
-  ansible.builtin.shell: cat ~/.ssh/{{ env.ansible_key_name }}.pub
+  ansible.builtin.shell: cat ~/.ssh/{{ hcp.ansible_key_name }}.pub
   register: ssh_output
 
 - name: Load ssh_key into a variable
@@ -31,14 +31,14 @@
   kubernetes.core.k8s_info:
     api_version: v1
     kind: Pod
-    namespace: "{{ hypershift.asc.mce_namespace }}"
+    namespace: "{{ hcp.asc.mce_namespace }}"
    label_selectors:
      - app= hcp-cli-download
   register: hcp_pod_name
 
 - name: Get hcp.tar.gz file from pod
   kubernetes.core.k8s_cp:
-    namespace: "{{ hypershift.asc.mce_namespace }}"
+    namespace: "{{ hcp.asc.mce_namespace }}"
     pod: "{{ hcp_pod_name.resources[0].metadata.name }}"
     remote_path: "/opt/app-root/src/linux/s390x/"
     local_path: "/root/ansible_workdir"
@@ -58,14 +58,14 @@
 - name: Create a Hosted Cluster
   command: >
     hcp create cluster agent
-    --name={{ hypershift.hcp.hosted_cluster_name }}
-    --pull-secret={{ hypershift.hcp.pull_secret_file }}
+    --name={{ hcp.control_plane.hosted_cluster_name }}
+    --pull-secret=/root/ansible_workdir/auth_file
     --agent-namespace={{ hosted_control_plane_namespace }}
-    --namespace={{ hypershift.hcp.clusters_namespace }}
-    --base-domain={{ hypershift.hcp.basedomain }}
-    --api-server-address=api.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}
-    --ssh-key ~/.ssh/{{ env.ansible_key_name }}.pub
-    {% if hypershift.hcp.high_availabiliy == false %}
+    --namespace={{ hcp.control_plane.clusters_namespace }}
+    --base-domain={{ hcp.control_plane.basedomain }}
+    --api-server-address=api.{{ hcp.control_plane.hosted_cluster_name }}.{{ hcp.control_plane.basedomain }}
+    --ssh-key ~/.ssh/{{ hcp.ansible_key_name }}.pub
+    {% if hcp.control_plane.high_availabiliy == false %}
     --control-plane-availability-policy "SingleReplica"
     {% endif %}
     --infra-availability-policy "SingleReplica"
@@ -74,14 +74,14 @@
     {% if release_image is defined and release_image != '' %}
     --release-image={{ release_image }}
    {% else %}
-    --release-image=quay.io/openshift-release-dev/ocp-release:{{ hypershift.hcp.ocp_release }}
+    --release-image=quay.io/openshift-release-dev/ocp-release:{{ hcp.control_plane.ocp_release_image }}
     {% endif %}
-    {% if hypershift.hcp.additional_flags is defined and hypershift.hcp.additional_flags != '' %}
-    {{ hypershift.hcp.additional_flags }}
+    {% if hcp.control_plane.additional_flags is defined and hcp.control_plane.additional_flags != '' %}
+    {{ hcp.control_plane.additional_flags }}
     {% endif %}
 
 - name: Waiting for Hosted Control Plane to be available
-  command: oc wait --timeout=30m --for=condition=Available --namespace={{ hypershift.hcp.clusters_namespace }} hostedcluster/{{ hypershift.hcp.hosted_cluster_name }}
+  command: oc wait --timeout=30m --for=condition=Available --namespace={{ hcp.control_plane.clusters_namespace }} hostedcluster/{{ hcp.control_plane.hosted_cluster_name }}
 
 - name: Wait for pods to come up in Hosted Cluster Namespace
   shell: oc get pods -n {{ hosted_control_plane_namespace }} | wc -l
@@ -109,39 +109,39 @@
   set_fact:
     agent_mac_addr: []
   when:
-    - hypershift.agents_parms.static_ip_parms.static_ip == true
-    - hypershift.compute_node_type | lower != 'zvm'
+    - hcp.data_plane.kvm.ip_params.static_ip.enabled == true
+    - hcp.compute_node_type | lower != 'zvm'
 
 - name: Getting mac addresss for agents
   set_fact:
-    agent_mac_addr: "{{ hypershift.agents_parms.agent_mac_addr }}"
+    agent_mac_addr: "{{ hcp.data_plane.kvm.ip_params.mac }}"
   when:
-    - ( hypershift.agents_parms.static_ip_parms.static_ip == true ) and ( hypershift.agents_parms.agent_mac_addr != None )
-    - hypershift.compute_node_type | lower != 'zvm'
+    - ( hcp.data_plane.kvm.ip_params.static_ip.enabled == true ) and ( hcp.data_plane.kvm.ip_params.mac != None )
+    - hcp.compute_node_type | lower != 'zvm'
 
 - name: Generate mac addresses for agents
   set_fact:
     agent_mac_addr: "{{ agent_mac_addr + ['52:54:00' | community.general.random_mac] }}"
   when:
-    - ( hypershift.agents_parms.static_ip_parms.static_ip == true ) and ( hypershift.agents_parms.agent_mac_addr == None )
-    - hypershift.compute_node_type | lower != 'zvm'
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
+    - ( hcp.data_plane.kvm.ip_params.static_ip.enabled == true ) and ( hcp.data_plane.kvm.ip_params.mac == None )
+    - hcp.compute_node_type | lower != 'zvm'
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
 
 - name: Create NMState Configs
   template:
     src: nmStateConfig.yaml.j2
     dest: /root/ansible_workdir/nmStateConfig-agent-{{ item }}.yaml
   when:
-    - hypershift.agents_parms.static_ip_parms.static_ip == true
-    - hypershift.compute_node_type | lower != 'zvm'
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
+    - hcp.data_plane.kvm.ip_params.static_ip.enabled == true
+    - hcp.compute_node_type | lower != 'zvm'
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
 
 - name: Deploy NMState Configs
   command: oc apply -f /root/ansible_workdir/nmStateConfig-agent-{{ item }}.yaml
   when:
-    - hypershift.agents_parms.static_ip_parms.static_ip == true
-    - hypershift.compute_node_type | lower != 'zvm'
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
+    - hcp.data_plane.kvm.ip_params.static_ip.enabled == true
+    - hcp.compute_node_type | lower != 'zvm'
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
 
 - name: Wait for ISO to generate in InfraEnv
   shell: oc get InfraEnv -n {{ hosted_control_plane_namespace }} --no-headers
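For reference, with hypothetical values (cluster name `hcp-demo`, clusters namespace `clusters`, base domain `example.com`, SSH key `ansible-ocpz`, a single-replica control plane, and a placeholder release tag), the templated `hcp create cluster agent` invocation above would render to something like:
```
hcp create cluster agent \
  --name=hcp-demo \
  --pull-secret=/root/ansible_workdir/auth_file \
  --agent-namespace=clusters-hcp-demo \
  --namespace=clusters \
  --base-domain=example.com \
  --api-server-address=api.hcp-demo.example.com \
  --ssh-key ~/.ssh/ansible-ocpz.pub \
  --control-plane-availability-policy "SingleReplica" \
  --infra-availability-policy "SingleReplica" \
  --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image>
```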
diff --git a/roles/create_hcp_InfraEnv/templates/InfraEnv.yaml.j2 b/roles/create_hcp_InfraEnv/templates/InfraEnv.yaml.j2
new file mode 100644
index 00000000..3ddf3eb7
--- /dev/null
+++ b/roles/create_hcp_InfraEnv/templates/InfraEnv.yaml.j2
@@ -0,0 +1,15 @@
+apiVersion: agent-install.openshift.io/v1beta1
+kind: InfraEnv
+metadata:
+  name: "{{ hcp.control_plane.hosted_cluster_name }}"
+  namespace: "{{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}"
+spec:
+{% if hcp.data_plane.kvm.ip_params.static_ip.enabled == true %}
+  nmStateConfigLabelSelector:
+    matchLabels:
+      infraenv: "static-ip-{{ hcp.control_plane.hosted_cluster_name }}"
+{% endif %}
+  cpuArchitecture: "{{ hcp.control_plane.arch }}"
+  pullSecretRef:
+    name: pull-secret
+  sshAuthorizedKey: "{{ ssh_key }}"
diff --git a/roles/create_hcp_InfraEnv_hypershift/templates/icsp.yaml.j2 b/roles/create_hcp_InfraEnv/templates/icsp.yaml.j2
similarity index 100%
rename from roles/create_hcp_InfraEnv_hypershift/templates/icsp.yaml.j2
rename to roles/create_hcp_InfraEnv/templates/icsp.yaml.j2
diff --git a/roles/create_hcp_InfraEnv/templates/nmStateConfig.yaml.j2 b/roles/create_hcp_InfraEnv/templates/nmStateConfig.yaml.j2
new file mode 100644
index 00000000..64a01bc1
--- /dev/null
+++ b/roles/create_hcp_InfraEnv/templates/nmStateConfig.yaml.j2
@@ -0,0 +1,34 @@
+apiVersion: agent-install.openshift.io/v1beta1
+kind: NMStateConfig
+metadata:
+  name: "static-ip-nmstate-config-{{ hcp.control_plane.hosted_cluster_name }}-{{ item }}"
+  namespace: "{{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}"
+  labels:
+    infraenv: "static-ip-{{ hcp.control_plane.hosted_cluster_name }}"
+spec:
+  config:
+    interfaces:
+      - name: "{{ hcp.data_plane.kvm.ip_params.static_ip.interface }}"
+        type: ethernet
+        state: up
+        mac-address: "{{ agent_mac_addr[item] }}"
+        ipv4:
+          enabled: true
+          address:
+            - ip: "{{ hcp.data_plane.kvm.ip_params.static_ip.ip[item] }}"
+              prefix-length: 16
+          dhcp: false
+    routes:
+      config:
+        - destination: 0.0.0.0/0
+          next-hop-address: "{{ hcp.bastion_params.gateway }}"
+          next-hop-interface: "{{ hcp.data_plane.kvm.ip_params.static_ip.interface }}"
+          table-id: 254
+    dns-resolver:
+      config:
+        server:
+          - "{{ hcp.data_plane.nameserver }}"
+
+  interfaces:
+    - name: "{{ hcp.data_plane.kvm.ip_params.static_ip.interface }}"
+      macAddress: "{{ agent_mac_addr[item] }}"
diff --git a/roles/create_hcp_InfraEnv_hypershift/templates/InfraEnv.yaml.j2 b/roles/create_hcp_InfraEnv_hypershift/templates/InfraEnv.yaml.j2
deleted file mode 100644
index 1f3b3952..00000000
--- a/roles/create_hcp_InfraEnv_hypershift/templates/InfraEnv.yaml.j2
+++ /dev/null
@@ -1,15 +0,0 @@
-apiVersion: agent-install.openshift.io/v1beta1
-kind: InfraEnv
-metadata:
-  name: "{{ hypershift.hcp.hosted_cluster_name }}"
-  namespace: "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}"
-spec:
-{% if hypershift.agents_parms.static_ip_parms.static_ip == true %}
-  nmStateConfigLabelSelector:
-    matchLabels:
-      infraenv: "static-ip-{{ hypershift.hcp.hosted_cluster_name }}"
-{% endif %}
-  cpuArchitecture: "{{ hypershift.hcp.arch }}"
-  pullSecretRef:
-    name: pull-secret
-  sshAuthorizedKey: "{{ ssh_key }}"
diff --git a/roles/create_hcp_InfraEnv_hypershift/templates/nmStateConfig.yaml.j2 b/roles/create_hcp_InfraEnv_hypershift/templates/nmStateConfig.yaml.j2
deleted file mode 100644
index b396dbff..00000000
--- a/roles/create_hcp_InfraEnv_hypershift/templates/nmStateConfig.yaml.j2
+++ /dev/null
@@ -1,34 +0,0 @@
-apiVersion: agent-install.openshift.io/v1beta1
-kind: NMStateConfig
-metadata:
-  name: "static-ip-nmstate-config-{{ hypershift.hcp.hosted_cluster_name }}-{{ item }}"
-  namespace: "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}"
-  labels:
-    infraenv: "static-ip-{{ hypershift.hcp.hosted_cluster_name }}"
-spec:
-  config:
-    interfaces:
-      - name: "{{ hypershift.agents_parms.static_ip_parms.interface }}"
-        type: ethernet
-        state: up
-        mac-address: "{{ agent_mac_addr[item] }}"
-        ipv4:
-          enabled: true
-          address:
-            - ip: "{{ hypershift.agents_parms.static_ip_parms.ip[item] }}"
-              prefix-length: 16
-          dhcp: false
-    routes:
-      config:
-        - destination: 0.0.0.0/0
-          next-hop-address: "{{ hypershift.gateway }}"
-          next-hop-interface: "{{ hypershift.agents_parms.static_ip_parms.interface }}"
-          table-id: 254
-    dns-resolver:
-      config:
-        server:
-          - "{{ hypershift.agents_parms.nameserver }}"
-
-  interfaces:
-    - name: "{{ hypershift.agents_parms.static_ip_parms.interface }}"
-      macAddress: "{{ agent_mac_addr[item] }}"
diff --git a/roles/create_inventory_setup_hypershift/tasks/main.yaml b/roles/create_inventory_setup_hcp/tasks/main.yaml
similarity index 89%
rename from roles/create_inventory_setup_hypershift/tasks/main.yaml
rename to roles/create_inventory_setup_hcp/tasks/main.yaml
index baf196cd..0a50d2bf 100644
--- a/roles/create_inventory_setup_hypershift/tasks/main.yaml
+++ b/roles/create_inventory_setup_hcp/tasks/main.yaml
@@ -15,15 +15,15 @@
 - name: Create inventory
   template:
     src: inventory_template.j2
-    dest: "{{ find_project.stdout }}{{ find_inventory.stdout }}/inventory_hypershift"
+    dest: "{{ find_project.stdout }}{{ find_inventory.stdout }}/inventory_hcp"
 
 - name: Check if SSH key exists
   stat:
-    path: "~/.ssh/{{ env.ansible_key_name }}.pub"
+    path: "~/.ssh/{{ hcp.ansible_key_name }}.pub"
   register: ssh_key
 
 - name: Generate SSH key
-  command: ssh-keygen -t rsa -b 4096 -N "" -f "~/.ssh/{{ env.ansible_key_name }}"
+  command: ssh-keygen -t rsa -b 4096 -N "" -f "~/.ssh/{{ hcp.ansible_key_name }}"
   when: ssh_key.stat.exists == false
 
 - name: Create expect file
diff --git a/roles/create_inventory_setup_hcp/templates/inventory_template.j2 b/roles/create_inventory_setup_hcp/templates/inventory_template.j2
new file mode 100644
index 00000000..2e73b9c9
--- /dev/null
+++ b/roles/create_inventory_setup_hcp/templates/inventory_template.j2
@@ -0,0 +1,6 @@
+[hcp]
+{% if hcp.compute_node_type | lower != 'zvm' %}
+kvm_host_hcp ansible_host={{ hcp.bastion_params.host }} ansible_user={{ hcp.bastion_params.host_user }} ansible_become_password={{ kvm_host_password }}
+{% endif %}
+
+bastion_hcp ansible_host={{ hcp.bastion_params.ip }} ansible_user={{ hcp.bastion_params.user }}
diff --git a/roles/create_inventory_setup_hcp/templates/ssh-key.exp.j2 b/roles/create_inventory_setup_hcp/templates/ssh-key.exp.j2
new file mode 100644
index 00000000..8a196bc9
--- /dev/null
+++ b/roles/create_inventory_setup_hcp/templates/ssh-key.exp.j2
@@ -0,0 +1,12 @@
+#!/usr/bin/expect
+{% if hcp.compute_node_type | lower != 'zvm' %}
+set password "{{ kvm_host_password }}"
+spawn ssh-copy-id -i {{ lookup('env', 'HOME') }}/.ssh/{{ hcp.ansible_key_name }} {{ hcp.bastion_params.host_user }}@{{ hcp.bastion_params.host }}
+expect "{{ hcp.bastion_params.host_user }}@{{ hcp.bastion_params.host }}'s password:"
+{% else %}
+set password "{{ bastion_root_pw }}"
+spawn ssh-copy-id -i {{ lookup('env', 'HOME') }}/.ssh/{{ hcp.ansible_key_name }} {{ hcp.bastion_params.user }}@{{ hcp.bastion_params.ip }}
+expect "{{ hcp.bastion_params.user }}@{{ hcp.bastion_params.ip }}'s password:"
+{% endif %}
+send "$password\r"
+expect eof
diff --git a/roles/create_inventory_setup_hypershift/templates/inventory_template.j2 b/roles/create_inventory_setup_hypershift/templates/inventory_template.j2
deleted file mode 100644
index 48689443..00000000
--- a/roles/create_inventory_setup_hypershift/templates/inventory_template.j2
+++ /dev/null
@@ -1,7 +0,0 @@
-{% if hypershift.compute_node_type | lower != 'zvm' %}
-[kvm_host_hypershift]
-kvm_host_hypershift ansible_host={{ hypershift.kvm_host }} ansible_user={{ hypershift.kvm_host_user }} ansible_become_password={{ kvm_host_password }}
-
-{% endif %}
-[bastion_hypershift]
-bastion_hypershift ansible_host={{ hypershift.bastion_hypershift }} ansible_user={{ hypershift.bastion_hypershift_user }}
diff --git a/roles/create_inventory_setup_hypershift/templates/ssh-key.exp.j2 b/roles/create_inventory_setup_hypershift/templates/ssh-key.exp.j2
deleted file mode 100644
index 46e8026a..00000000
--- a/roles/create_inventory_setup_hypershift/templates/ssh-key.exp.j2
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/usr/bin/expect
-{% if hypershift.compute_node_type | lower != 'zvm' %}
-set password "{{ kvm_host_password }}"
-spawn ssh-copy-id -i {{ lookup('env', 'HOME') }}/.ssh/{{ env.ansible_key_name }} {{ hypershift.kvm_host_user }}@{{ hypershift.kvm_host }}
-expect "{{ hypershift.kvm_host_user }}@{{ hypershift.kvm_host }}'s password:"
-{% else %}
-set password "{{ bastion_root_pw }}"
-spawn ssh-copy-id -i {{ lookup('env', 'HOME') }}/.ssh/{{ env.ansible_key_name }} {{ hypershift.bastion_hypershift_user }}@{{ hypershift.bastion_hypershift }}
-expect "{{ hypershift.bastion_hypershift_user }}@{{ hypershift.bastion_hypershift }}'s password:"
-{% endif %}
-send "$password\r"
-expect eof
diff --git a/roles/delete_resources_bastion_hypershift/tasks/main.yaml b/roles/delete_resources_bastion_hcp/tasks/main.yaml
similarity index 59%
rename from roles/delete_resources_bastion_hypershift/tasks/main.yaml
rename to roles/delete_resources_bastion_hcp/tasks/main.yaml
index 8ba78bcf..41e64c2f 100644
--- a/roles/delete_resources_bastion_hypershift/tasks/main.yaml
+++ b/roles/delete_resources_bastion_hcp/tasks/main.yaml
@@ -4,7 +4,7 @@
   command: oc login {{ api_server }} -u {{ user_name }} -p {{ password }} --insecure-skip-tls-verify=true
 
 - name: Scale in Nodepool
-  command: oc -n {{ hypershift.hcp.clusters_namespace }} scale nodepool {{ hypershift.hcp.hosted_cluster_name }} --replicas 0
+  command: oc -n {{ hcp.control_plane.clusters_namespace }} scale nodepool {{ hcp.control_plane.hosted_cluster_name }} --replicas 0
 
 - block:
     - name: Wait for Worker Nodes to Detach
@@ -18,15 +18,15 @@
         delay: 10
   rescue:
     - name: Getting basedomain
-      shell: oc get hc {{ hypershift.hcp.hosted_cluster_name }} -n {{ hypershift.hcp.clusters_namespace }} -o json | jq -r '.spec.dns.baseDomain'
+      shell: oc get hc {{ hcp.control_plane.hosted_cluster_name }} -n {{ hcp.control_plane.clusters_namespace }} -o json | jq -r '.spec.dns.baseDomain'
       register: base_domain
 
     - name: Deleting the compute nodes manually
-      command: oc delete no compute-{{item}}.{{ hypershift.hcp.hosted_cluster_name }}.{{ base_domain.stdout }} --kubeconfig /root/ansible_workdir/hcp-kubeconfig
-      loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
+      command: oc delete no compute-{{item}}.{{ hcp.control_plane.hosted_cluster_name }}.{{ base_domain.stdout }} --kubeconfig /root/ansible_workdir/hcp-kubeconfig
+      loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
 
 - name: Get machine names
-  command: oc get machine.cluster.x-k8s.io -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} --no-headers
+  command: oc get machine.cluster.x-k8s.io -n {{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }} --no-headers
   register: machines_info
 
 - name: Create List for machines
@@ -36,11 +36,11 @@
 - name: Get the List of machines
   set_fact:
     machines: "{{ machines + [machines_info.stdout.split('\n')[item].split(' ')[0]] }}"
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
 
 - name: Patch the machines to remove finalizers
-  shell: oc patch machine.cluster.x-k8s.io "{{ machines[item] }}" -n "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}" -p '{"metadata":{"finalizers":null}}' --type=merge
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
+  shell: oc patch machine.cluster.x-k8s.io "{{ machines[item] }}" -n "{{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}" -p '{"metadata":{"finalizers":null}}' --type=merge
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
 
 - name: Wait for Agentmachines to delete
   k8s_info:
@@ -61,7 +61,7 @@
     delay: 10
 
 - name: Get agent names
-  command: oc get agents -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} --no-headers
+  command: oc get agents -n {{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }} --no-headers
   register: agents_info
 
 - name: Create List for agents
@@ -71,11 +71,11 @@
 - name: Get a List of agents
   set_fact:
     agents: "{{ agents + [agents_info.stdout.split('\n')[item].split(' ')[0]] }}"
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
 
 - name: Delete Agents
-  command: oc delete agent {{ agents[item] }} -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
+  command: oc delete agent {{ agents[item] }} -n {{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
 
 - name: Remove workdir
   file:
@@ -87,26 +87,26 @@
     state: absent
     api_version: agent-install.openshift.io/v1beta1
     kind: InfraEnv
-    name: "{{ hypershift.hcp.hosted_cluster_name }}"
-    namespace: "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}"
+    name: "{{ hcp.control_plane.hosted_cluster_name }}"
+    namespace: "{{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}"
 
 - name: Destroy Hosted Control Plane
-  command: hcp destroy cluster agent --name {{ hypershift.hcp.hosted_cluster_name }} --namespace {{ hypershift.hcp.clusters_namespace }}
+  command: hcp destroy cluster agent --name {{ hcp.control_plane.hosted_cluster_name }} --namespace {{ hcp.control_plane.clusters_namespace }}
 
 - name: Delete Clusters Namespace
   k8s:
     api_version: v1
     kind: Namespace
-    name: "{{ hypershift.hcp.clusters_namespace }}"
+    name: "{{ hcp.control_plane.clusters_namespace }}"
     state: absent
 
 - name: Wait for managed cluster resource to be deleted
-  shell: oc get managedcluster "{{ hypershift.hcp.hosted_cluster_name }}"
+  shell: oc get managedcluster "{{ hcp.control_plane.hosted_cluster_name }}"
   register: managedcluster
   until: managedcluster.rc != 0
   retries: 50
   delay: 25
-  when: hypershift.mce.delete == true
+  when: hcp.mce.delete == true
   ignore_errors: yes
 
 - fail:
@@ -114,7 +114,7 @@
   when: managedcluster.rc == 0 and managedcluster.attempts >= 40
 
 - name: Disable local-cluster component in MCE
-  command: oc patch mce {{ hypershift.mce.instance_name }} -p '{"spec":{"overrides":{"components":[{"name":"local-cluster","enabled":false}]}}}' --type merge
+  command: oc patch mce {{ hcp.mce.instance_name }} -p '{"spec":{"overrides":{"components":[{"name":"local-cluster","enabled":false}]}}}' --type merge
 
 - name: Wait for local-cluster components to be deleted
   shell: oc get ns local-cluster
@@ -122,7 +122,7 @@
   register: localcluster
   until: localcluster.rc != 0
   retries: 40
   delay: 20
-  when: hypershift.mce.delete == true
+  when: hcp.mce.delete == true
   ignore_errors: yes
 
 - fail:
@@ -135,7 +135,7 @@
     kind: AgentServiceConfig
     name: agent
     state: absent
-  when: hypershift.mce.delete == true
+  when: hcp.mce.delete == true
 
 - name: Delete Provisioning
   k8s:
@@ -143,67 +143,67 @@
     api_version: metal3.io/v1alpha1
     kind: Provisioning
     state: absent
-  when: hypershift.mce.delete == true
+  when: hcp.mce.delete == true
 
 - name: Delete ClusterImageSet
   k8s:
-    name: "img{{ hypershift.hcp.hosted_cluster_name }}-appsub"
+    name: "img{{ hcp.control_plane.hosted_cluster_name }}-appsub"
     api_version: hive.openshift.io/v1
     kind: ClusterImageSet
     state: absent
-  when: hypershift.mce.delete == true
+  when: hcp.mce.delete == true
 
 - name: Delete MCE Instance
   k8s:
-    name: "{{ hypershift.mce.instance_name }}"
-    namespace: "{{ hypershift.asc.mce_namespace }}"
+    name: "{{ hcp.mce.instance_name }}"
+    namespace: "{{ hcp.asc.mce_namespace }}"
     api_version: multicluster.openshift.io/v1
     kind: MultiClusterEngine
     state: absent
     wait: yes
     wait_timeout: 400
-  when: hypershift.mce.delete == true
+  when: hcp.mce.delete == true
 
 - name: Delete MCE Subscription
   k8s:
     name: multicluster-engine
-    namespace: "{{ hypershift.asc.mce_namespace }}"
+    namespace: "{{ hcp.asc.mce_namespace }}"
     api_version: operators.coreos.com/v1alpha1
     kind: Subscription
     state: absent
-  when: hypershift.mce.delete == true
+  when: hcp.mce.delete == true
 
 - name: Delete Operator Group - MCE
   k8s:
     name: multicluster-engine
-    namespace: "{{ hypershift.asc.mce_namespace }}"
+    namespace: "{{ hcp.asc.mce_namespace }}"
     api_version: operators.coreos.com/v1
     kind: OperatorGroup
     state: absent
-  when: hypershift.mce.delete == true
+  when: hcp.mce.delete == true
 
 - name: Delete MCE Namespace
   k8s:
     api_version: v1
     kind: Namespace
-    name: "{{ hypershift.asc.mce_namespace }}"
+    name: "{{ hcp.asc.mce_namespace }}"
     state: absent
-  when: hypershift.mce.delete == true
+  when: hcp.mce.delete == true
 
 - name: Delete initrd.img
   file:
     path: /var/lib/libvirt/images/pxeboot/initrd.img
     state: absent
-  when: hypershift.compute_node_type | lower == 'zvm'
+  when: hcp.compute_node_type | lower == 'zvm'
 
 - name: Delete kernel.img
   file:
     path: /var/lib/libvirt/images/pxeboot/kernel.img
     state: absent
-  when: hypershift.compute_node_type | lower == 'zvm'
+  when: hcp.compute_node_type | lower == 'zvm'
 
 - name: Delete rootfs.img
   file:
     path: /var/www/html/rootfs.img
     state: absent
-  when: hypershift.compute_node_type | lower == 'zvm'
+  when: hcp.compute_node_type | lower == 'zvm'
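A quick manual way to confirm the teardown above actually completed, assuming the same hypothetical names used earlier (`clusters`, `hcp-demo`); this is a sketch, not part of the playbooks:
```
oc get agents,machine.cluster.x-k8s.io -n clusters-hcp-demo   # should return no resources
oc get hostedcluster -n clusters                              # hcp-demo should no longer be listed
oc get managedcluster hcp-demo                                # errors out once deletion has finished
```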
diff --git a/roles/delete_resources_kvm_host_hcp/tasks/main.yaml b/roles/delete_resources_kvm_host_hcp/tasks/main.yaml
new file mode 100644
index 00000000..1338deff
--- /dev/null
+++ b/roles/delete_resources_kvm_host_hcp/tasks/main.yaml
@@ -0,0 +1,25 @@
+---
+
+- name: Destroy Agent VMs
+  command: virsh destroy {{ hcp.control_plane.hosted_cluster_name }}-agent-{{ item }}
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
+
+- name: Undefine Agents
+  command: virsh undefine {{ hcp.control_plane.hosted_cluster_name }}-agent-{{ item }} --remove-all-storage
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
+
+- name: Delete initrd.img
+  file:
+    path: /var/lib/libvirt/images/pxeboot/initrd.img
+    state: absent
+
+- name: Delete kernel.img
+  file:
+    path: /var/lib/libvirt/images/pxeboot/kernel.img
+    state: absent
+
+- name: Destroy bastion
+  command: virsh destroy {{ hcp.control_plane.hosted_cluster_name }}-bastion
+
+- name: Undefine bastion
+  command: virsh undefine {{ hcp.control_plane.hosted_cluster_name }}-bastion --remove-all-storage
diff --git a/roles/delete_resources_kvm_host_hypershift/tasks/main.yaml b/roles/delete_resources_kvm_host_hypershift/tasks/main.yaml
deleted file mode 100644
index abe809dd..00000000
--- a/roles/delete_resources_kvm_host_hypershift/tasks/main.yaml
+++ /dev/null
@@ -1,25 +0,0 @@
----
-
-- name: Destroy Agent VMs
-  command: virsh destroy {{ hypershift.hcp.hosted_cluster_name }}-agent-{{ item }}
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
-
-- name: Undefine Agents
-  command: virsh undefine {{ hypershift.hcp.hosted_cluster_name }}-agent-{{ item }} --remove-all-storage
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
-
-- name: Delete initrd.img
-  file:
-    path: /var/lib/libvirt/images/pxeboot/initrd.img
-    state: absent
-
-- name: Delete kernel.img
-  file:
-    path: /var/lib/libvirt/images/pxeboot/kernel.img
-    state: absent
-
-- name: Destroy bastion
-  command: virsh destroy {{ hypershift.hcp.hosted_cluster_name }}-bastion
-
-- name: Undefine bastion
-  command: virsh undefine {{ hypershift.hcp.hosted_cluster_name }}-bastion --remove-all-storage
diff --git a/roles/download_rootfs_hypershift/tasks/main.yaml b/roles/download_rootfs_hcp/tasks/main.yaml
similarity index 84%
rename from roles/download_rootfs_hypershift/tasks/main.yaml
rename to roles/download_rootfs_hcp/tasks/main.yaml
index 17e5d552..2ffddc72 100644
--- a/roles/download_rootfs_hypershift/tasks/main.yaml
+++ b/roles/download_rootfs_hcp/tasks/main.yaml
@@ -39,7 +39,7 @@
     - public
 
 - name: Get URL for rootfs.img
-  shell: oc -n "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}" get InfraEnv "{{ hypershift.hcp.hosted_cluster_name }}" -ojsonpath="{.status.bootArtifacts.rootfs}"
+  shell: oc -n "{{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}" get InfraEnv "{{ hcp.control_plane.hosted_cluster_name }}" -ojsonpath="{.status.bootArtifacts.rootfs}"
   register: rootfs
 
 - name: Download rootfs.img
diff --git a/roles/install_mce_operator/tasks/main.yaml b/roles/install_mce_operator/tasks/main.yaml
index 9d227d0c..89f31727 100644
--- a/roles/install_mce_operator/tasks/main.yaml
+++ b/roles/install_mce_operator/tasks/main.yaml
@@ -3,7 +3,7 @@
   k8s_info:
     api_version: v1
     kind: Namespace
-    name: "{{ hypershift.asc.mce_namespace }}"
+    name: "{{ hcp.asc.mce_namespace }}"
   register: namespace_check
   ignore_errors: yes
 
@@ -11,7 +11,7 @@
   k8s:
     api_version: v1
     kind: Namespace
-    name: "{{ hypershift.asc.mce_namespace }}"
+    name: "{{ hcp.asc.mce_namespace }}"
     state: present
   when: namespace_check.resources | length == 0
 
@@ -32,14 +32,14 @@
   command: oc apply -f /root/ansible_workdir/Subscription.yaml
 
 - name: Wait for MCE deployment to be created
-  shell: oc get all -n {{ hypershift.asc.mce_namespace }} | grep -i deployment | grep -i multicluster-engine | wc -l
+  shell: oc get all -n {{ hcp.asc.mce_namespace }} | grep -i deployment | grep -i multicluster-engine | wc -l
   register: mce_deploy
   until: mce_deploy.stdout == '1'
   retries: 20
   delay: 5
 
 - name: Wait for MCE deployment to be available
-  shell: oc get deployment multicluster-engine-operator -n {{ hypershift.asc.mce_namespace }} -o=jsonpath='{.status.replicas}{" "}{.status.availableReplicas}'
+  shell: oc get deployment multicluster-engine-operator -n {{ hcp.asc.mce_namespace }} -o=jsonpath='{.status.replicas}{" "}{.status.availableReplicas}'
   register: mce_pod_status
   until: mce_pod_status.stdout.split(' ')[0] == mce_pod_status.stdout.split(' ')[1]
   retries: 20
@@ -61,7 +61,7 @@
   delay: 10
 
 - name: Enable hypershift-preview component in MCE
-  command: oc patch mce {{ hypershift.mce.instance_name }} -p '{"spec":{"overrides":{"components":[{"name":"hypershift-preview","enabled":true}]}}}' --type merge
+  command: oc patch mce {{ hcp.mce.instance_name }} -p '{"spec":{"overrides":{"components":[{"name":"hypershift-preview","enabled":true}]}}}' --type merge
 
 - name: Create ClusterImageSet.yaml
   template:
diff --git a/roles/install_mce_operator/templates/ClusterImageSet.yaml.j2 b/roles/install_mce_operator/templates/ClusterImageSet.yaml.j2
index 1157edcb..8ba81dd1 100644
--- a/roles/install_mce_operator/templates/ClusterImageSet.yaml.j2
+++ b/roles/install_mce_operator/templates/ClusterImageSet.yaml.j2
@@ -1,11 +1,11 @@
 apiVersion: hive.openshift.io/v1
 kind: ClusterImageSet
 metadata:
-  name: img{{ hypershift.hcp.hosted_cluster_name }}-appsub
+  name: img{{ hcp.control_plane.hosted_cluster_name }}-appsub
 spec:
 {% set release_img = lookup('env', 'HCP_RELEASE_IMAGE') %}
 {% if release_img is defined and release_img != '' %}
   releaseImage: {{ release_img }}
 {% else %}
-  releaseImage: quay.io/openshift-release-dev/ocp-release:{{ hypershift.hcp.ocp_release }}
+  releaseImage: quay.io/openshift-release-dev/ocp-release:{{ hcp.control_plane.ocp_release_image }}
 {% endif %}
diff --git a/roles/install_mce_operator/templates/MultiClusterEngine.yaml.j2 b/roles/install_mce_operator/templates/MultiClusterEngine.yaml.j2
index 0e38b792..11a4dc89 100644
--- a/roles/install_mce_operator/templates/MultiClusterEngine.yaml.j2
+++ b/roles/install_mce_operator/templates/MultiClusterEngine.yaml.j2
@@ -1,6 +1,6 @@
 apiVersion: multicluster.openshift.io/v1
 kind: MultiClusterEngine
 metadata:
-  name: {{ hypershift.mce.instance_name }}
-  namespace: "{{ hypershift.asc.mce_namespace }}"
+  name: {{ hcp.mce.instance_name }}
+  namespace: "{{ hcp.asc.mce_namespace }}"
 spec: {}
diff --git a/roles/install_mce_operator/templates/OperatorGroup.yaml.j2 b/roles/install_mce_operator/templates/OperatorGroup.yaml.j2
index b4fae7d8..37d556a2 100644
--- a/roles/install_mce_operator/templates/OperatorGroup.yaml.j2
+++ b/roles/install_mce_operator/templates/OperatorGroup.yaml.j2
@@ -2,7 +2,7 @@ apiVersion: operators.coreos.com/v1
 kind: OperatorGroup
 metadata:
   name: multicluster-engine
-  namespace: "{{ hypershift.asc.mce_namespace }}"
+  namespace: "{{ hcp.asc.mce_namespace }}"
 spec:
   targetNamespaces:
-    - "{{ hypershift.asc.mce_namespace }}"
+    - "{{ hcp.asc.mce_namespace }}"
diff --git a/roles/install_mce_operator/templates/Subscription.yaml.j2 b/roles/install_mce_operator/templates/Subscription.yaml.j2
index e1e1250f..f85b823b 100644
--- a/roles/install_mce_operator/templates/Subscription.yaml.j2
+++ b/roles/install_mce_operator/templates/Subscription.yaml.j2
@@ -2,10 +2,10 @@ apiVersion: operators.coreos.com/v1alpha1
 kind: Subscription
 metadata:
   name: multicluster-engine
-  namespace: "{{ hypershift.asc.mce_namespace }}"
+  namespace: "{{ hcp.asc.mce_namespace }}"
 spec:
   sourceNamespace: openshift-marketplace
   source: redhat-operators
-  channel: stable-{{ hypershift.mce.version }}
+  channel: stable-{{ hcp.mce.version }}
   installPlanApproval: Automatic
   name: multicluster-engine
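The MCE readiness and enablement checks above reduce to the following manual commands; the namespace `multicluster-engine` and instance name `engine` are placeholders standing in for hcp.asc.mce_namespace and hcp.mce.instance_name:
```
oc get deployment multicluster-engine-operator -n multicluster-engine \
  -o=jsonpath='{.status.replicas}{" "}{.status.availableReplicas}'
oc patch mce engine \
  -p '{"spec":{"overrides":{"components":[{"name":"hypershift-preview","enabled":true}]}}}' --type merge
```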
diff --git a/roles/install_prereqs_bastion_hypershift/tasks/main.yaml b/roles/install_prereqs_bastion_hcp/tasks/main.yaml
similarity index 85%
rename from roles/install_prereqs_bastion_hypershift/tasks/main.yaml
rename to roles/install_prereqs_bastion_hcp/tasks/main.yaml
index 72063056..8434d9f1 100644
--- a/roles/install_prereqs_bastion_hypershift/tasks/main.yaml
+++ b/roles/install_prereqs_bastion_hcp/tasks/main.yaml
@@ -9,7 +9,7 @@
 
 - name: Install Packages on bastion
   package:
-    name: "{{ env.pkgs.bastion }}"
+    name: "{{ hcp.pkgs.bastion }}"
     state: present
 
 # Creating one directory for Storing Files
@@ -20,7 +20,7 @@
 
 - name: Copy pull secret to ansible_workdir
   copy:
-    content: "{{ hypershift.hcp.pull_secret }}"
+    content: "{{ hcp.control_plane.pull_secret }}"
     dest: /root/ansible_workdir/auth_file
 
 - name: create /etc/haproxy
@@ -53,24 +53,24 @@
   blockinfile:
     path: /etc/haproxy/haproxy.cfg
     block: |
-      frontend {{ hypershift.hcp.hosted_cluster_name }}-machine-config-server
+      frontend {{ hcp.control_plane.hosted_cluster_name }}-machine-config-server
         mode tcp
         option tcplog
-        bind api.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}:22623
-        default_backend {{ hypershift.hcp.hosted_cluster_name }}-machine-config-server
+        bind api.{{ hcp.control_plane.hosted_cluster_name }}.{{ hcp.control_plane.basedomain }}:22623
+        default_backend {{ hcp.control_plane.hosted_cluster_name }}-machine-config-server
 
-      backend {{ hypershift.hcp.hosted_cluster_name }}-machine-config-server
+      backend {{ hcp.control_plane.hosted_cluster_name }}-machine-config-server
         mode tcp
         balance source
     marker: "# machine-config-server"
-  when: hypershift.compute_node_type | lower == 'zvm'
+  when: hcp.compute_node_type | lower == 'zvm'
 
 - name: Add Management Cluster Worker IPs to Haproxy
   lineinfile:
     path: /etc/haproxy/haproxy.cfg
     line: "  server worker{{item}} {{ mgmt_workers.stdout_lines[item]}}"
   loop: "{{ range(mgmt_workers_count.stdout|int) | list }}"
-  when: hypershift.compute_node_type | lower == 'zvm'
+  when: hcp.compute_node_type | lower == 'zvm'
 
 - name: allow http traffic
   firewalld:
diff --git a/roles/install_prereqs_bastion_hypershift/templates/haproxy.cfg.j2 b/roles/install_prereqs_bastion_hcp/templates/haproxy.cfg.j2
similarity index 79%
rename from roles/install_prereqs_bastion_hypershift/templates/haproxy.cfg.j2
rename to roles/install_prereqs_bastion_hcp/templates/haproxy.cfg.j2
index 0dff411b..30b8f5cd 100644
--- a/roles/install_prereqs_bastion_hypershift/templates/haproxy.cfg.j2
+++ b/roles/install_prereqs_bastion_hcp/templates/haproxy.cfg.j2
@@ -28,12 +28,12 @@ defaults
     timeout check 10s
     maxconn 3000
 
-frontend {{ hypershift.hcp.hosted_cluster_name }}-api-server
+frontend {{ hcp.control_plane.hosted_cluster_name }}-api-server
     mode tcp
     option tcplog
-    bind {{hypershift.bastion_hypershift}}:30000-33000
-    default_backend {{hypershift.hcp.hosted_cluster_name}}-api-server
+    bind {{ hcp.bastion_params.ip }}:30000-33000
+    default_backend {{ hcp.control_plane.hosted_cluster_name }}-api-server
 
-backend {{ hypershift.hcp.hosted_cluster_name }}-api-server
+backend {{ hcp.control_plane.hosted_cluster_name }}-api-server
     mode tcp
     balance source
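For a hosted cluster named `hcp-demo` with a bastion at `192.168.10.3` (hypothetical values), the api-server section of the rendered haproxy.cfg.j2 above would look like:
```
frontend hcp-demo-api-server
    mode tcp
    option tcplog
    bind 192.168.10.3:30000-33000
    default_backend hcp-demo-api-server

backend hcp-demo-api-server
    mode tcp
    balance source
```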
diff --git a/roles/install_prerequisites_host_hypershift/tasks/main.yaml b/roles/install_prerequisites_host_hcp/tasks/main.yaml
similarity index 78%
rename from roles/install_prerequisites_host_hypershift/tasks/main.yaml
rename to roles/install_prerequisites_host_hcp/tasks/main.yaml
index 6f404bc5..a1220ab1 100644
--- a/roles/install_prerequisites_host_hypershift/tasks/main.yaml
+++ b/roles/install_prerequisites_host_hcp/tasks/main.yaml
@@ -1,12 +1,12 @@
 ---
 - name: Check if SSH key exists
   stat:
-    path: "~/.ssh/{{ env.ansible_key_name }}.pub"
+    path: "~/.ssh/{{ hcp.ansible_key_name }}.pub"
   register: ssh_key
 
 - name: Generate an OpenSSH keypair with the default values (4096 bits, RSA)
   community.crypto.openssh_keypair:
-    path: "~/.ssh/{{ env.ansible_key_name }}"
+    path: "~/.ssh/{{ hcp.ansible_key_name }}"
     passphrase: ""
     comment: "Ansible-OpenShift-Provisioning SSH key"
     regenerate: full_idempotence
@@ -27,17 +27,17 @@
       name:
        - "{{ item }}"
       state: present
-    loop: "{{ env.pkgs.kvm }}"
-    when:
-      - host != 'bastion_hypershift'
+    loop: "{{ hcp.pkgs.kvm }}"
+    when:
+      - host != 'bastion_hcp'
       - ( rhel_version.stdout| float < 9.0 ) or rhel_version.stdout| float >= 9.0 and 'devel' not in item
 
-- name: Install Packages for Hypershift
+- name: Install Packages for HCP
   package:
     name:
      - "{{ item }}"
    state: present
-  loop: "{{ env.pkgs.hypershift }}"
+  loop: "{{ hcp.pkgs.hcp }}"
 
 - name: Check if OC installed
   command: oc
@@ -46,12 +46,12 @@
 
 - name: Download OC Client
   get_url:
-    url: "{{ hypershift.oc_url }}"
+    url: "{{ hcp.oc_url }}"
    dest: /root/ansible_workdir/
   when: oc_installed.rc != 0
 
 - name: tar oc
-  command: tar -vxzf /root/ansible_workdir/{{ hypershift.oc_url.split('/')[-1] }}
+  command: tar -vxzf /root/ansible_workdir/{{ hcp.oc_url.split('/')[-1] }}
   when: oc_installed.rc != 0
 
 - name: Copy oc to /usr/local/bin/
@@ -66,7 +66,7 @@
   lineinfile:
     dest: /etc/resolv.conf
     insertbefore: BOF
-    line: nameserver {{ hypershift.mgmt_cluster_nameserver }}
+    line: nameserver {{ hcp.mgmt_cluster_nameserver }}
 
 - name: Login to Management Cluster
   command: oc login {{ api_server }} -u {{ user_name }} -p {{ password }} --insecure-skip-tls-verify=true
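The oc client bootstrap in this role is equivalent to the following manual steps; the URL, tarball name, and credentials are placeholders, since the playbook takes them from hcp.oc_url and the vaulted secrets:
```
curl -LO <hcp.oc_url>                         # e.g. an openshift-client-linux tarball for s390x
tar -vxzf /root/ansible_workdir/<tarball>.tar.gz
cp oc /usr/local/bin/
oc login <api_server> -u <user_name> -p <password> --insecure-skip-tls-verify=true
```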
diff --git a/roles/scale_nodepool_and_wait_for_compute_hcp/tasks/main.yaml b/roles/scale_nodepool_and_wait_for_compute_hcp/tasks/main.yaml
new file mode 100644
index 00000000..0f630acc
--- /dev/null
+++ b/roles/scale_nodepool_and_wait_for_compute_hcp/tasks/main.yaml
@@ -0,0 +1,73 @@
+---
+
+- name: Wait for agents to join the cluster
+  k8s_info:
+    api_version: agent-install.openshift.io/v1beta1
+    kind: Agent
+  register: agents
+  until: agents.resources | length == {{ hcp.data_plane.compute_count }}
+  retries: 30
+  delay: 10
+  when: hcp.compute_node_type | lower != 'zvm'
+
+- name: Get agent names
+  command: oc get agents -n {{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }} --no-headers
+  register: agents_info
+  when: hcp.compute_node_type | lower != 'zvm'
+
+- name: Create List for agents
+  set_fact:
+    agents: []
+  when: hcp.compute_node_type | lower != 'zvm'
+
+- name: Get a List of agents
+  set_fact:
+    agents: "{{ agents + [agents_info.stdout.split('\n')[item].split(' ')[0]] }}"
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
+  when: hcp.compute_node_type | lower != 'zvm'
+
+- name: Patch Agents
+  shell: oc -n {{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }} patch agent {{ agents[item] }} -p '{"spec":{"installation_disk_id":"/dev/vda","approved":true,"hostname":"compute-{{item}}.{{ hcp.control_plane.hosted_cluster_name }}.{{ hcp.control_plane.basedomain }}"}}' --type merge
+  loop: "{{ range(hcp.data_plane.compute_count|int) | list }}"
+  when: hcp.compute_node_type | lower != 'zvm'
+
+- name: Scale Nodepool
+  command: oc -n {{ hcp.control_plane.clusters_namespace }} scale nodepool {{ hcp.control_plane.hosted_cluster_name }} --replicas {{ hcp.data_plane.compute_count }}
+
+- name: Wait for Agentmachines to create
+  k8s_info:
+    api_version: capi-provider.agent-install.openshift.io/v1alpha1
+    kind: AgentMachine
+  register: agent_machines
+  until: agent_machines.resources | length == {{ hcp.data_plane.compute_count }}
+  retries: 30
+  delay: 10
+
+- name: Wait for Machines to create
+  k8s_info:
+    api_version: cluster.x-k8s.io/v1beta1
+    kind: Machine
+  register: machines
+  until: machines.resources | length == {{ hcp.data_plane.compute_count }}
+  retries: 30
+  delay: 10
+
+- name: Create Kubeconfig for Hosted Cluster
+  shell: hcp create kubeconfig --namespace {{ hcp.control_plane.clusters_namespace }} --name {{ hcp.control_plane.hosted_cluster_name }} > /root/ansible_workdir/hcp-kubeconfig
+
+- name: Wait for Worker Nodes to Join
+  k8s_info:
+    api_version: v1
+    kind: Node
+    kubeconfig: "/root/ansible_workdir/hcp-kubeconfig"
+  register: nodes
+  until: nodes.resources | length == {{ hcp.data_plane.compute_count }}
+  retries: 300
+  delay: 10
+
+- name: Wait for Worker nodes to be Ready
+  shell: oc get no --kubeconfig=/root/ansible_workdir/hcp-kubeconfig --no-headers | grep -i 'NotReady' | wc -l
+  register: node_status
+  until: node_status.stdout == '0'
+  retries: 50
+  delay: 15
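Once the nodepool is scaled, the same checks can be run by hand; `clusters` and `hcp-demo` again stand in for the configured clusters namespace and hosted cluster name:
```
hcp create kubeconfig --namespace clusters --name hcp-demo > /root/ansible_workdir/hcp-kubeconfig
oc get no --kubeconfig=/root/ansible_workdir/hcp-kubeconfig   # all workers should eventually report Ready
```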
diff --git a/roles/scale_nodepool_and_wait_for_compute_hypershift/tasks/main.yaml b/roles/scale_nodepool_and_wait_for_compute_hypershift/tasks/main.yaml
deleted file mode 100644
index 1a3247ce..00000000
--- a/roles/scale_nodepool_and_wait_for_compute_hypershift/tasks/main.yaml
+++ /dev/null
@@ -1,73 +0,0 @@
----
-
-- name: Wait for agents to join the cluster
-  k8s_info:
-    api_version: agent-install.openshift.io/v1beta1
-    kind: Agent
-  register: agents
-  until: agents.resources | length == {{ hypershift.agents_parms.agents_count }}
-  retries: 30
-  delay: 10
-  when: hypershift.compute_node_type | lower != 'zvm'
-
-- name: Get agent names
-  command: oc get agents -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} --no-headers
-  register: agents_info
-  when: hypershift.compute_node_type | lower != 'zvm'
-
-- name: Create List for agents
-  set_fact:
-    agents: []
-  when: hypershift.compute_node_type | lower != 'zvm'
-
-- name: Get a List of agents
-  set_fact:
-    agents: "{{ agents + [agents_info.stdout.split('\n')[item].split(' ')[0]] }}"
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
-  when: hypershift.compute_node_type | lower != 'zvm'
-
-- name: Patch Agents
-  shell: oc -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} patch agent {{ agents[item] }} -p '{"spec":{"installation_disk_id":"/dev/vda","approved":true,"hostname":"compute-{{item}}.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}"}}' --type merge
-  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"
-  when: hypershift.compute_node_type | lower != 'zvm'
-
-- name: Scale Nodepool
-  command: oc -n {{ hypershift.hcp.clusters_namespace }} scale nodepool {{ hypershift.hcp.hosted_cluster_name }} --replicas {{ hypershift.agents_parms.agents_count }}
-
-- name: Wait for Agentmachines to create
-  k8s_info:
-    api_version: capi-provider.agent-install.openshift.io/v1alpha1
-    kind: AgentMachine
-  register: agent_machines
-  until: agent_machines.resources | length == {{ hypershift.agents_parms.agents_count }}
-  retries: 30
-  delay: 10
-
-- name: Wait for Machines to create
-  k8s_info:
-    api_version: cluster.x-k8s.io/v1beta1
-    kind: Machine
-  register: machines
-  until: machines.resources | length == {{ hypershift.agents_parms.agents_count }}
-  retries: 30
-  delay: 10
-
-- name: Create Kubeconfig for Hosted Cluster
-  shell: hcp create kubeconfig --namespace {{ hypershift.hcp.clusters_namespace }} --name {{ hypershift.hcp.hosted_cluster_name }} > /root/ansible_workdir/hcp-kubeconfig
-
-- name: Wait for Worker Nodes to Join
-  k8s_info:
-    api_version: v1
-    kind: Node
-    kubeconfig: "/root/ansible_workdir/hcp-kubeconfig"
-  register: nodes
-  until: nodes.resources | length == {{ hypershift.agents_parms.agents_count }}
-  retries: 300
-  delay: 10
-
-- name: Wait for Worker nodes to be Ready
-  shell: oc get no --kubeconfig=/root/ansible_workdir/hcp-kubeconfig --no-headers | grep -i 'NotReady' | wc -l
-  register: node_status
-  until: node_status.stdout == '0'
-  retries: 50
-  delay: 15
diff --git a/roles/setup_for_agents_hypershift/tasks/main.yaml b/roles/setup_for_agents_hcp/tasks/main.yaml
similarity index 55%
rename from roles/setup_for_agents_hypershift/tasks/main.yaml
rename to roles/setup_for_agents_hcp/tasks/main.yaml
index bcde6dea..6484ec01 100644
--- a/roles/setup_for_agents_hypershift/tasks/main.yaml
+++ b/roles/setup_for_agents_hcp/tasks/main.yaml
@@ -6,7 +6,7 @@
     mode: '0755'
 
 - name: Get URL for initrd.img
-  shell: oc -n "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}" get InfraEnv "{{ hypershift.hcp.hosted_cluster_name }}" -ojsonpath="{.status.bootArtifacts.initrd}"
+  shell: oc -n "{{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}" get InfraEnv "{{ hcp.control_plane.hosted_cluster_name }}" -ojsonpath="{.status.bootArtifacts.initrd}"
   register: initrd
 
 - name: Download initrd.img
@@ -16,7 +16,7 @@
     validate_certs: false
 
 - name: Get URL for kernel.img
-  shell: oc -n "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}" get InfraEnv "{{ hypershift.hcp.hosted_cluster_name }}" -ojsonpath="{.status.bootArtifacts.kernel}"
+  shell: oc -n "{{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}" get InfraEnv "{{ hcp.control_plane.hosted_cluster_name }}" -ojsonpath="{.status.bootArtifacts.kernel}"
   register: kernel
 
 - name: Download kernel.img
@@ -29,5 +29,5 @@
   template:
     src: parm-file.parm.j2
     dest: /root/ansible_workdir/agent-{{ item }}.parm
-  when: hypershift.compute_node_type | lower == 'zvm'
-  loop: "{{ range(hypershift.agents_parms.agents_count | int) | list }}"
+  when: hcp.compute_node_type | lower == 'zvm'
+  loop: "{{ range(hcp.data_plane.compute_count | int) | list }}"
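The InfraEnv's generated boot artifacts can also be fetched by hand to verify them; the jsonpath queries mirror the tasks above (namespace, name, and destination paths are placeholders, and `-k` matches the tasks' validate_certs: false):
```
initrd_url=$(oc -n clusters-hcp-demo get InfraEnv hcp-demo -ojsonpath="{.status.bootArtifacts.initrd}")
kernel_url=$(oc -n clusters-hcp-demo get InfraEnv hcp-demo -ojsonpath="{.status.bootArtifacts.kernel}")
curl -ko /var/lib/libvirt/images/pxeboot/initrd.img "$initrd_url"
curl -ko /var/lib/libvirt/images/pxeboot/kernel.img "$kernel_url"
```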
diff --git a/roles/setup_for_agents_hcp/templates/parm-file.parm.j2 b/roles/setup_for_agents_hcp/templates/parm-file.parm.j2
new file mode 100644
index 00000000..46df9609
--- /dev/null
+++ b/roles/setup_for_agents_hcp/templates/parm-file.parm.j2
@@ -0,0 +1 @@
+rd.neednet=1 ai.ip_cfg_override=1 console=ttysclp0 coreos.live.rootfs_url=http://{{ hcp.bastion_params.ip }}:8080/rootfs.img ip={{ hcp.data_plane.zvm.nodes[item].interface.ip }}::{{ hcp.data_plane.zvm.gateway }}:{{ hcp.data_plane.zvm.subnetmask }}{% if hcp.data_plane.zvm.network_mode | lower != 'roce' %}::{{ hcp.data_plane.zvm.nodes[item].interface.ifname }}:none{% endif %} nameserver={{ hcp.data_plane.nameserver }} zfcp.allow_lun_scan=0 {% if hcp.data_plane.zvm.network_mode | lower != 'roce' %}rd.znet={{ hcp.data_plane.zvm.nodes[item].interface.nettype }},{{ hcp.data_plane.zvm.nodes[item].interface.subchannels }},{{ hcp.data_plane.zvm.nodes[item].interface.options }}{% endif %} {% if hcp.data_plane.zvm.disk_type | lower != 'fcp' %}rd.dasd=0.0.{{ hcp.data_plane.zvm.nodes[item].dasd.disk_id }}{% else %}rd.zfcp=0.0.{{ hcp.data_plane.zvm.nodes[item].lun[0].paths[0].fcp}},{{ hcp.data_plane.zvm.nodes[item].lun[0].paths[0].wwpn }},{{ hcp.data_plane.zvm.nodes[item].lun[0].id }} {% endif %} random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8"
\ No newline at end of file
diff --git a/roles/setup_for_agents_hypershift/templates/parm-file.parm.j2 b/roles/setup_for_agents_hypershift/templates/parm-file.parm.j2
deleted file mode 100644
index 73fe6947..00000000
--- a/roles/setup_for_agents_hypershift/templates/parm-file.parm.j2
+++ /dev/null
@@ -1 +0,0 @@
-rd.neednet=1 ai.ip_cfg_override=1 console=ttysclp0 coreos.live.rootfs_url=http://{{ hypershift.bastion_hypershift }}:8080/rootfs.img ip={{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.ip }}::{{ hypershift.agents_parms.zvm_parameters.gateway }}:{{ hypershift.agents_parms.zvm_parameters.subnetmask }}{% if hypershift.agents_parms.zvm_parameters.network_mode | lower != 'roce' %}::{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.ifname }}:none{% endif %} nameserver={{ hypershift.agents_parms.zvm_parameters.nameserver }} zfcp.allow_lun_scan=0 {% if hypershift.agents_parms.zvm_parameters.network_mode | lower != 'roce' %}rd.znet={{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.nettype }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.subchannels }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.options }}{% endif %} {% if hypershift.agents_parms.zvm_parameters.disk_type | lower != 'fcp' %}rd.dasd=0.0.{{ hypershift.agents_parms.zvm_parameters.nodes[item].dasd.disk_id }}{% else %}rd.zfcp=0.0.{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].fcp}},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].wwpn }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].id }} {% endif %} random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8"
diff --git a/roles/wait_for_hc_to_complete_hypershift/tasks/main.yaml b/roles/wait_for_hc_to_complete_hcp/tasks/main.yaml
similarity index 85%
rename from roles/wait_for_hc_to_complete_hypershift/tasks/main.yaml
rename to roles/wait_for_hc_to_complete_hcp/tasks/main.yaml
index cd08df91..101016b5 100644
--- a/roles/wait_for_hc_to_complete_hypershift/tasks/main.yaml
+++ b/roles/wait_for_hc_to_complete_hcp/tasks/main.yaml
@@ -8,7 +8,7 @@
     delay: 20
 
 - name: Wait for Hosted Control Plane to Complete
-  shell: oc get hc -n {{ hypershift.hcp.clusters_namespace }} --no-headers | awk '{print $4}'
+  shell: oc get hc -n {{ hcp.control_plane.clusters_namespace }} --no-headers | awk '{print $4}'
   register: hc_status
   until: hc_status.stdout == "Completed"
   retries: 40
@@ -19,7 +19,7 @@
   register: console_url
 
 - name: Get Password for Hosted Cluster
-  shell: oc get secret kubeadmin-password -n "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}" -o yaml | grep -i 'password:'
+  shell: oc get secret kubeadmin-password -n "{{ hcp.control_plane.clusters_namespace }}-{{ hcp.control_plane.hosted_cluster_name }}" -o yaml | grep -i 'password:'
   register: cluster_password_encoded
 
 - name: Decode the Password