This set of scripts and automation can be used in conjunction with Wazi Image Builder to create a custom image for use with Wazi-aaS Virtual Server Instances in IBM Cloud. Wazi Image Builder uploads a selected set of z/OS volumes to Cloud Object Storage as ECKD files. These scripts then process those files, split them between boot and data volumes, and create a qcow2 image from the boot volumes. The qcow2 image is uploaded back to the Cloud Object Storage bucket, where it can be used to create a custom image in a VPC environment.
- Install Terraform
  - NOTE: version 1.2 or later is required
- Duplicate the Terraform variables template file:
  `cp my-settings.auto.tfvars-template my-settings.auto.tfvars`
- Adjust `my-settings.auto.tfvars`:
  - Set `ibmcloud_api_key=<your API key>`
    - This will likely require a paying account.
    - You can create an API key by visiting the IBM Cloud API keys page. Ensure you have selected the account you want to use before creating the key, as the key will be associated with the account selected at creation time.
    - If you have downloaded your `apikey.json` file from the IBM Cloud UI, you may use this command: `export IC_API_KEY=$(cat ~/apikey.json | jq -r .apikey)`
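As a sketch of that extraction: assuming `apikey.json` has the shape the IBM Cloud UI produces (the sample content below is made up), `jq -r .apikey` pulls out just the key:

```shell
# Hypothetical sample of an apikey.json file downloaded from the IBM Cloud UI
cat > /tmp/apikey.json <<'EOF'
{"name": "my-key", "description": "", "apikey": "abc123"}
EOF

# Extract the key field and export it for the Terraform IBM provider
export IC_API_KEY=$(cat /tmp/apikey.json | jq -r .apikey)
echo "$IC_API_KEY"   # abc123
```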
- Clone this repo to your local machine
- Run `terraform init` from within this repo's directory
NOTE: If you are using Hyper Protect Crypto Services to encrypt the VPC Block Storage volumes, set up a service-to-service authorization in IAM between Hyper Protect Crypto Services and VPC Block Storage before moving ahead.
- Use Wazi Image Builder to upload your z/OS image to IBM Cloud Object Storage (COS)
- Adjust `my-settings.auto.tfvars` with the name of the COS bucket
  - NOTE: there is currently a bug in Wazi Image Builder where `image-metadata.json` is not uploaded as JSON. As a workaround, use the COS UI to download the file and upload it again; this corrects the format.
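Until that bug is fixed, it is worth checking the downloaded copy before uploading it again. A minimal sketch, assuming a local file named `image-metadata.json` (the sample content here is hypothetical, not the real metadata schema):

```shell
# Hypothetical stand-in for the downloaded metadata file (real schema will differ)
cat > image-metadata.json <<'EOF'
{"volumes": [{"name": "RESA00", "boot": true}, {"name": "USRA00", "boot": false}]}
EOF

# Verify the file parses as JSON before re-uploading it to the COS bucket
if python3 -m json.tool image-metadata.json > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "NOT valid JSON - download and re-upload via the COS UI"
fi
```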
- Run `terraform apply`
  This will create the VSI with the required data volumes. You might want to watch the VSI serial console: the progress logs are written there by cloud-init.
Once the run completes successfully, the following outputs can be observed:
- A bootable qcow2 image is uploaded to the IBM Cloud Object Storage bucket
- A VPC block storage device, storing data volumes from the z/OS image, is created
Create the z/OS image from the `wazi-custom-image` qcow2 file in your IBM Cloud Object Storage bucket, and snapshots from the remaining `wazi-custom-image-data` data volume. TBD: this will be done by `terraform apply` in future versions.
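Until that lands, the manual image-creation step corresponds roughly to a Terraform resource like the following. This is a hypothetical sketch, not code from this repo: the bucket region, object name, and `operating_system` value are assumptions you would need to adapt:

```hcl
# Hypothetical sketch only: create a custom VPC image from the qcow2 in COS
resource "ibm_is_image" "custom_image" {
  name = "wazi-custom-image"
  href = "cos://us-east/<your-bucket>/wazi-custom-image.qcow2"
  # Assumption: use the z/OS operating-system name offered in your region
  operating_system = "<z/OS operating system name>"
}
```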
It is important to destroy the data mover after it has completed, so you do not incur unneeded charges for the data mover VSI and its volumes:
- If you want to keep the current custom image and its corresponding data volume snapshot, you need to remove them from Terraform control before destroying the Terraform environment. You can do this with the following commands:
  `terraform state rm ibm_is_image.custom_image`
  `terraform state rm ibm_is_snapshot.custom_image_data`
  Please note that if you run `terraform apply` afterwards with the same `custom_image_name`, you will get a conflict. This can be solved by:
  - using a different `custom_image_name`
  - renaming the custom image and data volume snapshot with the UI/CLI
  - deleting the custom image and data volume snapshot with the UI/CLI
  - importing the existing resources:
    `terraform import ibm_is_image.custom_image <custom image UID>`
    `terraform import ibm_is_snapshot.custom_image_data <snapshot UID>`
    You can get the UIDs from the UI or CLI.
- Destroy the remaining resources (e.g., VSI, boot volume, data volume) used by the data mover:
  `terraform destroy`
Coming
- Go to the create VSI UI
- Select `IBM Z` as platform
- Give the VSI a name
- Select `Custom image` as OS. By default the custom image will be called `wazi-custom-image`
- Add a new data volume
  - Select `Import from Snapshot`
  - The snapshot is called `wazi-custom-image-data` by default
- Click on `Create VSI`
- You can ssh into the data mover VSI with `ssh -i private_key root@<floating IP of the VSI>`
- To build the qcow2 and data volume manually: `cd /data-mover; ./data-mover.py`
- To upload the qcow2 to COS manually: `cd /data-mover; ./upload.py`
- You can copy the cloud-init log output with `ssh -i private_key root@<floating IP of the VSI> cat /var/log/cloud-init-output.log`
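Once you have a local copy of that log, standard tools can narrow it down. A small sketch with a made-up log excerpt (the real cloud-init output will look different):

```shell
# Hypothetical excerpt of a cloud-init output log (real content will differ)
cat > cloud-init-output.log <<'EOF'
Cloud-init v. 22.1 running 'modules:final'
Traceback (most recent call last):
Cloud-init v. 22.1 finished
EOF

# Show only lines that look like failures from the data mover scripts
grep -i -E 'error|traceback' cloud-init-output.log
```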