added README information for usage of crc-cloud based on pulumi
adrianriobo committed Feb 7, 2023
1 parent cb4020e commit 96394ac
Showing 1 changed file with 69 additions and 54 deletions.

## Cloud Providers
For the moment only AWS is supported. Others will be added soon.

## Usage

The basic requirements to run a single-node OpenShift cluster with **CRC-Cloud**
<br/>
<br/>


The AWS instance type of choice is *c6a.2xlarge*, with 8 vCPUs and 16 GB of RAM. This will be customizable in the future; for the moment the fixed type imposes some [restrictions](https://aws.amazon.com/about-aws/whats-new/2022/12/amazon-ec2-m6a-c6a-instances-additional-regions/) on the regions where crc-cloud can run. Those regions are:

- us-east-1 and us-east-2
- us-west-1 and us-west-2
- ap-south-1, ap-southeast-1, ap-southeast-2 and ap-northeast-1
- eu-west-1, eu-central-1 and eu-west-2

This instance costs ~$0.306 per hour (the price may vary depending on the region) and takes ~11 minutes to produce a working cluster.
Increasing or decreasing the resources will affect both the deployment time and the price per hour. If you want to change the instance type, keep in mind that the minimum hardware requirements to run CRC (on which this solution is based) are 4 vCPUs and 8 GB of RAM; please refer to the [documentation](https://developers.redhat.com/blog/2019/09/05/red-hat-openshift-4-on-your-laptop-introducing-red-hat-codeready-containers) for further information.

**WARNING:** Running VM instances will cost you **real money** so be extremely careful to verify that all the resources instantiated are **removed** once you're done and remember that you're running them at **your own risk and cost**
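Before picking a region from the list above, it can be worth confirming that the fixed instance type is actually offered there. The helper below is an illustrative sketch, not part of crc-cloud; it assumes the AWS CLI is installed and credentials are configured.

```shell
# Check whether the fixed instance type (c6a.2xlarge by default) is offered
# in a given region before running crc-cloud there.
check_instance_type() {
  local region="$1" itype="${2:-c6a.2xlarge}"
  aws ec2 describe-instance-type-offerings \
    --location-type region \
    --filters "Name=instance-type,Values=${itype}" \
    --region "${region}" \
    --query 'InstanceTypeOfferings[].InstanceType' \
    --output text
}

# Usage: check_instance_type eu-west-1   # prints the type name if offered
```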

### Container

Running **CRC-Cloud** from a container (podman/docker) is strongly recommended for the following reasons:

- Compatible with any platform (Linux/macOS/Windows)
- No need to satisfy any software dependency in your OS, since everything is packed into the container
- In CI/CD systems (e.g. Jenkins) there is no need to propagate dependencies to the agents (only podman/docker is needed)
- In cloud-native CI/CD systems (e.g. Tekton) everything runs in containers, so this is the natural choice

#### Working directory
<a name="workdir"></a>
When executing **crc-cloud**, the state of the infrastructure is stored according to the `project-name` and `backed-url` parameters. The location referenced by `backed-url` must be kept for the `destroy` operation; `project-name` allows multiple executions to be stored at the same location.

Please **be careful** when deleting the working directory contents: without the stored state, **CRC-Cloud** won't be able to tear down the cluster and its associated resources on AWS.
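Since the `destroy` operation needs the state written under `backed-url`, one way to keep it safe between runs is to archive the workspace after `create` and restore it before `destroy`. A minimal sketch; the helper names and archive name are illustrative, not part of crc-cloud.

```shell
# Keep the workspace holding the infrastructure state safe between runs.
archive_state() {            # $1 = archive file, $2 = workspace dir
  tar -czf "$1" -C "$2" .
}
restore_state() {            # $1 = archive file, $2 = workspace dir
  mkdir -p "$2" && tar -xzf "$1" -C "$2"
}

# archive_state crc-ocp412-state.tgz "${PWD}"    # right after create
# restore_state crc-ocp412-state.tgz "${PWD}"    # before destroy
```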
### Operations

#### import

The import operation creates the required AMI in the region where **crc-cloud** will later create the cluster. It requires a certain amount of disk capacity (60 GB) for the internal transformations needed to create the AMI; the space is released once the import operation has finished.

Run this command to import the AMI into the region of choice:

```bash
podman run -d --rm \
-v ${PWD}:/workspace:z \
-e AWS_ACCESS_KEY_ID=XXX \
-e AWS_SECRET_ACCESS_KEY=XXX \
-e AWS_DEFAULT_REGION=eu-west-1 \
quay.io/crcont/crc-cloud:v0.0.2 import \
--project-name "ami-ocp412" \
--backed-url "file:///workspace" \
--output "/workspace" \
--provider "aws" \
--bundle-url "https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/bundles/openshift/4.12.0/crc_libvirt_4.12.0_amd64.crcbundle" \
--bundle-shasumfile-url "https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/bundles/openshift/4.12.0/sha256sum.txt"
```

The previous command mounts the current folder into the container at `/workspace`, which is then used as the `output` location for the assets resulting from the import operation. The generated assets are:

- `id_ecdsa`: the key required to spin up the instance. It will be used by the create operation.
- `image-id`: the AMI id in the region. The value in this file is passed to the create operation.

The import operation is a one-time operation: it is only required to get a valid AMI in the region of choice. In the case of multiple regions or multiple OpenShift versions, it should be executed once per region/version pair.
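The once-per-pair rule can be sketched as a loop over regions. The region list, project names, and per-region output layout below are illustrative choices, not crc-cloud defaults.

```shell
# Run one import per region so each region gets its own AMI.
BUNDLE_URL="https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/bundles/openshift/4.12.0/crc_libvirt_4.12.0_amd64.crcbundle"
SHASUM_URL="https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/bundles/openshift/4.12.0/sha256sum.txt"

import_regions() {
  for region in "$@"; do
    podman run -d --rm \
      -v "${PWD}:/workspace:z" \
      -e AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}" \
      -e AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" \
      -e AWS_DEFAULT_REGION="${region}" \
      quay.io/crcont/crc-cloud:v0.0.2 import \
      --project-name "ami-ocp412-${region}" \
      --backed-url "file:///workspace" \
      --output "/workspace/${region}" \
      --provider "aws" \
      --bundle-url "${BUNDLE_URL}" \
      --bundle-shasumfile-url "${SHASUM_URL}"
  done
}

# import_regions eu-west-1 us-east-1
```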

#### create

The create operation creates the instance in the region of choice, using an AMI preloaded through the import operation.

Sample command for the create operation:

```bash
podman run -d --rm \
-v ${PWD}:/workspace:z \
-e AWS_ACCESS_KEY_ID=XXX \
-e AWS_SECRET_ACCESS_KEY=XXX \
-e AWS_DEFAULT_REGION=eu-west-1 \
quay.io/crcont/crc-cloud:v0.0.2 create aws \
--project-name "crc-ocp412" \
--backed-url "file:///workspace" \
--output "/workspace" \
--aws-ami-id "ami-xxxx" \
--pullsecret-filepath "/workspace/pullsecret" \
--key-filepath "/workspace/id_ecdsa"
```
The create operation has some mandatory parameters, besides the AWS environment variables and the state-related parameters (`project-name` and `backed-url`):

- `aws-ami-id`: AMI id for the region (output of the import operation)
- `pullsecret-filepath`: path to the pull secret file
- `key-filepath`: path to the initial key (output of the import operation)
- `output`: path for the assets resulting from the create operation

The resulting assets are those required to connect to the instance/cluster. The create operation generates the following files in the output folder:

- `host`: file with the host/IP to connect to the instance
- `id_rsa`: file with the private key
- `username`: file with the username
- `password`: file with the auto-generated password for the developer and kubeadmin users to connect to the cluster
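The files above can be consumed directly to reach the instance. A minimal sketch, assuming the output folder layout listed above; the helper name is illustrative, not part of crc-cloud.

```shell
# Build the SSH command from the files generated by the create operation.
# "outdir" is the output folder passed to create.
crc_ssh_cmd() {
  local outdir="${1:-.}"
  echo "ssh -i ${outdir}/id_rsa $(cat "${outdir}/username")@$(cat "${outdir}/host")"
}

# eval "$(crc_ssh_cmd .)"        # opens a shell on the instance
# cat password                   # credential for developer/kubeadmin logins
```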

#### destroy

Execute it from the same folder as the create operation, or reuse the `backed-url` and `project-name` values from the create operation. This operation removes all the resources created during the create operation.


```bash
podman run -d --rm \
-v ${PWD}:/workspace:z \
-e AWS_ACCESS_KEY_ID=XXX \
-e AWS_SECRET_ACCESS_KEY=XXX \
-e AWS_DEFAULT_REGION=eu-west-1 \
quay.io/crcont/crc-cloud:v0.0.2 destroy \
--project-name "crc-ocp412" \
--backed-url "file:///workspace" \
--provider "aws"
```
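Given the warning above about real money, it can be worth double-checking after destroy that nothing is left running in the region. An illustrative helper using the AWS CLI, not part of crc-cloud; it lists all running/pending instances, not only those created by crc-cloud.

```shell
# List instances still running or starting in a region after destroy.
running_instances() {
  aws ec2 describe-instances \
    --filters "Name=instance-state-name,Values=pending,running" \
    --region "${1:-eu-west-1}" \
    --query 'Reservations[].Instances[].InstanceId' \
    --output text
}

# [ -z "$(running_instances eu-west-1)" ] && echo "all clear"
```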
