# CRC Cloud - Runs Containers in the Cloud

## Disposable OpenShift instances on cloud in minutes

![CRC Cloud](assets/crc-cloud.png)

This project builds upon [OpenSpot](https://github.com/ksingh7/openspot), made by [@ksingh7](https://github.com/ksingh7), and on all the improvements made by [@tsebastiani](https://github.com/tsebastiani), who created the next generation of OpenSpot by removing the bare-metal hard requirement for running the single-node cluster on the cloud.

## Disclaimer

This project has been developed for **experimental** purposes only and is **absolutely not** meant to run production clusters.

The authors are not responsible in any manner for any cost the user may incur due to inexperience or software failure.

Before running the script, **be sure** you have adequate experience to safely create and destroy resources on AWS, or any other cloud provider that will be supported, **without** the help of this software, so that you can recover **manually** from possible issues.

## Overview

This is a side project of [`Openshift Local`, formerly `CRC`](https://github.com/crc-org). While the main purpose of `CRC` and the `crc` CLI is to spin up `Openshift Single Node` clusters on local development environments (multi-platform and multi-arch), `crc-cloud` offers those clusters on the cloud (multi-provider).

The following diagram shows the expected interaction between a user of `crc-cloud` and the assets provided by `CRC`:

![crc-cloud flow](docs/crc-cloud-flow.svg?raw=true)

## Usage

#### Prerequisites
<a name="prereq"></a>
The basic requirements to run a single-node OpenShift cluster with **CRC-Cloud** are:
- register a Red Hat account and get a pull secret from https://console.redhat.com/openshift/create/local
- create an access key for your AWS account and grab the *ACCESS_KEY_ID* and the *SECRET_ACCESS_KEY* (instructions can be found [here](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html))

To facilitate the usage of `crc-cloud`, a [container image](https://quay.io/repository/crcont/crc-cloud) is offered with all required dependencies. All three supported operations can be executed using the container.
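
As an example, the image can be pulled ahead of time; the tag below is the one used throughout the samples in this document:

```bash
podman pull quay.io/crcont/crc-cloud:v0.0.2
```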

### Authentication

All operations require an authentication mechanism to be in place. Any `aws` authentication mechanism is supported by `crc-cloud` (a minimal setup sketch follows the list):

- long-term credentials: `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as environment variables
- short-lived credentials: in addition to `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, these require `AWS_SESSION_TOKEN`
- credentials in a config file (default `~/.aws/config`); in case of multiple profiles, `AWS_PROFILE` is also accepted
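
As a minimal sketch, the environment for each mechanism could be prepared as follows (all values are placeholders):

```bash
# Long-term credentials
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."

# Short-lived credentials additionally need a session token
export AWS_SESSION_TOKEN="..."

# Or select a named profile from ~/.aws/config
export AWS_PROFILE="my-profile"
```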

### Restrictions

The `import` operation downloads the bundle offered by CRC and transforms it into an image supported by `AWS`. This involves some disk-demanding operations, so at least 70 GB of free disk space is required to run it.
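
A quick pre-flight check, assuming a POSIX shell with `df` and `awk` available:

```bash
# Fail early if the filesystem backing the working directory
# has less than the ~70 GB the import operation needs
avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt $((70 * 1024 * 1024)) ]; then
    echo "not enough free disk space for 'import'" >&2
fi
```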

The AWS instance type of choice is *c6a.2xlarge*, with 8 vCPUs and 16 GB of RAM. This will be customizable in the future; for the moment, the fixed type imposes some [restrictions](https://aws.amazon.com/about-aws/whats-new/2022/12/amazon-ec2-m6a-c6a-instances-additional-regions/) on the regions available to run crc-cloud (a quick availability check is sketched after the list):

- us-east-1 and us-east-2
- us-west-1 and us-west-2
- ap-south-1, ap-southeast-1, ap-southeast-2 and ap-northeast-1
- eu-west-1, eu-central-1 and eu-west-2
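
To verify that the instance type is actually offered in a target region, one option (assuming the `aws` CLI is installed and configured) is:

```bash
# List c6a.2xlarge offerings in a region; an empty result means
# the region cannot run crc-cloud for the moment
aws ec2 describe-instance-type-offerings \
    --region eu-west-1 \
    --filters "Name=instance-type,Values=c6a.2xlarge"
```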

### Operations

#### Import

The `import` operation downloads an official CRC bundle, transforms it, and imports the result as an AMI into the user's account. It has to be run in each region where the user wants to spin up the cluster.

Usage:

```bash
import crc cloud image

Usage:
  crc-cloud import [flags]

Flags:
      --backed-url string              backed for stack state. Can be a local path with format file:///path/subpath or s3 s3://existing-bucket
      --bundle-shasumfile-url string   custom url to download the shasum file to verify the bundle artifact
      --bundle-url string              custom url to download the bundle artifact
  -h, --help                           help for import
      --output string                  path to export assets
      --project-name string            project name to identify the instance of the stack
      --provider string                target cloud provider
```

Outputs:

- `image-id`: a file with the AMI ID of the imported image
- `id_ecdsa`: the key required to spin up the image (it will be needed by the `create` operation; it is the user's responsibility to store this key)
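
For instance, one way to keep the key around safely (the destination path is just an example):

```bash
# Restrict permissions and keep a copy outside the disposable workspace
chmod 600 id_ecdsa
cp id_ecdsa ~/.ssh/crc-cloud_id_ecdsa
```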

Sample

```bash
podman run -d --rm \
    -v ${PWD}:/workspace:z \
    -e AWS_ACCESS_KEY_ID=${access_key_value} \
    -e AWS_SECRET_ACCESS_KEY=${secret_key_value} \
    -e AWS_DEFAULT_REGION=eu-west-1 \
    quay.io/crcont/crc-cloud:v0.0.2 import \
    --project-name "ami-ocp412" \
    --backed-url "file:///workspace" \
    --output "/workspace" \
    --provider "aws" \
    --bundle-url "https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/bundles/openshift/4.12.0/crc_libvirt_4.12.0_amd64.crcbundle" \
    --bundle-shasumfile-url "https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/bundles/openshift/4.12.0/sha256sum.txt"
```
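
Note that `-d` detaches the container, so the operation keeps running in the background; one way to follow its progress is:

```bash
# Follow the logs of the most recently created container
podman logs -f "$(podman ps -lq)"
```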

#### Create

The `create` operation is responsible for creating all the resources required on the cloud provider to spin up the Openshift Single Node cluster.

Usage:

```bash
create crc cloud instance on AWS

Usage:
  crc-cloud create aws [flags]

Flags:
      --aws-ami-id string   AMI identifier
  -h, --help                help for aws

Global Flags:
      --backed-url string            backed for stack state. Can be a local path with format file:///path/subpath or s3 s3://existing-bucket
      --key-filepath string          path to init key obtained when importing the image
      --output string                path to export assets
      --project-name string          project name to identify the instance of the stack
      --pullsecret-filepath string   path for pullsecret file
```

Outputs:

- `host`: a file containing the address of the host running the cluster
- `username`: a file containing the username to connect to the remote host
- `id_rsa`: the key to connect to the remote host
- `password`: the password generated for the `kubeadmin` and `developer` default cluster users
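
Putting the outputs together, a connection to the machine could look like this (assuming the output files are in the current directory):

```bash
# SSH into the host running the single-node cluster
chmod 600 id_rsa
ssh -i id_rsa "$(cat username)@$(cat host)"
```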

Sample

```bash
podman run -d --rm \
    -v ${PWD}:/workspace:z \
    -e AWS_ACCESS_KEY_ID=${access_key_value} \
    -e AWS_SECRET_ACCESS_KEY=${secret_key_value} \
    -e AWS_DEFAULT_REGION=eu-west-1 \
    quay.io/crcont/crc-cloud:v0.0.2 create aws \
    --project-name "crc-ocp412" \
    --backed-url "file:///workspace" \
    --output "/workspace" \
    --aws-ami-id "ami-xxxx" \
    --pullsecret-filepath "/workspace/pullsecret" \
    --key-filepath "/workspace/id_ecdsa"
```

#### Destroy

The `destroy` operation removes any resource created on the cloud provider. It uses the files holding the state of the infrastructure, which were stored at the location defined by the `backed-url` parameter of the `create` operation.

Usage:

```bash
destroy crc cloud instance

Usage:
  crc-cloud destroy [flags]

Flags:
      --backed-url string     backed for stack state. Can be a local path with format file:///path/subpath or s3 s3://existing-bucket
  -h, --help                  help for destroy
      --project-name string   project name to identify the instance of the stack
      --provider string       target cloud provider
```

Sample

```bash
podman run -d --rm \
    -v ${PWD}:/workspace:z \
    -e AWS_ACCESS_KEY_ID=${access_key_value} \
    -e AWS_SECRET_ACCESS_KEY=${secret_key_value} \
    -e AWS_DEFAULT_REGION=eu-west-1 \
    quay.io/crcont/crc-cloud:v0.0.2 destroy \
    --project-name "crc-ocp412" \
    --backed-url "file:///workspace" \
    --provider "aws"
```
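
As the disclaimer above stresses, it is worth double-checking that nothing was left behind; for example, with the `aws` CLI (not part of crc-cloud, shown only as an illustration):

```bash
# List any instances still running in the region used by the project
aws ec2 describe-instances \
    --region eu-west-1 \
    --filters "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].InstanceId"
```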