Infrastructure Deployment API implementation #16

Closed
wants to merge 6 commits into from
Changes from 3 commits
82 changes: 60 additions & 22 deletions README.md
@@ -15,10 +15,49 @@ I stumbled upon OpenSpot (https://github.com/ksingh7/openspot) made by my collea
Moreover, the solution was based on CRC, which creates a qemu VM to run the (single-node) cluster, so bare-metal instances were needed and the startup time was too long for the purpose.
We had a meeting and they gave me all the detailed instructions on how to run the qemu image directly on standard AWS EC2 instances and properly configure the OpenShift single-node cluster; only the code was missing....

## Cloud Providers
For the moment only AWS is supported. Other will be added soon.
## Infrastructure Deployers
<a name="deployer"></a>
In order to abstract the infrastructure provisioning from the OpenShift instance provisioning, an **Infrastructure Deployer API** has been developed. If you're interested in how to implement a new Infrastructure Deployer, please refer to the [documentation](api/deployer/README.md).

### Available Deployers
| Name | Status |
| --- | --- |
| bash-aws | Stable (Default) |

### bash-aws
<a name="bash-aws-deployer"></a>
This deployer is designed to deploy **CRC-Cloud** on AWS. It's built on top of the AWS CLI v2 and its logic relies on bash scripting.

#### Prerequisites
<a name="bash-aws-deployer-prereq"></a>
- create an access key for your AWS account and grab the *ACCESS_KEY_ID* and the *SECRET_ACCESS_KEY* (instructions can be found [here](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html)); a quick way to verify the credentials is shown below
- AWS CLI installed and in $PATH
<br/>
<br/>
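As a quick sanity check that the credentials and the AWS CLI are working before launching a deployment, something along these lines can be used (the key values are placeholders):

```
# replace the placeholders with the values generated for your account
export AWS_ACCESS_KEY_ID="<your_access_key_id>"
export AWS_SECRET_ACCESS_KEY="<your_secret_access_key>"
export AWS_DEFAULT_REGION="us-west-2"

# prints the account/ARN the credentials resolve to; fails if they are invalid
aws sts get-caller-identity
```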

The AWS instance type of choice is *c6in.2xlarge* with 8 vCPUs and 16 GB of RAM.
This instance will cost ~0.45$ per hour (price may vary depending on the region) and it will take ~11 minutes to get a working cluster.
Increasing or decreasing the resources will affect the deployment time together with the price per hour. If you want to change instance type, keep in mind that the minimum hardware requirements to run CRC (on which this solution is based) are 4 vCPUs and 8 GB of RAM; please refer to the [documentation](https://developers.redhat.com/blog/2019/09/05/red-hat-openshift-4-on-your-laptop-introducing-red-hat-codeready-containers) for further information.


#### CLI arguments
| Argument | Description | Mandatory |
| --- | --- | --- |
| -a | AMI ID (Amazon Machine Image) from which the VM will be Instantiated | false |
| -i | EC2 Instance Type | false |

#### Container variables
| Variable | Description | Mandatory |
| --- | --- | --- |
| AWS_ACCESS_KEY_ID | AWS access key (info [here](#bash-aws-deployer-prereq)) | true |
| AWS_SECRET_ACCESS_KEY | AWS secret access key (info [here](#bash-aws-deployer-prereq)) | true |
| AWS_DEFAULT_REGION | AWS region where the cluster will be deployed (currently us-west-2 is the only supported region) | true |
| INSTANCE_TYPE | AWS EC2 Instance Type | false |
| AMI_ID | AMI ID (Amazon Machine Image) from which the VM will be Instantiated | false |

<br/>
<br/>

**Note:** AWS AMIs (Amazon Machine Images) are regional resources, so, for the moment, the only supported region is **us-west-2**. In the next few days the AMI will be copied to other regions; please be patient, it will take a while.
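Putting the variables above together, a creation run with this deployer might look like the following sketch (all values are placeholders; ```WORKING_MODE``` and ```PULL_SECRET``` are the generic container variables described in the Usage section below, and the ```base64``` command substitution is just one possible way to produce the expected pull secret string):

```
podman run -v <HOST_WORKDIR_PATH>:/workdir\
 -e WORKING_MODE=C\
 -e PULL_SECRET=$(cat <PULL_SECRET_PATH> | base64 -w0)\
 -e AWS_ACCESS_KEY_ID=<your_access_key_id>\
 -e AWS_SECRET_ACCESS_KEY=<your_secret_access_key>\
 -e AWS_DEFAULT_REGION=us-west-2\
 -ti quay.io/crcont/crc-cloud
```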

## Usage
@@ -27,15 +66,9 @@ For the moment only AWS is supported. Other will be added soon.
<a name="prereq"></a>
The basic requirements to run a single-node OpenShift cluster with **CRC-Cloud** are:
- register a Red Hat account and get a pull secret from https://console.redhat.com/openshift/create/local
- create an access key for your AWS account and grab the *ACCESS_KEY_ID* and the *SECRET_ACCESS_KEY* (instructions can be found [here](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html))
<br/>
<br/>

The AWS instance type of choice is *c6in.2xlarge* with 8vcpu and 16 GB of RAM.
This instance will cost ~0.45$ per hour (price may vary depending from the region) and will take ~11 minutes to have a working cluster.
Increasing or decreasing the resources will affect the deployment time together with the price per hour. If you want to change instance type keep in mind that the minimum hardware requirements to run CRC (on which this solution is based) are 4vcpus and 8GB of RAM, please refer to the [documentation](https://developers.redhat.com/blog/2019/09/05/red-hat-openshift-4-on-your-laptop-introducing-red-hat-codeready-containers) for further informations.

**WARNING:** Running VM instances will cost you **real money** so be extremely careful to verify that all the resources instantiated are **removed** once you're done and remember that you're running them at **your own risk and cost**
**WARNING:** Running VM instances on the cloud will cost you **real money**, so be extremely careful to verify that all the instantiated resources are **removed** once you're done, and remember that you're running them at **your own risk and cost**.



@@ -57,7 +90,7 @@ Please **be careful** on deleting the working directory content because without
**NOTE (podman only):** In order to make the mounted workdir read-write accessible from the container, it is necessary to change the SELinux security context of the folder with the following command
```chcon -Rt svirt_sandbox_file_t <HOST_WORKDIR_PATH>```

#### Single node cluster creation
#### Single node cluster creation ([bash-aws](#bash-aws-deployer) Infrastructure deployer)
```
<podman|docker> run -v <HOST_WORKDIR_PATH>:/workdir\
-e WORKING_MODE=C\
@@ -68,7 +101,7 @@ Please **be careful** on deleting the working directory content because without
-ti quay.io/crcont/crc-cloud
```

#### Single node cluster teardown
#### Single node cluster teardown ([bash-aws](#bash-aws-deployer) Infrastructure deployer)
```
<podman|docker> run -v <HOST_WORKDIR_PATH>:/workdir\
-e WORKING_MODE=T\
@@ -82,6 +115,8 @@ Please **be careful** on deleting the working directory content because without

#### Environment variables
Environment variables will be passed to the container from the command line invocation with the ```-e VARIABLE=VALUE``` option that you can find above.
**NOTE:** Every deployer may have its own environment variables; please refer to the [Infrastructure Deployer Section](#deployer) for further details.

##### Mandatory Variables

**Cluster creation**
@@ -90,9 +125,7 @@ Environment variables will be passed to the container from the command line invo
|---|---|
| WORKING_MODE | C (creation mode) |
| PULL_SECRET | base64 string of the Red Hat account pull secret (it is recommended to use command substitution to generate the string as described above) |
| AWS_ACCESS_KEY_ID | AWS access key (infos [here](#prereq)) |
| AWS_SECRET_ACCESS_KEY | AWS secret access key (infos [here](#prereq)) |
| AWS_DEFAULT_REGION | AWS region where the cluster will be deployed (currently us-west-2 is the only supported region) |



**Cluster teardown**
@@ -114,6 +147,8 @@ Environment variables will be passed to the container from the command line invo
| PASS_KUBEADMIN | overrides the default password (kubeadmin) for kubeadmin account |
| PASS_REDHAT | overrides the default password (redhat) for redhat account |
| INSTANCE_TYPE | overrides the default AWS instance type (c6in.2xlarge, info [here](#prereq)) |
| DEPLOYER_API | selects the infrastructure deployer (please refer to the [deployer API documentation](api/deployer/README.md)) |
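For example (a sketch with placeholder values), overriding the kubeadmin password and explicitly selecting the default deployer just means adding the corresponding ```-e``` options to the creation command shown above:

```
<podman|docker> run -v <HOST_WORKDIR_PATH>:/workdir\
 -e WORKING_MODE=C\
 -e PULL_SECRET=<pull_secret_base64>\
 -e PASS_KUBEADMIN=<custom_password>\
 -e DEPLOYER_API=bash-aws\
 -e AWS_ACCESS_KEY_ID=<your_access_key_id>\
 -e AWS_SECRET_ACCESS_KEY=<your_secret_access_key>\
 -e AWS_DEFAULT_REGION=us-west-2\
 -ti quay.io/crcont/crc-cloud
```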




@@ -122,13 +157,14 @@ Environment variables will be passed to the container from the command line invo
To run **CRC-Cloud** from your command line you must be on Linux; be sure to have the following programs installed and configured on your box

- bash (>=v4)
- AWS CLI
- jq
- md5sum
- curl
- head
- ssh-keygen
- GNU sed
- GNU grep
- cat
- nc (netcat)
- ssh client
- scp
@@ -148,15 +184,16 @@ at the end of the process the script will print the public address of the consol
Below you'll find all the options available

```
./crc-cloud.sh -C -p pull secret path [-d developer user password] [-k kubeadmin user password] [-r redhat user password] [-a AMI ID] [-t Instance type]
./crc-cloud.sh -C -p pull secret path [-D infrastructure_deployer] [-d developer user password] [-k kubeadmin user password] [-r redhat user password] [-a AMI ID] [-t Instance type]
where:
-D Infrastructure Deployer (default: $DEFAULT_DEPLOYER) *NOTE* Must match the folder name placed in api/deployer (please refer to the deployer documentation in api/deployer/README.md)
-C Cluster Creation mode
-p pull secret file path (download from https://console.redhat.com/openshift/create/local)
-d developer user password (optional, default: developer)
-k kubeadmin user password (optional, default: kubeadmin)
-r redhat user password (optional, default: redhat)
-a AMI ID (Amazon Machine Image) from which the VM will be Instantiated (optional, default: ami-0569ce8a44f2351be)
-i EC2 Instance Type (optional, default: c6in.2xlarge)
-d developer user password (optional, default: $PASS_DEVELOPER)
-k kubeadmin user password (optional, default: $PASS_KUBEADMIN)
-r redhat user password (optional, default: $PASS_REDHAT)
-a AMI ID (Amazon Machine Image) from which the VM will be Instantiated (optional, default: $AMI_ID)
-i EC2 Instance Type (optional, default: $INSTANCE_TYPE)
-h show this help text
```
#### Single node cluster teardown
@@ -165,7 +202,8 @@ To teardown the single node cluster the basic command is
this will refer to the *latest* run found in ```<openspot_path>/workspace```; if you have several run folders in your workspace, you can specify the one you want to tear down with the parameter ```-v <run_id>```, where ```<run_id>``` corresponds to the numeric folder name containing the metadata of the cluster that will be deleted

```
./crc-cloud.sh -T [-v run id]
./crc-cloud.sh -T [-D infrastructure_deployer] [-v run id]
-D Infrastructure Deployer (default: $DEFAULT_DEPLOYER) *NOTE* Must match the folder name placed in api/deployer (please refer to the deployer documentation in api/deployer/README.md)
-T Cluster Teardown mode
-v The ID of the run that is going to be destroyed; corresponds to the numeric name of the folders created in workdir (optional, default: latest)
-h show this help text
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
v1.0
v1.1
52 changes: 52 additions & 0 deletions api/README.md
@@ -0,0 +1,52 @@
# CRC-Cloud Infrastructure Deployer API

The Infrastructure Deployer API has been designed to abstract the infrastructure provisioning from the OpenShift instance provisioning. The first version of CRC-Cloud relied on AWS and the AWS CLI, but as the project gained interest we started considering support for other cloud providers, so we decided to implement this abstraction to make it easy to plug in other deployment technologies, such as IaC tools like [Ansible](https://www.redhat.com/it/engage/delivery-with-ansible-20170906?sc_cid=7013a000002w14JAAQ&gclid=EAIaIQobChMIwLPlpZG9_AIVA5zVCh2EPw9VEAAYASAAEgJJXfD_BwE&gclsrc=aw.ds), [Terraform](https://terraform.io), [Pulumi](https://www.pulumi.com/), etc.

## Plugin loading and API implementation
In order to be loaded, an Infrastructure Deployer plugin must have a folder in ```plugin/deployer```. This folder must have the name of the plugin that will be passed to ```crc-cloud.sh``` with the ```-D``` option; so, for example, if you want to create a plugin named ```my-deployer```, the plugin code and resources will be stored in ```<openspot_path>/plugin/deployer/my-deployer```.
You can find an example implementation from which to start developing a new plugin in ```plugin/deployer/example``` (and you can even run it!).
The plugin folder must contain a ```main.sh``` script that is the entrypoint of the plugin. The ```main.sh``` must implement the following methods:

```
deployer_create() {
    #all the command line args will be passed to that function
    pr_info "creates the infrastructure"
    exit 0
}

deployer_teardown() {
    #all the command line args will be passed to that function
    pr_info "destroys the infrastructure"
    exit 0
}

deployer_get_eip() {
    echo "return the external (public) ip of the VM"
}

deployer_get_iip() {
    echo "return the internal ip (within the cloud infrastructure) of the VM"
}

deployer_usage() {
    echo "prints the usage of the infrastructure deployer plugin including all its cli parameters"
}
```

The **CRC-Cloud** engine will check if these methods are implemented and will exit if they're not.
All the plugin-specific resources (other scripts, IaC definitions, etc.) should be kept in the plugin folder, and the paths that refer to them must be set accordingly by the developer.
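For instance, assuming the bundled ```example``` plugin mentioned above and a pull secret saved as ```pull_secret.json``` (a hypothetical path), selecting that plugin from the command line would look like this sketch:

```
./crc-cloud.sh -C -p pull_secret.json -D example
```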

## Global Variables

The **CRC-Cloud** engine exposes some variables to the plugin; they must be used to keep the plugin logic consistent with the engine itself.

| Variable | Type| Description |
| --- | --- | --- |
| $CONTAINER | Boolean | It is set if the script is running inside a container |
| $RANDOM_SUFFIX | String | The random suffix applied to the resources created inside the cloud provider in order to avoid conflicts with other **CRC-Cloud** instances running in the same namespace (can be ignored if the deployment method provides its own logic) |
| $WORKDIR | String | The folder where all the deployment status info must be stored (it will be created by the engine) |
| $PLUGIN_ROOT_FOLDER | String | The folder containing the loaded plugin; it can be used as the starting path for plugin resources |
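As a sketch of how a plugin might consume these variables (the resource name, helper script and state file below are hypothetical):

```
deployer_create() {
    # suffix the VM name to avoid clashes with other CRC-Cloud runs
    local instance_name="crc-cloud-vm-${RANDOM_SUFFIX}"
    # plugin-local resource, resolved relative to the plugin folder
    source "${PLUGIN_ROOT_FOLDER}/scripts/provision.sh"
    # persist state needed later by deployer_teardown / deployer_get_eip
    echo "${instance_name}" > "${WORKDIR}/instance_name"
    pr_info "created instance ${instance_name}"
}
```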

## *Private* method naming conventions

In order to increase code readability, non-interface method names ("private" methods) must start with an underscore "_"
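For example (a sketch of a hypothetical helper inside a plugin's ```main.sh```):

```
# internal helper, not part of the deployer interface: note the leading underscore
_wait_for_instance() {
    pr_info "waiting for the instance to come up"
}
```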
95 changes: 95 additions & 0 deletions api/common.sh
@@ -0,0 +1,95 @@
#!/bin/bash
# Common helpers shared by the CRC-Cloud engine and the infrastructure deployer plugins.
# The pr_* logging functions append to the creation or teardown log depending on WORKING_MODE.

pr_info() {
    if [[ $WORKING_MODE == "C" ]]
    then
        echo "[INF] $1" | (tee -a $LOG_FILE 2>/dev/null)
    else
        echo "[INF] $1" | (tee -a $TEARDOWN_LOGFILE 2>/dev/null)
    fi
}

pr_error() {
    if [[ $WORKING_MODE == "C" ]]
    then
        echo "[ERR] $1" | (tee -a $LOG_FILE 2>/dev/null)
    else
        echo "[ERR] $1" | (tee -a $TEARDOWN_LOGFILE 2>/dev/null)
    fi
}

pr_end() {
    if [[ $WORKING_MODE == "C" ]]
    then
        echo "[END] $1" | (tee -a $LOG_FILE 2>/dev/null)
    else
        echo "[END] $1" | (tee -a $TEARDOWN_LOGFILE 2>/dev/null)
    fi
}

stop_if_failed(){
    EXIT_CODE=$1
    MESSAGE=$2
    if [[ $EXIT_CODE != 0 ]]
    then
        pr_error "$MESSAGE"
        exit $EXIT_CODE
    fi
}
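# Typical usage (sketch): call right after a command whose failure should abort the run, e.g.
#   some_command
#   stop_if_failed $? "some_command failed, cannot continue"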

check_ssh(){
    # probe the ssh port on host $1 with netcat; returns 0 as soon as sshd is reachable
    $NC -z $1 $SSH_PORT > /dev/null 2>&1
    return $?
}

wait_instance_readiness(){
    # poll the instance every second until sshd answers on $SSH_PORT
    RES=1
    while [[ $RES != 0 ]]
    do
        check_ssh $1
        RES=$?
        sleep 1
        pr_info "waiting for sshd to become ready on $1, hang on...."
    done
}


api_load_deployer() {
    [ ! -d "$PLUGIN_DEPLOYER_FOLDER/$1" ] && stop_if_failed 1 "Deployer API $1 folder not found in $PLUGIN_DEPLOYER_FOLDER/$1, please refer to api/README.md for API specifications"
    [ ! -f "$PLUGIN_DEPLOYER_FOLDER/$1/main.sh" ] && stop_if_failed 1 "main.sh not found for deployer $1 in folder $PLUGIN_DEPLOYER_FOLDER/$1, please refer to api/README.md for API specifications"
    source $PLUGIN_DEPLOYER_FOLDER/$1/main.sh

    # check that every interface method is implemented by the loaded plugin
    [[ ! `declare -F deployer_create` ]] &&\
        stop_if_failed 1 "deployer_create method not found in main.sh implementation for $1 infrastructure deployer api, please refer to api/README.md for API specifications"

    [[ ! `declare -F deployer_teardown` ]] &&\
        stop_if_failed 1 "deployer_teardown method not found in main.sh implementation for $1 infrastructure deployer api, please refer to api/README.md for API specifications"

    [[ ! `declare -F deployer_get_eip` ]] &&\
        stop_if_failed 1 "deployer_get_eip method not found in main.sh implementation for $1 infrastructure deployer api, please refer to api/README.md for API specifications"

    [[ ! `declare -F deployer_get_iip` ]] &&\
        stop_if_failed 1 "deployer_get_iip method not found in main.sh implementation for $1 infrastructure deployer api, please refer to api/README.md for API specifications"

    [[ ! `declare -F deployer_usage` ]] &&\
        stop_if_failed 1 "deployer_usage method not found in main.sh implementation for $1 infrastructure deployer api, please refer to api/README.md for API specifications"

    # CHECKS THAT NON-INTERFACE METHOD NAMES START WITH AN UNDERSCORE; IF THE INTERFACE IS EXTENDED, ADD THE NEW METHODS TO THE SWITCH CASE
    for i in `$CAT $PLUGIN_DEPLOYER_FOLDER/$1/main.sh | $GREP -P "^\s*.+\s*\(\)\s*\{"|$SED -r 's/(.+)\(\)\s*\{/\1/'`
    do
        case "$i" in
            deployer_create);;
            deployer_teardown);;
            deployer_get_eip);;
            deployer_get_iip);;
            deployer_usage);;
            *)
                [[ ${i::1} != '_' ]] && stop_if_failed 1 "$i() is not a valid private method name, non-interface methods must start with underscore '_'"
            ;;
        esac
    done

    PLUGIN_ROOT_FOLDER=$PLUGIN_DEPLOYER_FOLDER/$1
    pr_info "successfully loaded $1 deployer plugin"
}
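# Example (sketch): a typical caller loads the selected plugin first and then dispatches to it,
# along these lines:
#   api_load_deployer "$DEPLOYER_API"
#   deployer_create "$@"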

73 changes: 0 additions & 73 deletions common.sh

This file was deleted.
