From db2fcd7d4092149f6c5985b24ceb2001c12331b6 Mon Sep 17 00:00:00 2001
From: Matt Jadud
Date: Sun, 5 Jan 2025 10:03:28 -0500
Subject: [PATCH] Adding a demo workflow

Going to see if I can do this in a branch.
---
 .github/workflows/demo.yaml |  18 +++++
 terraform/Makefile          |   2 +-
 terraform/README.md         | 127 ++++++++++++++++++++++++++++++++----
 3 files changed, 134 insertions(+), 13 deletions(-)
 create mode 100644 .github/workflows/demo.yaml

diff --git a/.github/workflows/demo.yaml b/.github/workflows/demo.yaml
new file mode 100644
index 0000000..e5bb161
--- /dev/null
+++ b/.github/workflows/demo.yaml
@@ -0,0 +1,18 @@
+name: GitHub Actions Demo
+run-name: ${{ github.actor }} is testing out GitHub Actions 🚀
+on: [push]
+jobs:
+  Explore-GitHub-Actions:
+    runs-on: ubuntu-latest
+    steps:
+      - run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
+      - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
+      - run: echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
+      - name: Check out repository code
+        uses: actions/checkout@v4
+      - run: echo "💡 The ${{ github.repository }} repository has been cloned to the runner."
+      - run: echo "🖥️ The workflow is now ready to test your code on the runner."
+      - name: List files in the repository
+        run: |
+          ls ${{ github.workspace }}
+      - run: echo "🍏 This job's status is ${{ job.status }}."
\ No newline at end of file
diff --git a/terraform/Makefile b/terraform/Makefile
index ad23616..a6ccd4d 100644
--- a/terraform/Makefile
+++ b/terraform/Makefile
@@ -1,6 +1,6 @@
 .PHONY: default
 default:
-	echo "make (dev|staging|production)"
+	@echo "make (dev|staging|production)"
 
 .PHONY: dev
 dev:
diff --git a/terraform/README.md b/terraform/README.md
index 7705537..5d45785 100644
--- a/terraform/README.md
+++ b/terraform/README.md
@@ -6,18 +6,26 @@
 https://stackoverflow.com/a/74655690
 
 Because we're a fundamentally small application (or, a small number of services), we're going to take a simpler "dev/staging/prod" approach to organization.
 
-## layout
+# establishing the org/space
+
+The TF included in this repository does *not* attempt to stand up everything from scratch. That is, if we suffer a catastrophic failure, lose all of Cloud.gov, and have to rebuild from scratch... there will be manual steps.
+
+## create the org and space
+
+We are currently in the org `gsa-tts-usagov`, and that may change.
+
+We intend to have three spaces: `dev`, `staging`, and `production`.
 
-The `terraform` folder has folders for each environment. The `sandbox` environment (if we create it) is code that is intended to run locally on a per-developer basis for deploying the application to a cloud.gov sandbox (1GB RAM).
+* Every merge to `main` will push to `dev` and run tests.
+* Every morning, we will push to `staging` and run E2E tests.
+* Pushes to `production` will happen via release cuts, and will be manual.
 
-# deploying
-
-This is a very early README. At this point, it assume only one person is running deploys. We have work to do still.
-
-## credentials
+### obtaining local credentials (remove)
 
-First, you need to set up deployment creds.
+These credentials are only needed during local dev... this step goes away once we're in GitHub Actions.
 
 https://cloud.gov/docs/services/cloud-gov-service-account/
 
@@ -29,16 +37,111 @@
 cf_password = ""
 api_key = ""
 ```
 
-(Where does the API key come from? ...)
-
-## make
+## launching the stack
+
+Running
+
+```
+make dev
+```
+
+at the top of the tree will deploy the `dev` stack.
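+
+For now, the state for this deploy lives on the machine that ran `make dev`. As a sketch only (the bucket name, key, and region below are placeholders, not real values), moving the state to a remote S3 backend would look something like:
+
+```
+terraform {
+  backend "s3" {
+    bucket = "search-terraform-state" # placeholder bucket name
+    key    = "dev/terraform.tfstate"  # presumably one state file per space
+    region = "us-gov-west-1"          # placeholder region
+  }
+}
+```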
+More work needs to be done in order to store the TF state in S3, so that we can run this from GitHub Actions. For now, this is not complete; if different devs deploy, they will have to completely destroy (tear down) the state of the other devs. This will become... annoying... once we start storing data in buckets. (Buckets must be empty in order to be torn down.)
+
+So, the deploy to Cloud.gov is still a work in progress. But it is possible, while testing/developing, to deploy from a local machine. Once we have GitHub Actions in place, we will *never* deploy from a local machine. We will always deploy from an action.
+
+## layout
+
+At the top of the `terraform` directory are two files that matter:
+
+* `Makefile`
+* `developers.tf`
+
+`developers.tf` will become part of our onboarding. This file is where devs add themselves, as an initial commit, so that they gain access to the Cloud.gov environment. We will control access to Cgov through this file. (This wiring is not in place yet, but the file is there. The access controls have to be implemented as scripts, executed in a GitHub Action, that call the CF API on Cloud.gov.)
+
+Cgov deployments are organized into `organizations` and `spaces`. An organization might be `gsa-tts-search`, and a space might be `dev`, `staging`, or `production`.
+
+There are two directories (currently) that contain the Terraform deploy scripts:
+
+* `dev`
+* `shared`
+
+`dev` contains the variables and drivers for deploying to our (eventual) `dev` space. Every service that we deploy will get a section in this file:
+
+```
+module "fetch" {
+  source       = "../shared/services/fetch"
+  # disk_quota = 256
+  # memory     = 128
+  # instances  = 1
+  space_name   = data.cloudfoundry_space.app_space.name
+  app_space_id = data.cloudfoundry_space.app_space.id
+  domain_id    = data.cloudfoundry_domain.public.id
+  databases    = module.databases.ids
+  buckets      = module.buckets.ids
+}
+```
+
+I have not yet determined if this can be made reusable between spaces (that is, whether we can avoid this boilerplate). Each service has to be wired up to the correct databases and S3 buckets _in its space_ in order to execute. Further, we might want to allocate different amounts of RAM, disk, and instances to services in the different spaces. That is, we might run 1 instance of `fetch` in the `dev` environment, but 3 instances of `fetch` in `production`. Because we only have one pool of RAM for all of the spaces combined, we will probably run light in lower environments, and run a fuller stack in `production`.
+
+The service itself is defined in `shared/services/`. We apparently have to include the provider (?), define the variables for the module, the outputs, and the module itself. Put another way:
 
-Next,
+* `providers.tf` is boilerplate. It will need to change when we switch to the official `cloudfoundry/cloudfoundry` provider.
+* `variables.tf` defines the variables that the service needs to have defined in order to execute. For example, when instantiating the module, we need to provide the amount of RAM, disk, and the number of instances the service will be created with. (A sketch of such a file follows this list.)
+* `service.tf` defines the service itself.
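+
+As a rough sketch, `variables.tf` for the `fetch` module might look something like the following; the names mirror the module instantiation above, but the defaults are guesses for illustration, not settled values.
+
+```
+variable "disk_quota" {
+  type    = number
+  default = 256 # MB; a guess, not a settled value
+}
+
+variable "memory" {
+  type    = number
+  default = 128 # MB; a guess, not a settled value
+}
+
+variable "instances" {
+  type    = number
+  default = 1
+}
+
+variable "space_name" {
+  type = string
+}
+
+variable "app_space_id" {
+  type = string
+}
+
+variable "domain_id" {
+  type = string
+}
+
+# Maps from database/bucket name to service instance ID;
+# these receive module.databases.ids and module.buckets.ids.
+variable "databases" {
+  type = map(string)
+}
+
+variable "buckets" {
+  type = map(string)
+}
+```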
+
+We can see the `fetch` service:
 
 ```
-make apply_all
+resource "cloudfoundry_app" "fetch" {
+  name                       = "fetch"
+  space                      = var.app_space_id # data.cloudfoundry_space.app_space.id
+  buildpacks                 = ["https://github.com/cloudfoundry/apt-buildpack", "https://github.com/cloudfoundry/binary-buildpack.git"]
+  path                       = "${path.module}/../app.tar.gz"
+  source_code_hash           = filesha256("${path.module}/../app.tar.gz")
+  disk_quota                 = var.disk_quota
+  memory                     = var.memory
+  instances                  = var.instances
+  strategy                   = "rolling"
+  timeout                    = 200
+  health_check_type          = "port"
+  health_check_timeout       = 180
+  health_check_http_endpoint = "/api/heartbeat"
+
+  service_binding {
+    service_instance = var.databases.queues
+  }
+
+  service_binding {
+    service_instance = var.databases.work
+  }
+
+  service_binding {
+    service_instance = var.buckets.fetch
+  }
+}
 ```
 
-which will run a deploy from start to finish.
+All of the services get the entire codebase; this is because we then launch, on a per-instance basis, different code from `cmd`.
+
+Variables include the ID of the space we are deploying to (e.g. we do not deploy to `dev`, but to a UUIDv4 value representing `dev`), the disk, memory, and instance counts, and, more importantly, the bindings to the databases and S3 buckets.
+
+### buckets and databases
+
+In `shared/cloudgov` are module definitions for our databases and S3 buckets.
+
+In `dev/main.tf`, we instantiate these as follows:
+
+```
+module "databases" {
+  source              = "../shared/cloudgov/databases"
+  cf_org              = local.cf_org
+  cf_space            = local.cf_space
+  queue_db_plan_name  = "micro-psql"
+  search_db_plan_name = "micro-psql"
+  work_db_plan_name   = "micro-psql"
+}
+```
+
+For `dev`, we might only use `micro` instances. For `production`, however, we might instantiate `xl` instances. This lets us configure the databases on a per-space basis. (S3 buckets are all the same, so there is no configuration.)
+
+This module has outputs; a sketch of what they might look like appears below. Once instantiated, we can refer to `module.databases` as a `map(string)` and reference the `id` of each of the databases (or buckets). In this way, we can pass the entire map of IDs to the services, and they can then bind to the correct databases/S3 buckets. Most (all?) services will want to bind to the `queues` database; only some need to bind to `work`, and some need to bind to `serve`.
-
-It always deletes everything before proceeding. Once we have an S3 backend, this will change.
\ No newline at end of file
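+
+A sketch of what the `databases` module's outputs might look like, assuming one service instance resource per database (the resource names here are guesses, keyed to match the plan names above):
+
+```
+output "ids" {
+  # A map(string) from database name to service instance ID,
+  # passed whole to each service module as `databases`.
+  value = {
+    queues = cloudfoundry_service_instance.queues.id
+    search = cloudfoundry_service_instance.search.id
+    work   = cloudfoundry_service_instance.work.id
+  }
+}
+```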