
Commit

Adding a demo workflow
Going to see if I can do this in a branch.
jadudm committed Jan 5, 2025
1 parent 0d80973 commit db2fcd7
Showing 3 changed files with 134 additions and 13 deletions.
18 changes: 18 additions & 0 deletions .github/workflows/demo.yaml
@@ -0,0 +1,18 @@
name: GitHub Actions Demo
run-name: ${{ github.actor }} is testing out GitHub Actions 🚀
on: [push]
jobs:
  Explore-GitHub-Actions:
    runs-on: ubuntu-latest
    steps:
      - run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
      - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
      - run: echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
      - name: Check out repository code
        uses: actions/checkout@v4
      - run: echo "💡 The ${{ github.repository }} repository has been cloned to the runner."
      - run: echo "🖥️ The workflow is now ready to test your code on the runner."
      - name: List files in the repository
        run: |
          ls ${{ github.workspace }}
      - run: echo "🍏 This job's status is ${{ job.status }}."
2 changes: 1 addition & 1 deletion terraform/Makefile
@@ -1,6 +1,6 @@
.PHONY: default
default:
echo "make (dev|staging|production)"
@echo "make (dev|staging|production)"

.PHONY: dev
dev:
127 changes: 115 additions & 12 deletions terraform/README.md
@@ -6,18 +6,26 @@ https://stackoverflow.com/a/74655690

Because we're a fundamentally small application (or, a small number of services), we're going to take a simpler "dev/staging/prod" approach to organization.

# establishing the org/space

The TF included in this repository does *not* attempt to stand up everything from scratch. That is, if we suffer a catastrophic failure, and lose all of Cloud.gov, and have to rebuild from scratch... there will be manual steps.

## create the org and space

We are currently in the org `gsa-tts-usagov`, and that may change.

We intend to have three spaces: `dev`, `staging`, and `production`.

The `terraform` folder has folders for each environment. The `sandbox` environment (if we create it) is code intended to run locally, on a per-developer basis, for deploying the application to a cloud.gov sandbox (1GB RAM).

* Every merge to `main` will push to `dev` and run tests.
* Every morning we will push to `staging` and run E2E tests.
* Pushes to `production` will be via release cuts, and will be manual.


# deploying

This is a very early README. At this point, it assumes only one person is running deploys. We still have work to do.

### obtaining local credentials (remove)

First, you need to set up deployment credentials. This is only needed during local dev... this goes away once we're in GitHub Actions.

https://cloud.gov/docs/services/cloud-gov-service-account/

@@ -29,16 +37,111 @@

```
cf_password = ""
api_key = ""
```

(Where does the API key come from? ...)
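
For reference, a minimal sketch of how these credentials might be wired into `providers.tf`, assuming the community `cloudfoundry-community/cloudfoundry` provider (which we use until we switch to the official one) and a `cf_user` variable alongside the `cf_password` shown above; the real file may differ:

```
# Sketch only: assumes a cf_user variable defined next to cf_password.
terraform {
  required_providers {
    cloudfoundry = {
      source = "cloudfoundry-community/cloudfoundry"
    }
  }
}

provider "cloudfoundry" {
  api_url  = "https://api.fr.cloud.gov"
  user     = var.cf_user     # service account username (assumed name)
  password = var.cf_password # from the secrets file above
}
```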

## launching the stack

```
make dev
```

at the top of the tree will deploy the `dev` stack. More work needs to be done to store the Terraform state in S3 so that we can run this from GitHub Actions. For now, this is not complete; if different devs deploy, they will have to completely destroy (tear down) each other's state. This will become... annoying... once we start storing data in buckets. (Buckets must be empty in order to be torn down.)

So, the deploy to Cloud.gov is still a work-in-progress. But, it is possible, while testing/developing, to do a deploy from a local machine. Once we have GH Actions in place, we will *never* do a deploy from a local machine. We will always do our deploys from an action.
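
When we do move the state to S3, the change is roughly a backend block in each environment. A minimal sketch, not yet in the repository; the bucket name and key are placeholders, and credentials/endpoint configuration is omitted:

```
terraform {
  backend "s3" {
    bucket = "search-terraform-state" # placeholder bucket name
    key    = "dev/terraform.tfstate"
    region = "us-gov-west-1"          # cloud.gov S3 lives in GovCloud
  }
}
```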

## layout

At the top of the `terraform` directory are two files that matter:

* Makefile
* developers.tf

`developers.tf` will become part of our onboarding. This file is where devs add themselves, in an initial commit, so that they gain access to the Cloud.gov environment. We will control access to Cgov through this file. (This wiring is not in place yet, but the file is there. The access controls have to be implemented as scripts, executed in a GitHub Action, that call the CF API on Cloud.gov.)
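
Since the wiring is not in place yet, the exact shape of the file is open; one plausible form is simply a list that each dev appends to (the entries below are hypothetical):

```
locals {
  developers = [
    "first.last@gsa.gov",   # hypothetical entry added at onboarding
    "another.dev@gsa.gov",
  ]
}
```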

Cgov deployments are organized into `organizations` and `spaces`. An organization might be `gsa-tts-search`, and a space might be `dev`, `staging`, or `production`.
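
The module blocks below reference `data.cloudfoundry_space.app_space` and `data.cloudfoundry_domain.public`; with the community provider, those lookups presumably look something like the following (the domain name is a guess):

```
data "cloudfoundry_org" "org" {
  name = "gsa-tts-usagov"
}

data "cloudfoundry_space" "app_space" {
  name = "dev"
  org  = data.cloudfoundry_org.org.id
}

data "cloudfoundry_domain" "public" {
  name = "app.cloud.gov" # guessing the shared cloud.gov app domain
}
```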

There are two directories (currently) that contain the Terraform deploy scripts:

* dev
* shared

`dev` contains the variables and drivers for deploying to our (eventual) `dev` space. Every service that we deploy will get a section like this in the `dev` configuration:

```
module "fetch" {
  source       = "../shared/services/fetch"
  # disk_quota = 256
  # memory     = 128
  # instances  = 1
  space_name   = data.cloudfoundry_space.app_space.name
  app_space_id = data.cloudfoundry_space.app_space.id
  domain_id    = data.cloudfoundry_domain.public.id
  databases    = module.databases.ids
  buckets      = module.buckets.ids
}
```

I have not yet determined if this can be made reusable between spaces (meaning, avoiding the boilerplate-ness of this). Each service has to be wired up to the correct databases and S3 buckets _in its space_ in order to execute. Further, we might want to allocate different amounts of RAM, disk, and instances to services in the different spaces. That is, we might want 1 instance of `fetch` in the `dev` environment, but 3 instances of `fetch` in `production`. Because we only have one pool of RAM for all of the spaces combined, we will probably run light in lower environments, and run a fuller stack in `production`.
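
One way the per-space sizing could eventually be expressed (purely a sketch, not what the repository does today) is to surface the commented-out knobs above as per-space variables, which each environment's variable file could then override:

```
variable "fetch_memory" {
  type    = number
  default = 128 # MB; a production variable file might raise this
}

variable "fetch_instances" {
  type    = number
  default = 1 # dev runs light; production might run 3
}
```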

The service itself is defined in `shared/services/<service-name>`. We apparently have to include the provider (?), define the variables for the module, the outputs, and the module itself. Put another way:

* `providers.tf` is boilerplate. It will need to change when we switch to the official `cloudfoundry/cloudfoundry` provider.
* `variables.tf` defines the variables that the service needs in order to execute. For example, when instantiating the module, we need to provide the amount of RAM and disk, and the number of instances the service will be created with. (A sketch follows this list.)
* `service.tf` defines the service itself.
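
A sketch of what `shared/services/fetch/variables.tf` plausibly declares, based on the inputs used in the module block above; exact names and defaults beyond those shown there are guesses:

```
variable "disk_quota" {
  type    = number
  default = 256
}

variable "memory" {
  type    = number
  default = 128
}

variable "instances" {
  type    = number
  default = 1
}

variable "space_name"   { type = string }
variable "app_space_id" { type = string }
variable "domain_id"    { type = string }
variable "databases"    { type = map(string) }
variable "buckets"      { type = map(string) }
```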

We can see the `fetch` service:

```
resource "cloudfoundry_app" "fetch" {
  name             = "fetch"
  space            = var.app_space_id # data.cloudfoundry_space.app_space.id
  buildpacks       = ["https://github.com/cloudfoundry/apt-buildpack", "https://github.com/cloudfoundry/binary-buildpack.git"]
  path             = "${path.module}/../app.tar.gz"
  source_code_hash = filesha256("${path.module}/../app.tar.gz")

  disk_quota = var.disk_quota
  memory     = var.memory
  instances  = var.instances
  strategy   = "rolling"
  timeout    = 200

  health_check_type          = "port"
  health_check_timeout       = 180
  health_check_http_endpoint = "/api/heartbeat"

  service_binding {
    service_instance = var.databases.queues
  }
  service_binding {
    service_instance = var.databases.work
  }
  service_binding {
    service_instance = var.buckets.fetch
  }
}
```

Running `make apply_all` will run a deploy from start to finish.
All of the services get the entire codebase; this is because we then launch, on a per-instance basis, different code from `cmd`.

Variables include the ID of the space we are deploying to (e.g. we do not deploy to `dev`, but to a UUID4 value representing `dev`), the disk, memory, and instances, and more importantly, bindings to the databases and S3 buckets.

### buckets and databases

In `shared/cloudgov` are module definitions for our databases and S3 buckets.

In `dev/main.tf`, we instantiate these as follows:

```
module "databases" {
  source               = "../shared/cloudgov/databases"
  cf_org               = local.cf_org
  cf_space             = local.cf_space
  queue_db_plan_name   = "micro-psql"
  search_db_plan_name  = "micro-psql"
  work_db_plan_name    = "micro-psql"
}
```

For `dev`, we might only use `micro` instances. For production, however, we might instantiate `xl` instances. This lets us configure the databases on a per-space basis. (S3 buckets are all the same, so there is no configuration.)
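
Because the buckets take no plan configuration, their instantiation is presumably the same shape minus the plan names; a guess at the corresponding block in `dev/main.tf`:

```
module "buckets" {
  source   = "../shared/cloudgov/buckets"
  cf_org   = local.cf_org
  cf_space = local.cf_space
}
```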

This module has outputs. Once instantiated, we can refer to `module.databases` as a `map(string)` and reference the `id` of each of the databases (or buckets). In this way, we can pass the entire map of IDs to the services, and they can then bind to the correct databases/S3 buckets. Most (all?) services will want to bind to the queues database; only some need to bind to `work`, and some need to bind to `serve`.
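
A sketch of how the databases module's `ids` output could produce that `map(string)`; the underlying resource names are guesses:

```
output "ids" {
  value = {
    queues = cloudfoundry_service_instance.queues.id
    search = cloudfoundry_service_instance.search.id
    work   = cloudfoundry_service_instance.work.id
  }
}
```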

A deploy currently deletes everything before proceeding. Once we have an S3 backend, this will change.
