Deploying GitHub self-hosted runners to apply migrations to AWS RDS for MySQL.
Architecture overview:
Create the .auto.tfvars file from the template:
cp samples/sample.tfvars .auto.tfvars
Set the EC2 user data file according to your requirements:
# Available files: ubuntu-nodejs.sh, ubuntu-docker.sh
gh_runner_user_data = "ubuntu-nodejs.sh"
If you wish to create the application cluster as well, set the variable to true:
create_application_cluster = true
Create the infrastructure:
terraform init
terraform apply -auto-approve
Tip
Device names from EC2 can differ from the actual device on the instance. In this project, the EBS volume device name will be /dev/sdf, and the block device on the instance will be /dev/nvme1n1. More about this in the AWS device naming documentation.
Log in as root to list the drives:
fdisk -l
List the available disks with lsblk:
lsblk
To determine whether the volume is formatted, use the -f option. If the FSTYPE column for the volume (e.g., /dev/nvme1n1) is empty, the volume is not formatted. If it shows a file system type (e.g., ext4, xfs), the volume is already formatted.
lsblk -f
To check it directly, use the command below. If the output says data, the volume is not formatted. If the output shows a file system type (e.g., ext4 or xfs), the volume is formatted.
file -s /dev/nvme1n1
Check the mount:
df -h
mount -a
Follow the documentation to format and mount the partition.
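The steps from the referenced documentation can be sketched as follows. This assumes /dev/nvme1n1 as the block device, xfs as the file system, and /data as the mount point; adjust all three to your environment.

```shell
# Sketch: format and mount the EBS volume (run as root).
DEVICE=/dev/nvme1n1
MOUNT_POINT=/data

# Only format when no file system exists yet -- mkfs would destroy data.
if [ "$(file -s "$DEVICE")" = "$DEVICE: data" ]; then
  mkfs -t xfs "$DEVICE"
fi

mkdir -p "$MOUNT_POINT"
mount "$DEVICE" "$MOUNT_POINT"

# Persist the mount across reboots using the volume UUID, which stays
# stable even if the device name changes.
UUID=$(blkid -s UUID -o value "$DEVICE")
echo "UUID=$UUID  $MOUNT_POINT  xfs  defaults,nofail  0 2" >> /etc/fstab
mount -a
```

The nofail option keeps the instance bootable even if the volume is detached later.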
Connect to the GitHub Runner host.
aws ssm start-session --target i-00000000000000000
If creating a new environment, verify that the user data executed correctly and reboot to apply kernel upgrades (the instance should reboot automatically):
cloud-init status
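If cloud-init reports an error, the following commands help locate the failing step; /var/log/cloud-init-output.log is where cloud-init records the user data script's output.

```shell
# Show the detailed cloud-init state, including any error summary.
cloud-init status --long

# Inspect the tail of the user data execution log for the failure.
tail -n 50 /var/log/cloud-init-output.log
```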
Switch to root:
sudo su -
Enter the /opt directory; this is where we'll install the runner agent:
cd /opt
Enable the runner scripts to run as root:
export RUNNER_ALLOW_RUNASROOT="1"
Access the repository Actions section and create a new runner.
Make sure you select the appropriate architecture, which should be Linux and ARM64.
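The Actions UI displays the exact download and registration commands for your repository; the sketch below shows their general shape. The release version, repository URL, and registration token are placeholders, so copy the real commands from the UI rather than these.

```shell
# Sketch of the commands GitHub shows when adding a Linux/ARM64 runner.
# Version, OWNER/REPO, and the token below are placeholders.
mkdir -p /opt/actions-runner && cd /opt/actions-runner

curl -o actions-runner-linux-arm64.tar.gz \
  -L https://github.com/actions/runner/releases/download/v2.300.0/actions-runner-linux-arm64-2.300.0.tar.gz
tar xzf actions-runner-linux-arm64.tar.gz

# Register against your repository. RUNNER_ALLOW_RUNASROOT (exported
# above) lets config.sh run as root.
./config.sh --url https://github.com/OWNER/REPO --token REGISTRATION_TOKEN
```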
Once registration completes, stop the interactive agent and install it as a service:
./svc.sh install
./svc.sh start
./svc.sh status
This repository contains examples of pipelines in the .github/workflows
directory.
Check out the guidelines for Prisma migrations deployment, or for your preferred migration tool.
Start by running a MySQL instance:
docker run -d \
-e MYSQL_DATABASE=mysqldb \
-e MYSQL_ROOT_PASSWORD=cxvxc2389vcxzv234r \
-p 3306:3306 \
--name mysql-prisma-local \
mysql:8.0
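MySQL takes a few seconds to initialize inside the container, so migrations run immediately after docker run may fail to connect. A simple readiness check, reusing the container name and root password from the command above:

```shell
# Poll until the MySQL server inside the container accepts connections.
until docker exec mysql-prisma-local \
    mysqladmin ping -uroot -pcxvxc2389vcxzv234r --silent; do
  echo "waiting for mysql..."
  sleep 2
done
```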
Special privileges are required by Prisma to apply shadow databases.
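The local root user already has these rights, but a dedicated migration user would need them granted explicitly. A sketch of such a grant, assuming a hypothetical prisma user (Prisma's shadow database requires instance-wide create/drop privileges):

```shell
# Sketch: grant a dedicated user the privileges Prisma Migrate needs.
# The 'prisma' user and its password are examples only.
docker exec -i mysql-prisma-local \
  mysql -uroot -pcxvxc2389vcxzv234r <<'SQL'
CREATE USER IF NOT EXISTS 'prisma'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON mysqldb.* TO 'prisma'@'%';
-- Creating and dropping the shadow database needs instance-wide rights:
GRANT CREATE, ALTER, DROP, REFERENCES ON *.* TO 'prisma'@'%';
FLUSH PRIVILEGES;
SQL
```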
Enter the application directory:
cd app
Apply the migrations:
Whenever you update your Prisma schema, you will have to update your database schema using either prisma migrate dev or prisma db push. This will keep your database schema in sync with your Prisma schema. The commands will also regenerate Prisma Client.
# This calls generate under the hood
npx prisma migrate dev --name init
Run the application locally:
npm run dev
Check if the schema and database connections are working:
curl localhost:3000/prisma
To verify that the Docker image works, run:
docker compose up
Add the DATABASE_URL environment variable:
export DATABASE_URL='mysql://root:cxvxc2389vcxzv234r@localhost:3306/mysqldb'
Deploy the migration:
npx prisma migrate deploy
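As an optional check, Prisma can report whether every migration has been applied against the target database, using the same DATABASE_URL exported above:

```shell
# Compare the migrations in prisma/migrations against the database's
# migration history table.
npx prisma migrate status
```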